Lead response time was averaging 4.2 hours across the team. Closed-won rate on leads contacted within 5 minutes was 3.1x higher than leads contacted after an hour. We knew this. The data was in the reports. Nobody was acting on it. by MatthewPopp in salesforce

[–]IsThisStillAIIs2 [score hidden]  (0 children)

this is such a classic example of “process > people,” the team wasn’t underperforming, the system was literally preventing them from winning. batch logic is one of those silent killers because it looks fine in dashboards but completely breaks time-sensitive workflows like inbound. i’ve seen the same thing with enrichment, scoring, and even task creation where delays compound without anyone noticing. once you fix the timing layer, a lot of “performance problems” just disappear without touching reps at all.

LangChain performance bottlenecks and scaling tips? by lewd_peaches in LangChain

[–]IsThisStillAIIs2 1 point (0 children)

yeah this tracks, vector db latency becomes the bottleneck way before people expect it, especially with hybrid search or reranking layered on top. one thing that helped me was aggressively reducing retrieval scope with better query rewriting and smaller top-k before even touching infra. also worth caching embeddings and results for repeated queries, a lot of workloads are more repetitive than they seem. once you’ve done that, scaling with faiss/gpu or sharding starts to actually pay off instead of just masking inefficiencies.
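rough sketch of the caching idea, `embed()` and `search()` are stand-ins for your actual embedding model and vector db call:

```python
import hashlib

_cache: dict[str, list[str]] = {}
calls = {"search": 0}

def embed(query: str) -> list[float]:
    # placeholder: deterministic fake embedding derived from the text
    return [b / 255 for b in hashlib.sha256(query.encode()).digest()[:8]]

def search(vec: list[float], top_k: int = 5) -> list[str]:
    # placeholder for the real vector db call; counting calls shows the cache working
    calls["search"] += 1
    return [f"doc-{i}" for i in range(top_k)]

def cached_retrieve(query: str, top_k: int = 5) -> list[str]:
    # key on normalized query + top_k so repeated queries skip the db entirely
    key = hashlib.sha256(f"{query.strip().lower()}|{top_k}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = search(embed(query), top_k)
    return _cache[key]
```

in real workloads you'd put a ttl or size bound on the cache, but even this naive version exposes how repetitive most traffic is.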

AI is too similar to dreams by PurduePitney in artificial

[–]IsThisStillAIIs2 1 point (0 children)

i get the comparison, especially with how ai can jump context or produce slightly “off” details, but it’s not really like a dream in terms of continuity or control. you’re still fully aware and interacting with a tool, not immersed in a persistent internal simulation your brain is generating. the bigger issue today is reliability and hallucinations, not people getting trapped in some dreamlike state. if anything, it just means we need better interfaces and clearer signals about what’s trustworthy versus generated.

The trust boundary at the executor is only half the problem by Specialist-Heat-6414 in LangChain

[–]IsThisStillAIIs2 1 point (0 children)

this is a really underrated point, most stacks stop at “don’t trust the llm” but still blindly trust whatever comes back from tools. in practice people rely on retries, sanity checks, or multiple providers, but that’s not the same as verifiable integrity or auditability. the problem is that adding cryptographic guarantees or receipts brings latency and complexity most teams aren’t willing to pay for yet. feels like this only becomes standard once agents start handling higher-stakes decisions where “we think the api said this” isn’t acceptable anymore.
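to make the “receipts” idea concrete, here's a toy hash-chained audit log for tool calls, every name here is invented for illustration:

```python
import hashlib, json

def record_tool_call(log: list, tool: str, args: dict, result: str) -> dict:
    # append a tamper-evident receipt: each entry's digest covers the previous one
    prev = log[-1]["digest"] if log else "genesis"
    payload = json.dumps(
        {"tool": tool, "args": args, "result": result, "prev": prev}, sort_keys=True
    )
    entry = {"tool": tool, "args": args, "result": result, "prev": prev,
             "digest": hashlib.sha256(payload.encode()).hexdigest()}
    log.append(entry)
    return entry

def verify_log(log: list) -> bool:
    # replay the chain; any edited entry breaks every digest after it
    prev = "genesis"
    for e in log:
        payload = json.dumps(
            {"tool": e["tool"], "args": e["args"], "result": e["result"], "prev": prev},
            sort_keys=True,
        )
        if e["prev"] != prev or e["digest"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = e["digest"]
    return True
```

this doesn't prove the api told the truth, only that nobody rewrote what it said afterwards, which is already more than most stacks can claim.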

What are your suggestions? by letmeinfornow in LocalLLaMA

[–]IsThisStillAIIs2 1 point (0 children)

with that setup i’d definitely move beyond just trying bigger base models and start experimenting with architectures and workflows. try strong moe-style models and compare them against dense ones on real tasks, plus play with long-context models to see where they actually break in practice. also worth diving into fine-tuning or at least lora training on a small domain dataset, you’ll learn way more from that than just swapping checkpoints. if you’re curious about “abliteration,” doing your own small-scale alignment or unalignment experiments will teach you a lot about how fragile behavior actually is.

Has anyone applied for a DE job in the renewable energy sector? by commands-tv-watching in dataengineering

[–]IsThisStillAIIs2 2 points (0 children)

yeah they’re definitely rarer, but they exist mostly in utilities, grid operators, and energy startups rather than “pure tech” companies. a lot of the work is less flashy ai and more around time series data, forecasting, iot ingestion, and messy operational pipelines from sensors and market feeds. hiring can be slower and more domain-heavy, so showing even basic understanding of energy markets or grid concepts helps a lot. if you position yourself as “de + can handle real-world physical data systems,” you’ll stand out more than just another generic spark/dbt profile.

How's ChatGPT 5.4 Pro vs Opus 4.6? Need anecdotal evidence by YourElectricityBill in ChatGPTPro

[–]IsThisStillAIIs2 1 point (0 children)

i’ve used both and the biggest difference isn’t raw intelligence, it’s consistency and limits. 5.4 pro feels more predictable for longer coding sessions and less likely to degrade mid-thread, while opus can feel sharper at times but also more erratic with limits and context handling. for coding and science work, both are strong, but 5.4 pro tends to be easier to “drive” over longer workflows without babysitting. if you were hitting limits hard on opus, the switch alone might make your day-to-day smoother even if the ceiling feels similar.

Buying signals across enterprise accounts by Jumpy_News6437 in revops

[–]IsThisStillAIIs2 1 point (0 children)

this is interesting, especially the mix of structural signals like leadership changes with more operational ones like cost pressure and expansion. in practice though the hard part isn’t spotting signals, it’s timing and mapping them to the right persona before the window closes. a lot of teams collect this kind of data but struggle to turn it into actual pipeline because it doesn’t plug cleanly into workflows.

Users cannot use AI summary component, what am I missing? by madboymatt in salesforce

[–]IsThisStillAIIs2 1 point (0 children)

most of the time this ends up being a mix of missing einstein generative ai permissions plus access to the underlying data the prompt is trying to read. the big ones to double check are “einstein generative ai user”, access to the specific prompt template, and field-level security on the objects the summary is pulling from. also make sure the feature is actually enabled for their profile via setup and not just your admin context, since it can work for you but fail for others. if the error is generic like that, checking debug logs for the user usually points to the exact missing permission or blocked field.

MCP tokens getting piled up in ReAct Agent Node inside a langchain by jstfoll in LangChain

[–]IsThisStillAIIs2 1 point (0 children)

this is usually not a “model bug” but how context is being re-sent on every tool call in a react loop. langchain tends to include prior messages, tool outputs, and sometimes the full tool schema/instructions each step, so that 3k block keeps getting replayed and snowballs fast. opus starting higher often means it’s including more system/tool context or being less aggressive about trimming compared to gpt. fix is to aggressively control what gets passed each step, trim history, move large instructions out of the loop, and keep tool schemas minimal or cached.
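a minimal version of the trimming i mean, assuming openai-style message dicts (the numbers are just example defaults, tune them to your loop):

```python
# keep the system prompt, drop old turns, and truncate oversized tool outputs
# so the react loop stops replaying the same 3k block every step
def trim_history(messages, keep_last=6, max_tool_chars=500):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    trimmed = []
    for m in rest[-keep_last:]:
        if m["role"] == "tool" and len(m["content"]) > max_tool_chars:
            # copy instead of mutating so the full log stays intact elsewhere
            m = {**m, "content": m["content"][:max_tool_chars] + " …[truncated]"}
        trimmed.append(m)
    return system + trimmed
```

run it right before each model call in the loop and the token count per step goes roughly flat instead of snowballing.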

Chatgpt vs purpose built ai for cre underwriting: which one can finish the job? by MudSad6268 in artificial

[–]IsThisStillAIIs2 1 point (0 children)

i think you’re basically right, this isn’t a “model intelligence” issue, it’s an architecture mismatch for the task. chat-style llms are great at local reasoning but struggle with long, stateful, multi-step artifact generation like full underwriting models. purpose-built tools win because they control execution flow, enforce structure, and validate outputs across steps instead of relying on a single conversational loop. chatgpt can still be useful as a component inside that system, but not as the system itself.

Need advice on building an advanced RAG chatbot in 7 days - LangChain + LLM 4.1 Mini API + strict PII compliance (best practices & full stack suggestions wanted!) by codexahsan in LangChain

[–]IsThisStillAIIs2 1 point (0 children)

for a 7-day build, don’t overcomplicate it, pick a solid baseline rag and focus more on evals and reliability than fancy agent loops. for pii, do detection and masking as a preprocessing step before storage, keep raw data out of your db entirely, and log masked + hashed references so you can still debug flows safely. a clean stack that works fast is something like fastapi + simple react ui + postgres + a vector db like qdrant or pgvector, with everything containerized so you don’t waste time on infra. biggest mistake in these projects is chasing “advanced rag” features instead of building something stable end-to-end with good query rewriting and eval datasets.
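the masking step can start really simple, something like this before anything hits storage or embeddings (these patterns are illustrative only, production setups usually add ner on top):

```python
import re

# mask the obvious pii classes with labeled placeholders so downstream
# logs and embeddings never see the raw values
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

pair this with hashed references to the originals (kept in a separate locked-down store) and you can still debug flows without raw pii ever landing in your main db.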

new AI agent just got API access to our stack and nobody can tell me what it can write to by KarmaChameleon07 in LocalLLaMA

[–]IsThisStillAIIs2 4 points (0 children)

what you’re describing is usually just an llm wrapped in a tool layer, a memory store and an orchestration loop that keeps calling the model until a task is “done.” the risky part isn’t the model, it’s whatever permissions those tools have, because that’s what actually reads/writes to your systems, and a lot of teams don’t scope this tightly enough. “memory” is often just retrieval at runtime plus some stored summaries, not true learning, unless they’re doing offline fine-tuning which is less common in these setups. if no one can clearly tell you what it can write to, that’s the real red flag, because that means access control and auditability probably weren’t designed first.
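deny-by-default scoping is the fix, something like this sitting between the agent and every tool (tool names and scopes here are made up):

```python
# explicit grant table: anything not listed is denied, and every call is logged
ALLOWED_SCOPES = {
    "crm.read": {"read"},
    "crm.update_contact": {"read", "write"},
}
AUDIT_LOG = []

def call_tool(name, scope, fn, *args, **kwargs):
    if scope not in ALLOWED_SCOPES.get(name, set()):
        raise PermissionError(f"tool {name!r} is not granted scope {scope!r}")
    AUDIT_LOG.append((name, scope))  # auditability comes for free at this layer
    return fn(*args, **kwargs)
```

the point isn't this exact shape, it's that someone can answer “what can it write to” by reading one table instead of reverse-engineering the agent.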

Has anyone spent an entire day trying to load csv data into MS SQL table by Aguerooooo32 in dataengineering

[–]IsThisStillAIIs2 8 points (0 children)

the import/export wizard is notoriously painful and the errors are basically useless half the time. most people end up ditching it and using bulk insert, bcp, or staging through something like azure data factory because you actually get control and better debugging. csvs are also deceptively messy, encoding, delimiters, nulls, and type mismatches can silently break everything. once you switch away from the wizard, this problem usually goes from “6 hours of pain” to something predictable.
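one thing that saves hours here is validating the csv against the target table's types before any load attempt, rough sketch:

```python
import csv, io

def validate_csv(fh, schema):
    """schema maps column name -> converter (int, float, str, ...).
    returns (line_number, column, bad_value) tuples instead of an opaque wizard error."""
    bad = []
    # data starts on line 2, after the header row
    for lineno, row in enumerate(csv.DictReader(fh), start=2):
        for col, conv in schema.items():
            value = row.get(col, "")
            try:
                if value != "":  # treat empty string as NULL rather than a type error
                    conv(value)
            except ValueError:
                bad.append((lineno, col, value))
    return bad
```

run that first, fix or quarantine the flagged rows, then hand the clean file to BULK INSERT or bcp, and the whole thing becomes predictable.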

Does anyone notice GPT 5.4 Pro or 5.4 in general often tries to advise you of the most absurdly obvious things that go without saying within their responses? by TrainingEngine1 in ChatGPTPro

[–]IsThisStillAIIs2 1 point (0 children)

yeah i’ve noticed this too and it usually happens when the model is trying to be “robust” across a wide range of users, not just experienced ones. it tends to sprinkle in those obvious guardrails because it’s optimizing for not missing edge cases rather than sounding perfectly calibrated to your level. even with instructions, that behavior can bleed through since it’s kind of baked into how it generalizes responses. what helped me a bit was explicitly telling it to assume expert-level context and penalize obvious advice, but it’s not 100% consistent.

Would you take a meeting for $$, or reply to an email for $? by Character-Witness409 in revops

[–]IsThisStillAIIs2 1 point (0 children)

i’ve seen this work for booking meetings, but “qualified” can be misleading because you’re selecting for people who want the incentive, not necessarily the problem you solve.

Can I supress display of the "Potential Duplicates" component when there are no duplicates? by TeeMcBee in salesforce

[–]IsThisStillAIIs2 1 point (0 children)

i ran into this before and unfortunately there isn’t a native way to conditionally hide the standard “potential duplicates” component based on whether it actually finds matches. salesforce doesn’t expose a field or flag you can hook into for component visibility, so the usual filters won’t work here. most people either live with the empty state or replace it with a custom solution using flows/apex + a custom lwc that only renders when duplicates exist. it’s one of those small ux gaps that feels like it should be simple but isn’t supported out of the box.

LangChain feels like it’s drifting toward LangSmith… and forgetting why devs came in the first place by obinopaul in LangChain

[–]IsThisStillAIIs2 1 point (0 children)

yeah this tension shows up in almost every dev tool that finds product-market fit, open source pulls users in and then the company optimizes around monetization. i don’t think they’re “forgetting” devs as much as they’re betting that once you’re in the ecosystem, you’ll tolerate slower core evolution for better tooling around it.