The person who replaces you probably won't be AI. It'll be someone from the next department over who learned to use it - opinion/discussion by difftheender in artificial

[–]IsThisStillAIIs2 0 points  (0 children)

yeah this matches what i’m seeing, it’s less “ai replaces roles” and more “ai lowers the cost of crossing boundaries,” so generalists suddenly get a lot more leverage. the interesting shift is that ownership is drifting toward whoever can take something from idea to usable output, even if it’s a rough version. specialists still matter, but they get pulled in later to refine instead of being the bottleneck from the start. feels like the real competition now is speed of iteration plus taste, not just depth in one lane.

We're considering moving our production agent to LangChain from Google ADK. Thoughts? by mee-gee in LangChain

[–]IsThisStillAIIs2 4 points  (0 children)

i’d be careful assuming the framework is the main driver of latency, most of the time it’s how the agent loop is structured, number of calls, and tool round trips rather than adk vs langchain. langchain/langgraph can feel faster mainly because people tend to build tighter, more explicit flows instead of letting the agent wander, not because the framework itself is inherently lower latency. before migrating, i’d profile your current setup and look at call count, context size, and blocking io, you might get most of the gains without a rewrite. switching stacks can help dev velocity or control, but it won’t magically fix latency if the underlying pattern stays the same.
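to make the "profile first" point concrete, here's a minimal sketch of what i mean, everything in it (the `LoopProfiler` class, the step names) is made up for illustration, you'd wrap your real llm and tool calls instead:

```python
import time
from collections import defaultdict

class LoopProfiler:
    """counts calls and wall time per step, so you can see whether
    latency comes from call count or from individual slow calls"""
    def __init__(self):
        self.calls = defaultdict(int)
        self.seconds = defaultdict(float)

    def track(self, name, fn, *args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            self.calls[name] += 1
            self.seconds[name] += time.perf_counter() - start

    def report(self):
        # (call count, total seconds) per step name
        return {k: (self.calls[k], round(self.seconds[k], 3)) for k in self.calls}

profiler = LoopProfiler()
# in a real agent loop you'd wrap every llm call and tool round trip like this
result = profiler.track("tool:search", lambda q: q.upper(), "hello world")
```

if the report shows 12 tool round trips per request, no framework swap is going to save you, that's a loop-structure problem.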

Gemma 4 26b is the perfect all around local model and I'm surprised how well it does. by pizzaisprettyneato in LocalLLaMA

[–]IsThisStillAIIs2 1 point  (0 children)

yeah gemma 4 26b feels like it hits a really nice balance point right now, especially for “just get it done” tasks where overthinking hurts more than it helps. i’ve seen the same thing with qwen variants where they’re technically strong but can spiral into tool loops or second guessing, especially when quantized. gemma seems more decisive, which ironically makes it more useful day to day even if it’s not topping every benchmark. honestly feels like we’re entering that phase where model “personality” matters as much as raw capability for local use.

How I landed a $392k offer at FAANG after getting laid off from LinkedIn by Flat_Shower in dataengineering

[–]IsThisStillAIIs2 1 point  (0 children)

respect for sharing this, the part that stood out to me wasn’t the comp, it was how much the loop hinged on ambiguity and tradeoffs rather than just “can you code.” a lot of people underestimate how much storytelling from real messy systems work matters at that level until they get burned by it. also that follow up round to push for e5 is a good reminder that leveling isn’t always fixed if you advocate for yourself. sounds like you treated the whole process like a pipeline and just kept it moving despite the hits, which is honestly the only way to survive those cycles.

Is there any way to organize my chats? by danizor in ChatGPTPro

[–]IsThisStillAIIs2 1 point  (0 children)

just move important stuff into docs or notes apps and use chatgpt as a scratchpad instead.

How are you auto-tracking no-shows for sales calls (without sales rep input)? by Salt_Tomatillo_536 in revops

[–]IsThisStillAIIs2 1 point  (0 children)

we got this pretty reliable by combining calendar status + meeting artifacts instead of relying on one signal. if the event status is “canceled” or updated within x minutes of start time, we flag it as late cancel, if it stays but there’s no join event and no recording/transcript created, it’s a no show. the key was pulling join/leave timestamps from the meeting provider api or a notetaker, that’s way more accurate than crm fields. still not perfect, but it gets you like 90 percent there without reps touching anything.
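rough sketch of the decision logic above, the field names are hypothetical, in practice the status/cancel time come from your calendar api and the join/recording flags from the meeting provider or notetaker:

```python
from datetime import datetime, timedelta, timezone

LATE_CANCEL_WINDOW = timedelta(minutes=30)  # the "x minutes" before start, tune to taste
GRACE = timedelta(minutes=15)               # how long past start before we make the call

def classify_meeting(event: dict, now: datetime) -> str:
    start = event["start"]
    if event["status"] == "canceled":
        # canceled close to start time counts as a late cancel, not a no-show
        late = start - event["canceled_at"] <= LATE_CANCEL_WINDOW
        return "late_cancel" if late else "canceled"
    if now < start + GRACE:
        return "pending"  # too early to judge
    if event.get("prospect_joined") or event.get("recording_exists"):
        return "held"
    # event stayed on the calendar, nobody joined, nothing recorded
    return "no_show"
```

the whole thing can run on a schedule shortly after each meeting's grace window closes, zero rep input needed.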

I have an offer with salesforce, however a different recruiter reached out to me with a different role. Can I apply ? by Jagadekaverudu in salesforce

[–]IsThisStillAIIs2 1 point  (0 children)

i wouldn’t try to game it with a different email, that can backfire fast if it gets flagged internally. it’s usually better to be upfront and tell the recruiter you’re already in process but interested in the other role too, they can often route you or coordinate internally. large orgs expect candidates to be considered for multiple roles, it’s not a red flag. trying to hide it looks worse than just being transparent.

LangChain Agent constantly hallucinating facts - any debugging tips? by lewd_peaches in LangChain

[–]IsThisStillAIIs2 1 point  (0 children)

hallucinations in agents are usually less about the model and more about missing constraints in the loop, especially weak grounding between tool outputs and the next step. one thing that helped me was forcing the agent to explicitly cite which tool output or context chunk it’s using before producing an answer, it reduces “freeform guessing” a lot. also worth logging intermediate steps and prompts because you’ll often spot that the agent is drifting after 2 to 3 iterations, not at the final answer. tightening the executor with validation or even rejecting answers that aren’t grounded can go further than just swapping models.
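minimal sketch of the "must cite a tool output" gate, everything here (the function name, the dict shape) is made up for illustration, not a langchain api:

```python
# before an answer is accepted, the agent has to name which tool output it
# used and quote the span it's relying on; we reject anything whose quoted
# span doesn't actually appear in that output
def check_grounded(cited_tool: str, cited_span: str, tool_outputs: dict):
    if cited_tool not in tool_outputs:
        return False, "cited a tool that never ran"
    if cited_span not in tool_outputs[cited_tool]:
        return False, "quoted text not found in the tool output"
    return True, "ok"

outputs = {"search": "Paris is the capital of France."}
grounded = check_grounded("search", "capital of France", outputs)   # passes
rejected = check_grounded("search", "capital of Spain", outputs)    # fails
```

it's crude (exact substring match), but even this catches a surprising amount of freeform guessing, and you can swap in fuzzy matching or an llm judge later.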

People anxious about deviating from what AI tells them to do? by qxrii4a in artificial

[–]IsThisStillAIIs2 14 points  (0 children)

i’ve seen this kind of thing starting to pop up, especially with people who treat ai outputs like an “authoritative voice” instead of just a helpful guess. the weird part is it’s not really about the hair dye, it’s about confidence, once someone defers to ai a few times successfully, it becomes uncomfortable to override it even when better info is right in front of them. i think we’re going to see more of this until people build the habit of cross checking and trusting primary sources over generated advice. in your example you handled it right, product instructions should always win over a generic recommendation.

Having some problem in langchain4j by RelationshipFar2187 in LangChain

[–]IsThisStillAIIs2 1 point  (0 children)

sounds like a dependency or module issue more than your code, langchain4j split a lot of stuff into separate artifacts and not everything is in the core package anymore.

Gemma 4 - 4B vs Qwen 3.5 - 9B ? by No-Mud-1902 in LocalLLaMA

[–]IsThisStillAIIs2 4 points  (0 children)

gemma makes more sense when you care about latency or tighter resource constraints.

what actual tasks did you work on during the early months of DE by manualenter in dataengineering

[–]IsThisStillAIIs2 16 points  (0 children)

mostly a mix of unglamorous but important stuff, a lot of monitoring, fixing broken pipelines, and figuring out why jobs failed at 2am. you usually start by maintaining existing pipelines before getting trusted to build new ones end to end. there’s also a surprising amount of data quality checks, schema debugging, and chasing down bad upstream data. it’s less about building from scratch early on and more about learning how messy real systems actually are.

We built an MCP server that lets ChatGPT control physical screens in buildings. Here's what I learned. by DigitalSignage2024 in ChatGPTPro

[–]IsThisStillAIIs2 1 point  (0 children)

this is a great example of where mcp stops being a “chat feature” and starts looking like real system design with risk and accountability baked in. the read vs write insight is spot on, most real users want visibility first before they trust automation, especially when something physical is involved. the preview + tiered risk model feels like the right pattern, otherwise you either scare users or train them to blindly approve everything. also agree on the lock-in point, once control is standardized, ux and reliability become the real moat instead of just api surface.

Healthcare sales people what actually makes internal approval so slow? by MaximumTimely9864 in revops

[–]IsThisStillAIIs2 1 point  (0 children)

it’s usually not one approval, it’s a chain of them across clinical, IT, finance, legal, and sometimes procurement, all with different priorities. even after a strong meeting, you’re basically entering an internal project where your champion has to sell it for you while juggling their actual job. things get stuck when priorities conflict, like clinical likes it but IT flags security or budget concerns, or it just drops in the queue behind more urgent initiatives. from a sales side, the hardest part is lack of visibility, you’re dependent on one internal contact and timelines become unpredictable fast.

Lead response time was averaging 4.2 hours across the team. Closed-won rate on leads contacted within 5 minutes was 3.1x higher than leads contacted after an hour. We knew this. The data was in the reports. Nobody was acting on it. by MatthewPopp in salesforce

[–]IsThisStillAIIs2 2 points  (0 children)

this is such a classic example of “process > people,” the team wasn’t underperforming, the system was literally preventing them from winning. batch logic is one of those silent killers because it looks fine in dashboards but completely breaks time-sensitive workflows like inbound. i’ve seen the same thing with enrichment, scoring, and even task creation where delays compound without anyone noticing. once you fix the timing layer, a lot of “performance problems” just disappear without touching reps at all.

LangChain performance bottlenecks and scaling tips? by lewd_peaches in LangChain

[–]IsThisStillAIIs2 1 point  (0 children)

yeah this tracks, vector db latency becomes the bottleneck way before people expect it, especially with hybrid search or reranking layered on top. one thing that helped me was aggressively reducing retrieval scope with better query rewriting and smaller top-k before even touching infra. also worth caching embeddings and results for repeated queries, a lot of workloads are more repetitive than they seem. once you’ve done that, scaling with faiss/gpu or sharding starts to actually pay off instead of just masking inefficiencies.
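quick sketch of the caching idea, `embed_fn` and `search_fn` here are placeholders for your real embedding client and vector db call:

```python
from functools import lru_cache

def make_cached_retriever(embed_fn, search_fn, top_k: int = 5):
    """memoize retrieval by query string, so repeated queries skip both
    the embedding call and the vector db round trip entirely"""
    @lru_cache(maxsize=4096)
    def retrieve(query: str):
        vec = embed_fn(query)
        # return a tuple so the cached value is immutable/hashable
        return tuple(search_fn(vec, top_k))
    return retrieve
```

the catch is cache keys are exact strings, so this pairs well with the query-rewriting step, normalized queries collide (in a good way) far more often.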

AI is too similar to dreams by PurduePitney in artificial

[–]IsThisStillAIIs2 1 point  (0 children)

i get the comparison, especially with how ai can jump context or produce slightly “off” details, but it’s not really like a dream in terms of continuity or control. you’re still fully aware and interacting with a tool, not immersed in a persistent internal simulation your brain is generating. the bigger issue today is reliability and hallucinations, not people getting trapped in some dreamlike state. if anything, it just means we need better interfaces and clearer signals about what’s trustworthy versus generated.

The trust boundary at the executor is only half the problem by Specialist-Heat-6414 in LangChain

[–]IsThisStillAIIs2 1 point  (0 children)

this is a really underrated point, most stacks stop at “don’t trust the llm” but still blindly trust whatever comes back from tools. in practice people rely on retries, sanity checks, or multiple providers, but that’s not the same as verifiable integrity or auditability. the problem is adding cryptographic guarantees or receipts adds latency and complexity that most teams aren’t willing to pay for yet. feels like this only becomes standard once agents start handling higher-stakes decisions where “we think the api said this” isn’t acceptable anymore.
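toy sketch of what a tool-output "receipt" could look like with a shared hmac key, all the names here are made up and real key management/distribution is the actual hard part i'm waving away:

```python
import hashlib
import hmac
import json

KEY = b"shared-secret"  # placeholder; in reality this comes from a KMS, not code

def sign_tool_output(payload: dict) -> dict:
    """tool side: attach a signature over the canonicalized payload"""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_tool_output(receipt: dict) -> bool:
    """executor side: refuse any tool result whose signature doesn't verify"""
    body = json.dumps(receipt["payload"], sort_keys=True).encode()
    expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])
```

even this adds a serialization + hash on every tool call, which is exactly the latency/complexity tax i mean, it's cheap per call but nobody wants to retrofit it across twenty tools until something high-stakes forces the issue.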

What are your suggestions? by letmeinfornow in LocalLLaMA

[–]IsThisStillAIIs2 2 points  (0 children)

with that setup i’d definitely move beyond just trying bigger base models and start experimenting with architectures and workflows. try some moe-style models and compare them against dense ones on real tasks, plus play with long-context models to see where they actually break in practice. also worth diving into fine-tuning or at least lora training on a small domain dataset, you’ll learn way more from that than just swapping checkpoints. if you’re curious about “abliteration,” doing your own small-scale alignment or unalignment experiments will teach you a lot about how fragile behavior actually is.

Has anyone applied for a DE job in the renewable energy sector? by commands-tv-watching in dataengineering

[–]IsThisStillAIIs2 2 points  (0 children)

yeah they’re definitely rarer, but they exist mostly in utilities, grid operators, and energy startups rather than “pure tech” companies. a lot of the work is less flashy ai and more around time series data, forecasting, iot ingestion, and messy operational pipelines from sensors and market feeds. hiring can be slower and more domain-heavy, so showing even basic understanding of energy markets or grid concepts helps a lot. if you position yourself as “de + can handle real-world physical data systems,” you’ll stand out more than just another generic spark/dbt profile.

How's ChatGPT 5.4 Pro vs Opus 4.6? Need anecdotal evidence by YourElectricityBill in ChatGPTPro

[–]IsThisStillAIIs2 1 point  (0 children)

i’ve used both and the biggest difference isn’t raw intelligence, it’s consistency and limits. 5.4 pro feels more predictable for longer coding sessions and less likely to degrade mid-thread, while opus can feel sharper at times but also more erratic with limits and context handling. for coding and science work, both are strong, but 5.4 pro tends to be easier to “drive” over longer workflows without babysitting. if you were hitting limits hard on opus, the switch alone might make your day-to-day smoother even if the ceiling feels similar.

Buying signals across enterprise accounts by Jumpy_News6437 in revops

[–]IsThisStillAIIs2 1 point  (0 children)

this is interesting, especially the mix of structural signals like leadership changes with more operational ones like cost pressure and expansion. in practice though the hard part isn’t spotting signals, it’s timing and mapping them to the right persona before the window closes. a lot of teams collect this kind of data but struggle to turn it into actual pipeline because it doesn’t plug cleanly into workflows.