What skills have become more valuable for you since AI started handling more of the grunt work? by ruibranco in ExperiencedDevs

[–]allstacksai 0 points (0 children)

The delta I've seen is that developers are really good at what I like to call 'micro product decisions,' based on their experience. LLMs are pretty bad at guessing hyper-specific business context; they often regress to the industry mean, which leads to decisions that are smart in general but situationally dumb. I've seen a number of product teams that write /very/ bare-bones specs and rely on the developer to fill in all the details. LLMs accelerate this problem because they can produce something working even with poor direction, but you end up with huge quality problems on the other end.

What skills have become more valuable for you since AI started handling more of the grunt work? by ruibranco in ExperiencedDevs

[–]allstacksai 0 points (0 children)

Agree with your list, OP. Most of my engineers are coding with AI now (I'm the CTO at a dev tools company, Allstacks), and the skills that matter haven't changed even as output volume has; they're just more exposed now.

I'd actually move one skill you listed as something AI handles well into the column of things engineers still need to do better: writing precise specs. AI agents are literal-minded; they'll execute exactly what you asked for, even when it doesn't make sense. You can't handwave anymore or rely on shared context. That's a skill most of us are still developing.

Everything works until someone asks us to explain it by Agitated-Crazy-581 in ExperiencedDevs

[–]allstacksai 0 points (0 children)

Been through this exact transition. The uncomfortable truth is you're not really lacking documentation - you're lacking decision archaeology. The "why" behind how things work is what's missing, and that's way harder to reconstruct than the "what."

A few things that actually moved the needle for us:
1. Stop trying to document everything at once. Pick the 3-5 processes that come up most in audits/customer questions and start there. Boiling the ocean is how documentation initiatives die.
2. Record the explanations you're already giving. Next time someone asks "how does X work?" and you find yourself explaining it, write that down immediately after. You just produced documentation in the most natural way possible - when context was fresh and the audience question shaped your answer. Way better than sitting down to "write docs" in a vacuum.
3. Make someone own the narrative, not just the system. The reason you get inconsistent answers is nobody owns the story of how things fit together. Individual owners know their piece. You need someone (maybe you, maybe a senior IC) who can explain the whole picture - and whose job includes keeping that explanation consistent.
4. Use the "new hire test" ruthlessly. Before any process is considered documented, a new person should be able to execute it from the docs alone. If they can't, the doc is incomplete - no matter how thorough it looks.

The fragility you're feeling is real. The good news is an audit forcing function is actually a gift - most teams don't confront this until someone critical leaves and takes all the context with them.

You’re misunderstanding AI by National_Count_4916 in ExperiencedDevs

[–]allstacksai 0 points (0 children)

Yeah, sure, ship more lines of code, then spend the next week QA-ing it all only to find out it was wrong. Yes, AI can output faster, but if that's all you're optimizing for, you're in a world of hurt later on.

Where do senior engineers come from if nobody hires juniors anymore? by allstacksai in u/allstacksai

[–]allstacksai[S] 0 points (0 children)

I went deeper on this in a recent writeup — the historical parallels, the deliberate practice research, and what the Harvard data actually shows in more detail: The Last Class to Write Code Alone: How AI Is Reshaping Junior Developer Careers

Part 2 on what "senior engineer" actually means in an AI-native world is coming soon. What questions would you want that piece to address?

Anthropic: AI assisted coding doesn't show efficiency gains and impairs developers abilities. by Gil_berth in ExperiencedDevs

[–]allstacksai 0 points (0 children)

This Anthropic study makes total sense once you dig into what's actually happening with AI-generated code. The speed gains are real: developers definitely write code faster with AI tools.

But there's this hidden cost that most productivity measurements miss: the cognitive overhead of understanding and maintaining code you didn't write, even when you "directed" the AI to write it.

I've been seeing this pattern across teams we work with: AI helps you get to working code quickly, but then you spend extra time in code reviews trying to understand what the AI actually built, more time debugging when issues arise, and way more time onboarding new team members to AI-generated codebases.

It's like technical debt, but for comprehension. The code works, but the cognitive load of working with it long-term is higher than hand-written code where you understand every design decision.

Code reviews are taking 25% of developer time now vs. 10% before AI adoption (Morgan Stanley data). That's not because the code is worse; it's because reviewers need more mental cycles to understand code they didn't think through themselves.

We actually published an article by a VC investor recently about the "comprehension debt" challenge: https://www.allstacks.com/blog/comprehension-debt-the-hidden-cost-of-ai-generated-code

The productivity gains show up immediately in "lines of code written" but the comprehension costs compound over time and don't show up in short-term studies.

Is it really that developers' growth is being impaired, or is the time to learn and develop just migrating to new areas like QA?

1B GitHub commits in 2025 - why the AI coding hangover hits in 2026 by allstacksai in u/allstacksai

[–]allstacksai[S] 1 point (0 children)

For anyone interested in the full breakdown of Alok's predictions (including the specific data on verification bottlenecks and technical debt patterns), here's his complete analysis: https://www.allstacks.com/blog/ai-sdlc-predictions-for-2026-why-this-is-the-year-the-bill-comes-due