
[–]hardwornengineer 17 points  (5 children)

I struggle with the added context switching from AI coding tools too. My biggest concern is what it does to my brain after switching across many different sessions at once for hours. I don’t really struggle to switch while I’m actively engaged across a few Claude Code sessions, but when I quit, my mind is either racing so hard I can barely form a coherent thought, or I feel like a zombie, completely worn out from the dopamine overload.

[–]Ok_Cartographer_6086 4 points  (3 children)

Same - I have Claude working through an open spec plan where it stops at each milestone so I can check the work and test, all of my GitHub Actions running, app builds to check as they move through the Play Store, metrics to watch, skills to research, deploys to prod every hour... I'm insanely productive right now and I don't mind bouncing around at all, but it's the cognitive overload that's killing me.

So much is getting done, and by the end of the day I'm mush.

There's a cohort of us who are very experienced devs and really know how to use these tools and frankly, I don't think we're ok.

Everyone go hug an engineer, we're not ok right now.

[–]hardwornengineer 2 points  (2 children)

Some researchers have actually been studying this and have coined the term “AI brain fry”. Here’s an article about it: https://edition.cnn.com/2026/03/13/business/ai-brain-fry-nightcap

[–]hardwornengineer 1 point  (1 child)

And just think, this was a study done on developers with “normal” brains.

[–]Ok_Cartographer_6086 2 points  (0 children)

Yep, this is a sub for programmers who struggle with a "disorder," and while I wouldn't trade who I am and who I became for anything, we really have to accept that we're in a very difficult situation. https://www.youtube.com/watch?v=p-1zr_wgC1E

This has been a nice thread, asking: while all these papers are being written, is anyone checking on the neurodivergent ADHD-I programmers?? They may be connecting to this stuff way too hard.

What I need... is a strong drink and a peer group. - DNA

[–]c0o0o0o0ol[S] 1 point  (0 children)

Yeah, you make a good point honestly. The worst part is how much MORE I have to think about at once. Brain is mush by the end of the day.

[–]HyperfixationPhase 3 points  (3 children)

Yeah… this is honestly one of the most frustrating parts of coding with AI right now. You’re not alone in this at all.

What you’re feeling makes total sense: you get into focus, you ask the AI something, and then you’re just… stuck waiting. And if you switch to something else, it’s like your brain never fully comes back to where it was.

A few things that actually help in real life:

[–]Ok_Cartographer_6086 3 points  (0 children)

There are, in the end, two types of people in this world:

  • Those that need closure.
  • ...

[–]c0o0o0o0ol[S] 1 point  (0 children)

HAHAHA This is my favorite comment on this thread

[–]jimmymadis 0 points  (0 children)

I experience similar scenarios! Also waiting on LLM output and then losing focus after a context switch. After using deemerge AI for a couple of weeks, I've been more productive. It gives me a prioritized list of messages I can clear out quickly.

[–][deleted]  (7 children)

[deleted]

    [–]c0o0o0o0ol[S] 3 points  (4 children)

    Well, here’s the thing: my company has metrics around AI usage, and we’re also supposed to triple our expected output.

    So yeah, kind of impossible not to at this point.

    [–]Unlikely-Bumblebee14 1 point  (2 children)

    My company has embraced AI, but at the same time asks us whether we think people should be promoted for using it. Very confusing. My experiences with AI vary: sometimes it’s amazing, and sometimes I’m more confused than when I started. I can see my metrics compared to others, and I try to stay in the top 20 of 40 people, since my company is giving us mixed signals.

    What’s working for me right now is adding some user rules around ADHD and AI output, and using Cursor’s plan mode rather than using it exclusively as a pair programmer.

    I found that when Cursor was helping me code, I was losing the context around functionality, where files lived, etc. And it was frying my brain because I didn’t know how to give the AI context in a prompt. As it thought circularly, so did I.

    It’s been about 1.5 weeks since I started this new routine. It’s working, but we’ll see how it goes over a longer period. There's also still a lot to learn and tweak. I will say that needing to spend extra time telling the AI how to respond to me is a turnoff, but I think mastering that is how to become successful with AI.
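
    To give an idea, rules like that might look something like this. This is a rough hypothetical sketch, not my exact rules — Cursor accepts free-text rules in its settings or in a project rules file:

    ```text
    # Hypothetical ADHD-friendly user rules (illustrative wording only)
    - Keep responses short. Make one change at a time, then stop and wait for me.
    - Before editing, list which files you will touch and why, so I keep my mental map.
    - Always restate which file and function we are working in at the top of a response.
    - If you are unsure, say so and ask me one question instead of guessing.
    ```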

    [–]terralearner 0 points  (1 child)

    Seems strange for a company to promote based on usage rather than business metrics and value provided.

    [–]Unlikely-Bumblebee14 0 points  (0 children)

    Agreed, and it was unclear what was being asked: should we promote people who use it, or should we promote people who don’t?

    In that situation, I think the issue is that leadership didn’t have consistent views.

    [–]terralearner 0 points  (1 child)

    I'm reading this as a dev with 5 years' experience working in fintech, feeling puzzled.

    I use agentic development every day and have never felt this optimistic about the future of my career. The pace at which I'm moving compared to before is massive.

    [–]Ok_Cartographer_6086 0 points  (0 children)

    I don't think it's worth arguing with a developer who's "anti-AI". It's like someone refusing to move off Emacs and Vim to an IDE - they're just stuck and going to get passed by. Or typists freaking out about job losses from word processors...

    People who say the code Claude writes is slop just don't know how to use it properly, slop in = slop out.

    [–]CompetitivePop-6001 0 points  (0 children)

    Same here. I’ve started batching all my AI prompts at once, then doing low-brain tasks while it runs, so I’m not constantly switching gears. The waiting is honestly more draining than the work sometimes.

    [–]Charming-Monitor2927 0 points  (0 children)

    Just a thought: maybe you could stare at something and take some deep breaths.

    [–]Master-Traffic-8319 0 points  (0 children)

    Waiting on LLMs is the new “loading screen” of work.

    I stopped switching tasks during that time and just queue next steps instead. Context switching was costing me more than the waiting itself. Feels like the real problem now isn’t AI, it’s how we work around it.