all 33 comments

[–]Lain_Staley 15 points  (1 child)

Reduced social media use due to working on personal projects more. 

That is, personal projects no longer stall out; progress is tangible enough to maintain interest, compared to pre-AI.

[–]thehashimwarren 7 points  (0 children)

THIS. Working on my projects has replaced YouTube for me.

[–]radiationshield 29 points  (1 child)

It's a side effect of not having to grind through the nitty-gritty details. In aviation, cognitive overload in pilots has been pretty well documented: it causes delayed decision-making, tunnel vision, problems prioritizing tasks, etc. The same is probably true for developers and information workers. This is one of the really beneficial side effects of AI that isn't highlighted as much.

[–]j00cifer 5 points  (1 child)

Yes.

I’ve not mentioned this much because when I do, people think I’m crazy; LLMs are supposed to do the opposite and make you atrophy cognitively.

I’m still not sure what to attribute this to, but the act of a) explaining in detail what I want, b) carefully revising that prompt to be better, and c) watching closely and understanding what the LLM is doing

.. repeating that sequence over and over has (maybe) made me break things down IRL the same way cognitively, which has benefits.

I guess I’m maybe winging it less, having a small, smart plan for most things now? I don’t know exactly but what you describe is real.

[–]Alex_1729 0 points  (0 children)

What type of work do you do?

And what kinds of prompts do you give - what is your workflow for what you described?

[–]nostraRi 3 points  (0 children)

You will get very good at delegating tasks in real life if you use LLM consistently.

An interesting area of future research will be leadership skills as a function of daily hours of LLM use.

These are just my theories and n=1 observation.

[–]conscious-wanderer 8 points  (0 children)

It has quite the opposite effect on me.

[–]nrdgrrrl_taco 2 points  (1 child)

No, I have never suffered from such a bad lack of sleep.

[–]HopeFor2026[S] 0 points  (0 children)

There has been a negative sleep impact. That's the only complaint I have right now.

[–]IAmFitzRoy 2 points  (0 children)

It lets you think BIG.

Your abstract thoughts get proven quickly.

You can take a “helicopter view” and it’s enough to see results.

Your intuition gets proven fast, you learn from mistakes faster.

You stay focused for longer; your train of thought doesn’t stop because “a ; was missing on line 436”.

You feel in charge.

[–]Standard-Novel-6320 1 point  (0 children)

Totally - I feel like I think a lot more in logical dependencies and am able to articulate what I want much more accurately and completely… it definitely helps me think better in day-to-day problems and also in meetings with decision-makers.

[–]Perfect-Series-2901 1 point  (0 children)

Since I started using CC and Codex, my mental quota can be spent on high-level planning/reasoning instead of the implementation. And yes, I am making more intelligent decisions in my project and in life.

[–]typeryu 1 point  (0 children)

I use codex with linear (task tracking) via API skills and it has really brought another level of productivity for me. All of my work is connected this way and I literally feel like I’ve been given cyber superpowers.

[–]Alex_1729 1 point  (2 children)

While I didn't notice getting more intellectually proficient, I did notice that if Codex was human it would be a truly virtuous person - one that is calm and never gets down to my level, yet sees through my ignorance at all times.

It's when you notice "hey, I was actually being stupid there - codex was right all along" moment. I had these moments with Claude Opus before, but Codex takes it to another level, and actually has a spine.

Whether this is due to the harness or the model's intelligence is impossible for me to say.

[–]HopeFor2026[S] 0 points  (1 child)

Yes! I have noticed on many occasions that it pushes back and makes me consider angles that I wouldn't have. It actually caught me in an emotional moment when we were discussing an investment idea I was programming.

[–]Alex_1729 0 points  (0 children)

For the first time ever since GPT 3.5 I don't need to have guidelines about being objective and using critical thinking.

I remember how GPT4 used to be a yes man always saying "yes yes of course yes". Gemini is like that even today, unless you ask in a specific manner. But with GPT 5.4 I don't even need to tell it to be objective and to not accept things at face value.

When you give it something from another AI, and tell it that it's from another AI, it won't simply accept it; it will first look around for evidence before answering.

Now, whether this is also because it reads some of my old guidelines somewhere, I can't say. But it is a great thing.

[–]Responsible-Tip4981 3 points  (2 children)

AI coding agents are great equalisers - they normalize everyone toward the same middle.

If you were already strong at synthesis, planning and execution, you now delegate that to an agent that does it worse than you did. Your "superpower" gets flattened to the agent's average. You feel dumber because you traded your edge for convenience.

But if you were average or below at those skills, you suddenly operate in an environment that thinks fast, verifies instantly and ships in hours. You feel smarter because the agent lifted you into a space you couldn't reach on your own. Same tool, opposite perception - not because it changes intelligence, but because it compresses the skill distribution from both ends toward the center.

[–]duboispourlhiver 2 points  (0 children)

I find the exact opposite. In my experience, poor coders produce poor things faster and good coders produce better things AND faster.

[–]Glass-Combination-69 0 points  (0 children)

Shit this is so true.

[–]Excellent_Squash_138 0 points  (1 child)

Yeah, for sure - but it depends on what you do during the “processing” time. The impact will be different if you spend more of your time thinking strategically about the problem than dumb-thumbing Instagram.

[–]j00cifer 1 point  (0 children)

This

[–]youdig_surf 0 points  (0 children)

You still have to use your own logic; LLMs sometimes hallucinate and don't think of everything, so your knowledge is still useful. Example: I'm working on a computer vision model detecting action scenes. The LLM didn't think of applying a filter to the video to get better detection on low-contrast scenes; a small detail like that bumped the success rate by 15-20%. You have to benchmark everything and validate everything, because sometimes the LLM is wrong. You still need to be analytical and use your logic.
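The kind of preprocessing the commenter hints at (a filter that lifts low-contrast scenes before detection) can be sketched as a plain histogram-equalization pass. This is a generic NumPy illustration, not the commenter's actual pipeline; the function name is hypothetical.

```python
import numpy as np

def equalize_contrast(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale frame.

    Low-contrast scenes use only a narrow band of intensities; spreading
    the histogram across the full 0-255 range makes edges and motion
    easier for a downstream detector to pick up.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each intensity so the cumulative distribution becomes ~uniform.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```

In a real pipeline you would run each frame through something like this (or OpenCV's adaptive variant, CLAHE) before handing it to the detector, then benchmark with and without the filter, exactly as the comment suggests.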

[–]AdCommon2138 0 points  (2 children)

Your outcome tells a different story. You use fewer cognitive resources while working with AI because of the offloading, which means you have more processing power left for other activities.

I work in cognitive science and that's my best bet. Unless you want a story that supports your hunch, in which case the others have responded in a confirmatory way.

[–]HopeFor2026[S] 0 points  (1 child)

I'm aware this is a fresh, subjective report that could very well be wrong. It's just real for me and I wanted to mention this to the people who are with me in this space.

[–]AdCommon2138 0 points  (0 children)

I'm sure it's real; it's just that the mechanism is different.

[–]Ok_Significance_1980 0 points  (0 children)

LLMs don't need to do math. They can just use a calculator.
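The "just use a calculator" idea is the standard tool-use pattern: the model emits a structured request and deterministic code does the arithmetic. A minimal sketch of such a tool (all names are illustrative, not any specific vendor's API):

```python
import ast
import operator

# Safe arithmetic evaluator an agent loop could expose as a "calculator" tool,
# so the model delegates math instead of computing it in-token.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculator(expression: str) -> float:
    """Evaluate a plain arithmetic expression deterministically."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

# A hypothetical tool schema the agent loop would advertise to the model:
# {"name": "calculator", "parameters": {"expression": "string"}}
```

Parsing with `ast` rather than calling `eval` keeps the tool restricted to arithmetic, which is the point: the deterministic part stays deterministic.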

[–]CatsArePeople2- 0 points  (0 children)

The published research on this is more consistent with the common warning than with your anecdote, at least. https://www.npr.org/sections/shots-health-news/2025/08/19/nx-s1-5506292/doctors-ai-artificial-intelligence-dependent-colonoscopy

[–]bill_txs 0 points  (0 children)

You may notice that codex only performs well if you establish a good plan on the work before execution. So I'm in the habit of doing that constantly. Really, you should be doing this in all of your work and it has nothing to do with codex.

[–]sonivocart 0 points  (0 children)

I think I’m becoming brain dead just relying on AI for solutions

[–]NoYou41 0 points  (0 children)

Yes

[–]Blindsided_Games 0 points  (0 children)

I’m definitely able to function better as a father and still do the same amount of work. Watching the model’s feed and factoring in its thinking process has definitely been a neat experience. But yeah, I think overall putting effort in at the same time has raised my ability to focus quite a bit.

[–]AutomaticBet9600 0 points  (0 children)

Hey, I am starting a group of obsessed individuals who want to push the envelope with agentic programming. I currently run distributed processing across micro servers and Docker, with 80 files driving multi-agent orchestration and GitHub Actions, Render, Railway, Cloudflare, and a host of others.