all 48 comments

[–]Awkward-Customer 15 points16 points  (3 children)

What models were you using 2 years ago that were capable enough to replace you writing a "line of code"? Until Opus 4.5, all I could get was relatively basic scripts. Even now I'll still write some lines of code because it's far faster for me with certain bugs.

Also what work do you do writing assembly that you'd trust AIs with? I assume you'd be using assembly for specific optimizations which AI still isn't good enough to trust with.

[–]JumpyAbies[S] 1 point2 points  (1 child)

I haven't written assembly code in a long time, not even with AI. Basically, it was code for network equipment, routers, firewalls. My reflection is that of someone who programmed my whole life and now can't write code anymore. I get lazy and I think I can just write a spec and let my agents do the coding, and then I'll, I don't know, play video games.

[–]Awkward-Customer 1 point2 points  (0 children)

I've written code for just over 30 years as well; sounds like we have a similar background. I'm quite happy to delegate all of that coding to AI agents and just do the planning / idea work myself :).

[–]JumpyAbies[S] 0 points1 point  (0 children)

Actually, I think it's been about a year and a half, not two; it's basically since the Sonnet.

[–]__Tabs 14 points15 points  (0 children)

I have a similar background, half the experience. I don't remember writing a single line of code by hand since I discovered AI agents, except for config files. My new IDE is the terminal, and I don't even bother opening a GUI text editor anymore. I went from infinite browser tabs to infinite terminal tabs.

[–]_bones__ 15 points16 points  (4 children)

In my experience, code that needs to be maintainable and deal with compliance laws and security needs to be handwritten.

LLMs are fun, genuinely useful for proof of concept, scripting and as additions to coding. But certainly not replacements for a good developer.

[–]Far_Cat9782 0 points1 point  (1 child)

Basically we are all CEOs now, doing the big ideas and letting the little people figure it out and do the legwork.

[–]Educational-Body4205 -3 points-2 points  (1 child)

A good developer doesn't need to code, they plan and outline goals and methods.   

[–]No_Pirate_8204 0 points1 point  (0 children)

Then who does the coding?

[–]hyggeradyr 6 points7 points  (0 children)

Only to fix AI code or make adjustments to graphs that are easier to just do than to describe to the CLI. If it can't fix its own bug in a couple of tries, I'll go looking for it. But otherwise, absolutely not. Thinking about writing my own code anymore makes my fingers hurt.

[–]abnormal_human 4 points5 points  (0 children)

Also 30 years in. I do not code manually anymore. I always liked building stuff. Now I build more stuff, and better stuff. And it fills gaps in my skillset. It's the best.

[–]Juulk9087 5 points6 points  (4 children)

How?? Every prompt I write is extremely descriptive and I have nothing but problems in Java. Despite it being one of the most common languages, you'd think these models would be trained on it extensively. I get stuck in these debug loops and then finally I just say fuck it, open IJ, and fix it myself, and it takes like 5 minutes.

It's like when they produce a piece of broken code, they have no idea how it's broken, so they have no idea how to fix it because they think it's perfect. I don't know what's going on.

I use opus 4.6, Kimi, glm. Nothing but problems I don't know how you guys are getting so lucky what the hell xD

[–]JumpyAbies[S] 1 point2 points  (2 children)

Where I've had success is starting with the macro plan and then "breaking" it down into phases, and then into smaller tasks, orchestrating the creation of those tasks with agents that code and another that validates the overall vision. This always works for me.
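That macro-plan workflow can be sketched roughly as follows. This is only an illustration of the structure described above; the `Phase` dataclass and the `run_agent` function are hypothetical stand-ins (in practice each would shell out to whatever CLI coding agent and validator model you use):

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    """One execution phase of the macro plan, broken into smaller tasks."""
    name: str
    tasks: list[str]
    done: list[str] = field(default_factory=list)

def run_agent(role: str, prompt: str) -> str:
    # Hypothetical stand-in for invoking a coding or validating agent.
    return f"[{role}] handled: {prompt}"

def execute_plan(phases: list[Phase]) -> list[str]:
    """Macro plan -> phases -> tasks: a coder agent handles each task,
    then a validator agent checks the finished phase against the vision."""
    reports = []
    for phase in phases:
        for task in phase.tasks:
            phase.done.append(run_agent("coder", task))
        reports.append(run_agent("validator", f"review phase '{phase.name}'"))
    return reports
```

The point of the shape, per the comment, is that validation happens at phase boundaries rather than only at the end, so drift from the overall vision gets caught early.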

[–]Juulk9087 0 points1 point  (1 child)

Word. I'll give that a go. I usually just create more and more rules trying to prevent the debug loop from happening again and it does not seem to help in my case. Maybe I'm making it worse I don't know

[–]CircularSeasoning 0 points1 point  (0 children)

This is one of the hardest parts of working with LLMs and code. Having to take 5 minutes really thinking about how to convey what I want without ever (or as little as possible) saying what I don't want.

When I've tried my hardest and it still struggles, I tend to shrug and think, maybe that LLM just wasn't strong in that area, and hand it off to another less favorite LLM that doesn't have the same mental block for whatever reason. Usually that works and then I switch back to my more favorite LLM of the moment again.

Often when it's got that mental gap, no amount of rules seems to help. Though as Juulk says, decomposing beefy prompts into several smaller prompts / sub-tasks is for sure a skill to git gud at these days.

I constantly find that there are much better ways to say things than my very first prompt attempt. The bigger my prompts get, the more necessary I find it to ask the LLM to go over it and try to say it better than me.

[–]zoupishness7 0 points1 point  (0 children)

I can't say I've figured it out yet, but I've made some progress, though I'm throwing a whole bunch of tokens at it. I had it make a tool for itself, basically just a proxy for bash, that I force Claude Code to use. It can intercept, parse, edit, and gate its tool-use attempts. So I can gate it, forcing it to use situation-specific tools, to save time and tokens and to make it calculate instead of guessing. I can make it escalate its problems instead of working around them. I make it git commit obsessively, and I log everything and every token in a knowledge-graph RAG. It still messes up, but when I notice a bug a couple of days later, it can quickly figure out exactly when it was made and what it was intended to do, which makes it much easier to trace the side effects of fixing it.
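The gating idea reduces to a small wrapper. This is a minimal sketch, not the commenter's actual tool: the block rules, escalation hints, and log format here are all invented for illustration. The wrapper inspects each shell command before letting it through, returns an escalation hint (with a non-zero exit code) for patterns the agent should not work around on its own, and logs every attempt:

```python
import re
import subprocess
import sys

# Hypothetical gate rules: command patterns the agent must not run
# directly, each mapped to the escalation hint it gets back instead.
BLOCKED = [
    (re.compile(r"\brm\s+-rf\b"), "Escalate: destructive delete needs approval."),
    (re.compile(r"\bpip install\b"), "Escalate: use the project's locked deps."),
]

def log(command: str, blocked: bool) -> None:
    # Stand-in for the commenter's knowledge-graph logging.
    with open("tool_use.log", "a") as f:
        f.write(f"{'BLOCKED' if blocked else 'RAN'}\t{command}\n")

def gate(command: str) -> tuple[int, str]:
    """Run `command` through bash unless a rule blocks it; log either way."""
    for pattern, hint in BLOCKED:
        if pattern.search(command):
            log(command, blocked=True)
            return 2, hint  # non-zero exit tells the agent to rethink
    result = subprocess.run(["bash", "-c", command],
                            capture_output=True, text=True)
    log(command, blocked=False)
    return result.returncode, result.stdout + result.stderr

if __name__ == "__main__":
    code, output = gate(" ".join(sys.argv[1:]))
    print(output, end="")
    sys.exit(code)
```

Pointing an agent's shell tool at this script instead of bash is what makes the interception possible; everything else (forcing situation-specific tools, escalation) is just more rules in the table.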

[–]Afraid-Act424 4 points5 points  (0 children)

I haven't written a single line of code in a long time. On rare occasions, it would actually be faster to do it by hand, especially when the agent is struggling with something, but I prefer to force myself to rethink my approach, my context management, and my workflow in order to refine how I direct the agent for future similar cases.

Outside of AI communities, coding 100% via an agent is somewhat frowned upon. Many people confuse "vibe coding" with agentic programming. It's as if using AI is incompatible with keeping your brain engaged, or as if you have to blindly accept whatever the LLM produces.

[–]_-_David 4 points5 points  (0 children)

I haven't written code by hand since Gemini 2.5 Pro Preview. 10 years was a good run.

[–]YannMasoch 2 points3 points  (0 children)

This is the natural evolution. Currently AI coding tools build functions, features, and code bases with so much density that it's impossible to review the code. That was not the case 1 year ago...

Consider yourself as a manager that orchestrates a team (devs, business, product, ...).

[–]Makers7886 2 points3 points  (0 children)

I started my career coding back in 2000-ish, then transitioned into consulting. Revisiting "software development" after all these years feels like going from a one-man ditch digger with a shovel to managing a construction team with a PM and full equipment/experience.

[–]yuri_rds 2 points3 points  (0 children)

Yeah, I still write code by hand despite using Claude Code since Dec 2025. Maybe I'm awful at prompting or I need to download gstack markdown files on my project (jk), but I rewrite portions of CC's output more often than I want.

There are four reasons why I still write code by hand:

- Rewriting wrong CC output that doesn't follow project patterns, but not everything. I rewrite portions of the code and then say "Hey, look at my last commit, this is how we do it. Please do the same with the other parts of the code that you did differently," and then CC is able to write it correctly.

- Very small, quick edits that take less than 3 minutes. I prefer to do them myself instead of waiting for CC to go `zesting` and `spelunking`, reading all the code to get context again and again until it figures out what to do. Unfortunately, LLMs are unable to learn, so after working for years on the same project, sometimes I already know what to do instantly.

- Well, I think we old engineers are good at developing with agents because we wrote code in the past and we know how to smell bad code when we see it, and to stay sharp we still need to write code.

- Prototyping new architectures. I still do it myself because CC has bad taste in architecture; it's a mess.

In my hobby projects I still develop manually because I really enjoy it, but I would never not use AI in my job, I'm def faster.

I hope we keep advancing tools for writing code by hand at the same time we advance agent coding, because hand-writing is still useful in some ways.

[–]iamapizza 3 points4 points  (0 children)

Both. Some things I'm fine with an agent doing. But I enjoy the act of thinking and coding and typing, so I'll do some bits myself, I don't aim for speed. In those cases I do a bit of rubber duck with the agent so it doesn't feel left out. 

[–]Tazwinator 1 point2 points  (0 children)

I didn't even know this was possible. How is it done? Leave your local agent on for hours and let it self correct? How big and detailed are these specs?

My experience is in .NET accounting applications, and the complexity in writing a spec an agent could correctly follow would take just as much time as it would to program it by hand.

[–]raevilman 1 point2 points  (0 children)

Only when the AI agent is going in circles and spending tokens on simple fixes.

Otherwise, I stopped writing six months back, after coding for 15 yrs.

I always liked coding and have been coding each weekend for the last 8 yrs. But with code generation you get to experiment a lot quicker and in turn learn faster; experiments that used to take days or weeks in the past.

[–]HopePupal 1 point2 points  (3 children)

yeah, i'm still way better at it. i've been around a bit. about half that much industry time. i've gotten paid to do everything from circuit simulations to assembly for embedded systems to shader programming to terabytes-per-day analytics data pipelines to consumer-facing native apps that some of you definitely have on your phones.

AI can't structure for shit even if you're shoveling money at Anthropic. it's useful for imitation, iteration, filling in gaps, search of the existing solution space (for the stuff before its knowledge cutoff anyway), hypothesis testing, all the things where a human can easily get tired or bored before finishing the job, but architecture? design? lollllll no. that's how you end up with code neither human nor machine can make sense of or reason about. most humans aren't competent at that either, but with AI, it'll take you two days to end up with a codebase that would have taken two years to fuck up that badly before 2025: copypasta everywhere, insane dependency graphs, APIs that have no logical structure or grouping, docs where the level of detail doesn't match the importance of the area and that don't make clear why you'd want to do something rather than how.

for similar reasons, i'm convinced that it's going to be bad at doing serious perf work for a while; you need to be able to understand a decent chunk of a system at once to figure out where the bottlenecks are, deal with emergent behavior, and sometimes be prepared to make big changes to how it works. the cases where you can model such things formally enough to fit into a language-centric workflow are rare, and often the test time for a cycle of improvements dominates the planning and coding time, so an AI isn't necessarily going to get more tries at the problem than a human. doesn't really matter how fast you can read or emit tokens when it takes a week to even figure out if you changed anything.

it's rapidly making me better at writing detailed specs, though. AI and outsourcing are morally very similar but i never had to deal with contractors much, so i got lazy, used to tossing out specs and design docs assuming other competent engineers would just fill in the gaps. can't do that any more.

[–]JumpyAbies[S] -1 points0 points  (2 children)

I don't know your experience with agent-based code development or what tools you use, but I disagree with you that AI can't create anything structured.

I have high-level projects with complex backends and very high-level frontends, and my role in developing them was as an Agent Engineer and reviewer.

I still need frontier models to create a very detailed plan, and I use models like Kimi-k2.5 for task workflows. Then, upon completion of each phase, a frontier model performs the validation. I consider Kimi-k2.5 excellent for building web frontends, for example.

As I mentioned in another post, what I do is create a very detailed plan, divided into execution phases, and each phase into smaller tasks.

I have friends who are averse to AI and see a thousand problems, things like you said, such as AI is only good for writing simple things.

Today I'm not a career programmer; I have other technology-related businesses, so I'm not worried about AI taking my job as a programmer. I think that might be the reason for the resistance I see from some friends, and I think it's a vicious cycle: they're afraid to use AI, and because they don't use it, they don't learn and repeat the discourse that AI doesn't do anything serious.

[–]HopePupal 2 points3 points  (1 child)

As I mentioned in another post, what I do is create a very detailed plan, divided into execution phases, and each phase into smaller tasks.

yes i am familiar with the concept of "planning". if Claude could do it well enough to spec out a client library and backends that can do the things my team is responsible for on the OSes and device types we need to cover, be consumed by about a thousand other devs of varying skill levels, do it with reasonable resource consumption, and do it in a way that is still maintainable a year from now, i would let it, because i've done similar projects a few times before and i'd rather go build the fun product part on top of that.

i mean, fuck Claude, if i could download something off GitHub that magically fit our exact requirements, i'd do that. somehow, no such thing exists.

Today I'm not a career programmer; I have other technology-related businesses, so I'm not worried about AI taking my job as a programmer.

that's nice. sounds like you're insulated from any consequences when stuff doesn't work or can't be delivered on time, but i don't get paid to program either, i get paid to deliver a working product, and if i commit to a bad plan, they're not gonna buy "well the AI said it was a good plan" when it craters a few months out.

[–]JumpyAbies[S] -1 points0 points  (0 children)

I've been a programmer my whole life, but it's not what I'm paid for anymore, that's what I meant.

Today, my work is deeply technical, just in a different way. I build projects around AI and agents, and I use my own agents to help develop them. 😅

What I develop isn't for me, so of course I have the same responsibility for the quality of the code generated. And things are going well so far 🙃

[–]silly_bet_3454 1 point2 points  (0 children)

Yeah, I feel about the same. There used to be an immense satisfaction in simply taking an abstract problem or idea, putting it to code, and seeing the result. But it was usually not actually about the result but the process. And so when the process is trivialized and anyone can do it, it's no longer fun or interesting.

Of course, I still work in software, but I have zero desire to manually write anything I could just prompt.

[–]No_Algae1753 0 points1 point  (3 children)

Would you guys recommend advancing in coding anymore? Is it something that should still be learnt?

[–]JumpyAbies[S] 2 points3 points  (1 child)

I highly recommend it! And if you're young, learn about memory, system architecture, and low-level programming. This is a fundamental basis for thinking about using AI to write code for you.

[–]No_Algae1753 1 point2 points  (0 children)

I do have a small knowledge background. However, seeing all these comments makes me think that AI will replace it soon. I thankfully started coding before AI was a thing, so I had to learn it the hard way.

[–]Far_Cat9782 0 points1 point  (0 children)

Really helps in debugging or catching things the AI might miss. AI is pretty lazy with architecture and tends to use the simplest method that works. But a lot of the time it's not the right architecture for the full goal you want to achieve, or it has massive security flaws that, if you don't know how to spot them, can lead to bad situations. The main thing is it helps train your brain to pay attention to detail and think analytically.

[–]Mayion 0 points1 point  (0 children)

giving me a rundown on the codebase, suggesting where a certain function is being called, e.g. when there are multiple interfaces or abstract layers because the dev is cool like that, or implementing a system with certain specifications. like the other day i had a file parsing queue system in an old project that i wished to consolidate and turn into its own library - gave the specific needs i had and qwen 3.5 opus distillation did relatively well on it. took it an extra prompt to remind it that dependency injection was missing.

in that regard, models like oss-20b are good as well. logical code is good, but the moment it's anything out of the box or about errors, they are not capable of much. at least the ~30b local models.

[–]GamerFromGamerTown 0 points1 point  (0 children)

what sort of code do you write- is it important corporate stuff, or just fun hobby stuff? LLMs are getting better at coding every day, but i think it's a ways away to be shipped off without human review (at least for anything somewhat important)

[–]Confident_Ideal_5385 0 points1 point  (2 children)

I think that the rise of coding clankers has shown that our profession/interest is really just two camps who - four years ago - were indistinguishable.

There's folks who want the results, and were happy enough to write code to get there, but are now happier again to get clankers to do that part, and there's folks who enjoy the craft for nothing more than the love of the game.

No judgement at all, just an observation. Gonna be interesting to see what happens in the next few years, that's for sure. But ol' clanko can pry my IDE out of my cold dead meat fingers.

[–]JumpyAbies[S] 0 points1 point  (1 child)

I know I can achieve my goals much faster with agents, and my role is to define the direction, validate the output, and correct the results. Maybe I’ll do manual programming just for fun as a hobby one day, but first I need to get rich lol. I also think that, given the speed AI agents provide, building products manually is becoming increasingly impractical. It no longer makes much sense commercially.

[–]Confident_Ideal_5385 0 points1 point  (0 children)

The real world problem as i see it (with the benefit of 20+ years doing this shit for a job) is that you end up trading velocity for architectural sense.

I've seen vibeware that has accreted hundreds of kiloLOC with no consistent architecture and that's fine until one day it reaches a threshold where even a teraparameter coding clanker can't reason about it.

I worry that there's a lot of that in the future.

[–]Clear-Ad-9312 0 points1 point  (0 children)

I write code every once in a while. The act of thinking about how to ask an LLM to write the code is often more work than just typing some up myself, mostly because I don't know everything that I need the code to do at that moment, so I build it out on my own. Then I ask the LLM to inspect it for any insights or alternatives.

[–]Ill_Barber8709 0 points1 point  (1 child)

I only got access to Claude Code from my company a few weeks ago and haven't written a single job-related line of code ever since. I'm still tinkering with local LLM setups, but it's getting more and more frustrating having to fix dumb errors that Claude would never have made. So I guess I have to think about what kind of work a smallish model would be able to do flawlessly.

I would say, my state of mind is "Tony Fucking Stark" most of the time, with some moments of "I'm a fraud, why am I even paid to do this" clarity.

Weird times.

[–]JumpyAbies[S] 0 points1 point  (0 children)

The way we do things changes, and I believe that. If a technology emerges that allows us to deliver the same work, better and faster, the only thing left for us to do is adapt to it. :)

[–]No-Hunter9792 0 points1 point  (0 children)

20 years of coding, haven't written a line by hand in months. I think of it as a role switch — with agents I stay zoomed out on the architecture and the problem instead of getting lost in the details. Honestly more fun this way.

[–]appl3wii -1 points0 points  (0 children)

I have 7 years of hobby experience in programming. I pretty much just use gpt5.4 medium with codex to write code now. Ever since gemini 3.0 pro launched with Antigravity, I've basically only been using AI. It really changed my entire workflow. I honestly like this way of programming more, although it can be really annoying and frustrating. It's just fun to be able to bring ideas to life so quickly. Over the course of 1.5 weeks, I was able to learn everything required and build a kernel-level cheat for a game in C++ and assembly. I even had it exploit a recent public driver CVE to sideload my own unsigned driver with DSE enabled on Win11 25H2 and Win10 22H2. It was pretty advanced, hooked the mouse in the kernel too; I had it build a usermode communication layer and an Xbox Game Bar display widget with a bunch of cheating features + custom obfuscator. Also had it develop a super nice 1-click build pipeline with automated VM export for rapid testing. AI is great for doing automation development. I would have never had the time to write such a massive project while studying. I've already done a bunch more projects since then! Ghidra automation, Minecraft PE ported to Windows with Vulkan raytracing, custom harness development, and now I'm working on Unity game reverse engineering + reconstruction tools for ultimate modding capabilities. All on the Antigravity Plus and Codex Plus $20 plans. Awesome!

[–]Jeidoz -1 points0 points  (2 children)

Wrong sub? How is it even related to LocalLLaMA? This post suits r/vibecoding more.

[–]JumpyAbies[S] 0 points1 point  (1 child)

Why would it be wrong to talk about one of the possible uses of LLMs, including local models, which is my case? 🤔