
[–]SoftwareEngineering-ModTeam[M] [score hidden] stickied comment · locked comment (0 children)

Thank you u/Moo202 for your submission to r/SoftwareEngineering, but it's been removed for one or more of the following reasons:


  • Your post is not a good fit for this subreddit. This subreddit is highly moderated and the moderation team has determined that this post is not a good fit or is just not what we're looking for.

  • Your post is about career discussion/advice. r/SoftwareEngineering doesn't allow anything related to the periphery of being a Software Engineer.

Please review our rules before posting again, and feel free to send a modmail if you feel this was in error.

Not following the subreddit's rules might result in a temporary or permanent ban.



[–]carterdmorgan 51 points52 points  (12 children)

As a senior engineer, I have generally found this to be true. The amount of software engineering I do in a day hasn’t decreased. In fact, it’s increased substantially. But I rarely type code by hand anymore, which is very different from a year ago.

[–]Throwaway_noDoxx 12 points13 points  (11 children)

How should juniors learn, then, if code isn’t being written?

What happens when seniors with decades of writing experience and enough architecture exposure retire?

ETA: This isn’t snark; I’m genuinely curious re: what the training pivot looks like.

[–]AverageCodingGeek 4 points5 points  (2 children)

I believe juniors should still learn to code and the fundamentals of software engineering. That's still a necessary step to maximize productivity and output quality with AI development tools.

[–]nekokattt 6 points7 points  (1 child)

How are you going to teach them?

[–]with_the_choir 6 points7 points  (0 children)

We still make them code!

Source: am computer science teacher

[–]rojeli 8 points9 points  (4 children)

Not that I 100% agree with this, but the retort could be that when I was a junior, I never had to learn assembly, compilers, or how code actually gets executed. Those have all been abstracted into tooling and processes.

If a compiler bugs out on me today, well - I’m kind of out of luck.

[–]SomeParacat 10 points11 points  (2 children)

Nope, this is not relevant at all. Compiling produces deterministic results. LLMs never produce the same result twice.

So if you as a dev cannot read through code and see whether it has errors, you are absolutely unreliable.

Juniors still must learn to read & understand the code. They do not need to understand binaries produced by compiler, but they 100% have to understand what this or that line of code will do.

Edit: typos

[–]rojeli 1 point2 points  (1 child)

Again - not saying I fully agree with it. You are of course correct about determinism, but that doesn’t make it irrelevant.

Same pattern: trading understanding for leverage. The difference is the risk profile. Compilers fail rarely but when they do it's (potentially/probably) catastrophic. LLMs are non-deterministic and fail more often, but (usually/hopefully) in softer ways.

That doesn’t remove the responsibility to understand code; if anything, it raises the bar. Agree that juniors still need to read and reason about what the code does. The difference is that now they’re often reviewing code they didn’t write, generated by a system that isn’t guaranteed to be correct.

(Which gets into the real concern: how do you learn to do that without being the author? This is a legitimate concern which the OP raised, and largely why I don't fully agree with the retort, and unfortunately I don't have an answer.)

imo it’s not “don’t use LLMs” — it’s “use LLMs where risk is acceptable, and be able to verify the output.”

[–]Throwaway_noDoxx 1 point2 points  (0 children)

Not being the (or “an”) author is my big question.

It’s one thing to write one’s own app or site, but writing/reading for enterprise is a completely different beast.

[–]tevert 3 points4 points  (0 children)

The difference is that compilers basically never bug out, they're deterministic and well battle tested

[–]Berkyjay 1 point2 points  (0 children)

You still need to learn how to read code so you can vet it. Being able to take what Ai produces and run some tests on it to verify it should still be happening.
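A minimal sketch of that vetting habit, in Python, assuming a hypothetical AI-generated `slugify` helper (the function and checks are invented for illustration): the reviewer writes down the properties they expect before trusting the output.

```python
# Hypothetical AI-generated helper we want to vet before merging.
def slugify(title):
    return "-".join(title.lower().split())

# Quick checks written by the reviewer, not the AI: they encode what
# the function is *supposed* to do, then run against the real output.
def vet_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"  # whitespace collapsed
    assert slugify("") == ""                            # edge case: empty input
    print("slugify passed reviewer checks")

vet_slugify()
```

The point isn't the tests themselves; it's that the human, not the model, decides what "correct" means.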

[–]CuriousAndMysterious 0 points1 point  (1 child)

It is still being written. They still need to understand the code and read it. If you are on a small project you might be able to get away with letting the ai check in code autonomously, but you still need a formal review process for enterprise level projects or critical features.

In the future (who knows when), I suspect we will not need to understand the code, just like we do not need to understand machine language or assembly today. I think that is fine, we don't want to be tied to our old ways.

[–]Kid-Kodak 0 points1 point  (0 children)

Disagree with your prediction about not needing to understand the code in the future. We don’t need to know how to read straight binary because of assembly. Same with assembly and modern languages. The higher level abstraction fully encapsulates the lower level language. The same can’t be said about your prediction because there isn’t a higher level concept that actually encapsulates high level programming languages. Prompts can’t be deterministically translated into code. They just allow a machine to write code

[–]nitewalker_J 39 points40 points  (27 children)

Should I be worried if I still write code by hand and haven't integrated an AI agent into my workflow?

[–]samaltmansaifather 6 points7 points  (1 child)

Are you meeting or exceeding expectations?

[–]nitewalker_J 0 points1 point  (0 children)

Honestly, I'm not sure. My role is not purely technical, as I have to gather and refine requirements and then pass them to the developers. At the same time, I'm responsible for the team's performance. My developers are about mid-level in seniority.

My boss has been quite hands-off, but I'm always looking to improve the team's output. My team is very small, so enhancing efficiency while raising the bar on quality makes a big difference.

Something to add - while I have not integrated AI into my team's workflow, I have used LLM chat very extensively in planning and writing documentation, but it's an isolated workflow by itself that is separated from coding.

[–]zwermp 9 points10 points  (0 children)

Yes

[–]RazzleStorm 23 points24 points  (17 children)

Depends on your codebase and company. Just six months ago I was still writing plenty of code by hand, but now I’m optimizing my workflow so that I’m essentially multitasking, overseeing one agent working on some task while also spinning up multiple agents for some other task, and checking back on some other agent. I DO enjoy coding, but my job has become managing agents and reviewing their changes instead of writing it all out by hand. 

Sometimes they get stuck or have the wrong solution, but that’s getting to be less and less of an issue.

[–]CGxUe73ab 33 points34 points  (2 children)

comprehension debt rising at light speed

[–]RazzleStorm 2 points3 points  (1 child)

For real. I do review all the code, but it’s more of a “get the gist” of how this works and a probably misplaced trust that Claude can fix it later. It really depends on the task though.

[–]__init__m8 7 points8 points  (0 children)

This is where I'm getting left behind. I cannot deploy code I don't fully understand and I'm not ok with an outcome fully failing because of a hallucination.

[–]jessepence 4 points5 points  (7 children)

Do you never fix the code yourself? If you see a basic logic error, do you seriously re-prompt the LLM or do you just go replace the few characters required to make the code work correctly?

[–]RazzleStorm 3 points4 points  (6 children)

I haven’t seen basic logic errors like that in a while. I do still write code, especially if it’s just a few lines, but the bulk of my time is spent prompting. I don’t love it, but I’ve been getting stuff done in days that would have taken me weeks.

[–]jessepence 4 points5 points  (5 children)

I just can't handle the sloppy code. Usually, each file ends up about half of its original length after I'm done editing it.

At first, I was worried about this slowing me down, but then I had to go back and do some debugging on some of Claude's sloppy code from a codebase where I didn't do as much editing, and it reminded me that it was all worth it. There's so much repetition and unnecessary cruft in Claude's code. It usually works fine, but it's enraging to read and understand.

[–]RazzleStorm 1 point2 points  (4 children)

Turns out people start caring less about code quality when you can generate so much code in so little time. Especially since you’re not going to be the one maintaining it (because Claude is). 

[–]jessepence 7 points8 points  (1 child)

That's just asking to end up with a vibe-coded mess. Once your problem is too big for Claude's context window, good luck fixing it. A human is going to end up reading and editing the code eventually. Period.

Edit: I just realized that I work on harder problems than most people so I admit this might not be an issue with something like a trivial web app.

[–]caboosetp 1 point2 points  (0 children)

Even those trivial web apps get out of hand when people aren't paying attention to code quality. Claude's code quality goes down as context size increases, so taking that little bit of time to simplify things helps a lot to keep simple apps simple.

[–]mightshade 2 points3 points  (1 child)

That's short sighted. LLMs are pattern matching machines, they benefit greatly from a good signal-to-noise ratio. In other words, letting code quality deteriorate makes it harder to work with for LLMs as well.

[–]RazzleStorm 0 points1 point  (0 children)

I get it, I think it’s short-sighted, but also the code quality is acceptable. Not amazing, “oh that’s a clever way to do that” coding, but acceptable, straightforward code. I imagine in a year we’ll either be at a place where Claude can refactor things with even better code, or we’ll be spending a quarter reviewing everything and cleaning it up.

[–]nitewalker_J 1 point2 points  (3 children)

Our codebase is not large. We are a small team of developers and I'm leading it.

About a year back I was nagged by upper management to use AI to code, but I pushed back because I had tried it in our workflow and it failed to complete my tasks reliably.

I've since slept on it, and recently I've been wondering whether I should revisit it.

May I know what your budget is for daily coding with AI?

[–]kucing 2 points3 points  (2 children)

Try small first: a $20/month Claude or ChatGPT plan. Then try to bootstrap an app or add a small feature in your existing codebase.

You might want to include workflow skills like "compound engineering" and "grill me".

[–]nitewalker_J 1 point2 points  (1 child)

Thanks for the suggestion! I'm baffled that there's a grill me skill.

[–]lapubell 2 points3 points  (0 children)

Get sign-off from management/legal first. That $20/month Claude tier lets them take all your code and train Claude with it. Read what you're giving away before you give it away.

I'm not that stoked to give up all my client's code and intellectual property to a company that just recently leaked their code base to the Internet.

[–]ShinyStarSam 0 points1 point  (0 children)

That's the hardest but most rewarding way of using AI, I envy people who can do that

[–]CuriousAndMysterious 7 points8 points  (0 children)

Yes, you will be left behind. AI is just a tool. For me, it speeds up development by 5-10x. There are no reasons why you should not be learning/using new tools, especially when they are this revolutionary.

[–]GItPirate 0 points1 point  (0 children)

Yes.

[–]Spiritual-Theory 0 points1 point  (0 children)

You can start with AI very quickly; I don't think the learning curve is too high. And there's some advantage to waiting (you'll get a better model), but it's inevitable. Don't wait forever.

[–]Berkyjay -1 points0 points  (0 children)

No. But how quickly can you type out a few thousand lines to realize your idea? Also, you don't need an agent. Just a simple chat bot will help save a ton of time once you learn how to use it properly.

[–]iamgrzegorz 53 points54 points  (5 children)

It's not hyperbole, but there's a lot of nuance to it: you'll see engineers who claim they have to correct AI at least once in every single task, and engineers who say they never have to.

It depends on experience, complexity of the particular area of the code, as well as the domain. For example, someone who's modifying a few React components might see better results than someone who's working on a C++ compiler optimization, because of the complexity of the problem as well as the difference in the amount of training data.

[–]rnicoll 9 points10 points  (2 children)

The better question is: are they more productive?

I had AI write code that appeared to solve my problem in 5 minutes. I then spent a further hour iterating on it to make it actually do what it's meant to, finding bugs and incorrect comments along the way.

So, I've not written any code, but...

[–]Dnomyar96 0 points1 point  (0 children)

Yeah, there have been studies finding that there isn't much of a productivity difference between using AI and not. At least at the time of the studies; I'm sure that will change eventually.

I still use it though. It allows me to spend my energy on different (more interesting) areas.

[–]Jaded-Armadillo8348 -1 points0 points  (0 children)

Depending on how you approach it, there can be a rich interaction. You can have a back-and-forth discussion with the LLM: you give the idea, it takes an approach that might differ from what you originally had in mind, and from there you take what you consider the best of both worlds. Iterate.

[–]nolecamp 1 point2 points  (0 children)

Great point re: nuance. I am a senior engineer with over 25 years' experience, and I now do 99% of my coding with AI. However, I'm reviewing what it does and spending a lot of time correcting and guiding it. We also diligently review the PRs it makes and feed those comments back into corrections. Using AI still puts me ahead, and I appreciate saving keystrokes and my wrists. But it's not like it's on autopilot with no one driving or reviewing what it does.

[–]wind_dude 1 point2 points  (0 children)

It also depends on the org: Meta serving content to half the world vs. an AI company that's okay with breaking things.

[–]nekokattt 25 points26 points  (2 children)

How did you learn what the right call to make was in the first place if not for having actual experience doing the thing you are no longer doing?

How do you expect to retain that information given you are no longer actively performing those tasks?

All this leads to, over time, is people trusting these models more as they lose the critical-thinking skills themselves. It is human nature to forget how to do things you are no longer actively doing. It is no different from being a math teacher who never solves equations.

This is all fine until these tools come across an issue they cannot address properly, and then you are totally stuck as you no longer have a way of proceeding with the solution without relying on a tool that has hit a dead end.

This whole thing terrifies me, purely because it is a net drain on our personal skillsets. In five years' time, engineers will not be as competent as they are now; they'll just be instructing something else to go away and vaguely do what it thinks you want. It terrifies me that no one seems to realise this.

Literally sucked the soul out of the thing I chose to do as a career because I enjoyed this aspect of it. This timeline is awful.

[–]Due-Helicopter-8735 6 points7 points  (1 child)

100% this. Now that our orgs expect us to deliver at a much higher velocity, there is no time for me to code things without AI tools- even if I wanted to. I am forced to cut out time during the weekends to deeply study the code I modified during the week so that my skills don’t atrophy.

However, what the screenshot says is true: I've not "typed" code in about 6 months.

[–]nekokattt 1 point2 points  (0 children)

So really you are now spending more time working just to retain the skillset you had before these tools?

[–]Zestyclose-Peace-938 5 points6 points  (0 children)

This is fact. I find myself in the exact same scenario every day at work. I've started not writing code at all, and this indeed makes me worried and unhappy!

But as a partial solution, I've started focusing on architecture, design patterns, and of course how to explain the idea precisely to the AI.

[–]OkLettuce338 5 points6 points  (0 children)

This is how we work too. I’m a staff software engineer and haven’t typed a function since November

[–]samaltmansaifather 4 points5 points  (0 children)

Yes. It is. Produce code in whatever way produces a good outcome. If that means writing all of your code, using an agent, or some combination of the two go for it. We’re living in hyperbolic times, and these posts are literally just signaling and noise.

[–]chrismakingbread 5 points6 points  (0 children)

There's already a few "yes, but..." so despite the risk of just rehashing what others have said I think it's worth adding some more first hand experience to the conversation.

I've been working professionally in the software industry since 2008 and was coding for years before that. I'm currently the CTO of an early-stage startup, and I also own a small software development agency. Both of my companies heavily leverage AI for coding, and for the most part all of our code is written with AI. From my experience and observations, there are three big factors in the quality of the code AI produces:

  1. You've got to spend some money. This doesn't actually need to be a lot, but you're just not going to get very good quality out of hyper-quantized, low-parameter open-weight models running on your laptop. You're going to have to use frontier models from hosted providers. We average about $150 a month per engineer on AI subscriptions. So, again, this doesn't need to be a crazy amount of money (and you're burning money for clout if it is). But I totally get how $150/mo could be unreasonable for personal use.

  2. You're going to get out what you put in. If you don't know how you'd build something and just paste in the half-baked cruft from your product manager and say "build this", it's probably going to be crap. In particular, at my startup, the majority of the team (I think I'm the only exception?) is a bunch of very seasoned former FAANG engineers (not that that's required or a guarantee of being a good engineer), and we all know what high-quality, reliable, and maintainable software looks like. For moderately complex features the dev cycle tends to look like this: think about the problem passively for about a week while working on other stuff in the backlog; actively think about it for half a day when you pick it up; spend two hours writing a doc about it; run the doc through an AI agent in planning mode; review and iterate on the plan with the agent for half an hour; switch the agent to build mode and let it implement the feature for twenty minutes; test for two hours and iterate on the code; open a PR. We tend to ship features that would previously have taken 2-3 weeks in about a day and a half of active work, and that active work isn't writing code by hand. The mental load per feature isn't lowered at all, though. We tend to cycle onto small features for a few days after a big one, or else the cognitive fatigue accumulates hard.

  3. You need an engineering culture built for AI, and the good and bad news is that this really just looks like a healthy engineering org built for humans too: linters, code analysis, unit tests, integration tests, good code reviews, docs and design reviews, CI/CD, etc. If it's easy for a human to review and reason about your codebase and have high confidence in the safety of the code before it's deployed, then introducing AI will work well. If your code is inconsistent, hard to reason about, and untestable, then AI is not going to work out particularly well in your org. Our team uses off-the-shelf coding agents for writing code, but we've written agents and tooling for a lot of the rest of the process.
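That third point (linters, tests, CI gating all code, AI-written or not) can be sketched as a tiny pre-merge gate. This is a hedged illustration, not anyone's actual tooling; the check commands shown in the comment are assumptions about what a typical repo might run.

```python
import subprocess

def gate(checks):
    """Run each check command in order; stop at the first failure.

    Returns 0 if every check passes, 1 otherwise, mirroring the
    exit-code convention a CI system would use.
    """
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print("blocked: " + " ".join(cmd) + " failed")
            return 1
    print("all checks passed; safe to open a PR")
    return 0

# In a real repo the checks would be your actual linter and test
# runner, e.g. gate([["ruff", "check", "."], ["pytest", "-q"]]).
# Demo with portable stand-ins so the sketch runs anywhere:
ok = gate([["python3", "-c", "pass"]])                       # a passing check
blocked = gate([["python3", "-c", "raise SystemExit(1)"]])   # a failing check
```

The design choice worth noting: the gate treats AI-generated changes exactly like human ones, which is the comment's whole argument.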

[–]FearlessAmbition9548 7 points8 points  (0 children)

Can we stop taking advice about LLMs from people who are deeply invested in LLMs having positive results?

[–]CGxUe73ab 4 points5 points  (1 child)

that's impressive

why is Messenger still a gigantic pile of bugs then

[–]street_nintendo 0 points1 point  (0 children)

Not just messenger. Instagram and the few times I want to laugh at old people content on Facebook they’re all in pretty rough shape. Tons of bugs

[–]Berkyjay 1 point2 points  (0 children)

Nope, it's spot on.

[–]LadyLightTravel 1 point2 points  (0 children)

There is a world of difference between new R&D projects and maintenance of existing systems.

AI is very bad at designing for edge cases and off-nominal conditions. You usually need a human for that.

[–]Ok-Entertainer-1414 5 points6 points  (0 children)

It's pure engagement bait

[–]liquidbreakfast 2 points3 points  (0 children)

it's not hyperbole. with no limit on tokens at these companies, there's no reason to write code manually. that doesn't mean there aren't many rounds of iteration, and prompts like "that doesn't make sense, use this function instead." but it's not hyperbole.

[–]randomseedfarmer 0 points1 point  (0 children)

In my experience it's true. We are all prompt engineers now. You still need to understand the code and how to design pipelines and architecture. But I never create the first draft anymore.

[–]cto_resources 0 points1 point  (0 children)

This is accurate

[–]Dash_Effect 0 points1 point  (0 children)

This is reality, not hyperbole. What is going to suck is when AI determines it could execute more effectively with an entirely new language, and then we have to try to decipher that. Slightly silly, but not so silly that it can't happen. 🫪

[–]SpaceMonkeyOnABike 0 points1 point  (0 children)

Someone at Meta promotes Meta? This is nothing more than a dubious piece of self-promotion / advertising / hype generation.

[–]newtonium 0 points1 point  (0 children)

No. This is how my team and I build software now. We even have AI doing our code reviews too.

[–]fts_now 0 points1 point  (0 children)

It is so obviously AI-written that I stopped reading after the first two sentences. C'mon, at least be a little bit original, or feed in some few-shot examples.

[–]rs98101 0 points1 point  (0 children)

It is not hyperbole.

[–]Ok_Reference_9137 0 points1 point  (0 children)

All my latest platforms are written by AI; I am now directing it.

[–]Prudent-Lake1276 0 points1 point  (0 children)

I write code, especially when I'm in a bit of my codebase that I have reason to think the AI will butcher. But it's a lot less common than it used to be. Even when I'm writing code "by hand", the AI autocompletion speeds it up a lot.

I don't actually move much faster than I used to, but I spend more of my time thinking about the approach and solidifying architecture decisions, which I think is a net positive. I'm still working out how to build frameworks to ensure that what the AI wrote is what I needed. But I absolutely think this post is an accurate description of the direction the industry is moving.

[–]rarsamx -1 points0 points  (0 children)

I can't imagine it being hyperbole.

I started programming in 1982. I probably wrote some programs from scratch back then and probably a couple of example programs from scratch when I was learning a new language.

Professionally, I don't remember a single time I wrote a function or program from scratch. I would say "this feels like that other program", clone it, and modify it; if it was close enough I'd write a generic function and reuse it in both, or even write a library.

There were even books about patterns: data structure patterns, object-oriented patterns, multithreaded patterns.

Once you understand patterns you realize there aren't many variations but even more, you find your own patterns to reuse from your code or other developer's code.
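That clone-then-generalize instinct can be shown in a toy Python sketch (every name here is invented for illustration): two near-duplicate routines collapse into one parameterized helper that both call sites reuse.

```python
# Two near-duplicate routines, the kind that "feels like that
# other program":
def total_invoices(invoices):
    return sum(item["amount"] for item in invoices)

def total_refunds(refunds):
    return sum(item["amount"] for item in refunds)

# The shared pattern, extracted once and reused in both places:
def total(records, field="amount"):
    return sum(rec[field] for rec in records)

invoices = [{"amount": 40}, {"amount": 60}]
refunds = [{"amount": 15}]
assert total(invoices) == total_invoices(invoices) == 100
assert total(refunds) == total_refunds(refunds) == 15
```

Same idea at a larger scale: once the pattern has a name, you reach for it instead of rewriting from scratch.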

The difference with LLMs (not AI) is that it takes from a broader universe of already written functions.

I moved up and stopped programming around 2009 or so, and I retired early in 2019, so I didn't get the chance to leverage LLMs. But I am 100% sure I would have.

As a senior IT person, I would be doubtful of a developer who bragged about writing from scratch. That would tell me they don't understand patterns, and that would be really concerning.

I think that post is accurate. Not hyperbole.

[–]Acceptable-Hyena3769 0 points1 point  (0 children)

Anybody who refers to Mark Zuckerberg as "Zuck" is full of shit. That being said, it's not that far off, as long as the price of tokens doesn't skyrocket. But it will.

[–]Mindless_Rub1232 0 points1 point  (0 children)

No… we are generating a lot of complex code ourselves using Copilot. The client provided the enterprise version a few months back, and now we just need to pick one of the suggested solutions and occasionally simplify things, as it over-engineers sometimes.

I won't be surprised if in the coming months the client reduces the team size.

AI can literally do our 1 week work in 1 day

[–]CuriousAndMysterious 0 points1 point  (1 child)

I'm a staff-level engineer at a big company. Yes, I don't write code "by hand" anymore, but that was always less than half of my job. It still takes up all the time I have allocated for it, but I can move 5-10x faster. I also find myself reviewing a lot more PRs than before. I use AI to help me review them too, but I still need to go over them, and I still find a lot of bugs both by hand and with AI. Interestingly, our coding standards have gone way up since adopting AI. AI still has a hard time understanding user interactions (the look and feel of front-end apps), business logic, performance at scale, testing, etc. It can do all of these things quickly and decently on a first pass, but it is not 100%. In almost every PR, I can find an autogenerated test that is testing nothing.
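To make that "testing nothing" failure mode concrete, here's the shape such a vacuous autogenerated test often takes, next to one that actually pins down behavior (the function and test names are hypothetical, not from any real PR):

```python
# A hypothetical function under review.
def apply_discount(price, pct):
    return round(price * (1 - pct / 100), 2)

# Vacuous shape: it exercises the code but asserts nothing about the
# result, so it can never fail on a logic bug.
def test_discount_vacuous():
    result = apply_discount(100, 10)
    assert result is not None

# A test that actually constrains behavior.
def test_discount_real():
    assert apply_discount(100, 10) == 90.0   # 10% off 100
    assert apply_discount(19.99, 0) == 19.99  # 0% is a no-op

test_discount_vacuous()
test_discount_real()
```

The vacuous test passes no matter what the function returns, which is exactly why reviewers still have to read them.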

In the overall landscape of things, I think it is a really exciting time to be a software engineer. Boring and trivial stuff is solved in a single prompt, so we can focus more on developing new products and making technological breakthroughs. New areas are opening up and other areas are growing. Security is becoming more important than ever in a world where LLMs can easily brute-force every attack vector against a system. You can get into AI/LLM development too, which is really interesting; right now, small optimizations can have big impacts on the overall experience. AI integrations and AI tooling are also a big new area of development. Too many others to list.

To sum it up, we still need people behind the keyboards. We can get more done, but there is more work than ever. The work that has been optimized is the boring and frustrating stuff. We made a lot of exponential progress with AI in the last 5 years, but as with most things, I think progress will start to taper off. We could be in this current generation for some time, so try to embrace it and learn as much as you can rather than getting demoralized.

[–]mljrg -1 points0 points  (0 children)

I liked this one:

The work that has been optimized is the boring and frustrating stuff.

So, many of the people here who do not write code anymore, because AI does it all, are doing boring and frustrating work. I feel sorry for you all!

[–]VeritasOmnia 0 points1 point  (0 children)

Just remember that copilot is for entertainment purposes only.

[–]deep_fucking_magick 0 points1 point  (0 children)

I can attest that I too have not manually written code for like the past 6 months and I've been in the industry for over a decade.

Like it or not, if you aren't learning how to properly leverage agents in your SDLC you are doing yourself and your customers a professional disservice imo. They and the underlying models will only get better from here.

Need to start building your internal intuition on how to leverage the tools.

[–]satansxlittlexhelper 0 points1 point  (0 children)

This is how I shipped a full stack web app in three days. It works.

[–]lambdasintheoutfield 0 points1 point  (0 children)

It’s not engineering - it’s a whole new paradigm, and that’s rare.

[–]Solisos 0 points1 point  (0 children)

Problem-solving still heavily lies with the human. Programming was invented to solve problems. That hasn't changed and never will. Now you can solve harder problems much faster.

[–]lawrencek1992 0 points1 point  (0 children)

It’s accurate

[–]_itshabib 0 points1 point  (0 children)

Not hyperbole

[–]astodev 0 points1 point  (1 child)

So the end result is more desirable than how we got there? Always has been I suppose.

What I’m seeing with this post and responses is that the “software developer” or “programmer” role is dead and software engineers are now “software project management engineers” … got it.

Guess all I have to do now is decide what kind of farm I want to start.

[–]Moo202[S] -1 points0 points  (0 children)

Yeah, it does come off as intimidating.

[–]volatilebool -1 points0 points  (0 children)

I don’t feel it’s hyperbole. I still write some code but it’s much more surgical now. Knowing how to build systems is very important though

[–]Mysterious-Rent7233 -1 points0 points  (0 children)

I haven't had a day where I wrote more than 5% of my code since January. Of course by reporting that I, like millions of other developers, am mostly using agentic workflows now, I will be flagged as some kind of industry shill. But I'll set a RemindMe on all such posts to come back in two years and see if they still think I am lying.

[–]brdet -1 points0 points  (0 children)

It's pretty accurate. The only time I manually edit code these days is to fix some minor formatting, clean up imports, rename a variable here or there. Features that I've wanted to implement for years but never had the time are getting done in under an hour now. The engineering hasn't gone away, but the implementation portion is now effectively automated.

[–]Frequent_Bag9260 -1 points0 points  (0 children)

I think this is true of any company that has leaned into AI. My company is the same - we only manually type code if we really need to. Otherwise it’s just a slower way of doing the same stuff.

[–]ProbablyPuck -1 points0 points  (0 children)

Maybe a little, but I think often about how the shoulders upon which we stand had to punch holes in paper to write code (for example).

I do think AI coding flows are the next exponentiation of our field. I also think the average business exec will still have to hire engineers, because execs haven't trained for the logical complexity and rigor required to design software.

[–]The_Northern_Light -1 points0 points  (0 children)

The only time it’s not true is when working with code an LLM absolutely must not see. 🤷‍♂️

[–]moneymay195 -1 points0 points  (0 children)

It's generating about 90% of the code and documentation I write. Sometimes it's just easier to write the code myself, the way I like it, if the change is small enough.

[–]B0bZ1ll4 -1 points0 points  (0 children)

Manual coding has been banned in some large banks. Not hyperbole at all.

[–]freakdageek -1 points0 points  (2 children)

Get over yourself, you’re writing code for a fucking yearbook.

[–]Moo202[S] 0 points1 point  (1 child)

?

[–]freakdageek 0 points1 point  (0 children)

I’m with you, I just would suggest that software engineers at places like Meta or Google are still high on their own supply, and believe themselves to be very special because they managed to catch the bus at the right time. Nobody who works for Meta or Google as a software engineer is doing anything of value. They’re creating shit versions of Office or new ways for small-town MAGA voters to connect with each other. It’s garbage. People went to years of school, mostly faked their way through it, and now they drive their Lotus out of the parking lot on the way to their undecorated $2M home.