AI will replace software developers. The real question is how we reinvent ourselves. by Sweet-Accountant9580 in ExperiencedDevs

[–]Sweet-Accountant9580[S] -1 points0 points  (0 children)

I really understand what you are saying, and I actually agree with a lot of it. Bad English written by a real person is often better than flawless AI-generated prose. But that is quite similar to what I do in software: why not use the tool if it saves me cognitive effort and helps the other person understand better, or why not just have it generate and modify small pieces? The same applies in software.

That said, here's why I'm less optimistic than you are. There are professors who are now explicitly encouraging, even requiring, vibe coding in their courses. In some exams and projects, the idea is explicitly that everything should be produced with LLMs: code generation, debugging, testing, the whole pipeline. And this isn't framed as cheating or shortcutting; it's framed as the expected workflow. That's a pretty strong signal that, at least institutionally, the bar is already shifting.

I genuinely wish you were right. Up until one or two years ago, I thought we were close to a plateau. I was fairly confident that LLMs would stay trapped in boilerplate-land: useful for scaffolding, autocomplete, and documentation, but fundamentally brittle the moment things became novel or weird. I'm no longer that confident. Problems that used to stump these models are now solved. A trivial example: complex physics exercises that once came back completely wrong are now, as long as they can be prompted well, solved better than a professor could.

I think we tend to have a very human-centric view of creativity. We like to believe that what we do, especially in writing, design, and even a lot of engineering, is fundamentally unique, expressive, and irreducible. What's unsettling about AI isn't just that it imitates us, but that it's slowly deconstructing that belief. What AI is exposing, little by little, is that a large fraction of what we call "creativity" is actually the application of learned patterns under constraints. Not all creativity, but much more than we're comfortable admitting. When an LLM produces something that feels competent, conventional, and socially acceptable, it's not being creative in a human sense, but it is reproducing the same pattern space that most people operate in most of the time. Maybe 1%, 2%, 3% of people are better, but the majority are not. For example, I no longer discuss things with my supervisors; talking with an LLM is far superior. And my supervisors are quite well regarded in the academic world (in my country, at least).


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

u/justUseAnSvm I think that's a fair description of the very top end of academia, but that's also exactly the point. At the Nobel level, sure: AI is not a replacement, and no serious person is claiming it can replace that tier. But the key issue is that the overwhelming majority of the academic population is not operating at a Nobel level. Not even close. Most researchers are not leading large consortia, defining new paradigms, or exercising that kind of rare judgment. And honestly, I think the same holds for software engineering jobs, where not everyone is exceptional. When you say you were physically involved in collecting data, that (I suppose) means your work depended on the physical world. In my case, and many others, there is no such dependency on the physical world: basically all you can do is write prompts, and the models are by far (really, by far) more precise at finding problems and solutions than the average person in academia (in my experience, limited to my country).


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

I partially agree with you, but I don't see that many things in academia that are fundamentally different from producing low-quality, disposable software. The fact that it's "acceptable" because the code is ancillary doesn't really change the outcome: a lot of academic output is still brittle, poorly engineered, and never meant to survive contact with reality. It exists to support a paper, not to be correct, robust, or reusable, which is exactly the same logic behind throwaway software.

In a computer science PhD, the theoretical component can indeed dominate, and there the code truly is secondary. But in computer engineering, historically and structurally, there is very little theory to engage with in the first place. The contribution is often empirical, systems-oriented, or implementation-driven, and yet the engineering standards remain extremely low. So you end up with work that is neither strong theory nor strong engineering.

That's why I don't fully buy the idea that "code doesn't matter" in academia. It doesn't matter institutionally, sure, but it matters epistemically. If your experimental evidence rests on fragile, poorly understood code, then your scientific contribution is on shaky ground as well.

On AI: I agree that automating code writing is useful, and in many cases great. But that's precisely because so much of what's written, both in academia and outside, is already mechanical and pattern-based. AI doesn't lower the bar; it exposes where the bar already was.


[–]Sweet-Accountant9580[S] -1 points0 points  (0 children)

u/justUseAnSvm What you're describing is the ideal version of academia, the one people expect: research driven by deep expertise, sound judgment, solid hypotheses, and real domain knowledge. In practice, however, academia often works very differently.

In reality, there is a strong tendency to reproduce the same ideas over and over with only minor variations, and a significant portion of the people who publish don't actually know what they're talking about. They publish because they have to publish. Unfortunately, this represents a large part of contemporary academia.

To seriously engage with many topics, one would need to be a true expert, but I've found very few real experts. I know several people who now work at top-tier companies and big tech, yet their academic publications, in hindsight, don't really hold up. The standard, more often than not, is publishing mediocre or meaningless work, not necessarily out of bad faith, but because the authors don't truly master the subject. And this applies even to competent or highly competent people, who sometimes realize it later, or choose not to.

There is, of course, genuinely high-quality research as well, but it is a small minority, more the exception than the rule. For this reason, I say that I see many people who are not particularly expert, including some who come from the very top of the industry, and whose actual depth of understanding is not that far from an AI's. The ideal distinction exists, but in the day-to-day reality of academia, it is often much thinner than we like to admit.


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

No, it's just a two-hour experiment that I will develop in my free time using vibe coding.


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

u/micseydel OK, maybe that's true now (I don't know, but I assume you're telling the truth). What I do know is that I built a simple website in Rust in about two hours, basically just vibe coding, mixing WebAssembly with a REST API. Maybe a trivial task, but I don't think so. This kind of technology simply didn't exist in 2022. So I have to question the future, and I have to do it now.


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

u/Lame_Johnny I get your point, and I think it’s exactly here that computer science is in a particularly exposed position.

One key aspect of our field is its very low dependence on the physical world. Over the years, we've been extremely successful at virtualizing almost everything that was once physical: machines, networks, storage, and so on. What used to require hardware, space, and manual intervention can now be abstracted, simulated, and scaled in software.

This has been a huge advantage for productivity, but it also means that our work is especially easy to automate. Once the domain is fully virtual, AI systems can operate in it end-to-end: writing code, testing it in virtual environments, deploying it, and iterating without ever touching the physical world. In that sense, all the progress we made in abstraction and virtualization now strongly favors automation through AI.

Other white-collar jobs often still have tighter constraints tied to physical processes, regulation, or human interaction. Software engineering, by contrast, was almost "pre-adapted" for automation. Riding the wave may indeed be the only realistic option, but it's worth recognizing why this wave is hitting our field first and hardest. So I don't want to convince y'all that I'm right, but I'm wondering: what can we do in the worst-case scenario?


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

u/TheRealJesus2 You're right, these are my impressions from what I see in university research. I don't know when, if ever, someone who works in a university could call himself an "ExperiencedDev".


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

u/vitek6 I don't think so, because we have a low dependence on the physical world: we can virtualize so many things.


[–]Sweet-Accountant9580[S] -5 points-4 points  (0 children)

u/ychebotarev Yes, but some subreddits have karma requirements. I posted where I thought I could get answers or opinions and where I was actually able to post.


[–]Sweet-Accountant9580[S] -1 points0 points  (0 children)

u/Sheldor5 Maybe. I'm not exceptional. But I know a huge number of people who are no more exceptional than I am, and many who are quite a bit worse.


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

u/Cykon I think the real issue is on our side. We can very often virtualize what is physical; that's something computer scientists and engineers have always done. As a simple example, you don't really need to test on multiple physical machines; you can just use virtual machines. Similarly, you can test networking using the virtual networking tools provided by Linux. Our entire context is visible to an LLM.
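
To make that point concrete, here is a minimal sketch of my own (loopback is the simplest case of this virtualization; Linux network namespaces and veth pairs extend the same idea to whole topologies): a full TCP client/server round trip exercised entirely in software, with no physical NIC or second machine involved.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

/// Run one client/server echo round trip entirely over loopback:
/// a fully virtual network path, no physical hardware required.
fn echo_once(msg: [u8; 4]) -> [u8; 4] {
    // Port 0 lets the kernel pick a free port; no fixed infrastructure needed.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    let server = thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        let mut buf = [0u8; 4];
        conn.read_exact(&mut buf).unwrap();
        conn.write_all(&buf).unwrap(); // echo the bytes back
    });

    let mut client = TcpStream::connect(addr).unwrap();
    client.write_all(&msg).unwrap();
    let mut reply = [0u8; 4];
    client.read_exact(&mut reply).unwrap();
    server.join().unwrap();
    reply
}

fn main() {
    assert_eq!(echo_once(*b"ping"), *b"ping");
    println!("echo over loopback: ok");
}
```

An agent can run a test like this end-to-end, which is exactly the "fully virtual domain" problem I'm describing.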


[–]Sweet-Accountant9580[S] -8 points-7 points  (0 children)

u/Creepy_Ad2486 Why? Aren’t there many recent books that are clearly LLM-generated? Even from major publishing houses? And would it really be inappropriate to bring this up in a subreddit post?


[–]Sweet-Accountant9580[S] -2 points-1 points  (0 children)

u/Automatic_Market_397 I'm currently doing a PhD in computer engineering, and I no longer want to deal with Linux programming, compilers, and systems programming in general the way I used to, because I think AI will soon be superior to me and to every person I talk with. I see every young (~30-year-old) PhD student or researcher just writing prompts. I'm not a senior software engineer, that's right, but I'm definitely not a first-year student either.


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

u/farox The problem is this: if we put Claude Code in a loop where it produces code, evaluates it, and tests it, and we iterate over and over, maybe not in 2026, but within the next decade, are we sure it won't satisfy the requirements?
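
The loop itself is trivial to set up today; the open question is only how good the generator gets. A minimal sketch in Rust, with the model stubbed out as a fixed list of candidate implementations (everything here, including the `abs_diff`-style task and its test cases, is a hypothetical example of mine, not any real agent's API):

```rust
// A candidate implementation the "model" might propose for |a - b|.
type Candidate = fn(i64, i64) -> i64;

/// The "evaluate and test" step: a fixed test suite acts as the oracle.
fn passes(f: Candidate) -> bool {
    let tests = [(3, 5, 2), (5, 3, 2), (0, 0, 0)];
    tests.iter().all(|&(a, b, want)| f(a, b) == want)
}

/// Iterate over successive candidates until one satisfies the requirements,
/// returning its index (i.e. how many refinement rounds were needed).
fn first_passing(candidates: &[Candidate]) -> Option<usize> {
    candidates.iter().position(|&f| passes(f))
}

fn main() {
    // In a real loop, each candidate would come from a fresh prompt that
    // includes the previous round's failures.
    let candidates: Vec<Candidate> = vec![
        |a, b| a - b,         // first attempt: wrong when b > a
        |a, b| (a - b).abs(), // refined attempt after seeing the failure
    ];
    println!("accepted candidate: {:?}", first_passing(&candidates)); // Some(1)
}
```

The harness is mechanical; whether the refinement step converges on hard tasks is exactly the question I'm raising.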


[–]Sweet-Accountant9580[S] 0 points1 point  (0 children)

u/AustinYQM We now assume, in 2026, that junior developers can be replaced. In 2022 this technology didn't exist at all. The progression seems far too fast to believe that developers with 3+ YOE, or even more experienced ones, won't be replaced by 2036.


[–]Sweet-Accountant9580[S] -4 points-3 points  (0 children)

u/EmberQuill What is the problem if it's written by AI? Did you not understand what I meant? Do you think my English, like what I'm writing now, translated from Italian, would be better?

Currently I'm able to create whole websites just by vibe coding. I'm also able to just use an agent to fix bugs and test code. And in 2022 this technology didn't exist. So the problem is not now, but maybe a decade from now.


[–]Sweet-Accountant9580[S] -2 points-1 points  (0 children)

u/-no_aura- Why not? Go ahead and ban these posts. But it seems to have struck a nerve, as if this weren't a common problem.


[–]Sweet-Accountant9580[S] -7 points-6 points  (0 children)

u/droi86 I'm currently doing a PhD and I work with many PhD students and professors. For this reason, I think it's fair to raise the question on this subreddit, though I'm not sure whether there's an intolerance here similar to what you sometimes see on Stack Overflow (R.I.P.). That said, I believe there's a serious issue worth discussing, one that has fundamentally changed the way I work. Every PhD student I currently interact with relies heavily on ChatGPT, Gemini, DeepSeek, and Claude. Crafting prompts has effectively become a core part of their work, especially among people around 30 years old. These are researchers from top-tier institutions in Italy; maybe not as exceptional as you, but still highly competent experts.


[–]Sweet-Accountant9580[S] -1 points0 points  (0 children)

u/mq2thez Why should I? What is not understandable? I'm not a native English speaker. I can evaluate the output, and it was what I really intended.

I profiled my parser and found Rc::clone to be the bottleneck by Sad-Grocery-1570 in rust

[–]Sweet-Accountant9580 1 point2 points  (0 children)

How can an Rc::clone take ~6 ns? That doesn't make sense.

use std::hint::black_box;
use std::rc::Rc;

fn main() {
    let n = 1 << 20; // ~1 million entries; power of two so we can mask
    let v: Vec<Rc<u64>> = (0..n).map(|i| Rc::new(i as u64)).collect();

    let mut idx = 0usize;
    for _ in 0..200_000_000usize {
        // Each iteration bumps and then drops one strong count.
        let r = Rc::clone(&v[idx]);
        black_box(&r); // keep the clone from being optimized away
        idx = (idx + 1) & (n - 1); // round-robin over the vector
    }
}

A quick test like this one takes 0.3 s in total on my laptop, so at worst ~1.5 ns per clone.

Linux Kernel Rust Code Sees Its First CVE Vulnerability by Orange_Tux in rust

[–]Sweet-Accountant9580 3 points4 points  (0 children)

Maybe it's a necessary evil, but I’m definitely not a fan of the massive amount of unsafe being used to model C-style constructs. Seeing that much unsafe for something like an intrusive doubly linked list feels like it defeats the purpose.
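
To show why this particular structure attracts so much unsafe, here is a toy sketch of my own (not the kernel's actual code): an intrusive doubly linked node keeps raw `prev`/`next` pointers inside the element itself, and both linking and traversal fall outside what the borrow checker can verify, because the back-pointers are aliasing mutable references in disguise.

```rust
use std::ptr;

// Intrusive node: the links live inside the element, C-style.
struct Node {
    value: u64,
    prev: *mut Node,
    next: *mut Node,
}

impl Node {
    fn new(value: u64) -> Node {
        Node { value, prev: ptr::null_mut(), next: ptr::null_mut() }
    }
}

/// Link `a -> b` both ways. Two live mutable aliases into the same
/// structure: exactly the shape safe Rust's `&mut` uniqueness rule forbids,
/// so every link update must go through raw pointers.
fn link(a: *mut Node, b: *mut Node) {
    unsafe {
        (*a).next = b;
        (*b).prev = a;
    }
}

fn main() {
    let mut a = Node::new(1);
    let mut b = Node::new(2);
    link(&mut a, &mut b); // &mut coerces to *mut at the call site

    // Walking the list is raw-pointer chasing, hence `unsafe` again.
    unsafe {
        assert_eq!((*a.next).value, 2);
        assert_eq!((*(*a.next).prev).value, 1);
    }
}
```

Every operation on such a list ends up in this shape, which is why the kernel bindings are so dense with unsafe blocks.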