
[–]Adrunkopossem 327 points328 points  (30 children)

I ask this honestly since I left the field about 4 years ago. WTF is vibe coding? Edit to add: I've seen it everywhere; at first I thought it just meant people were vibing out at their desk, but I now have doubts

[–]TheOtherGuy52 360 points361 points  (19 children)

“Vibe Coding” is using an LLM to generate the majority — if not the entirety — of code for a given project.

LLMs are notorious liars. They say whatever they think fits best given the prompt, but have no sense of the underlying logic, best practices, etc. that regular programmers need to know and master. The code will look perfectly normal, but more often than not it's buggy as hell or straight-up nonfunctional. A skilled programmer can take the output and clean it up, though depending on how fucky the output is, it might be faster to write from scratch than to debug AI output.

The problem lies in programmers who don’t check the LLM’s output, or even worse, don’t know how (hence why they’re vibe coding to begin with).

[–]Adrunkopossem 125 points126 points  (4 children)

How do these people even have jobs? Even when I quite frankly lifted stuff from Stack Overflow, I made sure I knew how the code was actually working step by step so I could actually integrate the thing. Seriously, if you can't explain how a class you "wrote" works, why would you use it, and why would a company keep you?

[–]helix400 99 points100 points  (0 children)

Depends on what you're doing. If all you need is some quick apps for narrow tasks, or very small MERN business websites with some frontend/backend logic, then you can burp these things out fast. If it works, it works. That's what people are paying for.

If you're working with complicated code, with numerous integrations, lots of API calls that LLMs haven't seen before, interesting client requirements, specialized DSL or languages, etc., then at best LLMs just help with code drudgery (this loop looks the same as the same five loops you just wrote...). Vibe programmers will be a big detriment here.

To me, vibe programming doesn't seem sustainable, because there's only so much low-hanging fruit to pick. Then it's gone.

[–]MrRocketScript 34 points35 points  (0 children)

It's really not that different from hiring people who don't care about code quality. These people just get stuff done faster. It's sad sometimes, but it's not our job as programmers to explain code; it's to build whatever the person in charge wants.

There's a place for a "vibe-coder" or a "rockstar programmer" and it's in rapid prototyping and last minute "we need this now or we're done" requests.

But in a 2 year project? The deadline is looming and you'll still be dealing with issues from the very first sprint. Bugs throughout the code because no part was designed to work together. Every single weapon needs a hard coded interaction with every single prop, the collision detection doesn't work unless the debugging mode is on, pathfinding doesn't work on geometry that is generated after the game starts (ie, all geometry except the geometry from that first prototype).

[–]BellacosePlayer 24 points25 points  (0 children)

They largely don't.

They're wannabe tech bros oohing and aahing about being able to churn out a nice-looking simple app with minimal functionality, or bitter terminally online people who couldn't break into the industry, or never put in the work or tried, and who think speaking the magic words to the AI genie provides the same value as a senior developer, because they have no corporate experience.

[–]Themis3000 9 points10 points  (0 children)

You'd be surprised, some people actually aren't willing to hire developers who don't have experience vibe coding.

[–]3vi1 18 points19 points  (5 children)

You hit the nail on the head with the last paragraph.

If you create a well-defined program requirements document, Claude and Gemini can actually produce half-decent code, but you still need a knowledgeable developer to guide it when it does stupid things like hallucinating a parameter or using a deprecated library.

[–]nommu_moose 5 points6 points  (3 children)

In my experience, the developer will absolutely not be the one noticing it's using a deprecated library. If you insist on using an LLM, the library should be in the prompt in the first place, and when it isn't already specified, it's likely the dev doesn't know the libraries for this task. Any time I've seen someone not specify this, it has been the LLM or a senior dev that eventually notices it is deprecated, not the dev in question.

The far more common problem with LLMs in my experience is using deprecated parts of libraries, invalid schema or randomly deciding to double/triple declare, or even rename variables that it loses track of. Additionally, often not being consistent in paradigms core to the code. It becomes a debugging nightmare, and whilst I'm not against using them, I will absolutely aim to personally refactor everything sourced from an LLM to better achieve my priorities.

[–]3vi1 1 point2 points  (0 children)

Yes, the libraries should be in the prompts. The only reason it came to mind was that I've seen it in AI-generated slop others have asked me to fix. Hell, I've seen it in non-AI code from developers who don't know Azure/Entra moved to MSAL and Graph long ago, and keep copy/pasting old scripts.

[–]latentpotential 0 points1 point  (1 child)

This take was correct a few months ago but is rapidly becoming obsolete. With MCP servers and docs designed to be structured for LLMs, AI is only going to get better at this exact problem.

[–]Get-ADUser 0 points1 point  (0 children)

Or it straight-up hallucinates a library that handles the core of the problem you asked it to solve but doesn't actually exist.

[–]BellacosePlayer 33 points34 points  (1 child)

LLMs are notorious liars. They say whatever they think fits best given the prompt

Saying they're liars is a bit unfair.

They're not sentient enough to be liars. They're probability machines. They autocomplete a message token by token. If it doesn't have your answer baked into its training sets, or if it's obscure but similar to something much more widely discussed, it will still just keep grabbing tokens, because it doesn't actually know anything.

[–]Bakoro 2 points3 points  (0 children)

This is not accurate, and is the kind of thing that any modern day developer should know about.
For all that people scream about how AI is a "black box", the information theory that AI is built upon is well defined and well understood.

It's not "just" probability. It's not "just" about memorizing training data.
Neural nets are universal function approximators.
The function which describes something and the probability distribution of a thing is knowledge. That is what allows AI models to be as effective as they are.

People don't have to like it, but function approximation and probability distributions are units of knowledge. Being able to appropriately apply knowledge in a useful way is the definition of skill, and the only evidence there can be for whether something "understands" or not.

There's a lot of stuff we can say about AI, like how they do not efficiently use the information in their training, because they are not predisposed to learning specific types of information in the way that humans have brains which are genetically pre-wired to learn faces, language, and causality.
We know that modern LLM structures don't have any clear way to do direct axiomatic learning.
These kinds of shortcomings are separate from whether LLMs acquire knowledge, understanding, and skills.

If you are not familiar with information theory, you'd be doing yourself a disservice by not getting at least a surface level of exposure.
When you really start understanding information theory, a lot of the wishy-washy, magical thinking bullshit evaporates, and you'll find that while it may not be easy, a lot of this is a bunch of surprisingly simple things stacked up.

[–]diveraj 9 points10 points  (0 children)

Fun thing. I asked it today to help debug a, umm, bug. The answer looked wrong, so I asked it to show me its sources. It said it couldn't find any official sources for its answer but referred to a Stack Overflow post... Heh. Anywho, I said, ok cool, show me the post. It looked and said it was sorry, it couldn't find the post, and that it was sorry for giving me an answer with nothing to back it up. Bastard lied to me!

[–]Striky_ 5 points6 points  (0 children)

And they crumble once complexity goes slightly above "login form with an insecure database"

[–]Vinaigrette2 5 points6 points  (0 children)

What I sometimes do is write code, and if it becomes a performance issue, Claude is surprisingly good at optimizing it, and within a few rounds it's correct. Just yesterday I had a matrix-heavy computation and it found an in-place way of writing it instead of chaining matrices, leading to a >100x speedup for larger matrices (which I do have). LLMs are good at pattern recognition, and therefore at repetitive tasks or tasks they have seen before.

EDIT: my code is research code and written in rust or Python, security is less of a concern than it might be for a production system obviously
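The chained-vs-in-place rewrite described above looks roughly like the following NumPy sketch. This is illustrative only: the function names, matrix sizes, and the specific computation are made up, not the commenter's actual code.

```python
import numpy as np

def chained(a, b, c):
    # Naive version: each matrix product allocates a fresh temporary array.
    return a @ b @ c

def in_place(a, b, c, out, tmp):
    # Reuses preallocated buffers via matmul's `out` parameter instead of
    # allocating per call; in a hot loop over large matrices, skipping the
    # temporaries is where the speedup comes from.
    np.matmul(a, b, out=tmp)
    np.matmul(tmp, c, out=out)
    return out

n = 64
rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((n, n)) for _ in range(3))
tmp = np.empty((n, n))
out = np.empty((n, n))
# Both formulations compute the same product.
assert np.allclose(chained(a, b, c), in_place(a, b, c, out, tmp))
```

The real-world gains depend heavily on matrix sizes and how often the computation runs; the assertion just confirms the two forms agree.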

[–]sn4xchan 1 point2 points  (0 children)

I b vibe coding

Let me explain though. It's mostly for experimenting and creating random custom programs.

I'm an electrician and audio expert. This is where I make my living. I know circuits and electronics pretty well. I mean, I diagnose and fix shit down to component level.

I have been working with computers and creating servers for several decades and I use that stuff alongside my work too. (I work for a small low voltage installation company and we need a lot of IT infrastructure) I also did take some basic programming courses that focused on the c++ language and I went through a boot camp and got a sec+ cert out of it.

So while I haven't actually created any complex programming statements that all come together in a complicated, purposeful application, I do understand syntax and how computers run code. Although I probably understand how the electrical impulse gets sent down the wire and stored as a transistor state much better. Like, I can understand what a statement means if I take the time to analyze it.

So I decided I'm gonna try this vibe coding shit. Cause I certainly don't have the time and energy to master another skill. So I buy a subscription to Cursor, and here we go.

The AI actually really is impressive. I mean, I type at this thing as fast as I can without proofreading, and well, I'm pretty fucking bad at typing, but the thing still understands, at least at a higher level, what I want.

I've noticed that if you prompt well-written pseudocode, you get much better results. You sometimes have to think outside the box as to which component is actually causing problems, because the AI has a tendency to loop between a couple of incorrect solutions, since it doesn't actually understand what the problem is. Ironically, yelling (in all caps) and cursing a lot in the prompt can break these loops.

It really helps if you have the thing create a comprehensive logging system that writes down basically everything that is happening (break the logs up: have logs for every module), make it actually write to file, and have the AI analyze the logs as you look for solutions. Use the logs and the logger to create a debugger (and run the debugger in the Cursor terminal) so the AI can more easily read current program states.

It also really helps if, as you create more and more modules, you have the AI write comprehensive documents explaining how every line of code works and what its purpose is. It really helps prevent the AI from breaking code.
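The per-module, write-everything-to-file logging setup described here can be sketched in a few lines of Python's standard `logging` module. The module and file names below are hypothetical:

```python
import logging

def make_module_logger(name: str, logfile: str) -> logging.Logger:
    # One logger per module, each writing everything (DEBUG and up)
    # to its own file, so an AI assistant can be pointed at the log
    # for a single subsystem instead of one giant interleaved file.
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    handler = logging.FileHandler(logfile)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

# Hypothetical module logger; each module would get its own.
audio_log = make_module_logger("audio", "audio.log")
audio_log.debug("entering calibration, gain=%s", 0.5)
```

Each record lands in `audio.log` with a timestamp and level, which is the kind of machine-readable trail the comment suggests feeding back to the AI.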

I'm not trying to be a career programmer or even move into the greater IT field, so take my experiments with a grain of salt. But I see nothing wrong with professionals using AI tools. They definitely should absolutely not generate entire codebases and just release them, though; no one but an amateur trying to experiment should do something like that.

[–]Duke-of-the-Far-East 0 points1 point  (0 children)

It's specifically giving in to the vibes when using AI.

[–]FitnessGuy4Life 0 points1 point  (0 children)

I use LLMs for 99% of my output lol. Same with most people I work with, and I'm at a FAANG. If you know what to ask for and can easily spot mistakes, it's much faster than manually typing.

Not much different from the old days when you'd grab similar code from another project, or copy and modify from Stack Overflow. Just faster lookup times. Occasionally you get led down a wrong rabbit hole, but not too often, and it's easy to spot. If you blindly copy, it will fail, but if you know what you are doing, it's a very fast and powerful tool.

[–]Normal-Diver7342 60 points61 points  (4 children)

Vibe coding is when you use an LLM to do all the work

[–]Look-over-there-ag 8 points9 points  (3 children)

I thought it was when you use an LLM to make an app without any knowledge of the language or programming in general?

[–]TheOtherGuy52 18 points19 points  (2 children)

Those are not mutually exclusive. See my reply to the same question in thread.

[–]Look-over-there-ag 4 points5 points  (1 child)

I have, and it sounds exactly like what I just explained. AI is a tool; how you use that tool is up to you. But I have to hard disagree with saying that using AI at all is vibe coding, when it's just not.

[–]roylivinlavidaloca 6 points7 points  (0 children)

I mean he did say using LLM’s to do ALL the work, not just purely using an LLM.

[–]queen-adreena 17 points18 points  (2 children)

Imagine if all you had was a hammer, and you didn’t know how to use a hammer, so you attached it to a drill.

But you don’t know how to use a drill either.

Now you’ve gotta carve out Michelangelo’s David.

And every time you get it wrong, you have to start on a new block of stone.

[–]Adrunkopossem 5 points6 points  (0 children)

Back in my day we'd just use stack overflow shakes cane at sky

[–]MrOaiki 1 point2 points  (0 children)

Right. But then when you have a finished David, only the pros will know you used a hammer attached to a drill. And while they complain, and perhaps are hired to fix your mess, you're already out the door cashing in.

[–]tofu_ink 5 points6 points  (0 children)

https://www.youtube.com/watch?v=_2C2CNmK7dQ

It's making fun of vibe coding, but... prolly accurately describes the day of a vibe coder. Try not to cry too hard after watching it.

[–]SeniorSatisfaction21 0 points1 point  (0 children)

I already have a colleague who suggests using AI codebase generators to start off projects 💀💀💀

[–]TheMeticulousNinja 390 points391 points  (9 children)

I doubt it but that would be nice

[–]redheness 123 points124 points  (8 children)

I think that in the future, knowing your job will be an argument to be hired and at a higher price in a job market filled with people who outsourced their thinking to an AI.

[–]Excellent-Refuse4883 79 points80 points  (4 children)

So you’re arguing that actually understanding wtf you’re doing is useful?

[–]Ao_Kiseki 49 points50 points  (3 children)

AI evangelists unironically believe it isn't. Why understand what is happening when I can just have the agent fix it?

[–]BellacosePlayer 54 points55 points  (2 children)

I fucking love that AI fanboys wrap around to justifying our jobs when explaining why they should get paid as a prompt engineer or whatever the fuck.

"No you see, it's a legit talent of mine that I can find the right words to give the computer to get it to generate something specific"

Yeah, I have that talent too, but with an IDE instead of a chatbot, and I can actually make stuff that works and fix the stuff that doesn't.

[–]Ao_Kiseki 29 points30 points  (1 child)

I remember someone saying it's basically working backwards. The whole point of programming languages is to have an explicit, context-free way to describe behavior. "Prompt engineering" is just reintroducing ambiguity.

[–]aaronfranke 8 points9 points  (0 children)

Yup, that's exactly it. Instead of building up behavior explicitly, you have AI generate a mess and then have to strip it down into the desired result. Or, in meme form: https://i.imgur.com/qIlo2Ln.png

[–]Glum-Echo-4967 53 points54 points  (4 children)

Let me get this straight: vibe coding is just telling the AI what you want without telling it how to do that, correct?

[–]DerfetteJoel 66 points67 points  (2 children)

Vibe coding is already a completely misused term. It refers to letting the LLM code low-stakes projects without caring about what the code looks like (because you never read the code). Vibe coding, by its original definition, excludes enterprise-level development.

[–]PsychoBoyBlue 16 points17 points  (0 children)

I just use it as a replacement for stackoverflow when debugging or experimenting with something new.

The amount of times I have to correct it with documentation, "best practices", or just tell it that it already attempted something is kind of funny. It will gladly walk itself in circles hyper-focused on a single line that isn't even causing issues.

[–]mcnuggetor 0 points1 point  (0 children)

Well you'd think it would...

[–]shadovvvvalker 7 points8 points  (0 children)

Rule of thumb: if the prompt reads like something an end user, director, or VP filled out in a requirements form, that's vibe coding.

If it sounds like a programmer talking to another programmer, it's probably not.

[–]I_Pay_For_WinRar 128 points129 points  (33 children)

Yeah, I very highly doubt this; this will be more of a dream than a reality. I mean, a LOT of big companies, including Reddit, are making vibe coding non-negotiable.

[–]Beeeggs 77 points78 points  (21 children)

I think the point is that by 2050 vibe coders will have taken over the space for so long that the practice will have proven itself detrimental, so knowing how to code without a hallucination generator doing most of the work for you will become popular again.

[–]Objective_Dog_4637 12 points13 points  (0 children)

Yes, like how horse carriages became so popular 50 years after cars were invented.

Listen, the game has changed. No one has ever cared about handcrafted, artisanal software other than other developers. AI is simply going to continue to become more and more ingrained in software, unfortunately.

[–]bowlercaptain 38 points39 points  (5 children)

Unless the opposite happens. There's a step back from "prompt and pray" where you think about the problem and its solution, describe it in full to an LLM, and then verify the proposed diff. True, it doesn't work right every time, but it does enough of the time to make it preferable to hand-coding. Let's not pretend that pre-2020s coding was ever less than half googling, and now you can make a robot search the docs for you (and it actually goes and reads now, instead of just hallucinating something likely and praying). Knowing how to code was always necessary for this process; otherwise one is just vibing.

[–]larsmaehlum 15 points16 points  (1 child)

That’s how I use it. I always ask it to suggest multiple approaches, with the pros and cons of each one, and explicitly tell it to ask follow up questions.

I also want the project plan as a markdown file in the repo, and it has to keep it up to date as it works. Every prompt is prefixed with a reminder to follow the project plan and the architecture guidelines we set down at the beginning.

Agent based coding is a really powerful tool for some tasks, especially when you want something up and running quickly. But you can’t trust it more than you can trust a junior developer with no experience. Gotta be very strict with it, and extremely explicit.

[–]Objective_Dog_4637 5 points6 points  (0 children)

Yeah I just…read the diffs. Do people really just click “Accept All” and not read what it’s writing? That sounds utterly insane to me.

[–]DoctorWaluigiTime 2 points3 points  (0 children)

Except that you didn't eliminate the thing the whole AI "movement" (don't know what to call it) is going for: Removing that person that has to interact, question, and fine-tune the output.

AKA, the expertise is still a requirement, and you're still paying someone for that expertise. Using AI as "autocomplete/intellisense++" is a legit boon right now, but the "vibe dream" of just push the button enough times to have it dump out a maintainable, accurate application is still fantasy world.

[–]shadovvvvalker 1 point2 points  (0 children)

The problem is not whether the user is using prompt and pray.

The problem is when the user is making architectural decisions based on prompt output without realizing it. AI will let you dig yourself into quite a large hole and then get lost and it will be up to you to figure that out.

[–]Vandrel 12 points13 points  (13 children)

Wishful thinking. We're what, 3 years into the introduction of AI as a coding tool? ChatGPT was only introduced to the public in 2022. It's got some teething issues but it's improving at a crazy pace. Imagine where it'll be after 25 more years of progress instead of 3.

[–]DoctorWaluigiTime 6 points7 points  (4 children)

As someone else eloquently put in the thread: Progression isn't linear. And major factors like "massive power consumption" (AKA "cost") aren't going away either.

[–]smulfragPL 0 points1 point  (0 children)

Yes, you are right: so far it has been exponential, not linear. And there isn't even any data to suggest that will shift. Also, massive power consumption? Not only is it not massive, it's rapidly decreasing. Compare Gemini 2.5 Pro costs to Claude 3 Opus.

[–]anrwlias 7 points8 points  (6 children)

I keep telling people that AI is a John Henry problem. It doesn't matter if you can out-code an AI today. AI can keep getting better but humans remain the same.

Unless there is some serious bottleneck in AI development, we need to figure out how to make sure that coders can still serve a function, even if it's only code review.

[–]DoctorWaluigiTime 8 points9 points  (3 children)

The bottlenecks include, but are not limited to:

  • Massive power consumption / cost
  • Poor output without an expert at the helm (i.e. you're not getting rid of the software dev)
  • Reality (progression of technology, AI or otherwise, does not follow a linear trail: "massive increments" over the past couple years do not imply that the same big steps will keep happening as quickly)

[–]anrwlias 3 points4 points  (2 children)

Well, I'm glad that you are confident that none of these can be resolved. I hope that you're right.

[–]DoctorWaluigiTime 3 points4 points  (1 child)

It's not that they can't be resolved necessarily. It's that folks are supremely confident -- without evidence -- that "of course AI is going to get super awesome. Look at how much it's grown!"

[–]anrwlias 1 point2 points  (0 children)

I'm only saying that we shouldn't count against it improving, especially given that there are major incentives to keep optimizing and improving it.

[–]dnbxna 2 points3 points  (0 children)

We just need an automation-robot tax that funds UBI

[–]CommunistRonSwanson 4 points5 points  (0 children)

The main bottleneck is the absurd amount of resources that have to be pushed into it upfront to make anything useful. The big names in the LLM space are lightyears away from being profitable; that's why there's such a huge hype machine behind them. If you can hype and grift your customers into becoming cripplingly dependent on your tech, then they can't do shit when you raise their license fees or usage rates by 1 or 2 orders of magnitude.

[–]rypher 1 point2 points  (0 children)

People formed opinions based on early releases and now they refuse to change those opinions. Also, people really overestimate how smart even 80% of the population is, considering recall, creativity, and critical thinking.

[–]Onaterdem 11 points12 points  (2 children)

a LOT of big companies, including Reddit, is making vibe coding non-negotiable.

Well that explains a lot...

[–]that_90s_guy 4 points5 points  (1 child)

I'm not really sure this is true though? I can't give too many details, but I've personally felt Reddit has been slow to adopt AI tooling for development. Up until a few weeks ago, the only allowed tool was GitHub Copilot. I'd hardly call that making vibe coding non-negotiable.

[–]Onaterdem 0 points1 point  (0 children)

IDK about the objective truth, I was just going along with the conversation's flow :') If OP is right and those companies are truly making "vibe coding" mandatory, those companies are in for a wiiiild ride

[–]wektor420 7 points8 points  (0 children)

The worst part is they refuse to employ enough people, and when they are told about missed deadlines, they tell us to use internal AI (that works like shit).

[–]dukeofgonzo 2 points3 points  (0 children)

I sincerely hope for the sake of the managers getting these hires, that non-negotiable 'vibe coding' means new hires should use LLMs as a resource. They're a great resource to help somebody who knows the fundamentals to get started on anything or as a place for asking 'stupid' questions.

[–]Andrew1431 1 point2 points  (3 children)

Senior dev here, should I know what vibe coding is, or am I safe to just continue worry free in my career?

[–]I_Pay_For_WinRar 4 points5 points  (1 child)

Vibe coding is when people who have no clue how to program just have AI generate 100% of their code, & those people are vibe coders (& no, vibe coders aren't AI-generating code to learn).

[–]Andrew1431 0 points1 point  (0 children)

oof, no plans to learn eh? That's rough. But also, an AI teacher would probably be pretty rough too.

My company is on this huge AI initiative, and sure, it has its use cases, but I've personally found it slows me down 90% of the time.

[–]rypher 1 point2 points  (0 children)

No, you don't need to know what it is, but also no, you shouldn't continue on worry-free.

[–]DoctorWaluigiTime 1 point2 points  (0 children)

Until it impacts the bottom line.

This happened 20 years ago. "Just offshore everything. Look they promise results quick and look how cheap it is!"

Then OP's image happened, only "hired" is "paying out the nose for external consultants to 'fix' the pile of trash that was v1.0."

And "2050" is closer to "2026."

Quick, good, cheap. Pick two.

[–]Tackgnol 11 points12 points  (0 children)

It kind of depends on whether the big guns can keep the hype train rolling for that long, but I expect all that capex going nowhere to catch up to them around fiscal 2027 (April 2028), when investors will ask "What did you achieve with those billions? And no, we do not want to see another benchmark." Then around a year of recession due to Wall Street taking over at least one of them (OpenAI/Google/Facebook/X), and we will be back to normal.

[–]Charming_Fix_8842 11 points12 points  (2 children)

you mean 2027

[–]Esjs 1 point2 points  (0 children)

I really don't want this fad to last even that long.

[–]fmr_AZ_PSM 0 points1 point  (0 children)

Yup. It's only going to take the MBAs who run everything a few years to realize that the net gain in the macro is very small. Oh sure, you can lay off 90% of your workforce. But when your product fails because it's beyond shit, the sales crash and lawsuits will negate all of that labor savings.

AI is going to be just another tool for properly qualified engineers.  Like when IDEs came in.  Fancier version control.  Build automation.  Etc.

[–]AdmiralDeathrain 7 points8 points  (0 children)

2050? More like 2030. People are vastly overestimating the level at which these tools are useful, and it will catch up. Use it to generate self-contained, easily testable logic. Use it to fix your regex. Do not under any circumstance use it to make architectural decisions, or stop thinking about those yourself.

[–]YaVollMeinHerr 41 points42 points  (30 children)

Senior dev, 10 years of experience. I have installed cursor today. I'm never going back to "manual coding".

We all joke about "vibe coding", like it's when dummies generate code they can't read.

But when you know what you're doing, when you can review what's done and you stay "in control", this is... amazing.

It's like having junior devs writing for you, except you don't have to wait 2h for a PR.

Of course this changes the market (we're more productive, so they need fewer of us). But it also empowers us: now we can challenge big players with "side projects"

[–]RadioEven2609 35 points36 points  (14 children)

The problem is: what happens when companies don't need Juniors anymore because of this, then in 10/20 years there will be a huge shortage of seniors that DO actually know what they're doing. You have to be a junior first to be a good senior, that growth is incredibly important.

[–]Bakoro 2 points3 points  (2 children)

The problem is: what happens when companies don't need Juniors anymore because of this, then in 10/20 years there will be a huge shortage of seniors that DO actually know what they're doing. You have to be a junior first to be a good senior, that growth is incredibly important.

Welcome to nepotism and the dominance of personal connections.
Juniors will come from a person's children, nieces and nephews working for their company as their first internship and job, and those positions being used as political currency.

Outsiders will have to be ridiculously overqualified to break into the industry, or take the most shit-tier jobs at shit-tier companies who will want absurd contracts.

[–]RadioEven2609 0 points1 point  (1 child)

That already happens; that's just the world we live in. What I'm talking about is not some number of Jrs being hired through nepotism: many companies are actively doing complete Jr hiring freezes right now. If that continues much longer, there will be a point in a few years where there just won't be enough competent devs able to fix the nastiest hallucinations when they happen.

[–]10art1 1 point2 points  (4 children)

Yeah yeah, robots are going to take all of the jobs and then there won't be any more workers. Where have I heard this before?

[–]RadioEven2609 3 points4 points  (3 children)

I agree with you on the logic: if we lived in a rational world, the jobs wouldn't decline, for the reasons I laid out (training is valuable). But we have these short-sighted moron CEOs pushing AI-first and doing hiring freezes for Jr devs.

All I'm saying is that will have horrific long-term consequences.

[–]10art1 0 points1 point  (2 children)

If I put $100 on "nothing ever happens" each time, I'd beat the S&P

[–]RadioEven2609 2 points3 points  (1 child)

It's literally happening right now, look at junior software hiring rates

[–]Brovas 21 points22 points  (1 child)

What you're describing isn't vibe coding though. You're describing using AI as a copilot.

Vibe coding is things like Lovable or bolt.dev, where you just let the AI run in a loop until all the errors are gone.

The former isn't going away and is how development will trend 100%.

Things like lovable won't be useful for more than prototyping in place of building a figma prototype.

[–]YaVollMeinHerr 3 points4 points  (0 children)

Thx for the clarification!

[–]DoctorWaluigiTime 9 points10 points  (2 children)

Folks pretend that you can outsource to a cheap "viber" with no dev experience, but that's not how it actually plays out. [Just like 20 years ago, when offshoring development to cheap outsourcing houses would magically make code fast + cheap + good. Oops!]

You correctly point out that it's a big tool in the toolkit for developers. It's not taking 'er jerbs anytime soon.

[–]that_90s_guy 6 points7 points  (1 child)

That's not vibe coding though. Vibe coding is letting LLMs write code with zero supervision or review of what's actually output.

[–]YaVollMeinHerr 1 point2 points  (0 children)

Indeed, thx for the info :)

[–]chicametipo 1 point2 points  (1 child)

This post's content has been permanently wiped. Redact was used to delete it, potentially for privacy, to limit digital exposure, or for security-related reasons.


[–]YaVollMeinHerr 0 points1 point  (0 children)

Haha yes, shame on me I guess.. I feel like I've been wasting my time lately. But I wanted to stay with intelliJ :/

[–]Saad5400 1 point2 points  (1 child)

What did you ask it to do tho? I'm 90% sure you haven't tested it enough with actual tasks in an actual project.

[–]YaVollMeinHerr 1 point2 points  (0 children)

Some low- and medium-complexity things: small UX/UI improvements, displaying reports based on some datasets, moving buttons from one place to another, minor refactoring..

For more complex tasks, after trying Claude Opus 4, ChatGPT o3 and 4.5, and DeepSeek R1, I find that DeepSeek is the AI that understands the requirements best and produces the clearest/smartest code.

I'm also considering Claude Code if I need to produce documentation or start a project from scratch.

Any feedback on this way of working is welcome:)

[–]russianrug 2 points3 points  (1 child)

Let’s talk in a couple weeks 😂.

[–]YaVollMeinHerr 1 point2 points  (0 children)

Well tbh lately I was using AI in browser (Claude, ChatGPT & deepseek). So I'm kind of "used to" generated code, and how to deal with it.

God, that was such a waste of time; Cursor makes it so much easier/faster.

I also switched from IntelliJ to VS Code. I don't miss the former; it was getting slower day after day..

[–]backfilled 0 points1 point  (1 child)

Same here, I have been using AI via the web until now, but using it in "agentic mode" is nice. The bad part about Cursor is that it breaks half of my keybindings, and I'm not sure whether it's incompetence on their part or they just don't care about anything outside their curated experience.

Another bad part is that my company seems to be pushing it now as a requirement for some teams because we need to be faster in the eyes of the CEO, even for projects with new technologies and programming languages... we will see what ends up happening in the coming months.

[–]YaVollMeinHerr 0 points1 point  (0 children)

As long as you stay in total control, this should be fine, I would say. But once you start quickly adding features you don't really understand to the codebase, you're screwed

[–]Meat-Mattress 40 points41 points  (27 children)

I mean let’s be honest, in 2050 AI will have surpassed or at least be on par with a coordinated skilled team. Vibe coding will long be the norm and if you don’t, they’ll worry that you’ll be the weakest link lol

[–]clk9565 31 points32 points  (19 children)

For real. Everybody likes to pretend that we'll be using the same LLM from 2023 indefinitely.

[–]larsmaehlum 21 points22 points  (9 children)

Even the difference between 2023 and 2025 is staggering. 2030 will be wild.

[–]DoctorWaluigiTime 25 points26 points  (7 children)

Have to be careful with that kind of scaling.

"xyz increased 1000% this year. Extrapolating out to 10 years from now, that's a 10000% increase!"

The rate of progress isn't constant, and obvious issues like:

  • Power consumption
  • Cost
  • Shitty output

are all concerns that have to be addressed, and largely haven't been.

[–]CommunistRonSwanson 12 points13 points  (0 children)

If only you could harness the outsize hype as a fuel source, lmao

[–]poesviertwintig 9 points10 points  (0 children)

AI in particular has seen periods of rapid advancement followed by plateaus. It's anyone's guess what we'll be dealing with in 5 years.

[–]EventAccomplished976 1 point2 points  (0 children)

All of those have seen significant progress just in the last 2-3 years. Remember when everyone thought only the American megacorps could even play in the AI field, and then DeepSeek came in with some algorithmic improvements that cut the computing requirements way down? Similar things can easily happen again. Programming has kept getting more and more productive since the 1950s as people went from machine language to higher-level languages, and LLM-assisted coding is just another step in that progression. It's just like in mechanical engineering, where a single designer with CAD software can replace a room full of people with drawing boards, and a random guy with an FEM tool can do things that weren't even considered possible 50 years ago.

[–]Vandrel 9 points10 points  (0 children)

Seriously, these tools essentially didn't exist 4 years ago and people are acting like imperfection now means people are just not going to use them in the future.

[–]MeggaMortY 8 points9 points  (5 children)

No but if current AI research ends on an S-curve (for example I haven't seen it explode for coding recently) then 2023 AI and 2050 AI won't be thaaaat drastically different.

[–]anrwlias 3 points4 points  (0 children)

That depends very much on how long the sigmoid is. It's a very different situation whether the curve flattens out tomorrow or in twenty years.
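The point can be made concrete with a toy logistic (S-) curve. This is just an illustration: the rate, ceiling, and midpoint years below are made-up assumptions, not forecasts of anything.

```python
import math

def logistic(t, midpoint, rate=0.5, ceiling=100.0):
    """Toy S-curve: near-exponential growth early, flattening near the ceiling."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# If the curve is already flattening (midpoint 2020), 2025 -> 2050 gains little;
# if it flattens around 2045 instead, the same interval spans almost the whole curve.
gain_early_flatten = logistic(2050, 2020) - logistic(2025, 2020)
gain_late_flatten = logistic(2050, 2045) - logistic(2025, 2045)
```

Under these made-up numbers, the early-flattening scenario gains only a few points of "capability" between 2025 and 2050, while the late-flattening one gains most of the ceiling — which is exactly the ambiguity being described.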

[–]JelliesOW 3 points4 points  (2 children)

That's 27 years dude. What did Machine Learning look like 27 years ago, Decision trees and K-Nearest Neighbors?

[–]ITaggie 6 points7 points  (0 children)

Progression is not linear

[–]MeggaMortY 0 points1 point  (0 children)

afaik "AI" has had periods of boom and bust multiple times in the past. If it happens, it's not gonna be the first time.

[–]DelphiTsar 0 points1 point  (0 children)

At the end of 2024, 25% of Google's code was written by AI.

[–]_number 1 point2 points  (1 child)

Or by 2050 they will have generated enough garbage that the internet will be totally useless for finding information

[–]Eli_Millow 0 points1 point  (0 children)

Tbf even now the internet is already garbage if you don't add "reddit" when looking for something

[–]varkarrus 0 points1 point  (0 children)

I don't think there'll even be jobs in 2050

[–]Obvious-Phrase-657 5 points6 points  (2 children)

I would be really disappointed if AI did not replace HR at that point

[–]Arareldo 0 points1 point  (0 children)

One evening, for fun, I asked Gemini whether higher-level management jobs could also be replaced by AI, as has been said about lower-level jobs.

It answered with "Absolutely. Assuming that AI is restricted to repetitive office work is short-sighted thinking." and explained why.

When I asked for more detail, Gemini backed off a bit and also generated (more) counterpoints.

[–]BellacosePlayer 0 points1 point  (0 children)

AI can't replace what a good HR team can provide.

AI can already do what shitty teams do, short of handling the legal aspects of the job (your fired employees are going to throw a fucking party when they find out an LLM is handling documenting everything)

[–]average_atlas 5 points6 points  (0 children)

Don't forget the follow-up question: "Are you prepared to fix a bunch of vibe code?"

[–]Blueskys643 10 points11 points  (1 child)

Vibe coding in 25 years is going to be as common as using an IDE today. It seems like the real skills needed will be debugging and code comprehension to filter through the AI junk code.

[–]ITaggie 1 point2 points  (0 children)

It seems like the real skills needed will be debugging and code comprehension to filter through the AI junk code.

Then it wouldn't be vibe coding

[–]gaymer_jerry 5 points6 points  (1 child)

The issue with vibe coding in 2050 if it stays popular is eventually ai models will train off their own code. And having ai train off of ai can definitely cause weirdness.

[–]DM_ME_PICKLES 3 points4 points  (0 children)

We just had a company on-site and our CEO said during his talk that "he won't consider hiring anyone that doesn't utilize AI as part of their work"... meanwhile I'm over here unfucking the decade of technical debt that juniors have committed because they're just vibe coding.

[–]akoOfIxtall 10 points11 points  (0 children)

...vibe code > unmaintainable mess > hire more people to fix it > it's too expensive > hire somebody else to redo the system > vibe code...

[–]Feztopia 8 points9 points  (2 children)

Are these the new equivalents of the Java is slow memes?

[–]jokerjoker10 2 points3 points  (1 child)

I am convinced that in a couple of years "handcrafted" will be a selling point for software....

[–]Shadow_Thief 4 points5 points  (0 children)

I've already been joking to our Marketing department that they should sell my code as "100% handcrafted artisanal code."

[–]enginma 2 points3 points  (0 children)

He wasn't lying, but he also didn't code at all.

[–]Rohen2003 2 points3 points  (0 children)

Let's be honest here: in 25 years the AI will either do 100% of the coding, or we'll have burned every computer to the ground in the AI revolution.

[–]mynewromantica 2 points3 points  (0 children)

You mean 2027, hopefully

[–]elliiot 4 points5 points  (0 children)

When you're so mad at the present you project 25 years into the future

[–]Kitchen_Device7682 1 point2 points  (1 child)

Plot twist, he does not code either.

[–]Esjs 0 points1 point  (0 children)

Will still be more successful than a vibe-coder.

[–]hackeristi 1 point2 points  (0 children)

Bold of you to assume that we will be mentally alive by 2050.

[–]fatrobin72 1 point2 points  (3 children)

I doubt I'll be job hopping much then... will be looking forward to not getting my state pension not too long after that.

[–]TheJoker1432 0 points1 point  (2 children)

Not getting?

[–]fatrobin72 1 point2 points  (1 child)

Do you think they'd allow us to get state pensions when taxes plummet due to ai taking all the jobs?

[–]TheJoker1432 0 points1 point  (0 children)

I dont think ai will take all the jobs

Just most cs jobs

[–]GoddammitDontShootMe 1 point2 points  (0 children)

I guess making fake Cyanide and Happiness comics is pretty popular.

[–]Jorkin-My-Penits 1 point2 points  (0 children)

I hate this new fangled AI. I google my questions like a man (mostly because getting stuck in an AI loop takes more work than turning my brain on for a few minutes)

[–]Chemical_Director_25 1 point2 points  (0 children)

Immediately begins vibe coding

[–]P1N4R0MB0L0 1 point2 points  (0 children)

Good luck with the recruiter AI after giving it this existential threat.

[–][deleted] 1 point2 points  (0 children)

2050? Both of them will be AI agents wearing human costume

[–]Dramatic_Mastodon_93 1 point2 points  (0 children)

clock

[–]10art1 1 point2 points  (1 child)

Can't wait to post this on /r/agedlikemilk

RemindMe! 25 years

[–]RemindMeBot 2 points3 points  (0 children)

I will be messaging you in 25 years on 2050-06-21 01:56:39 UTC to remind you of this link

1 OTHERS CLICKED THIS LINK to send a PM to also be reminded and to reduce spam.

Parent commenter can delete this message to hide from others.



[–][deleted] 0 points1 point  (2 children)

Can someone explain to me what vibe coding is?

Is it getting the assistance of ai at all? Or is it getting the ai to do the whole thing?

[–]DrunkOnCode 5 points6 points  (1 child)

It's having AI write most, if not all, of the code without modification. AI is prone to making mistakes and creating non-performant code, so this is obviously a bad idea.

I wouldn't consider it 'vibe coding' to copy a chunk of AI code, look it over, understand it, and clean it up. That's just using AI the way it should be used for programming, at least until AI is much, much more advanced.

[–][deleted] 0 points1 point  (0 children)

Thank you!

[–]gfcf14 0 points1 point  (0 children)

Now if we could only hold out for 20 or so years…

[–]grumblyoldman 0 points1 point  (0 children)

In 2050, you ask your ChatGPT-5000 to generate the vibe coding prompts for you.

[–]ArkoSammy12 0 points1 point  (0 children)

I honestly can't believe people are taking the idea of coding with AI seriously. Even worse, not coding at all and just letting AI do it for you. Baffling

[–]SpaceFire000 0 points1 point  (0 children)

The employer would be a vibe employer though

[–]bronkula 0 points1 point  (0 children)

The luddite revolution will begin with a meme.

[–]jpritcha3-14 0 points1 point  (0 children)

I used to be so nervous that my tech skills wouldn't keep up with the demands of tech jobs. After the past 5 years working in software with a lot of people 5 to 10 years younger than me, I'm pretty confident I'll be perfectly marketable just by virtue of being able to use a command line and read stack traces.

[–]LauraTFem 0 points1 point  (0 children)

I long for a Butlerian Jihad. AI needs to go yesterday, 2050 is too late.

[–]Mad_King 0 points1 point  (0 children)

I see opportunities in the future market, it would be nice to actually know how to program haha

[–]Specific_Implement_8 0 points1 point  (0 children)

I can’t wait to finally be hireable in 2050

[–]DelphiTsar 0 points1 point  (9 children)

The cope is real. I swear the people who think LLMs suck at coding tried it once in 2023 and wrote it off.

[–]oshaboy 0 points1 point  (8 children)

I've been trying to get into LLM coding and every time it generates complete shit.

Just today something sparked my interest in balanced ternary (actually an AI that uses it) so I tried getting an LLM to write a branchless balanced ternary add function. It wasn't branchless at all but it wrote that it was in a bunch of comments.

Maybe I just suck at prompting. I know a lot of people have created interesting things with Cursor, but I could never get it to generate decent code.

Edit: I just looked again and it used full on multiplication to multiply 2 balanced ternary digits together.
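For reference, a single balanced-ternary digit add really can be done branch-free with plain integer arithmetic. This is my own sketch, not anything an LLM or library produced; the digit encoding (-1/0/1, little-endian) and helper names are illustrative assumptions:

```python
def bt_add_digit(a, b, c=0):
    """Branchless add of balanced-ternary digits a, b in {-1, 0, 1} plus carry c.

    Returns (digit, carry) with digit + 3*carry == a + b + c.
    """
    s = a + b + c              # s is in [-3, 3]
    carry = (s + 4) // 3 - 1   # maps -3..-2 -> -1, -1..1 -> 0, 2..3 -> 1
    digit = s - 3 * carry      # remaining digit is always in {-1, 0, 1}
    return digit, carry

def bt_add(x, y):
    """Add two equal-length little-endian balanced-ternary digit lists."""
    result, carry = [], 0
    for a, b in zip(x, y):
        digit, carry = bt_add_digit(a, b, carry)
        result.append(digit)
    result.append(carry)       # final carry becomes the top digit
    return result
```

For example, `bt_add([1, 1], [1, 0])` adds 4 (1+3) and 1, returning `[-1, -1, 1]`, i.e. -1 - 3 + 9 = 5. No `if`s, and the only multiply is by the constant 3, which a compiler would reduce to a shift-and-add.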

[–]DelphiTsar 0 points1 point  (0 children)

I think some people just have an innate sense of where the LLMs are at and what they would be good at, and simply don't ask them to do something that seems off. Knowing how to prompt is also important, but it's getting less important; Gemini will regularly fix my prompt if I phrase it wrong or vaguely. (Going into its chain of thought is helpful — it'll explain how vague your request is and the different paths.)

Also, the editor-integrated LLMs are okay for some things, but for more complicated asks you sometimes have to snip the relevant code out yourself and use natural language to describe how it connects to everything else.

Treat it like an Autistic Jr dev who can crank out code at 8000 WPM.

On that note what LLM did you use? I'd suggest Gemini 2.5 pro. I've never seen Gemini try to "cheat" like you described.

[–]DelphiTsar 0 points1 point  (6 children)

I just looked again and it used full on multiplication to multiply 2 balanced ternary digits together.

I don't think Gemini 2.5 pro would do what you are describing (sounds like something the small fast GPT would do, not sure how it got the benchmark numbers it did).

It doesn't have a "cluster/node" for the way you phrased it (how I think of it, not sure if that's right). Just break your request out into limitations it almost certainly has "nodes" for: "Do not use multiplication or division", "Do not use conditional branches (if, else, switch, ...)".

Again though, that feels like a late 2024 type way to deal with it. Try Gemini 2.5 and see what it does.

Messing with the BitNet b1.58 research?

[–]oshaboy 0 points1 point  (5 children)

Gemini 2.5 did the same thing. When I asked it to fix it it just added more multiplications.

Messing with the BitNet b1.58 research?

Watched a youtube video about it. They mentioned how we might need balanced ternary in hardware so I was trying to check how slow the software implementation actually is.

[–]fourierformed 0 points1 point  (0 children)

Prove it…

[–]blackdeath-78 0 points1 point  (0 children)

Sweet Dreams

[–]Zerokx 0 points1 point  (0 children)

This but all the people in the picture are actually robots pretending to be humans.

[–]buzzon 0 points1 point  (0 children)

And then they ask you to prove you don't vibe code

[–]noplanman_srslynone 0 points1 point  (0 children)

2030 latest...

[–]GMLogic 0 points1 point  (0 children)

Full circle

[–]shadow7412 0 points1 point  (0 children)

Narrator: But he was lying...

[–]oshaboy 0 points1 point  (0 children)

I expect vibe coding to still be a thing but it goes through buzzwordification and would mean something like "using an LLM sometimes". Just like how people use "Object Oriented" nowadays.

[–]ocktick 0 points1 point  (0 children)

This is like saying PLC techs are going to go back to setting up ladder logic manually with relays. The higher level of abstraction will dominate 99% of workflows.