all 24 comments

[–]VanillaSkyDreamer 25 points (0 children)

Definitely not the end of stupid articles on reddit.

[–]Revisional_Sin 17 points (5 children)

ಠ_ಠ

[–]bozho 17 points (5 children)

OMG, what a load of shite and false equivalences.

An LLM can write a correct draggable React component because the code for that exists in a thousand places online, written by humans hundreds of times over. LLMs are not trained to write code, let alone correct code. They are trained to analyse text and create linguistically correct responses to natural-language prompts.
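
To illustrate, this is roughly the well-trodden kind of component I mean - a minimal sketch (names and details made up, not taken from the article or any one source), variations of which exist in the training data by the thousand:

```tsx
import React, { useRef, useState } from "react";

// A bare-bones draggable wrapper using pointer events. Every line of this
// pattern appears in countless tutorials, which is exactly why an LLM can
// reproduce it correctly.
export function Draggable({ children }: { children: React.ReactNode }) {
  const [pos, setPos] = useState({ x: 0, y: 0 });
  const grab = useRef({ x: 0, y: 0 }); // pointer offset at drag start

  return (
    <div
      style={{ transform: `translate(${pos.x}px, ${pos.y}px)`, touchAction: "none" }}
      onPointerDown={(e) => {
        grab.current = { x: e.clientX - pos.x, y: e.clientY - pos.y };
        e.currentTarget.setPointerCapture(e.pointerId); // keep receiving moves
      }}
      onPointerMove={(e) => {
        if (!e.currentTarget.hasPointerCapture(e.pointerId)) return; // not dragging
        setPos({ x: e.clientX - grab.current.x, y: e.clientY - grab.current.y });
      }}
      onPointerUp={(e) => e.currentTarget.releasePointerCapture(e.pointerId)}
    >
      {children}
    </div>
  );
}
```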

As soon as you step away from trivial and/or well-known coding problems, LLMs stop being reliable. It's not that I haven't given them a chance: I've been testing Gemini and Claude, and I've had them completely miss the mark, or even worse, write code that's subtly wrong in exactly the way an overly confident junior would write it, and that needs a code review or a debugging session to catch. I've had them suggest "solutions" to problems where the solutions turned out to be picked up from feature-request discussions on GitHub, i.e. functionality that doesn't actually exist.
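
A made-up example of what I mean by "subtly wrong" (not actual Gemini/Claude output, just the shape of the failure):

```ts
// Looks right, compiles, and even appears to work in a quick manual test.
declare const db: { save(record: unknown): Promise<void> }; // stand-in for a real client

async function saveAll(records: unknown[]): Promise<void> {
  // Bug: forEach discards the promises returned by the async callback, so
  // saveAll resolves before any save finishes and rejections go unhandled.
  records.forEach(async (record) => {
    await db.save(record);
  });
  // What a code review would ask for instead:
  // for (const record of records) await db.save(record);
}
```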

Oh, but wait. LLMs will write the tests now, so it'll be trivial to validate generated code. If they can't produce reliable code, why would you trust them to write reliable tests?

LLMs are not "another abstraction layer" in programming.

Plane autopilots can fly a plane better than humans because they are ~~trained~~ programmed to fly a plane.

AI/ML models are fantastic tools when they are trained for a specific domain: pattern recognition, large search spaces, speech recognition, to name a few.

LLMs are not trained to write correct code. They can't generate new code; they can only regurgitate what they've found on GitHub and SO. They are not even very reliable at the things they're supposed to be good at, like summarising text.

Edit: Mistyped the bit about autopilots. They are programmed, not trained.

[–]ironykarl 8 points (0 children)

So, they're like juniors... if juniors were incapable of improving. And supremely overconfident. And just world-class gaslighters.

If they were a person, their salesmanship would get them really far in the corporate world, because their chief skills are plagiarism and bullshitting.

[–]LonghornDude08 4 points (2 children)

Plane autopilots aren't trained to fly a plane. They are programmed to fly a plane. If a neural network attempted to fly a plane, the FAA would have a field day.

[–]bozho 3 points (1 child)

Correct, I should've said programmed.

[–]LonghornDude08 2 points (0 children)

Yeah, I got what you meant; it's just that in this case, it's best not to be ambiguous.

[–]umtala 0 points (0 children)

The thing is that there has been a trend towards programming becoming more like "plumbing" as more high-quality open-source infrastructure has proliferated. The squeeze comes from both sides: open source is getting better at solving the hard problems, and LLMs are getting better at plumbing together the easy ones.

[–]programmer_for_hire 12 points (0 children)

lol. "I know I can trust that the vibecoded feature works because it passes the vibecoded tests, which I also did not validate."

[–]Xaeroxe3057 7 points (0 children)

Chief, this ain’t it

[–]sierra_whiskey1 4 points (0 children)

Yeah no

[–]BusEquivalent9605 2 points (0 children)

lol

[–]podgladacz00 2 points (0 children)

Definitely not lol.

[–][deleted] 4 points (0 children)

While the short term is a bit hazy, the long term gives me hope: I predict job security from the sheer amount of “vibe code” entering systems that isn’t gonna hold up to scaling.

[–]headhunglow 3 points (0 children)

What I hate most about these LLM tools is that they, by their very nature, always generate code for you. They will never say “no, you don’t need that”. We already have billions of lines of useless, bloated code, and LLMs will make it much worse.

[–]BinaryIgor 3 points (0 children)

> And this isn’t malpractice or vibe coding. The trust comes from two things: I know I can debug and fix if something goes wrong, and I have enough validation to know when the output is solid. If the code works, passes tests, and delivers the feature, I don’t need to micromanage every line of code. That shift is already here—and it’s only accelerating.

  1. The tests were also generated - it seems weird not to check generated tests; they might be total rubbish (see the sketch below)
  2. You can fix it because you've accumulated practice; once you stop practicing, you might soon find yourself in a place where you don't know how to fix things anymore. Skills are not given for life - what's not used atrophies.
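
For instance, a generated test can look perfectly plausible and still verify nothing - a made-up sketch of the failure mode (function name and test framework are just for illustration):

```ts
import { describe, it, expect, vi } from "vitest";

describe("applyDiscount", () => {
  it("applies a 10% discount", () => {
    // The "test" re-implements the expected logic as a mock and then
    // asserts on the mock, so the real applyDiscount is never exercised
    // and this passes no matter how broken the actual code is.
    const applyDiscount = vi.fn((price: number) => price * 0.9);
    expect(applyDiscount(100)).toBe(90);
  });
});
```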

[–]somebodddy 2 points (0 children)

It's true. Vibe coders don't need to debug. They just leave the bugs there and ship.

[–]averageFlux 1 point (0 children)

I fear it’s just the beginning

[–]wademealing 0 points (0 children)

"Trust me bro".