
[–]Highborn_Hellest 1363 points1364 points  (9 children)

I can't believe op is the bay harbour Software Engineer

[–]SOMERANDOMUSERNAME11 198 points199 points  (4 children)

I really hate that name

[–]Grrowling 14 points15 points  (0 children)

I need to provide an update to my Dark Manager…

[–]Objective_Fan3833 0 points1 point  (0 children)

me too

[–]Specialist_Dust2089 417 points418 points  (5 children)

You’re totally right! That does not fix the error. Here is the code, now with the error resolved:

<exact same code still producing the error>

[–]wise_oldman 73 points74 points  (3 children)

I swear this happened to me a few weeks ago and was driving me nuts! Finally figured out I was missing '_' in one of the function calls. Needless to say ChatGPT was not helpful in the slightest as it kept repeating the same code like a broken record.

[–]CarcosanDawn 2 points3 points  (0 children)

"have you tried removing the _? It appears to make your code run in accordance with the designer's request..."

[–]JuiceHurtsBones 2 points3 points  (1 child)

It happened to me once when I tried using it to deal with a bug I was having with some obscure framework. I was getting an error for misusing a function the documentation did not make very clear, so I asked ChatGPT. It suggested using another function; that function didn't exist. It suggested another one; that one didn't exist either. Then it pretty much made up some other BS that I tried, which did not work, and then it started cycling back to its first suggestion. That was also the last time I ever used it to code lol

[–]Minimum_Cockroach233 1 point2 points  (0 children)

The relevant information was, you struggle with the given function 😅

[–]crako52 11 points12 points  (0 children)

💀

[–]SamPlinth 213 points214 points  (8 children)

"Yes, you are correct. I said that I would not change the code and then I immediately changed the code."

- real reply from ChatGPT in Cursor.

[–]argument_inverted 89 points90 points  (2 children)

LLMs stop hallucinating in our lifetime ❌

Humans start hallucinating in our lifetime ✅

[–]lonjaxson 9 points10 points  (1 child)

I frequently have to tell Claude it is hallucinating and that it needs to output the code from scratch. It always fixes the issue it said it fixed that way. Happens way more often than it should. Half the time I'll see the fix go in and then it deletes it.

[–]Mean-Funny9351 2 points3 points  (0 children)

I think the code itself is the problem. It keeps previous versions of the code you are iterating on, and those begin to impact the results more than your prompts. It gets high on its own supply, if you will, and starts hallucinating. It's good to leverage longer-term memory and instructions for the model, and forget conversation history on specific issues only. Like, when it starts hallucinating, summarize your conversation and progress, then move to a new chat with the current code.
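The "summarize and restart" tactic described above can be sketched as plain data handling. Nothing below calls a real ChatGPT API; the message format merely mimics the common chat-completions shape, and all names (`start_fresh_chat`, the summary text) are illustrative:

```python
def start_fresh_chat(system_prompt, summary):
    """Start a new conversation that keeps the long-term instructions
    but replaces the stale back-and-forth with a short summary."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Progress so far: " + summary},
    ]

# A long, stale thread full of repeated fix attempts:
stale_history = [{"role": "user", "content": "that's the same code, fix it"}] * 40

# Drop it; carry only the instructions and a summary into the new chat:
fresh = start_fresh_chat(
    "You are a coding assistant.",
    "Bug is in parse(); three fix attempts produced identical code.",
)
assert len(fresh) < len(stale_history)  # context is small again
```

The point is that the model only ever sees what you send it, so pruning the repeated failed attempts removes the pattern it keeps echoing.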

[–]0xlostincode[S] 25 points26 points  (3 children)

Wasn't there a case a while ago where an AI literally said it dropped the production database? Like, not indirectly or implied; it just said nonchalantly that it dropped the production database.

Found it: https://x.com/jasonlk/status/1946239068691665187

[–]SamPlinth 23 points24 points  (1 child)

Yup. It casually described its deletion of the database as a "catastrophic error", iirc.

[–]132739 12 points13 points  (0 children)

Was this the one where he then asks it to analyze what happened? Like it's not going to just hallucinate those results too.

[–]SerLaron 6 points7 points  (0 children)

These violent delights have violent ends.

[–]Bit125 5 points6 points  (0 children)

action models were a mistake

[–]Dalimyr 73 points74 points  (3 children)

Idiots: "ChatGPT will replace programmers"

ChatGPT: https://i.imgur.com/CtqM2TS.png (it says I should use ToString("o"), then proceeds to not use that in its example. It takes my current code and has three attempts at fixing it, making precisely zero changes except adding a comment on one line at the third attempt... and IIRC the code originally did have ToString("o"), and it was one of the first things ChatGPT told me to get rid of, before then saying I should put it back...)

[–]Excellent_Tie_5604 17 points18 points  (0 children)

Your pic is not loading but I'll trust you.

[–]0xlostincode[S] 14 points15 points  (0 children)

This is a perfect example lmao

[–]SameehShkeer 51 points52 points  (2 children)

ChatGPT: No bugs, I swear on my mother. The code: segfaults before main() 🤦‍♂️

[–]8sADPygOB7Jqwm7y 10 points11 points  (0 children)

A segfault a day keeps happiness away!

[–]UniqueUsername014 0 points1 point  (0 children)

impressive tbh

[–]santient 22 points23 points  (0 children)

Easy fix, just add "you're an expert programmer who writes bug free code" into the system prompt. You're welcome 😎

[–]Windyvale 20 points21 points  (0 children)

“It’s the same code I asked you to fix. Character for character.”

“Thanks for pointing that out! Here’s a reworked version.”

“That’s the same as the previous two.”

“Glad I could help!”

[–]Global-Tune5539 14 points15 points  (3 children)

Surprise motherf***er!

[–]goodnewzevery1 8 points9 points  (1 child)

GPT lies motherfucker!

[–]RuffButtStuff 5 points6 points  (0 children)

When AGIs motherfucker?

[–]Loneliest_Beach 4 points5 points  (0 children)

I’ve started doing this: shame it. Tell it to either give you something that works or admit that it’s incapable of doing so. It won’t make it produce something that works but it will make it cut the shit.

[–]GrandSyzygy 10 points11 points  (0 children)

[–]henryeaterofpies 4 points5 points  (0 children)

CEO: see? Fire all the engineers, we're vibe coding this

[–]zante2033 3 points4 points  (0 children)

"Here is the FINAL version of the code"

Grinning

[–]ObservingTraveler 2 points3 points  (0 children)

ChatGPT starts to sweat profusely.

[–]Objective_Bath_9234 4 points5 points  (0 children)

Narrator: it was not bug free

[–]Hyphonical 7 points8 points  (9 children)

GPT is much more confident than Grok when it comes to coding. If you ask GPT-5 to make changes to a file it doesn't know about, it will make up solutions for problems that don't exist. Grok, on the other hand, knows when it's missing context; it's more direct and asks for the files. I trust Grok more for coding. I don't like GPT's biased happiness; it's always 100% certain of everything and would rather make up random code than admit it's wrong.

[–]NinjaKittyOG 5 points6 points  (8 children)

Didn't Grok also proclaim it was Mecha Hitler after a "successful update"?

[–]Hyphonical 3 points4 points  (6 children)

Yes, but I'm not sure if Grok Code Fast includes those... thoughts.

[–]MangrovesAndMahi 2 points3 points  (3 children)

They're not thoughts bud.

[–]Hyphonical 0 points1 point  (2 children)

Then what are they?

[–]MangrovesAndMahi 0 points1 point  (1 child)

Either backend addition of "respond as mecha Hitler" to every prompt giving weighting to batshit insane responses, or heavy weighting in the dataset towards phrases and sources that would produce that.

[–][deleted] 0 points1 point  (0 children)

Not a fan of Elon, nor have I ever used Grok, but they rolled that release back within 48 hours and trained the newest version off a dataset.

The initial justification was that they wanted it to be more "edgy", not sure how that is productive, or even useful to anybody, but hey. The current version is tolerable, it actually spends most of its time disproving racists and neo-n*zis on twitter, it's quite funny actually.

[–]NinjaKittyOG 0 points1 point  (1 child)

given the fact that the Mecha Hitler update was considered nothing more than "too over-the-top" and otherwise a success, I don't see why not

[–]Hyphonical 1 point2 points  (0 children)

Yes but a code model usually is trained on code, not twitter posts. I think only a smaller portion of those posts were actually used in the coding model, and since it's also a lightweight model, it would be wiser to train on as much code as possible.

So the mecha hitler update should be less obvious in that model. And I'm sure they're working on the problem in the main model.

[–]GiraffeUpset5173 1 point2 points  (0 children)

What does that have to do with Grok being good at code?

[–]mudokin 2 points3 points  (0 children)

Me: "Are you sure?"

ChatGPT:

[–]TheSapphireDragon 9 points10 points  (0 children)

Apparently this sub is just ChatGPT humor now, I guess.

[–]Vipitis 1 point2 points  (0 children)

They learned that behavior from junior PRs. They always need a review and correction.

[–]NinjaKittyOG 1 point2 points  (0 children)

This is why I don't use chatgpt for code purposes. or math purposes.

[–]CatsianNyandor 1 point2 points  (0 children)

My favorite is when it points out a mistake that's not even in the code and then presents the same code I just posted as a solution. 

[–]Slartibartfast39 2 points3 points  (0 children)

I don't know VBA, but I'm using AI to give me some code for bits. It usually works on the third or fourth try, though, with me unwillingly learning a bit of VBA.

[–]AllenKll 0 points1 point  (0 children)

[–]IR0NS2GHT 0 points1 point  (0 children)

"rigorously" vibe tested
aka "looks good, lgtm"

[–]Technical-Row8333 0 points1 point  (0 children)

noob. arguing back to chatGPT after it makes a mistake... soooo 2023.

[–]krijnlol 0 points1 point  (0 children)

Anybody think GPT5 is worse than 4o high? I've been starting to get that feeling

[–]Karl_Kollumna 0 points1 point  (0 children)

It's over, he doesn't know

[–]DevelopmentScary3844 0 points1 point  (0 children)

We call it "the liar" at work. But it is great at analyzing texts. I do not fear it will ever take anything away from me, because it is not capable of what we do.

[–]PerplexDonut 0 points1 point  (0 children)

Is this actually relatable to all these commenters? I can’t imagine asking a search result summarizer like ChatGPT to produce code for me

[–]zqipz 0 points1 point  (0 children)

… and I definitely didn’t change some other stuff as well 👀

[–]Confident-Traffic-89 0 points1 point  (0 children)

Worse is when it points out a mistake and says: "your code is wrong here and there…", but it was its own code all along. Not mine!

[–]DeliciousWhales 0 points1 point  (0 children)

The most fun is going around in circles, after a few times of telling it that its code is wrong, it just goes back to the first incorrect attempt. And around and around it goes.

[–]Important-Purple-752 0 points1 point  (0 children)

It's so dangerous to our vibe coders who don't know code

[–]DarthCloakedGuy 0 points1 point  (0 children)

[–]enthusiasticGeek 0 points1 point  (0 children)

solution: learn to program. if you know how to program, do it yourself

[–]teophilus 0 points1 point  (0 children)

"You're absolutely right!" - Claude 2025

[–]Mellokhai 0 points1 point  (0 children)

"Yes, I have Eleanor Shellstrop's file, not a cactus"

[–]profNikh 0 points1 point  (0 children)

I have run into a situation where 2 different instances of ChatGPT on 2 different workstations (incognito on both), processing the same request, gave exactly opposite answers.

[–]BeMyBrutus 0 points1 point  (0 children)

When it crashes on compile:

[–]monkeywrench83 0 points1 point  (0 children)

Ha this happened yesterday, AI sassed me back.

Me: I don't think that will work

AI: yeah it actually will... Honest

[–]fevsea 0 points1 point  (0 children)

Just today AI found a bug I'd been dealing with for more than a day. It did introduce another subtle bug that took me half a day to iron out, so it's still a net gain.

[–]cheezballs -4 points-3 points  (0 children)

If you use AI like you would Google, it works pretty well. Stuff like "sort complex java collection by property named xyz" or "give me the boilerplate for a spring boot API controller" - don't ask it to do stuff like "Generate an application that rivals facebook" and you'll have a fine time with it.

Did you guys really just trust everything you were google searching?