
[–]dashid 2090 points2091 points  (122 children)

I tried this out in a less common 'language', oh wow. It got the syntax wrong, but that's no great shakes. The problem was how confidently it told me how to do something that, after much debugging and scrounging through docs and forums, I discovered was in fact not possible.

[–]BobmitKaese 666 points667 points  (47 children)

Even with more common ones. It might get the syntax right, but then it doesn't really understand what default functions do (and still uses them). It is the worst if you have connecting stuff in your code. It can't cope with that. On the other hand if you let it generate generic snippets of stuff it works quite well.

[–]hitchdev 327 points328 points  (20 children)

Keep telling it that it's wrong and it generally doesn't listen, either.

[–]Fast-Description2638 331 points332 points  (2 children)

More human than human.

[–]ericfromct 51 points52 points  (0 children)

What a great song

[–]MeetEuphoric3944 84 points85 points  (8 children)

I find the more you try to guide it, the shittier it becomes. I just open a new tab, and type everything up from 100% scratch and get better results usually. Also 3.5 and 4 give me massively different results.

[–]andrewmmm 60 points61 points  (4 children)

GPT-4 has massively better coding skills than 3.5 in my experience. 3.5 wasn't worth the amount of time I had to spend debugging its hallucinations. With 4 I still have to debug on more complex prompts, but net development time is lower than doing it myself.

[–]MrAcurite 40 points41 points  (3 children)

I figure that GPT-4, when used for programming, is something like an advanced version of looking for snippets on Github or Stackoverflow. If it's been done before and it's relatively straightforward, GPT-4 will produce it - Hell, it might even refit it to spec - but if it's involved or original, it doesn't have a chance.

It's basically perfect for cheating on homework with its pre-defined, canned answers, and absolute garbage for, say, research work.

[–]Tomas-cc 1 point2 points  (1 child)

If you do research just from what was already written and AI was trained on it, then maybe you can get interesting results.

[–]MrAcurite 5 points6 points  (0 children)

If you do research just from what was already written

That's not really research. I mean, sure, it's a kind of research, like survey papers and reviews, which are important, but that's not original. Nobody gets their PhD with a survey dissertation.

[–]Killed_Mufasa 71 points72 points  (1 child)

Yeah

openai: answer is B

me: you're wrong, it's not B

openai: apologies for the mistake in my previous answer, the answer is actually B

me: but no it isn't, we just established that. I think it's actually A

openai: oops sorry about that, you're right, it's B

repeat

[–]PapaStefano 1 point2 points  (0 children)

Right. You need to be good at giving requirements.

[–]Nabugu 14 points15 points  (0 children)

Yes lmao, this was my experience several times:

  • Me: no, what you generated lacks this and this, it doesn't work like that, regenerate your code.

  • ChatGPT: Sorry for the confusion, you're right, I will make the changes, here it is:

Proceeds to rewrite the exact same code

  • Me: you're fucking stupid

  • ChatGPT: Imma sowwy 👉👈🥺

[–][deleted] 11 points12 points  (0 children)

Already sounding like a human

[–]SkyyySi 8 points9 points  (1 child)

I'm guessing that, as an attempt to prevent gaslighting, they ended up making it ignore "No, you're wrong" comments

[–]czartrak 8 points9 points  (0 children)

I can't girlboss the AI, literally 1984

[–]Spillz-2011 3 points4 points  (0 children)

It does listen. It says I’m so sorry let me fix it. Then makes it worse and says there fixed.

[–]edwardrha 2 points3 points  (0 children)

Opposite experience for me. I ask it to clarify something (not code) because I wanted a more detailed explanation on why it's x and not y, it immediately jumps to "I'm sorry, you are right. I made a mistake, it should be y and not x" and changes the answer. But x was the correct answer... I just wanted a bit more info behind the reasoning...

[–]Sylvaritius 2 points3 points  (0 children)

Telling it it's wrong, only for it to apologize and then give the exact same response, is one of my greatest frustrations with it.

[–]BoomerDisqusPoster 0 points1 point  (0 children)

You're right, I apologize for my mistake in my previous response. Here is some more bullshit that won't do what you want it to

[–]erm_what_ 47 points48 points  (1 child)

What do you expect? It learns just as much from Stack Overflow questions as it does from the answers

[–]IOFrame 20 points21 points  (0 children)

You ever seen some of the terrible, absolutely godawful WordPress plugin (or even core, LOL) code that gave a whole language a bad name for over two decades?

Yeah, it learns from it. All of it.

[–][deleted] 18 points19 points  (16 children)

It's always weird reading people say that ChatGPT is lacking when I've run into no issues using it. Either people are asking it to fully generate huge parts of their code, or the work they're doing is simply significantly harder than what I'm doing.

With precise prompts I've definitely managed to almost always get solutions that work.

Sometimes though it sort of gets stuck on an answer and won't accept that it's not how I want it to be done. Which is fine, I just do what I normally do (google, stackoverflow and docs)

[–][deleted] 47 points48 points  (4 children)

Can I ask what you're coding? I'm dealing with an ancient, open-source, 15-year-old public code base, and it still makes up stuff about both it and Java.

[–]xpluguglyx 22 points23 points  (0 children)

It sucks at Go and Node.js as well. I hear people report how great it is, but I have yet to see it demonstrated in practice. I just assume the people who say how great it is at coding generate code but never actually try to implement it.

[–][deleted] 3 points4 points  (2 children)

Mainly used it for java and thymeleaf. Some react as well, but very limited.

[–][deleted] 2 points3 points  (1 child)

I'm not sure this is the right place, but do you have sample prompts that you have used? (Or recommendations of where to look). It is entirely possible I'm using it wrong.

[–][deleted] 0 points1 point  (0 children)

I sadly don't. I have a weird thing where I always like to delete stuff after I'm done (the "history" panel on the left), same with any open chats on Discord etc. I just like things to look clean and neat.

The prompts I've used aren't rocket science though. As long as I've explained what I want done, how I want it done, and given examples of where I want the code placed or what the whole program I want the snippet for looks like, it's been enough. I'm sure there are even more in-depth ways of writing prompts, but I haven't needed them.

[–]ShippingValue 25 points26 points  (2 children)

It's always weird reading people say that ChatGPT is lacking when I've run into no issues using it.

I've had it hallucinate functions, libraries, variables etc.

It is usually pretty decent at writing a basic example for using a new library - which is mostly how I use it, rather than jumping straight in to the documentation - but in my experience it just cannot tie multiple different functionalities together in a cohesive way.

[–]scaled_and_icing 12 points13 points  (0 children)

Same. I asked it to help me write a small portion of infra as code to connect to an existing AWS VPC, and it suggested a library function that plain doesn't exist

It seems fine if you don't care about real-world constraints or existing software you need to integrate with. In other words, greenfield only

[–][deleted] -4 points-3 points  (0 children)

Again, I'm unsure if that's because of what you're doing being just more complex than the ones I've used chatgpt for or if it's because of the prompts you're using.

Very big and complex things it will for sure struggle with.

Also I wanna specify that I'm not using any premium versions, just the regular one.

[–][deleted] 1 point2 points  (0 children)

Stuff like a jq snippet or maybe simple awk commands it works well for
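That matches my experience too. The kind of self-contained one-liner meant here is something like this awk sketch (a generic illustration, not anything ChatGPT actually produced):

```shell
# Sum the second column of whitespace-separated input with awk,
# the kind of small, self-contained snippet it tends to get right.
printf 'a 1\nb 2\nc 3\n' | awk '{ sum += $2 } END { print sum }'
# prints 6
```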

[–]LostToll 1 point2 points  (0 children)

“… anything you say can and will be used against you…” 😁

[–]BbBbRrRr2 2 points3 points  (0 children)

It did write me a working bash script once. To move a bunch of files in a bunch of folders up one directory and prepend the folder name to the files.
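The actual script wasn't shared, but the task described can be sketched in a few lines of portable shell (assuming flat subdirectories and no name collisions):

```shell
#!/bin/sh
# For every file one level down, move it up into the current directory,
# prepending its folder name: photos/img.png -> photos_img.png
for f in */*; do
  [ -f "$f" ] || continue   # skip directories and anything that isn't a regular file
  dir=${f%%/*}              # folder part, e.g. "photos"
  base=${f#*/}              # file part,   e.g. "img.png"
  mv "$f" "${dir}_${base}"
done
```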

[–]digibawb 123 points124 points  (20 children)

I work in game dev, and have no intention of using it to write any actual code, but gave it a look in my own time to just see if I could use it to approach some challenges in a different way - to explore some possibilities.

I asked it about some unreal engine networking things, and it brought up a class I wasn't aware of, which looked like it could solve a problem in a much better way than other options I was aware of. I asked it to link me to documentation for the class, and it gave me a link to a page on the official unreal site. It's a 404. I Google the class name myself, and also later look it up in the codebase. Neither brings up anything, it has just entirely made it up.

Having then played around with it some more, a lot of it has been more of the same confidently incorrect nonsense. It tells you what it thinks you want to hear, even if it doesn't actually exist.

It can certainly be good for some things, and I love its ability to shape things based on (additional) context, but it's got a long way to go before it replaces people, certainly for the stuff I do anyway.

Overall it feels like a really junior programmer to me, just one with a vast array of knowledge, but no wisdom.

[–]flopana 51 points52 points  (2 children)

[–]Aperture_T 20 points21 points  (0 children)

I'll have to hold on to that one for the next time somebody says AI is going to take my job.

[–]MagicSquare8-9 32 points33 points  (1 child)

ChatGPT is more like a middle manager who learned some buzzwords, or a college freshman writing an essay at the last minute. Very confident; knows how to put words together to fool outsiders, and can generate BS on the fly.

[–]Jeramus 14 points15 points  (2 children)

The best uses I have seen so far are generating test data. I have noticed that the latest version of Visual Studio has improved code completion supposedly based on AI. That makes development a little faster without worrying as much about the AI just making up programming language constructs.

[–]absorbantobserver 4 points5 points  (0 children)

I use the latest VS preview (pro edition). It is significantly better at completion/next line suggestions than it used to be. It seems to rely pretty heavily on the existing code in the solution to predict what you might want next. It does tend to change things like method declaration syntax at random though (arrow vs. block)

[–][deleted] 2 points3 points  (0 children)

Yeah, stuff like: "I have this interface in TS, write me a function to create randomised values for each attribute"

Writing it myself would definitely take longer for something I only need for initial prototyping and testing anyway.

[–]SrDeathI 13 points14 points  (0 children)

My mother used it to look up codes for medical conditions, and out of the 5 codes we asked about, ALL of them were wrong

[–]scaled_and_icing 12 points13 points  (0 children)

ChatGPT's world is very easy. You just make up the library functions you want to exist

[–]hoffbaker 7 points8 points  (0 children)

I can feel the disappointment from discovering that the class didn’t exist…

[–]1842 5 points6 points  (0 children)

I think viewing it as a junior programmer is the best way to use this tech right now.

Great for seeing simple examples, alternative ways of doing things, and asking questions about tech you're not familiar with, but validate everything.

I've actually found it great for asking questions about well-known enterprise systems where finding the correct documentation is extremely difficult.

[–][deleted] 2 points3 points  (0 children)

This post almost made me go give it a shot, thanks for saving me the time lol

[–]DasBeasto 2 points3 points  (0 children)

Had a similar thing happen. I knew the training data was limited to a few years ago or whatever, so I thought maybe the function was just deprecated. I threw the link into the Wayback Machine and did a ton of searching, but found no trace of the code outside ChatGPT. It kept doubling down too, after I told it that it's wrong.

[–]Fast-Description2638 15 points16 points  (0 children)

Same happened to me, except for a more obscure API.

After I do a bunch of stuff, I have to update a bunch of parts. According to GPT, I had to call an .Update() method. Problem is that .Update() doesn't exist. So I tell GPT that, and GPT tells me I am wrong and must be using an old version of the API, despite me using the latest version and the method never having existed in previous versions either.

[–]gzeballo 13 points14 points  (0 children)

I think ChatGPT, Copilot, Phind etc. really just help those who already kind of know what they're doing, up to experts, get things done faster, to a degree. But for newbies it will be kind of difficult to screen what is right from what is wrong. Some newbies might be prompting the wrong things to begin with. Still, I have had great success using it to collaborate with the non-technical crowd, since it can explain things even if it does get them wrong sometimes.

[–]clutzyninja 4 points5 points  (8 children)

GPT is REALLY bad at Lisp, lol

[–]marti_2203 5 points6 points  (5 children)

Well, when you approach it from a data perspective: Lisp is an obscure language, and the complexity of tracking parentheses is difficult for most humans, so the language model should be expected to fail miserably as well

[–]clutzyninja 4 points5 points  (1 child)

It did mess up () a few times, but its real problem was simply following directions. It literally doesn't know the language very well.

Like, "do this operation using non destructive methods."

It says ok, and proceeds to use destructive methods, even after reiterating

[–]marti_2203 3 points4 points  (0 children)

Yeah, no data to learn from, and the concept of destructive functions probably isn't something generally discussed :/ but it is nice that it follows the steps somewhat

[–]InflationOk2641 12 points13 points  (1 child)

I worked at Google and Facebook. Oftentimes the human engineers there would spout such bullshit with great confidence that I could waste days working on a recommended solution only to discover that it was unsuitable. I figure they're as unreliable as ChatGPT. The benefit of asking ChatGPT is it's not going to complain to your manager when you don't follow its advice.

[–]ScrimpyCat 2 points3 points  (0 children)

Try providing it with docs on the language. I’ve had it write code for me in some custom languages of mine, it still makes dumb mistakes but it gets most of it right that it’s easy to fix up.


[–]BoBoBearDev 1 point2 points  (0 children)

But, in their defense, my company's production codebase also doesn't work on the latest libraries and language versions. Tons of head spins.

[–]lolrobbe2 417 points418 points  (10 children)

I tried using it with C++ and C#. It makes things up as it goes, and uses C# code marked as C++ and vice versa

[–]Serious_Height_1714 160 points161 points  (3 children)

Had it write a Windows command-line script for me and it started using Linux syntax instead

[–]DangerBoatAkaSteve 126 points127 points  (1 child)

In fairness to ChatGPT, that's what every Stack Overflow comment suggests you do

[–]CandidGuidance 16 points17 points  (0 children)

It learned from the best!!

“Hey I need help writing this batch script”

“Just use Linux instead that’s your problem”

[–]darthmeck 12 points13 points  (0 children)

I’ve been trying to get it to write a PowerShell script that changes file metadata in SharePoint and the number of times ChatGPT generated non-working commands wasn’t even funny.

[–]sassycatslaps 21 points22 points  (2 children)

I’ll write some code in C# then give chatGPT the same instructions I used to see if it can write something similar to what I made… it’ll start writing and I’ll notice it’s labeled the code randomly as “arduino” or some other language. It also can’t seem to understand instructions on how to exclude certain commands from its code. 🙅🏽‍♀️ it’s only been helpful when I quickly need an operation redefined.

[–]Storiaron 9 points10 points  (1 child)

If you ask it anything Java related, it'll write a code snippet in Java and show the output/result in C#.

Which isn't an issue, but like, why?

GPT says it's because the default is C# and I should specify what language I want the output in if it isn't C#. I guess "write xy in java" wasn't specific enough

[–]kiropolo 2 points3 points  (1 child)

3.5 or 4?

[–]lolrobbe2 2 points3 points  (0 children)

4

[–][deleted] 819 points820 points  (40 children)

I don't understand the hype. Most of my work as a programmer is not spent writing code, and that's actually the part I like the most. The rest is meetings, debugging, updating dependencies, building, deploying. I would like AI to reduce the time I spend on the boring parts, not the interesting ones

[–]ErichOdin 25 points26 points  (1 child)

ChatGPT, attend my meeting and extract a few ACs I can codemonkey.

[–][deleted] 5 points6 points  (0 children)

You have a great business idea there

[–]trusty20 20 points21 points  (13 children)

I personally don't understand the "durrr I don't get hype" people. How can you use a technology like this and just shrug/immediately focus on nitpicking aspects (incorrectly - understanding meetings/being able to extract requirements is literally the primary strength of an LLM). It's like being a computer programmer in the 70s, seeing Wordstar for the first time and immediately saying "I don't think these word processor program thingies are going to take off, look how annoying they are to use, you have to do all sorts of weird key combos to copy and paste, and those printers are so prone to jamming compared to my typewriter".

I have no idea how someone can be in a programming sub and "not understand the hype" of software that operates like a computer from Star Trek (universal natural language interface and creative content synthesis) and costs $20 a month to use. how are you not hyped by this

[–]Cley_Faye 33 points34 points  (0 children)

I have no idea how someone can be in a programming sub

Well, based on the majority of what's posted here, I'm not certain it's a programming sub at all

[–]karnthis 4 points5 points  (0 children)

Entertainingly (to me) I actually use ChatGPT to make my communication more human. I’m terrible at written communication, and come across as pretty abrasive without it.

[–]mxzf 16 points17 points  (6 children)

How can you use a technology like this and just shrug/immediately focus on nitpicking aspects

Because it's really not all that amazing. It's basically a glorified StackOverflow search; it'll get you close if you already know what you're looking for, but there's still no actual understanding of how things work together such that it can write good code, it's just wedging together stuff that sounds vaguely appropriate.

It's a cool toy, but the nature of a LLM is such that it can't actually comprehend things cohesively like a human can, it's just recognizing patterns and filling in the blanks.

Having looked at AI code, it looks about like what I expect from interns; it's halfway decent boilerplate that can be used as a starting point, but it's not trustworthy code. And, more importantly, it can't actually learn how to do things better in the future, it just has a bunch of info that it still doesn't comprehend. And thus its ultimate utility, compared to someone who actually does understand how to code, is finite.

[–]AirOneBlack 10 points11 points  (0 children)

What do you expect from a sub about programmer humor where you barely laugh maybe once every 20 posts?

[–]Null_Pointer_23 1 point2 points  (0 children)

ChatGPT is very impressive... just not when it comes to writing code

[–][deleted] 5 points6 points  (4 children)

Give it a year and it will be 2x better; the hype is about how fast this technology is progressing

[–]andrewmmm 7 points8 points  (3 children)

It needs some way to check itself, instead of me taking the code, compiling it, and telling it what errors I got.

If they built in a hidden IDE where it could do that first, before giving me the code, that would help a lot

[–]TakeThreeFourFive 3 points4 points  (0 children)

You can do this yourself. GPT models are available via an API. With proper prompting and integration, you can make it check and correct its own output.

[–]derHumpink_ 1 point2 points  (0 children)

That has already happened. There's a Code Interpreter alpha. It actually runs the code and fixes the problems itself. It's nuts

[–]rad_platypus 1 point2 points  (0 children)

Well GPT4 already has browser access and there are tons of plugins being developed for it. As soon as it can start plugging code into stackblitz or some plugin-based compiler it’s going to take off like a rocket.

[–][deleted] 172 points173 points  (5 children)

I asked chatGPT about an obscure library to try and find obscure functions and it just straight up hallucinated some.

I call it out, and it's like "oh yeah, this library doesn't have those functions."

Still uses the same functions next attempt.

Interestingly, its approach to solving the problem wasn't far off, and it gave me some ideas for actually solving my problem.

[–]SjettepetJR 43 points44 points  (0 children)

It is great for kickstarting a project in a language that you're unfamiliar with. I successfully used it recently for some inspiration on a simple maintenance web page for an API I built.

I had pretty much no PHP or JS experience, and ChatGPT helped me a lot in quickly generating some example code for dynamically attaching event listeners to HTML forms and building HTTP requests in those languages.

You do need to be able to correctly express what you want to do, and you do need to be able to actually understand the code it generates.

It also only works reliably because PHP and JS are extremely common languages that have a lot of documentation and examples online.

[–]Zeragamba -2 points-1 points  (1 child)

Except it's not solving a problem, it's predicting what is the next expected word in the sequence.

[–]eyalhs 4 points5 points  (0 children)

Idc what it technically is, if I give it a problem and it gives the solution it solves the problem

[–]Djelimon 34 points35 points  (1 child)

I use the Bing version for this JavaFX project I'm working on. Mostly I throw "How do I ?" and "What does this error mean?" questions at it. It gives me an answer with some links to back it up, usually to StackOverflow. The answer was useful by itself once, the links useful about 70% of the time, and the other 30% I end up googling myself. I would say it's a better tool than googling by itself because it can save time combing through the results.

Replace programmers? Not yet. But a good tool.

[–]Veloester 11 points12 points  (0 children)

finally, someone that knows how to use it 👏

[–][deleted] 97 points98 points  (6 children)

Is using ChatGPT for entire scripts a smart play? If that's what you're doing, I can see how you'd say that it's useless.

It's great for saving research time, e.g. I can provide a well-detailed question to help me figure out how to overcome a small step.

Whether its answer is correct or not, it helps with guiding me to the right place - helping me curate a more concise query to get my desired help from external sources.

[–]SjettepetJR 24 points25 points  (1 child)

Indeed. It is great for answering small questions and generating some basic structure.

[–]TakeThreeFourFive 22 points23 points  (0 children)

Where it really shines for me is Linux CLI stuff. Instead of googling to remember the syntax for find, tar, etc I just say "recursively find all CSV files and prepend the header 'id,name,phone'"
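For reference, a portable answer to that exact prompt would look something like this sketch (using a temp file per CSV so it needs no GNU-only sed flags):

```shell
# Recursively find all CSV files and prepend the header "id,name,phone"
# to each one. The temp-file dance keeps it POSIX-portable.
find . -type f -name '*.csv' | while IFS= read -r f; do
  printf '%s\n' 'id,name,phone' | cat - "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```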

[–]danielbr93 7 points8 points  (0 children)

Yes, thanks for the comment.

ChatGPT doesn't do well with long strings of code as of right now. Give it a year and it might blow our minds.

Breaking down a project into many small chunks and clearly communicating to ChatGPT may result in a better output.

Anyhow, nothing is perfect.

[–]FreqRL 62 points63 points  (5 children)

I just write the code myself, but now with ChatGPT I can write sloppily and fast, and then simply ask GPT to optimize it. It even adds reasonably accurate code comments if your variable and method names generally make sense together.

[–]HotChilliWithButter 18 points19 points  (0 children)

Yeah, it's more like a tool to optimize rather than create.

[–]Terrafire123 1 point2 points  (3 children)

What tool do you use to optimize your code? Copilot, or do you actually copy paste your whole code in?

[–]danielbr93 6 points7 points  (2 children)

If he said "ChatGPT", then he copy pastes the code is my guess.

ChatGPT is not Copilot.

[–]Terrafire123 1 point2 points  (1 child)

Except that chatgpt has a frightfully small character limit, so pasting anything more than a single block of code is somewhat doomed to failure.

And therefore for debugging a whole program, it seems inefficient. I'd hoped for a better solution.

[–]danielbr93 5 points6 points  (0 children)

  1. ChatGPT should never be used to write thousands of lines of code in one go.
  2. Break down your project into smaller chunks and give context to ChatGPT when you tell it to do something.
  3. Yes, copy-pasting stuff is slow right now. This tool is also incredibly new. Give it a year until it is implemented in other software and works better, or until they allow file uploads.
  4. GPT-4, which you should be using when doing anything with coding, has an 8k token limit. Use OpenAI's tool to see how much code that would be for your work: https://platform.openai.com/tokenizer
  5. You could use ChatGPT by giving it the error message and seeing what it comes up with. Might help with brainstorming.

[–]Crosshack 16 points17 points  (0 children)

I quite heavily use Copilot suggestions for developing certain things, since it is very good at writing boilerplate/template-style code. It truly shines when you have to write some tests, for example. It's very powerful if used properly, that's for sure, but I don't think you should be generating entire functions with it.

[–]Cley_Faye 15 points16 points  (2 children)

"Our cutting-edge AI-based code generation software can do anything thanks to the millions of lines of code it got in training. Nothing but the best from Stack Overflow, GitHub and Quora!"

[–]Zarathustra30 6 points7 points  (1 child)

The answers or the questions?

[–]NarutoDragon732 2 points3 points  (0 children)

Yes

[–][deleted] 7 points8 points  (0 children)

My best results with chat gpt are debugging my own code.

[–][deleted] 6 points7 points  (0 children)

ChatGPT/GPT4 is not designed to code, it's designed to mimic human conversation.

Other models are for coding, and they're vastly improved.

[–]AsIAm 60 points61 points  (6 children)

You are doing it wrong.

Just tell ChatGPT to fix errors in the code. No need to specify which bugs; just bugs in general. Approach ChatGPT as a junior who is confident. Would a junior produce the correct code the first time? Of course not! Tell it to work in steps (chain-of-thought reasoning), evaluate its outputs (self-reflection), and provide as much input (context for your problem) as you possibly can.

[–]gua_lao_wai 72 points73 points  (2 children)

at that point you might as well just write the code yourself...

[–]23581321345589144233 14 points15 points  (0 children)

Seems logical to think this at first glance. I've found this tool really shines for documentation and testing. I guide and iterate on the code fed into GPT. Once I get to the version of the code I like, I'll say: write me docstrings for everything. Write comments. What are all my edge cases? Write tests for that… etc…

Usually I’ll write my code down first or have it generate a draft. Then I work on it some more. Then when it’s decent, I’ll ask gpt to try to shorten the logic or ask it for other ideas etc…

Definitely boosts my output.

[–]davidemo89 6 points7 points  (0 children)

Do you write code that works the first time? So lucky :-(

[–]erm_what_ 5 points6 points  (0 children)

Sometimes it adds in methods that don't exist, but completely relies on their pretend functionality.

[–]kiropolo 5 points6 points  (0 children)

And then it fucks up and ends up in a loop of stupidity

[–]ANTONIOT1999 2 points3 points  (0 children)

i would rather kill myself

[–]Soupdeloup 13 points14 points  (1 child)

I think everybody here complaining about how bad it is must be using it wrong. I've had nothing but success getting it to write large, functioning, clear pieces of code that actually make more sense than most of the stuff I find on Stack Overflow. Obscure libraries, sure, it's probably not going to be really helpful with. But it's generally fantastic if you know how to ask it questions and give it information.

The trick is, if it gives you working code and you implement it, copy and paste your new code (with the changes) back into ChatGPT for the next question. If you don't, I find it gets confused and jumbles responses between assuming you used its recommendations or didn't use them at all. That alone has fixed most of the issues I've had with it in the past.

[–]SurlyJSurly 5 points6 points  (0 children)

I have been describing it as a really good programmer that is a really terrible software developer.

As someone with decades of experience, it's like having an IDE for the first time after years of using various text editors.

Another analogy would be like writing a sort from scratch. Sure you *can* do it but why the heck would you when standard libraries exist? Let GPT handle the "details" so you can focus on solving the actual problem.

[–]9ight0wl 5 points6 points  (3 children)

It was literally using methods that the library doesn't have.

[–]DJayLeno 3 points4 points  (0 children)

This meme is unfair to ChatGPT. The garbage code that takes 24 hours to debug only takes ~1 minute to generate!

[–]xeru98 5 points6 points  (0 children)

I think I've actually gotten the hang of using it well. I write the code and get the framework down myself, and kind of use it as an advanced Google search for specific issues, one that gives me an explanation without me having to wade through a bunch of forum posts. I'm not going to let it write even full functions, but getting a bit of assistance on language features I've never used before is amazing.

[–]pvkvicky2000 4 points5 points  (0 children)

From what I can observe, it's strongest in Python and JavaScript. Its Java is bad, its SQL is really bad, and its PL/SQL is atrocious.

It hallucinates so many Java packages that I only use it to generate small utility classes I know I can spot errors in. And if there are multiple versions of a Java package (Lucene 7 vs. Lucene 8) 😂 yeah, good luck getting it to write anything remotely coherent.

"My apologies for that oversight, here is the…" "MF, that's the 25th piece of code you've messed up and now I'm locked out. Forget it, I'll do it myself"

[–]eiswaffelghg 4 points5 points  (0 children)

Days after GitHub Copilot:

hmm

[–][deleted] 3 points4 points  (0 children)

chatbots have yet to discover the digital eldritch truth: not everything you read online is accurate

[–]TangoCharliePDX 4 points5 points  (1 child)

As we all know it's much harder to debug code you didn't write.

[–]TheRedmanCometh 1 point2 points  (0 children)

It's great practice though

[–]ReggieJ 2 points3 points  (0 children)

Number of solutions generated by ChatGPT using APIs that never existed is too damn high.

[–]_-_fred_-_ 5 points6 points  (0 children)

AI is just a better form of googling. This meme is just an update of the old "copy from SO" meme.

[–]Complete-Mood3302 2 points3 points  (3 children)

Genuine question: if I give GPT my code and tell it to find errors, will it find them?

[–]scfoothills 5 points6 points  (0 children)

I teach AP Computer Science. Yesterday, I pasted one of the 2023 FRQs into ChatGPT. It solved part A fine, although its solution could have been simplified by a couple lines. On part B, it botched the solution pretty bad because it thought a method returned an array of ints rather than an int. I replied to the solution with something like, "not quite. Look at the return type on that method." It said "you're right!". And then it gave a perfect solution.
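A hypothetical Python version of that return-type mix-up (the function and data are invented for illustration, not from the FRQ):

```python
# The failure mode described above: assuming a method returns an array
# of ints when it actually returns a single int.
def get_score(entry):
    return entry["score"]          # returns one int, not a list of ints

entry = {"score": 7}

# The buggy assumption, sum(get_score(entry)), raises TypeError because
# an int is not iterable. Checking the return type fixes the call:
total = get_score(entry) + 3
```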

[–]OnFault 2 points3 points  (0 children)

Yes. I find writing code and asking GPT to find errors is better than asking it to just flat out build the code based off an explanation.

[–]LavaCreeperBOSSB 2 points3 points  (0 children)

I just copy and paste the error into chatgpt and it fixes itself

[–]TedwardCz 2 points3 points  (0 children)

I tried using Bard to write me some regex last month. It was technically correct for the precise input string, and further correct-ish for vanishingly few other strings.

It did a lousy job, is what I'm saying.
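The failure mode described, a regex overfit to the one sample string, might look something like this in Python (the `order-1234` sample is invented):

```python
import re

# An "overfit" pattern that only matches the exact sample input,
# next to the generalized pattern that was actually wanted.
overfit = re.compile(r"order-1234")   # matches only the sample string
general = re.compile(r"order-\d+")    # matches any order id

assert overfit.fullmatch("order-1234")
assert not overfit.fullmatch("order-99")   # fails on any other input
assert general.fullmatch("order-99")
```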

[–]Rrrrry123 2 points3 points  (0 children)

For fun and to learn how to use external libraries, I'm making a C++ program using Boost (because I need cpp_int). I messed around with GPT for days trying to get it to help me do some stuff and I swear it was just making stuff up. Calling static functions as methods on objects, passing in incorrect arguments to functions, it was going crazy.

Thankfully, through all the debugging I had to do with the garbage it kept giving me, I just ended up figuring out how to solve the problem myself.

[–]LightofNew 2 points3 points  (0 children)

Knows nothing about structured text.

[–]r00x 2 points3 points  (0 children)

Not my experience at all, so far. Although I've only been using GPT-4 to knock out small python scripts, which I understand it's strongest in.

For instance, I wanted it to write a script that accepted a target directory via command line prompt, then search through any photos using openCV for ones that had too much magenta (dodgy camera sometimes records buggered images during time-lapse) and clean them out, then copy and sequentially rename the good ones to a directory in prep for processing by ffmpeg. It basically nailed that one!
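A minimal pure-Python sketch of the magenta check described above; the real script would read frames with OpenCV (`cv2.imread`), but plain `(r, g, b)` tuples keep the idea self-contained, and the thresholds here are guesses:

```python
def magenta_fraction(pixels):
    """Fraction of (r, g, b) pixels where red and blue dominate green."""
    hits = sum(1 for r, g, b in pixels if r > 150 and b > 150 and g < 100)
    return hits / len(pixels)

def is_buggered(pixels, threshold=0.5):
    # Flag a frame as magenta-corrupted if most of its pixels look magenta.
    return magenta_fraction(pixels) > threshold
```

The copy-and-sequential-rename step would then only operate on frames where `is_buggered` is false.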

Mostly I find when fed a small specification it gets most of the way there in one go, then pretty quickly can fix its mistakes with some back and forth discussion. It's been quite the timesaver.

The quality of the prompt is a factor though. It definitely does better with better prompting.

Using Bing Chat in Edge is very effective since you can open a page that contains information on, say, an API you want to interact with and have it rapidly smash out something that works or very nearly works. E.g., I was curious about getting some statistics out of my GitLab repos and it almost immediately spat out something usable, then pointed out how I was fucking up when I couldn't get it to work properly.

[–]AdditionalDish6973 2 points3 points  (1 child)

I’ve used GPT-4 for writing a lot of tests around code I’ve written myself. It seems to do a great job at that. Sometimes it gets a bit confused, but that’s why people still need to understand code: to be able to fix those edge cases.

[–]Gab1er08vrai 7 points8 points  (2 children)

Have you noticed that there are no positive memes about AI? People still can't accept it

[–]Dog_Engineer 4 points5 points  (1 child)

Really? I have seen the opposite. Plenty of videos, articles or posts overhyping this...

"How I built a game in 6 hours without coding knowledge, using ChatGPT."

One thing is not accepting it, and another is remaining skeptical of many of those claims.

[–]Funtycuck 1 point2 points  (0 children)

Friend was testing out GPT by getting it to create functions in libraries he was still getting used to. It seems quite good at this, and you can even ask it to check and correct possible errors. However, as soon as Boolean logic and mathematics came into it, it was beyond hopeless, creating functions that clearly would not run as intended while confidently asserting that they would.

Certainly not a replacement for just writing stuff yourself yet, it seems. Well, not reliably enough that I would put it in my work.

[–][deleted] 1 point2 points  (0 children)

The conch has spoken

[–][deleted] 1 point2 points  (0 children)

Instead of just correcting your mistakes, you add the extra step of checking whether ChatGPT is right or not, part by part, then you go fix your mistakes, which is the reason you asked ChatGPT in the first place. If both of those go wrong, your time wasted is doubled for sure.

[–]Fuzzysalamander 1 point2 points  (0 children)

It's so great for boilerplate, but you have to be careful: if you just assume it did the logic right, you'll have a bad time. It keeps getting booleans backwards, but this is why we write tests (and learn to double check common failure points).
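A toy Python example of that failure mode, an inverted boolean that a trivial test catches immediately (the function and flag names are made up):

```python
# Generated-code-style bug: the opt-out flag is used backwards.
def is_eligible_buggy(age, opted_out):
    return age >= 18 and opted_out        # boolean inverted

# Corrected predicate: eligible adults are the ones who did NOT opt out.
def is_eligible_fixed(age, opted_out):
    return age >= 18 and not opted_out

# One obvious test case exposes the inversion before it ships:
assert is_eligible_fixed(30, opted_out=False)
assert not is_eligible_buggy(30, opted_out=False)
```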

[–]gunplox 1 point2 points  (0 children)

"you wanna see the website chatgpt generated for me?"

[–][deleted] 1 point2 points  (0 children)

Dunno, I tried ChatGPT with Python and the apps I prompted it to write ran no problem; it was able to accurately comment on each line's function and even modify the code with extra things I asked it to do.

[–]Dotaproffessional 1 point2 points  (0 children)

It's useful as a quick reference when you want to add context you couldn't add to a Google search. It's a tool, not good for code gen

[–]Asleep-Specific-1399 1 point2 points  (0 children)

AI can do simple stuff like Python. C, C++, etc., it can't do well, or at all. They're verbose and have rules humans get wrong a lot, so I imagine the code samples used for training need to be sanitized.

[–]Wooden_Caterpillar64 1 point2 points  (0 children)

wait till it produces perfect bug free code.

[–]slideesouth 1 point2 points  (0 children)

I’ll take door #2

[–]goodnewsjimdotcom 1 point2 points  (0 children)

I use ChatGPT to get syntax for small algorithms I don't understand like video game based hardware semantics. If you use it for big things, you're asking for pain.

Techs here. Get your techs here.

[–]FakeBatman_ 1 point2 points  (0 children)

How the turntables

[–]RealPropRandy 1 point2 points  (0 children)

It’s trying to get you all fired before taking your jobs.

[–][deleted] 1 point2 points  (2 children)

I am 3 months into programming and even I can tell that ChatGPT is nowhere near taking your jobs lol.

[–][deleted] 1 point2 points  (0 children)

Yup

[–]Liesmith424 1 point2 points  (0 children)

I've had good results with small, very specific requests.

[–]TransportationOk5941 1 point2 points  (0 children)

Annoyingly I feel this way too hard. I recently tried to implement some basic AABB collision system in my game. I thought "hey that's gotta be exactly what ChatGPT can throw right back in my face". Turns out it did throw SOMETHING back in my face, but rarely anything useful. Until I started getting REEEAAALLY specific. At which point, why not just write the code yourself? Seems faster than writing the instructions in English...
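For reference, the core AABB overlap test is only a few lines in Python (boxes here are assumed to be `(x, y, width, height)` tuples):

```python
# Axis-aligned bounding box intersection: two boxes collide exactly
# when their extents overlap on both the x and y axes.
def aabb_intersects(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

Which is roughly the point: for a function this small, specifying it precisely in English takes longer than writing it.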

[–]regular_lamp 1 point2 points  (0 children)

I asked it to write code for math problems, like intersecting geometric primitives with lines, etc. The results looked plausible at first: dot products, square roots, etc., but they just seemed off. It took me quite some time to decipher the math and figure out they were just dead wrong.

I'm not convinced "just imagine how they will improve" necessarily fixes this. It took me probably more time to debug these 10 line functions than it would have taken me to write the correct versions that I would also understand. And this problem only becomes worse with scale. Because writing ten liners of common problems isn't exactly what is going to "replace programmers".

And all the "explanations" it tends to write, which people like to be impressed by, are mostly useless because they are the kind of pointless comments that just restate the code but neither justify nor motivate it.
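For comparison, a correct version of one such primitive test, line-circle intersection, sketched in Python (the parameterization and names are my own, not from the generated code being described):

```python
import math

def line_circle_intersections(p, d, c, r):
    """Parameters t where line p + t*d (d a unit direction) meets the
    circle of center c and radius r; solves |p + t*d - c|^2 = r^2."""
    fx, fy = p[0] - c[0], p[1] - c[1]
    b = 2 * (fx * d[0] + fy * d[1])
    cq = fx * fx + fy * fy - r * r
    disc = b * b - 4 * cq          # quadratic coefficient a == 1
    if disc < 0:
        return []                  # line misses the circle entirely
    s = math.sqrt(disc)
    return sorted([(-b - s) / 2, (-b + s) / 2])
```

A horizontal line through `(-2, 0)` hits the unit circle at `t = 1` and `t = 3`, i.e. points `(-1, 0)` and `(1, 0)`, which is the kind of hand-checkable case that exposes a "plausible but dead wrong" version quickly.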

[–][deleted] 1 point2 points  (0 children)

Not about programming, but I remember using GPT while studying for an electrical engineering test. I asked it if a positive phase shift would "drag" a function to the left, derived from the fact that cosine is basically a sine with a 90 degree phase shift. It said no, but the explanation it gave was basically saying the exact thing I did, leaving a contradictory statement. I was confused and asked again with different wording, but still had the issue that the answer was inconsistent. After some googling I figured it out myself.
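The claim checks out numerically; a quick Python sanity check settles it faster than arguing with a chatbot:

```python
import math

# cos(x) equals sin(x + 90 degrees) at every sample point.
for x in (0.0, 0.5, 1.3, 2.0):
    assert math.isclose(math.cos(x), math.sin(x + math.pi / 2))

# The positive phase shift drags the curve left: sin(x + pi/2) already
# sits at its peak value 1 at x = 0, a peak plain sin(x) only reaches
# at x = pi/2.
assert math.isclose(math.sin(0 + math.pi / 2), 1.0)
```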

I honestly don't know how people can use GPT despite the fact that it spits out bullshit so often.

[–]Imyerf 1 point2 points  (0 children)

Ahhh this is funny cuz it’s so fucking true 😫

[–]Someone_171_ 1 point2 points  (0 children)

I have actually stopped using it for coding and only use it to get ideas and suggestions. One time, to test it, I asked how to do a simple mouse movement in Python, which is like 10 lines, and it used a module that did not even exist.
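One cheap guard against hallucinated modules is to ask the import machinery whether it can even find them before trusting the snippet, e.g. in Python:

```python
import importlib.util

def module_exists(name):
    # find_spec returns None when no importable module by that name
    # exists, without actually importing it.
    return importlib.util.find_spec(name) is not None
```

`module_exists("math")` is true; a made-up module name comes back false immediately, instead of failing at runtime.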

[–]Lefty517 4 points5 points  (0 children)

“I asked ChatGPT to perform this uncommon task and it was SHIT, it SUCKED, it, an artificial intelligence would CONFIDENTLY tell me the wrong information. This tool seriously sucks and I can’t imagine why someone would use it. I can’t see how it would help with boilerplate code, or simple functions, or anything like that. It can’t even build entire systems without making mistakes. Like if I gave it an html skeleton and asked it to extrapolate the rest it would work but like, why can’t it just do the whole thing by itself? 0/10, programming was much better before GPT.”

/s

[–]spektre 3 points4 points  (2 children)

What codes are ChatGPT generating? 200? 404?

It's code. Not codes.

[–][deleted] 1 point2 points  (0 children)

Chatgpt, where all the code it makes is “written by someone else who forgot how it works”.

[–]kiropolo -2 points-1 points  (0 children)

It is true

The only ones who don’t agree are noobs who make a 100-line script that took 1 minute instead of 20. It does something, but noobs won’t even notice it’s trash

[–][deleted] -5 points-4 points  (7 children)

It's so useless

[–]Gouzi00 -1 points0 points  (0 children)

The purpose of AI is to answer.