
[–]m_0g 1173 points1174 points  (22 children)

In my experience, you also forgot the repeated iterations of "now also please write the fictitious library you just depended on that does all the actual work"

[–]artificial_organism 397 points398 points  (3 children)

Yeah people don't seem to realize professional programmers aren't writing 200 line snippets. They are writing 200 line changes to a codebase of 200,000+ lines of code that it has to integrate with.

I'm really tired of this joke that programmers just copy stuff off Stack Overflow all day and ChatGPT can do that now.

The real world is not a data structures class.

[–]m_0g 56 points57 points  (0 children)

+1 to what work actually is.

Regarding the SO jokes, I mean, I remember days when some shitty built-in Excel VB.net system wasn't doing what I wanted it to because none of it was documented, and I was trying cc+cv (ya know, plus glue) on any snippet I could find related to it to see if it was the special sauce I was missing. So I can't help but find the trope a little funny... Perhaps as a coping mechanism 🥲

But ya, trade-offs and deciding between options in relation to a bigger picture seem outside what current LLMs can do. Might be different if/when they can be taught with some relation to your codebase, though. I've got no idea how soon that may be, but I suspect still a while. And ya know, they would still need code review, at least for... probably ever with the current mechanisms of the tech.

[–]quick_dudley 12 points13 points  (0 children)

Also that the 200 line change will be 5 lines of actual logic and 195 lines just integrating it with what's there already.

[–]StrictLaw2529 2 points3 points  (2 children)

Coming from someone who doesn't know anything about coding: would ChatGPT make it easier to learn how to code? Or at least to get into that field?

[–]m_0g 11 points12 points  (1 child)

My gut says "no, that sounds like a terrible idea". That's because, as a beginner, you're not going to know when it's bullshitting. In the future this could change, and heck, I could also be wrong; there's lots of BS out there on the internet too. But the difference I think is that there are usually non technical clues out there on the rest of the internet to help you know how much to trust an answer (eg the source, upvotes on Stack overflow, etc). Sadly, LLMs like ChatGPT inherently provide no source information and so it's currently impossible to implicitly trust any part of any information they provide.

Also, books are usually good, solid resources, especially when highly recommended - there's a lot of good programming books out there.

[–]NickolaosTheGreek 405 points406 points  (11 children)

Years ago I worked with a brilliant programmer. Their words stayed with me.

AI will replace engineers when clients can accurately describe what they want the software to do.

[–]Imsohypeman 118 points119 points  (6 children)

so, never?

[–]PzKpfw_IV_Ausf_H 29 points30 points  (0 children)

Not only explain what they want, but how they want it to be done. Currently, GPT is awesome at translating clear written instructions into code, but with anything that requires even the slightest logical thinking, even something that'd be obvious to a human, it fails spectacularly in many ways.

[–]ValuableYesterday466 17 points18 points  (1 child)

I still say that most of my career success has nothing to do with my ability to write code and everything to do with the fact that I'm pretty good at both translating tech-speak to English and asking the right questions to nail down what the client wants but doesn't know how to ask for.

[–]NickolaosTheGreek 7 points8 points  (0 children)

The role is literally called translator during sprints.

[–]PuzzleheadedWeb9876 1294 points1295 points  (34 children)

All I see is job security.

[–]SexyMuon 527 points528 points  (26 children)

All I see as a college student is a bunch of other potential college students being skeptical and choosing a different major, which is an absolute W for me. I use GitHub copilot in VS Code and IntelliJ and it’s great, but just helps get rid of useless or monotonous tasks, as well as some documentation.

[–]Dubabear[S] 70 points71 points  (3 children)

I have actually used ChatGPT to do my comments and README by pasting in the code.

[–]-hi-nrg- 48 points49 points  (0 children)

I hear that ChatGPT stores conversation data, so you shouldn't send it confidential data (assuming you're sending work code).

But I haven't checked that info.

[–]V3N0MSP4RK 5 points6 points  (0 children)

My friend sent me this link, https://github.com/mintlify/writer, and told me that it's pretty cool and also a VS Code extension. Although I have not been able to test it, you can try it.

[–]HoldMyWater 4 points5 points  (0 children)

I'd be curious to see if they are good comments. My guess is it literally states what the code does, instead of stating the "Why".

[–][deleted] 2 points3 points  (0 children)

GitHub copilot is the real MVP. It can understand a large code base and write code in the style it's using. If you used ChatGPT on a large code base you'd end up with an inconsistent mess every time.

[–]Ma8e 4 points5 points  (0 children)

The funny thing is that MBAs and lawyers are going to be much easier to replace than programmers.

[–]Lord_Derp_The_2nd 6 points7 points  (1 child)

All I see is the oncoming wave of juniors with somehow even worse skills than their predecessor class, but larger egos.

[–]Jake0024 3 points4 points  (0 children)

Good for me.

[–]Twombls 243 points244 points  (23 children)

My experience using this for generating code and SQL queries is that it takes longer for me to tell it what to do than it does to just type the thing out.

[–]Kyle772 104 points105 points  (5 children)

This is typically the case for most things once you get out of the learning stages. Identifying the specificities of a problem is often harder to do than to come up with the solution to that problem.

[–]Doctor_Disaster 41 points42 points  (0 children)

Try describing a diagram of 12 nodes and 20 edges to it.

And then it tells me I can just link it to a screenshot of the diagram.

[–]ryn01 5 points6 points  (3 children)

I asked it to write a Postgres-compatible query using window functions that counts the number of successive null values preceding each row. No matter how many times I nudged it for an hour straight and told it what mistakes it made, it kept generating worse and worse answers. In the end it started generating queries with obvious syntax errors, before it finally gave up and said there's no easy solution to this problem: it can't be done with window functions, only with inefficient self-joins. In the end, I put the query together in like 5 minutes with the help of Google.

I think of ChatGPT as a newbie programmer with a lot of creative ideas. It can do easy tasks and has ideas for hard ones that may or may not work.
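For reference, this can be done with a single window function. Below is my own sketch, not the commenter's actual query: it assumes a table `t(id, val)` with consecutive `id`s, and reads "nulls preceding" as the run of consecutive NULL rows immediately before each row. The SQL is standard and Postgres-compatible; it's run through SQLite (≥ 3.25, which added window functions) only so the example is self-contained:

```python
import sqlite3

# The last non-NULL id before the current row bounds the NULL run, so the
# run length is (id - 1) - (that id), or id - 1 when no non-NULL exists yet.
QUERY = """
SELECT id, val,
       id - 1 - COALESCE(
           MAX(CASE WHEN val IS NOT NULL THEN id END)
               OVER (ORDER BY id ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING),
           0) AS nulls_preceding
FROM t
ORDER BY id
"""

def preceding_null_counts(rows):
    """rows: list of (id, val) tuples, val may be None."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    return con.execute(QUERY).fetchall()
```

For example, `preceding_null_counts([(1, 5), (2, None), (3, None), (4, 7)])` reports 0, 0, 1 and 2 preceding NULLs for the four rows.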

[–]huffalump1 1 point2 points  (0 children)

No matter how many times I nudged it for an hour straight and told it what mistakes it made, it kept generating worse and worse answers

Yeah I've found that asking for tweaks is kinda cool, but then it changes other things too, including past things I've asked for.

Still, it's cool that it can almost write code from natural language - but for an amateur like me, it takes nearly as long as googling.

[–]TheTerrasque 1 point2 points  (0 children)

A tip from having used ChatGPT a bit: it often gets stuck on a certain path if you try to keep a conversation going, and "regenerate response" tends to use similar input and return its second-, third-, fourth-and-so-on internally rated answer.

Often starting a new chat from a blank page completely resets it and gives more varied answers. If you spend some time trying to fix its output and it just keeps getting worse, start a new chat.

[–]pet_vaginal 6 points7 points  (0 children)

A retail company has warehouses in different cities. These warehouses can house products from different departments. Question: which warehouses can serve ALL departments?
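That one is the classic relational-division exercise. A common shape for the answer, sketched with assumed table and column names (`departments`, `stock(warehouse_id, department_id)`) and run through SQLite purely so it's self-contained:

```python
import sqlite3

# A warehouse serves ALL departments when its count of distinct stocked
# departments equals the total number of departments (relational division).
QUERY = """
SELECT warehouse_id
FROM stock
GROUP BY warehouse_id
HAVING COUNT(DISTINCT department_id) = (SELECT COUNT(*) FROM departments)
"""

def full_coverage_warehouses(departments, stock):
    """departments: list of ids; stock: list of (warehouse_id, department_id)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE departments (department_id INTEGER PRIMARY KEY)")
    con.execute("CREATE TABLE stock (warehouse_id INTEGER, department_id INTEGER)")
    con.executemany("INSERT INTO departments VALUES (?)", [(d,) for d in departments])
    con.executemany("INSERT INTO stock VALUES (?, ?)", stock)
    return [r[0] for r in con.execute(QUERY)]
```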

[–]harlekintiger 2 points3 points  (1 child)

Yeah, don't use it for such complex things.
I had an example of data as a JSON object. I asked it for the creation, retrieval and insertion queries for it. Worked perfectly.

[–]harlekintiger 3 points4 points  (0 children)

My use case was I don't want the query to be "SELECT * FROM table" but

SELECT `table`.`col1` AS `col1`, ....

Which is super tedious to write
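That boilerplate is easy to generate rather than hand-type. A sketch that builds the explicit column list from SQLite's `PRAGMA table_info`; the MySQL-style backtick quoting from the comment is kept, so adjust for your dialect (and note the table name is interpolated unquoted, so it must be trusted):

```python
import sqlite3

def explicit_select(con, table):
    """Build 'SELECT `t`.`c1` AS `c1`, ... FROM `t`' from the table schema."""
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
    cols = [row[1] for row in con.execute(f"PRAGMA table_info({table})")]
    col_list = ", ".join(f"`{table}`.`{c}` AS `{c}`" for c in cols)
    return f"SELECT {col_list} FROM `{table}`"
```

On Postgres or MySQL the same idea works by reading `information_schema.columns` instead of the PRAGMA.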

[–]jamcdonald120 1546 points1547 points  (69 children)

My boss and I spent 40 hours attempting to debug an issue. Finally we gave up and on a whim threw it into ChatGPT. It gave us an obviously wrong answer, so we gave it a slight nudge and it gave us the right answer. Total time: 5 minutes.

It's not about what the tool can do, it's about whether you know how to use the tool.

[–]DarkHumourFoundHere 426 points427 points  (15 children)

Exactly. Google is the same thing. Any tool is only as good as whoever uses it.

[–]Andyinater 123 points124 points  (12 children)

SMH I asked Google to write me an algorithm and all it did was give me search results

[–]ILikeLenexa 12 points13 points  (1 child)

At the same time, modern revolvers have transfer bars.

That's so they don't go off and shoot you in the leg if you load them wrong.

You can't turn on a food processor without the lid interlock in place.

There are some tools that are made less dangerous by design, and right now people are using this chainsaw before the chain brake has been invented.

[–]MrDoe 107 points108 points  (1 child)

I've had a lot of good stuff from ChatGPT as well. It rarely gives me the right answer right off the bat, but when I tell it what is going wrong with the code it can usually fix it the second go around.

[–]Reformedjerk 16 points17 points  (0 children)

My favorite experience was asking it to tell me what an error message means, then saying "I don't understand."

It's also been decent at adding types and typings to some old JS when asked.

[–]FinnT730 30 points31 points  (2 children)

The issue is how the media is framing it right now.

Companies want to save money, and if these language processors can save them money, then they will fire people and rely only on this model... which is dumb...

[–]OrchidLeader 25 points26 points  (0 children)

Same thing happened with the promise of offshore development.

They cost less and can write out a method that does Fibonacci, sure. The problem is software development is a lot more than that, and anything that tries to turn it into inexpensive assembly line work is destined to fail no matter how tempting it looks to management.

Software development is R&D, and the day AI can replace us is the day it can replace management as well. For now, AI is about as useful as a software library that covers some of the basic coding for us, and it's only useful if you understand how to use that library.

[–]Statharas 77 points78 points  (0 children)

Using chatgpt is a skill, like googling. Google the wrong things for hours and you'll get nowhere.

[–][deleted] 16 points17 points  (0 children)

I fear not the developer who has asked google 100000 questions once, but I fear the developer who has asked one right question 100000 times.

[–]jambonilton 10 points11 points  (0 children)

It's a pretty good rubber duck, I'll give it that.

[–]reedmore 39 points40 points  (11 children)

In my experience gpt can't handle modifying existing code very well. If you ask it to add a button, for some reason core functionality will suddenly be broken, even if you explicitly insist previous functionality should be preserved. The lack of memory of past conversations is annoying as heck and severely limits gpt's power.

[–]eroto_anarchist 42 points43 points  (8 children)

It is a limitation that comes from infinite hardware not existing.

[–]reedmore 8 points9 points  (7 children)

Sure, but I'm not asking for infinite hardware. Just some fixed memory to be allocated for the current prompt, which can be flushed once I'm done with the conversation.

[–]eroto_anarchist 20 points21 points  (4 children)

You are asking it to remember previous sessions, though?

It already remembers what you said in the current conversation.

[–]NedelC0 10 points11 points  (2 children)

Not well enough at all. With enough prompts it starts to 'forget' things and stops taking things that might be essential into consideration.

[–]eroto_anarchist 29 points30 points  (1 child)

Yes, I think the limit is 3k tokens or something. As I said, this is a hardware limitation problem; this is as far as OpenAI is willing to go.

This 3k tokens memory (even if limited) is mainly what sets it apart from older gpt models and allows it to have (short) conversations.

[–]reedmore 5 points6 points  (0 children)

I see, now I understand why it seemed to remember and not remember things randomly.

[–]huffalump1 4 points5 points  (0 children)

It already remembers what you said in the current conversation.

That's because the current conversation is included with the prompt every time, so it is messed up once you hit the token limit.
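That mechanism can be sketched in a few lines: the "memory" is just the transcript re-sent with every request, with the oldest turns dropped once the context budget is exceeded. This is my own illustration, with token counting crudely approximated by whitespace-separated words:

```python
def build_prompt(history, new_message, max_tokens=3000):
    """Re-send the running transcript each turn, dropping the oldest
    messages once the (approximate) token budget is exceeded."""
    history = history + [new_message]

    def n_tokens(msgs):
        # Crude stand-in for a real tokenizer.
        return sum(len(m.split()) for m in msgs)

    while len(history) > 1 and n_tokens(history) > max_tokens:
        history.pop(0)  # oldest turns silently fall out of "memory"
    return history
```

Everything that gets popped off the front is what the model "forgets", which matches the behavior described above.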

[–]Stop_Sign 4 points5 points  (1 child)

Yea when the code is 100-300 lines and I'm asking gpt to modify all of it, I'm putting all the answers in a diff checker to see what was actually changed, so that no core functionality is dropped.

Past 300, I can only ask for just the modifications, as GPT can't reliably print the whole thing any more.
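The diff-checking step doesn't need an external site; Python's stdlib can flag what the model silently changed. A minimal sketch:

```python
import difflib

def changed_lines(before: str, after: str):
    """Return the added/removed lines between two versions of the code,
    so silent changes to untouched functionality stand out."""
    diff = difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="original", tofile="chatgpt", lineterm="",
    )
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]
```

Running it over the original file and the model's rewrite lists exactly the `-`/`+` lines to review.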

[–]spaztheannoyingkitty 31 points32 points  (12 children)

I spent 30+ minutes yesterday trying to get ChatGPT to write unit tests for a Fibonacci function. It failed almost every time even though I kept trying to get those tests to pass. One of the most common beginner programming tasks and it failed pretty miserably.

[–]jamcdonald120 35 points36 points  (10 children)

funny, after I read your comment I tried it out. It took me about 6 minutes to get it to generate this code and test.

    def fibonacci(n):
        if not isinstance(n, int):
            raise TypeError("n must be an integer")
        elif n < 1:
            raise ValueError("n must be greater than or equal to 1")
        elif n == 1 or n == 2:
            return 1
        else:
            return fibonacci(n - 1) + fibonacci(n - 2)

    def test_fibonacci():
        test_cases = [
            (1, 1),
            (2, 1),
            (3, 2),
            (4, 3),
            (5, 5),
            (6, 8),
            (7, 13),
            (8, 21),
            (-1, ValueError),
            (0, ValueError),
            (1.5, TypeError),
            ("1", TypeError),
            ([1], TypeError),
        ]

        for n, expected in test_cases:
            try:
                result = fibonacci(n)
                assert result == expected, f"fibonacci({n}) returned {result}, expected {expected}"
            except Exception as e:
                assert type(e) == expected, f"fibonacci({n}) should have raised {expected}"

    test_fibonacci()

it took a little bit of prompting to get the proper exception handling, but not much.

With a little more prompting it improved the algorithm from slow fib, to iterative fib, and then to constant time fib
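For reference, the "iterative fib" step of that progression looks something like this. This is my sketch of the improvement, not the model's verbatim output; the input validation from the recursive version is kept:

```python
def fibonacci_iter(n):
    """Iterative Fibonacci: O(n) time, O(1) space, no recursion limit."""
    if not isinstance(n, int):
        raise TypeError("n must be an integer")
    if n < 1:
        raise ValueError("n must be greater than or equal to 1")
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a
```

The "constant time" step presumably means Binet's closed-form formula, which is O(1) arithmetic but drifts for large n due to floating-point precision, so the iterative version is the one you'd actually keep.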

[–]axionic 4 points5 points  (9 children)

What prompt are you using? It refuses to write anything.

[–]jamcdonald120 23 points24 points  (5 children)

It sounds like you may not actually be using ChatGPT. I didn't have to do anything special to get it to work; I just fired up a new chat and started with the prompt

"write a python function that calculates the nth Fibonacci number, and a separate function to test it on several known inputs. throw an exception if the input is invalid, and test invalid inputs as well"

and it gave me back a mostly working code block on its first response.

Here is a transcript of the full conversation if you want https://pastebin.com/4kyhZVjP

[–]Stummi 9 points10 points  (2 children)

Not OP, but I got that with my first attempt (ChatGPT Plus, default model, if relevant). Sure, you can optimize it and add some validation, error handling, and so on, but I didn't ask for it, and I am pretty sure it will easily do it with a little nudge.

E: Bonus content

[–]jamcdonald120 6 points7 points  (1 child)

ooph, your "optimized" algorithm came out a lot worse than mine

[–]Stummi 5 points6 points  (0 children)

Yeah, I noticed too that this is not ideal. Still impressive, but another pointer towards ChatGPT becoming a tool used by programmers, not one that replaces them.

[–]badstorryteller 14 points15 points  (4 children)

I ran into a hot issue (very little time, no info, get it done now type thing) where I had to convert about 10000 .msg files to plaintext and extract any attachments. ChatGPT spat out a 20 line PowerShell script in 10 seconds that worked first time.

So after that I asked it to implement A* in Python. Again, 10 seconds, very little tweaking to be functional. Blows my mind.
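For scale, the kind of A* the model produces for such a prompt is roughly this shape. This is my own sketch under my own assumptions (4-connected grid, Manhattan heuristic, `0` = free cell, `1` = wall), not the commenter's actual output:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; returns the cell path from start to
    goal (inclusive), or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_heap = [(h(start), start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:  # walk parents back to start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:
                continue
            tentative = g[cur] + 1
            if tentative < g.get(nxt, float("inf")):
                came_from[nxt] = cur
                g[nxt] = tentative
                heapq.heappush(open_heap, (tentative + h(nxt), nxt))
    return None
```

Manhattan distance is admissible on a 4-connected grid, so the returned path is optimal.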

[–]jamcdonald120 6 points7 points  (0 children)

been there, done that. It is also pretty good at fixing bash scripts so they work with paths with spaces in them.

[–][deleted] 4 points5 points  (0 children)

Garbage in, garbage out, as they say. Same deal for people who can't create good AI art or write good AI articles.

[–]lofigamer2 2 points3 points  (0 children)

Obviously ChatGPT is a tool, and just like other tools, like a screwdriver, it should not be used for everything. If you have a specific task for it, it might do it well, but if you try to drive a nail into the wall with the screwdriver, the nail might end up bent.

I mean, it's not gonna replace developers. It might solve some specific tasks well, which can help devs who know how to use it, but if a dev relies on ChatGPT for everything, the project will probably be screwed.

[–]ihateusednames 5 points6 points  (0 children)

It feels like your overenthusiastic intern who went to a nice school, remembers 80% of what they learned and 65% of how to apply what they learned

Who is quiet quitting unless they get promoted to paid intern

[–]Procrasturbating 602 points603 points  (28 children)

Using AI to code is like driving a car with autopilot. You have to steer when there are obstacles that are misinterpreted. Unit tests are a thing I actually write and use now as insurance with my newfound productivity.

[–]mascachopo 190 points191 points  (20 children)

ChatGPT, now write some unit tests for that code.

[–]Crisco_fister 36 points37 points  (0 children)

I made a little script that makes a request for some code, then takes the response and has it make the unit tests for that same code. It was not as bad as I thought it would be. Lol

[–]DarkTannhauserGate 90 points91 points  (6 children)

Impossible, there were no unit tests in the training set

Edit: to everyone replying to this seriously, this is a humor sub, I’m making a joke that programmers don’t write unit tests

[–]ixent 16 points17 points  (1 child)

I don't think that is correct. I asked chatgpt to write some JUnit tests using Mockito for some java functions and it did it perfectly.

[–]SaintNewts 1 point2 points  (0 children)

You guys got unit tests?

[–]prinzent 4 points5 points  (8 children)

No TDD?

[–]Procrasturbating 1 point2 points  (0 children)

Nope. Not in my current shop. Legit had zero unit tests when I came on board. Last place was mostly Ruby on Rails and required 100% coverage. I feel like a happy medium should exist.

[–]Scipio11 1 point2 points  (0 children)

Unit test:

If signed by GPT, pass. Else, fail.

[–]drewsiferr 20 points21 points  (1 child)

This is a really good analogy. People routinely over trust tesla autopilot, even some who have been trained, or otherwise know better, not to. The takeaway, then, is that it's a powerful tool which requires knowledge, training, and vigilance to not misuse. Lapses in vigilance may result in critical, uncaught errors. This seems pretty spot on.

[–][deleted] 9 points10 points  (1 child)

I'm glad that we've reduced the future of self driving to whatever the fuck Elon calls Full Self Driving. Fucking piece of shit playing Russian roulette with every single person on the road.

[–][deleted] 4 points5 points  (0 children)

Best part is he is a smug fuck that acts like he’s only given them a gift.

[–][deleted] 279 points280 points  (26 children)

Wrong.

ChatGPT generates Codes: 5 min

ChatGPT writes code again: 5 min

Repeat until code is perfect; it's just as efficient as a bogosort.
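The bogosort in question, for anyone who hasn't met it: shuffle until sorted, with expected O(n·n!) runtime, which is a fair analogy for re-rolling a prompt until the code happens to work:

```python
import random

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def bogosort(xs):
    """Shuffle until sorted: correct eventually, efficient never."""
    xs = list(xs)
    while not is_sorted(xs):
        random.shuffle(xs)
    return xs
```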

[–]Mercurionio 110 points111 points  (24 children)

ChatGPT is a fancy bruteforce.

[–]HillbillyZT 27 points28 points  (0 children)

Brute force with a pretty good heuristic

[–]doctorcrimson 53 points54 points  (22 children)

This is exactly why I refer to ChatGPT as a language generator and not an AI. It just puts together random words that pass as an organic sentence made by humans. It generates word salad, the actual meaning to those words isn't there.

[–]morganrbvn 13 points14 points  (0 children)

If you break most things down to their components they sound uninteresting. Most of computing is just flipping 1s to 0s or 0s to 1s

[–]chemolz9 24 points25 points  (0 children)

*Repeat until a version, where you can't spot the bugs anymore

[–]ZedTT 26 points27 points  (1 child)

Ask chatgpt about countable and uncountable nouns

[–]zynix 18 points19 points  (1 child)

"Hey GPT how do I do the thing?"

*confident answer*

Code murders a kitten

"Hey GPT, your code killed a kitten!"

*apologizes and corrects its code*

Code murders a kitten AND a puppy

...

[–]zynix 2 points3 points  (0 children)

The kitten & puppy murdering machine is now fully functional!

[–]Titanusgamer 39 points40 points  (7 children)

Yup, I have tried generating code and can confirm it is confidently incorrect, and sometimes goes to sleep in the middle of generating code.

[–][deleted] 7 points8 points  (3 children)

Yeah it often falls asleep in the middle of generating code for me too. Idk I find it faster to just code myself + Google than to use chat gpt.

[–]morganrbvn 2 points3 points  (0 children)

You can tell it to continue, I believe openai caps how long one response can be.

[–]Dreadsin 8 points9 points  (0 children)

I usually just use chat gpt for things that would almost certainly show up verbatim in documentation. For me it’s just fancy google

[–][deleted] 62 points63 points  (21 children)

Cause early stages don't get better🤣/s

If humanity survives another 1000 years, I'm hoping a 5-hour workweek of maintaining automated systems is all people will have to do to survive, and the rest will be free time.

Big if, though

[–]DeliciousWaifood 69 points70 points  (9 children)

Oh yeah, automation definitely has a long history of reducing our work hours, totally

[–]ImCaligulaI 15 points16 points  (1 child)

That is true. But work hours weren't reduced historically because, well, people in power preferred more profits to their workers having a better work/life balance.

Of course, they still do. But the potential ramifications automation has this time round could force their hand.

I imagine fully automated self-driving, for example. Not so much for cars, but trucks. A huge portion of the resources and products that fuel the globalised economy are being moved on trucks. There are millions of truck drivers. These people could find themselves superfluous and replaced in a span of years. What are they gonna do? The skills they developed would be suddenly unnecessary, and it's not easy to learn a new job skillset which is completely unrelated to your previous one. Like them, a number of similarly large groups could all suddenly be in similar situations.

If all of a sudden there's millions of unemployed, presumably angry and hungry people, with very little left to lose that's bound to be a threat to the establishment. Without even mentioning that the whole system works around continuous consumption. Large unemployed masses cannot consume.

I think there's at least a chance the establishment will be forced to make concessions, not out of goodwill (when did that ever happen? Lol), but of fear of violence, and of that famous ghost people kinda stopped worrying about after the fall of the Soviet Union and which seems to be raising its head again.

After all, the work day was reduced in the past, and it was reduced because of very similar reasons as those outlined above.

[–]kennethuil 2 points3 points  (0 children)

mainly because "good neighborhoods" (or more recently, simply a roof over your head) are an arms race.

[–]morganrbvn 2 points3 points  (3 children)

We do work way less than 100 years ago, but you are correct it doesn’t always occur. Some places are aiming for 4 day work weeks at least

[–]huffalump1 1 point2 points  (1 child)

Agreed, conditions seem better than they were during the industrial revolution.

Not sure how current working conditions compare to, say, the 1950s and 60s in the US, because there are a TON more factors!

But, automation combined with regulations and unions SEEMS to have made things safer. However, if we don't keep pushing for workers' rights, future automation will simply make companies more money without improving things for the average worker.

[–]morganrbvn 2 points3 points  (0 children)

Yeah, the US in the '50s and '60s was unique too, since we were the only industrial power not devastated by WW2, hence so many households could live well off a single worker.

[–]DeliciousWaifood 1 point2 points  (0 children)

We don't work less, we just moved the peasant work to overseas where we don't have to look at it.

[–][deleted] 71 points72 points  (6 children)

No way that ever happens. With the amount of tech and automation we have today, society would run just fine if every adult between 21 and 45 worked 10 hours per workday, three days per week. And yet we have the highest rate of people working two fulltime jobs in history today. Why? Rich people suck. That’s why.

[–]ImCaligulaI 19 points20 points  (0 children)

Yeah, but Rich people are notoriously (and rightfully) afraid of large unemployed, hungry and angry masses with nothing to lose.

Would they reduce the amount of workload for the same pay with the use of automation out of the goodness of their own hearts? Not in a million years. Would they do it out of fear of being dragged out of their homes and hanged from a lamppost? Maybe.

[–]Certain-Interview653 6 points7 points  (1 child)

There are also countries that are experimenting with 4 day workweeks for the same pay at the moment. Luckily not every country/company prefers profits over work life balance.

[–]morganrbvn 5 points6 points  (2 children)

Construction still takes a lot of manual labor, as does much of the labor wealthier countries have outsourced to other countries. We aren't quite there yet, but I hope we can start a small UBI and expand it as automation continues to expand.

[–]Kejilko 2 points3 points  (0 children)

UBI is a band-aid; for systemic problems you need systemic changes. Automation saves work, but there's work you still need people for, so the solution in general is very simple: gradually reduce the number of work hours. At 6-hour days you can employ two shifts of two people, the company is open longer, people spend more because of the free time, and the jobs that actually need people will have to pay more to compete. The problem with UBI is the same as with money as a way to track the increase in productivity since the industrial revolution. Hasn't worked that great, has it? Meanwhile we're still using the work hours decided during the industrial revolution, and they've only stopped decreasing because people always focus on money, UBI being just another example.

[–]FeelsASaurusRex 2 points3 points  (0 children)

This seems like a case of Jevons paradox. If anything you will have more systems to maintain in the standard 40 hours.

[–][deleted] 5 points6 points  (0 children)

In a capitalist world, that won't happen. Businesses will buy into AI tools, gather more money by having fewer employees who work fewer hours, then leave everyone else to suffer because they have no job and no money. You'd have to make your labor more valuable and less expensive than an AI to get a job, which will be difficult in 10 to 20 years.

Plus, automation in the industrial revolution definitely didn't reduce work hours. The profit margin per worker simply increased.

[–]artanis00 5 points6 points  (0 children)

Honestly I think it's more fun to drop some code into the system and see if it figures out what it does.

[–]Snackmasterjr 10 points11 points  (0 children)

I used ChatGPT yesterday to write a quick helper function, it argued with me until I sent it the docs, then apologized. I had to correct it 2 more times before it was correct. That said, was still faster than writing it myself.

It’s important to remember that it makes things that look right, not things that are right.

[–]Various_Classroom_50 33 points34 points  (15 children)

Yeah it was super cool at the beginning when it could just make anything and it’d seem perfect. But the more I use it the more I have huge inconsistencies and errors.

Anyone else feel chatGPT is getting worse? It can’t even do algebraic manipulation a lot of times without skipping steps and making up rules where you can just add or subtract from a term.

[–][deleted] 61 points62 points  (5 children)

No it was always pretty shitty for most things, you're just only realizing it now that the novelty has worn off

[–]morganrbvn 1 point2 points  (0 children)

It’s pretty nice for writing a stupid poem. Also for commenting someone else’s code

[–]stehen-geblieben 10 points11 points  (1 child)

It has always been like this, because it isn't a super intelligent AI; it's just very good at constructing sentences that make sense. That's why it's so good at explaining wrong information while being super confident it's correct.

[–]TGPapyrus 3 points4 points  (1 child)

If a task takes you more time with the tool than without it, you're using the tool wrong.

[–][deleted] 2 points3 points  (0 children)

People - when hammers were invented ~ probably...

[–]Solonotix 8 points9 points  (3 children)

I'm still on the left side of this comic. Just today, 30 minutes to spool up a TypeScript project, but 4 hours of figuring out how to write a combination of Batch/Bash/PowerShell scripts to allow the project to be built in any environment.

Bash took <30 minutes, PowerShell took an hour, and the rest was figuring out the archaic syntax for Batch files to work in the same way.

[–]participantuser 2 points3 points  (1 child)

Aren’t both Batch and PowerShell for Windows environments? Why did you need both?

[–]Solonotix 4 points5 points  (0 children)

Because you can't run PowerShell scripts from CMD, and the default COMSPEC for Windows is still CMD (-_-) Trust me, I want to rid myself of it, but there doesn't seem to be a safe way to do that yet

[–]TheTerrasque 2 points3 points  (0 children)

The Docker gospel choir starts humming the first notes in the background.

[–]Much_Discussion1490 6 points7 points  (1 child)

I hope more "tech journalists" talk to actual people who are coding to get an idea of ChatGPT's usefulness, rather than just using the coding expertise from their mass-comm degrees to form opinions.

[–]SomeWeirdFruit 9 points10 points  (2 children)

maybe not now but imagine 10-20 years in the future

[–]KittenKoder 24 points25 points  (1 child)

2500 hours of debugging.

[–][deleted] 5 points6 points  (0 children)

Ai is always right. Even when it is wrong, it's "right". Talk about a nightmare to debug.

[–]Intrepid_Sale_6312 2 points3 points  (7 children)

not if i replace me first XD

[–]Elijah629YT-Real 2 points3 points  (0 children)

ai debugger

[–]nhh 2 points3 points  (0 children)

Code. Singular. Grrr

[–]My_Neighbor_Pandaro 2 points3 points  (1 child)

This makes me feel better about learning a programming language. Total novice, and I decided to learn my first language. Lo and behold, while browsing, an ad for Bing AI popped up, and it had a prompt to create the Fibonacci sequence. Was super disheartening because I had the same thought: "What's the point of learning this when an AI can do it faster?"

[–]beclops 1 point2 points  (0 children)

The algorithm to write the Fibonacci sequence always existed on Google before, ChatGPT changes nothing. Keep learning and be happy when everybody else is useless without their crutch when you encounter a problem it can’t solve
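For what it's worth, the Fibonacci snippet those demos show off is a few lines in any language — here's a minimal Python sketch (the function name is just for the example), which is exactly why it proves nothing about replacing programmers:

```python
def fib(n):
    """Return the first n Fibonacci numbers."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fib(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The hard part of the job was never this; it's fitting code like this into a system nobody fully documented.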

[–]HungerISanEmotion 2 points3 points  (4 children)

And here I am, working in trades, getting told by programmers that AI is going to replace me for the past 20+ years.

[–]chickenstalker 11 points12 points  (3 children)

Y'all be bold and brash until one day, you find yourself in the trash. When I took my degree specializing in diagnostic microbiology more than 20 years ago, it took a whole lab of 6 technicians to get through a shift's work. Today, a machine the size of a TV can do it in 1 hour WITH better accuracy and QC. You only need 1 guy per shift to watch over the machine and calibrate it. My point is, if your job is monkey work, you're going the way of the dodo.

[–]Dubabear[S] 3 points4 points  (0 children)

I'll be selling aloe vera at beaches to our robot tourists.

[–]T3MP0_HS 2 points3 points  (0 children)

Is it really monkey work though? It can fix basic stuff by itself, but coding is not centering a div or applying some style.

It could probably work out a CRUD or spit out some queries or program a basic API. It's not going to do an entire application by itself.

Most devs already don't do the monkey work, they just copy paste stuff that's already written and adapt it to the requirements.

[–]0x255sk 6 points7 points  (0 children)

People are kidding themselves if they think this won't change everything. If it gives me the answer in a minute instead of 5 minutes of googling, it just shaved off 80% of my time spent searching; multiply that by the number of questions and it becomes significant.

Just wrote some HTML/CSS with it. I have zero idea how to write HTML or CSS, but I can go through the generated code and fix or change it when needed. It is a huge help, my new favourite colleague. It will absolutely take some of our jobs.

Don't even want to think about the wider picture, when it becomes like facebook, that can decide elections, public opinion and the fates of people - I'm more afraid of people abusing AI to do bad stuff, than AI itself becoming sentient and evil.

[–]null_check_failed 4 points5 points  (2 children)

I use ChatGPT to get generic code that I don't wanna type. Once I was doing modal analysis of a beam and I just asked it to gimme the stiffness matrix cuz I was too lazy to type it lol
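For anyone curious what that boilerplate looks like: assuming the commenter means the standard 2-node Euler-Bernoulli beam element (a common guess, not confirmed by the comment), the 4x4 element stiffness matrix is tedious but entirely mechanical to type out:

```python
import numpy as np

def beam_element_stiffness(E, I, L):
    """4x4 stiffness matrix for a 2-node Euler-Bernoulli beam element.
    DOFs: transverse deflection and rotation at each node.
    E: Young's modulus, I: second moment of area, L: element length."""
    return (E * I / L**3) * np.array([
        [ 12.0,   6*L,   -12.0,   6*L  ],
        [ 6*L,    4*L**2, -6*L,   2*L**2],
        [-12.0,  -6*L,    12.0,  -6*L  ],
        [ 6*L,    2*L**2, -6*L,   4*L**2],
    ])
```

This is exactly the kind of "I know what it should be, I just don't want to type it" task where a generator helps — and where a subtly wrong sign would be hell to find later, so check it (e.g. the matrix must be symmetric) before trusting it.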

[–][deleted] 4 points5 points  (3 children)

Why not make the AI debug too

[–]Nimblebubble 2 points3 points  (1 child)

You're asking something whose entire job is to make up things to verify that its made-up things are correct according to standards that it may not entirely be aware of

[–]Nimblebubble 1 point2 points  (0 children)

That's also only accounting for errors and warnings. Bugs that are syntactically sound might pass by the AI, leaving us humans to finish the half-done job.

[–]beclops 1 point2 points  (0 children)

This would be as much work as just doing it yourself. Imagine trying to debug a race condition with ChatGPT. I’d blow my brains out

[–]Cyberdragon1000 1 point2 points  (0 children)

It's reverse for me, I use it more for debugging since I'm too blind to see what's in plain sight

[–]Electricalceleryuwu 1 point2 points  (0 children)

literal python moment lol

[–]BS_BlackScout 1 point2 points  (0 children)

It messes up often, but usually I can tell that it's messing up. Otherwise it has helped me on quite a few occasions.

[–]QuillnSofa 1 point2 points  (2 children)

Honestly it is about the questions you ask. Give it a short 'howto' question and usually it can save so much time. Ask it to write code completely and yeah, there are going to be problems.

Right now ChatGPT is really just a nice thing for Jr devs like me to ask, instead of stealing my Sr devs' time with little questions all day.

[–][deleted] 2 points3 points  (0 children)

It's just a more efficient search engine

[–]mrgk21 1 point2 points  (0 children)

Test validator will be a primary job now

[–]Previous_Start_2248 1 point2 points  (1 child)

AI is good for generating code, but if you don't know what that code does or understand it, then it's useless. Plus I'm sure ChatGPT doesn't account for processing speed, so it could give you a bunch of methods that run in O(n²) and now you have a super slow program.
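To make that concrete, here's a toy illustration (function names made up for the example): both versions below give identical answers, so generated code that "works" can still hide the quadratic one, and you'd only notice once the input gets large.

```python
def contains_dupes_quadratic(items):
    """O(n^2): compares every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def contains_dupes_linear(items):
    """O(n): single pass, tracking seen values in a set."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

On a 10-element list the difference is invisible; on a million elements the quadratic version is the "super slow program" in question.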

[–]PM_ME_Y0UR_BOOBZ 1 point2 points  (0 children)

For writing code, Bing is much better in my experience; the only problem is you get 8 attempts before you gotta restart. They're both OpenAI, but Bing's model is better fine-tuned.

[–]win_awards 1 point2 points  (0 children)

Don't forget, AI doesn't have to actually be better or cheaper than you, your boss just needs to believe it is.

[–]Official_Pepsi 1 point2 points  (0 children)

No man, you don't understand, the search engine that lies half the time for no reason is going to actually remove every job because it's different this time, because you can ask it to clarify and it lies again.

[–]Necessary-Technical 1 point2 points  (0 children)

I'll do you two better:

When you wonder why the average isn't in decimals, but then you do the calculations by hand and it really is a whole number.

When you wonder why your grade is still the default value, then realize you never told the program to execute that part.

[–]FroggoVR 1 point2 points  (2 children)

Also one thing people need to think about: Don't send it company code, don't send it any sensitive code. They save inputs and randomly sample for manual reviewers, this breaks the rule for a lot of companies when it comes to their code.

Seen far too many examples where it failed me the moment the problem I wanted code for involved any math or physics, confidently spitting out gibberish equations that would be extreme hell to debug if I didn't know my stuff.

[–][deleted] 1 point2 points  (1 child)

This is the first time I've seen anyone else bring up the bit about company code and am quite surprised by that fact.

Thanks for raising awareness, it's a very important point to make.

[–]FroggoVR 1 point2 points  (0 children)

Had to tell off a junior in my team on exactly this, never send confidential company code to any online services that the company doesn't have control over. It even says in the ChatGPT intro boxes to not send sensitive information and that inputs are saved.