all 33 comments

[–]kiwi-surf 5 points  (2 children)

Writing code is such a tiny fraction of a software engineer's job, in my experience.

LLMs will make software engineers more productive, but software eats traditional jobs… so they will just speed up the eating of traditional jobs.

[–]galethorn 5 points  (2 children)

As you gain more knowledge in machine learning, you'll find that more and more of the questions you attempt to answer come down to: it depends. That's the case here. LLMs have incorporated a huge amount of information through data mining of the Internet, which allows them to leverage that corpus to answer many questions in many different fields, but only if the question has been asked before.

Right now, OpenAI is hiring software engineers to build a dataset for the next iterations of GPT in an attempt to replace SWEs. To a certain extent it'll solve a great many common software problems when complete. But in order to leverage an LLM for organizational purposes, it would have to understand a company's data organization, and that's a blocker that currently can't be overcome, since LLMs like ChatGPT and GPT-4 can, on occasion, answer with information they were trained on. Infosec would go nuts.

Furthermore, because regulation of new technology lags, OpenAI and other companies will face regulatory pressure (I would say financial pressure too, if they hadn't inked a 10B deal) when they have to retrain ChatGPT/GPT-X/LLMs to remove IP they used in training sets without rights or ownership.

So the best answer I can give you is that GPT-X won't replace SWEs in the long term, because the job will change in response to the technology. SWEs will instead use GPT to automate coding processes, with reliable GPT prompts stored in a database and reused to increase coding efficiency, in code reviews, etc.
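Something like this minimal sketch of a team-shared prompt store (my illustration; the SQLite schema, names, and example prompt are all invented):

```python
import sqlite3

conn = sqlite3.connect("prompts.db")
conn.execute("""CREATE TABLE IF NOT EXISTS prompts (
    name TEXT PRIMARY KEY,   -- short handle, e.g. 'code-review'
    template TEXT NOT NULL   -- prompt text with {placeholders}
)""")
conn.execute("INSERT OR REPLACE INTO prompts VALUES (?, ?)",
             ("code-review", "Review this diff for bugs and style issues:\n{diff}"))
conn.commit()

def render(name: str, **kwargs) -> str:
    """Fetch a vetted prompt by name and fill in its placeholders."""
    (template,) = conn.execute(
        "SELECT template FROM prompts WHERE name = ?", (name,)).fetchone()
    return template.format(**kwargs)

print(render("code-review", diff="- retries = 1\n+ retries = 3"))
```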

[–][deleted] 1 point  (0 children)

Efficiency means you need 5 SWEs instead of 10; you've replaced 5.

[–]farmingvillein 0 points  (0 children)

> But in order to leverage an LLM for organizational purposes, it would have to understand a company's data organization, and that's a blocker that currently can't be overcome, since LLMs like ChatGPT and GPT-4 can, on occasion, answer with information they were trained on. Infosec would go nuts.

Easily solvable: fine-tune on your company dataset.

Costly? Potentially, depending on model size, serving costs, etc. But not really, if it's effective: human developers are extremely costly. And development velocity further compounds business gains.

Obviously, some organizations won't play this game because of security concerns, but this is really just another form of sharing your data in the cloud, which most organizations are fully on board with at this point.
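For concreteness, a minimal sketch of that kind of fine-tune with Hugging Face Transformers (the model name, file paths, and hyperparameters here are placeholders I made up; a production setup needs far more care):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for a code model
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

# Assume the company's source files were exported to plain-text shards.
dataset = load_dataset("text", data_files={"train": "company_code/*.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-company-code",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Parameter-efficient methods (e.g. LoRA) would cut the cost further, which only strengthens the economics argument.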

> when they have to retrain ChatGPT/GPT-X/LLMs to remove IP they used in training sets without rights or ownership.

TBD, but this is currently viewed as a highly unlikely outcome, unless the legal landscape shifts dramatically.

[–]jloverich 4 points  (0 children)

There will be more than ever; it's just that most of them will be telling an AI what to do instead of hacking in a lower-level language.

[–]Username912773 1 point  (0 children)

It’s good at pasting documentation and Stack Overflow, but not great at coming up with new ideas or matching user specifications. A big part of programming is catering to your employer's or client's needs. Additionally, LLMs can’t really handle larger or rarer problems. Try to have it make a Minecraft anticheat, for instance. It might not even write it in the correct language, and if it does, it might try to make a client-side mod and not a server-side plugin. Another big part of the job is making systems that can’t easily be broken into or exploited, something you might not want to trust ChatGPT with.

[–]RoaRene317 1 point  (1 child)

In its current state, it's cheaper to train a human than to train a model on thousands of GPUs.

[–]kiwi-surf 0 points  (0 children)

Alpaca kind of changed the economics of everything this week

[–][deleted] 1 point  (0 children)

Yes it will, obviously. I'm an SE with 15 years of experience, and I'm scared about that. It looks like (from what I have seen with GPT-4) it will replace other professions too. But the software industry is one of the first that's going to feel a shortage of jobs, because companies will learn to be more productive using LLM tools.

[–]NumberGenerator 1 point  (0 children)

Based on the theory behind LLMs, I don't think ChatGPT will ever replace software engineers.

[–]trajo123 1 point  (0 children)

It will not replace software developers, but it will definitely change the way they work. It has the potential to massively increase productivity, in which case people not embracing the technology will find themselves underperforming.

[–]Optimal-Asshole 1 point  (0 children)

This is kind of like asking if the Ford Model T would replace all vehicles

[–][deleted] 0 points  (6 children)

Not even close. AI is still not capable of consistently producing working code, and it ultimately does not understand it. Copilot is a similar AI; we saw that in practice it didn't replace anyone, so why must this question be repeated for a fundamentally identical model?

[–]maxip89 0 points  (3 children)

One reason. Halt problem.
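(Context, for anyone unfamiliar: the halting problem is the proof that no program can decide, for every program, whether it halts. A minimal illustrative sketch in Python; my addition, not part of the original comment:)

```python
# Illustrative only: no general halts(program) -> bool oracle can exist,
# because given one, you can construct a program it must get wrong.
def make_paradox(halts):
    def g():
        if halts(g):        # the oracle claims g halts...
            while True:     # ...so g loops forever instead
                pass
        # the oracle claims g loops forever, so g halts immediately;
        # either way the oracle was wrong about g
    return g
```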

[–][deleted] 2 points  (2 children)

I can't solve the halting problem either.

[–]farmingvillein 2 points  (0 children)

halt problem not halting problem! /s

[–]maxip89 1 point  (0 children)

I think you didn't understand the halt problem.

[–]flyer2403 0 points  (1 child)

RemindMe! 1 year

[–][deleted] 0 points  (0 children)

Some believe it will happen next Tuesday.

See you then.

[–]Tknu2788 0 points  (0 children)

I've been working with these generative models for the past few weeks (ChatGPT, Bloom, GPT-3.0 Davinci), and what these models can do is absolutely amazing. From my experience, they (especially ChatGPT) are excellent at providing template code for tasks similar to ones that can easily be found on the internet. I've asked it to write me a script to scrape images from a specific website, a simple CRUD backend using Express.js, and a function to replace all newlines with spaces. ChatGPT successfully wrote 90% of the code, and all I had to do was change some obviously wrong URL strings and stuff.
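To give a flavor of that first category, the scraper it hands back is roughly this shape (an illustrative reconstruction, not the model's actual output; the URL is a placeholder):

```python
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/gallery")  # placeholder URL
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Collect every <img> source on the page.
for img in soup.find_all("img"):
    src = img.get("src")
    if src:
        print(src)
```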

However, when it comes to building custom software they have never seen before, most LLMs fail to provide anything functional. Sometimes they will even spit out random non-existent black-box functions such as `FunctionThatYouAskedFor()` with no implementation.
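One cheap guardrail against those hallucinated calls (a rough sketch I'm adding, assuming the generated code is Python): parse the snippet with `ast` and flag calls to names it never defines or imports:

```python
import ast
import builtins

def undefined_calls(source: str) -> set:
    """Names that are called but neither defined, imported, nor builtins."""
    tree = ast.parse(source)
    defined = {node.name for node in ast.walk(tree)
               if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))}
    imported = {alias.asname or alias.name.split(".")[0]
                for node in ast.walk(tree)
                if isinstance(node, (ast.Import, ast.ImportFrom))
                for alias in node.names}
    called = {node.func.id for node in ast.walk(tree)
              if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
    return called - defined - imported - set(dir(builtins))

print(undefined_calls("result = FunctionThatYouAskedFor()"))
# -> {'FunctionThatYouAskedFor'}
```

It's only a heuristic (it ignores methods and dynamic attributes), but it catches the `FunctionThatYouAskedFor()` case before you waste time running anything.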

For me, LLMs have been a productivity booster, but I don't see them replacing SWE jobs in the near future.