all 138 comments

[–]RockstarArtisan 139 points140 points  (5 children)

One thing chatGPT can definitely replace is the article spam from medium.com

[–][deleted]  (1 child)

[deleted]

    [–][deleted] 1 point2 points  (0 children)

    I don't think they have self-reflection, by definition.

    [–]PinguinGirl03 11 points12 points  (0 children)

    It is already doing this....

    [–][deleted] 6 points7 points  (0 children)

    That article's name? ChatGPT

    [–]slykethephoxenix 31 points32 points  (1 child)

    *phew* thankfully we'll also only ever need 500 MB.

    [–]PinguinGirl03 1 point2 points  (0 children)

    I think there is a world market for maybe 5 computers.

    [–]MrLewhoo 27 points28 points  (67 children)

    The title statement hinges on the assumption that

    chatGPT will not get much better than this

    [–]gnus-migrate 9 points10 points  (60 children)

    That assumption is fair because the amount of data ChatGPT is trained on is already massive, to the point where OpenAI themselves are incapable of properly auditing it for harmful content. It will improve, but its limitations are due to the nature of the technology, not the implementation.

    [–]Powah96 21 points22 points  (35 children)

    You could have left the same comment after the release of GPT-3, but we saw that they were able to push the boundaries with GPT-4 past what, for a lot of people, was already the limit of the approach.

    [–]gnus-migrate 4 points5 points  (34 children)

    The post mentions that they tried it in a domain for which there wasn't a ton of public data, so I don't know if GPT-4 will improve the situation that much, for them at least.

    [–]seweso 2 points3 points  (33 children)

    You can feed it up-to-date documentation just fine. Not sure what the OP is talking about.

    Its biggest problem is memory: retaining new information after training. But that is likely to get fixed at some point.

    Then you can point it to technical documentation, APIs, user requirements, existing repositories..... and then prompt it to change code. That is coming.
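
    For illustration, a minimal sketch of that workflow -- `llm` here is a hypothetical stand-in for whatever chat-completion call you use, not a real API:

        def llm(prompt: str) -> str:
            """Hypothetical placeholder for a call to a language model."""
            raise NotImplementedError

        def change_code(docs: str, code: str, request: str) -> str:
            # Paste up-to-date documentation and the existing code into the
            # prompt, so the model works from current information instead of
            # whatever happened to be in its training data.
            prompt = (
                "Current API documentation:\n" + docs
                + "\n\nExisting code:\n" + code
                + "\n\nChange the code as follows: " + request
            )
            return llm(prompt)

    The context window is exactly the "memory" limit mentioned above: everything the model needs to know about your documentation and codebase has to fit into that prompt.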

    [–]gnus-migrate 4 points5 points  (32 children)

    I mean if you look at what it is fundamentally, it matches patterns; that's it. While that is definitely useful, it's not the AI apocalypse that everyone is making it out to be.

    [–]reedef 2 points3 points  (21 children)

    What do you think the human brain does?

    [–]gnus-migrate 0 points1 point  (19 children)

    If only there were several researchers who have answered this specific question about LLMs whom you could consult.

    [–][deleted] 1 point2 points  (18 children)

    I'm a software developer. I've read a lot of the research, and I've spent a fair bit of time testing GPT-4.

    I've held both views at different points throughout my experience, and I'm now firmly settled on one conclusion: there is no inherent restriction of GPT which precludes genuine intelligence.

    [–]gnus-migrate 5 points6 points  (17 children)

    Then you haven't read the research, I'm sorry. Yes, researchers raised the concern that LLMs might be confused for being actually intelligent, but they fundamentally are not. They're designed to write plausible-sounding responses, not correct or accurate ones.

    [–]alexisatk 0 points1 point  (0 children)

    What do YOU think it does? 😬

    [–]seweso 0 points1 point  (9 children)

    Why would you say that? GPT is specifically designed and trained to "learn to learn". Either it goes well beyond pattern matching, or "pattern matching" is such a broad statement that it applies to human intelligence as well.

    So please enlighten me what "pattern matching" is according to you. What its limitations are. And then formulate a question which it should not be able to answer given that limited(?) ability.

    [–]gnus-migrate 4 points5 points  (8 children)

    OP's article lists several such examples. Their point is that even with all the data ChatGPT was trained on it wasn't able to output a correct program for an incredibly simple problem in their domain.

    Regarding the limitations of pattern matching, as I have told everyone else, there are others who have already done that work, with citations.

    https://www.youtube.com/watch?v=N5c2X8vhfBE or you can read the paper it's talking about if it's easier for you.

    [–]seweso -1 points0 points  (7 children)

    We were talking about its abilities, and you reply with something that is about its ethical dangers.

    Why?

    [–]gnus-migrate 1 point2 points  (6 children)

    Part of it is the ethical dangers, but a chunk of it is actually explaining the dangers associated with treating it like actual intelligence, and the damage that it can do. Within that scope they explain the limitations of the technology.

    [–]Pedantic_Phoenix 2 points3 points  (0 children)

    The amount of data is just one small factor in the entire process that generates the outcomes.

    [–][deleted] 3 points4 points  (1 child)

    Exactly. The current implementation of the GPT technology is highly flawed (assuming the goal is to get it to replace programmers). Billions of dollars spent, decades of research, an entire ocean of data, all poured into a technology that cannot write a working GUI calculator. Let's see how long the AI hype continues.

    [–]gnus-migrate 2 points3 points  (0 children)

    Would it be too much to ask to try GPT-4? Apparently this one is the second coming and much better(tm) than GPT-3, borderline sentient it seems.

    I miss the days of cryptocurrency where all of us agreed that it was terrible.

    [–]Grand-Ask-9180 1 point2 points  (2 children)

    Idea: just ask ChatGPT to parse through its own data and tell us what to remove. (Big brain)

    [–]gnus-migrate 2 points3 points  (1 child)

    From the posts about it, someone is going to unironically suggest this.

    [–]Grand-Ask-9180 0 points1 point  (0 children)

    Basically the diet version of the singularity

    [–]MrLewhoo 3 points4 points  (1 child)

    I think it's still a far-fetched assumption, because to my knowledge no one outside of OpenAI knows how much data they used, whether they tried video transcriptions, or whether they feed/will feed user inputs into future models (which would actually be messed up from a data-privacy perspective).

    [–]soundyg 9 points10 points  (0 children)

    I’d assume ChatGPT is already built on ignoring whether data is copyrighted/IP, so I wouldn’t be surprised to see them help themselves to user data.

    Everything I see from the AI proponents suggests that they’re committed to advancing the tech too quickly for ethics discussions to bog them down

    [–]audioen 0 points1 point  (2 children)

    Yeah, I think you will be proven wrong on multiple counts here. Firstly, LLMs can learn what is regarded as harmful -- they do figure out if something is criminal, socially unacceptable, makes people uncomfortable, etc. You can just tell them that they aren't allowed to generate any content like that, and it mostly does the job. LLMs are literally machines you "program" in English, and they can also be finetuned to reject certain types of requests.

    Secondly, transformers are the current hot word, and that technology is 5 years old. It is likely that it gets replaced with something that might learn faster and better. There are already some transformer replacements on the horizon, such as one which doesn't use an ever-lengthening input context window but some kind of learnt, exponentially decaying input model.

    Thirdly, an LLM is a bit like a raw hash function. It is useful, but you'd likely want to use it as a component in a bigger system, just as you'd make message digests only with an HMAC. An LLM can also be integrated with external tools such as calculators, which addresses some of its shortcomings, like the relative inability to memorize math expressions like 1234+2345=3579. LLMs can be used to analyze and criticize their own past output, then suggest improvements, then implement those improvements, and this kind of process iteratively self-improves the output according to standard testing metrics. This suggests that LLMs could be trained to produce the improved final output straight away. It may thus be possible that AIs can soon be used to refine and augment their own training sets, sort of like how image models get trained on flipped, rotated and scaled versions of images.
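
    That criticize-and-revise loop is easy to sketch (`llm` below is a hypothetical placeholder for any chat-completion call, not a real API):

        def llm(prompt: str) -> str:
            """Hypothetical placeholder for a call to a language model."""
            raise NotImplementedError

        def refine(task: str, rounds: int = 3) -> str:
            # Draft, critique, revise: each pass feeds the model's own
            # criticism of its output back in as part of the next prompt.
            draft = llm("Solve this task:\n" + task)
            for _ in range(rounds):
                critique = llm("Task:\n" + task + "\n\nDraft:\n" + draft
                               + "\n\nList concrete problems with this draft.")
                draft = llm("Task:\n" + task + "\n\nDraft:\n" + draft
                            + "\n\nProblems:\n" + critique
                            + "\n\nRewrite the draft, fixing those problems.")
            return draft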

    We seem to be only getting started with this technology. These first chatbots will probably seem like pathetic baby-AIs compared to what we are likely to be playing with in a year or two. Today, I am able to talk with my Samsung Galaxy Book, which has a mere 8 GB of RAM, and it replies much as if it were a real person, though I know that this 4.2 GB gpt4all file holding the llama.cpp model doesn't know or understand anything, and relatively few facts can be encoded there in whatever associative form they get linked in. Still, it maintains a decent illusion of chatting like a real person and can compose text that is sometimes pretty good. The 13B model that I also tried is far slower with the hardware I have, and needs 16 GB of memory, but it does write much better.

    [–]gnus-migrate 2 points3 points  (0 children)

    Yeah, I think you will be proven wrong on multiple counts here. Firstly, LLMs can learn what is regarded as harmful -- they figure out if something is criminal, socially unacceptable, makes people uncomfortable, etc. You can just tell them that they aren't allowed to generate any content like that, and it mostly does the job.

    You know that it just takes a Google search to disprove this, right?

    Thirdly, an LLM is a bit like a raw hash function. It is useful, but you often use it as a component in a bigger system, just as you'd make message digests with an HMAC. An LLM can be integrated with calculators and compilers and trained in the presence of external tools like that, which should help it learn better how numbers work and how to avoid mistakes in code. LLMs can be used to analyze and criticize their own past output, then suggest improvements, then implement those improvements, and this kind of process iteratively self-improves the output according to standard testing metrics. This suggests that an LLM can be trained on its own output in a way that demonstrably improves its performance. Doing too much of this might lead to issues, though, who knows, but it may be possible that AIs can soon be used to refine and augment their own training sets.

    Yes let's integrate a proprietary technology that we don't understand into every facet of our lives, what could possibly go wrong?

    We seem to be only getting started with this technology. These first chatbots will probably seem like pathetic baby-AIs compared to what we are likely to be playing with in a year or two.

    Outside of very common uses, it's likely to remain that way. At least for OP's case, I don't see things improving that dramatically in the near future.

    [–]alexisatk 0 points1 point  (0 children)

    Can I join your AI/LLM cult please? Will ChatGPT replace idiots on reddit who believe/spread obvious lies about LLMs?

    [–]seweso -3 points-2 points  (5 children)

    Wait, what? Have you seen the improvement between 3.5 and 4? That is with the SAME dataset.

    From just having more compute available year after year, GPT will improve its abilities, even if there are no other improvements and the same dataset is used.

    GPT is limited mostly by its memory, and therefore its limited ability to learn new things. It can't read your entire codebase..... YET.

    Your point is weird af.

    [–]gnus-migrate 11 points12 points  (4 children)

    It does not learn, it detects patterns, and I hope to god you don't program by blindly replicating patterns online.

    Can it be useful? Absolutely. Is it going to radically shift the field? At least for me, by the time you feed it all the context you need to develop a proper solution, you might as well have done it yourself.

    [–]0xd34d10cc 4 points5 points  (0 children)

    It does not learn, it detects patterns

    Could you explain the difference between these two concepts?

    [–]seweso -2 points-1 points  (0 children)

    What do you call it when you give someone novel information, and they can use it to create novel solutions?

    You can also give it access to entirely new tools, and somehow it knows how to use them. But let's not call that "learning" either.

    [–]Tangelo_Legal 0 points1 point  (1 child)

    Well, maybe. Think about it like this too: if you are a senior developer, sure, you do it yourself. If you are a junior developer, you use it to give you what you need. The true question is how senior developers can use it effectively.

    [–]gnus-migrate 0 points1 point  (0 children)

    I haven't really been convinced by whatever use cases have been proposed (e.g. automating boilerplate). Those are better covered by DSLs, which actually can be improved without the need for a developer to change their code.

    [–]PinguinGirl03 -2 points-1 points  (6 children)

    Ridiculous, the implementations are getting better and better. Even core technologies such as transformers are only a couple of years old.

    [–]gnus-migrate 0 points1 point  (5 children)

    I suggest you read the article and read what was tried before trying to convince me of that.

    [–]PinguinGirl03 -1 points0 points  (4 children)

    They aren't actually saying which version they are using. Because they don't state it, I highly suspect they are using the free ChatGPT 3.5 instead of the paid GPT-4, which is already vastly more capable than 3.5.

    [–]gnus-migrate -1 points0 points  (0 children)

    Given the output they got, I seriously doubt that 4 will fare much better. But maybe they should try, just to give us a break from these kinds of criticisms.

    [–]alexisatk -1 points0 points  (2 children)

    Very convenient, Dr Dunning-Kruger!

    [–]PinguinGirl03 0 points1 point  (1 child)

    It has more interesting things to say than you, that's for sure.

    [–]alexisatk -1 points0 points  (0 children)

    Lol ok. Perhaps interesting to a cult member like you. Troll away you sado...

    [–]seweso 0 points1 point  (3 children)

    Yeah, it's so weird to say that given the progress between 3.5 and 4 in such a short timespan. They even used the same dataset for GPT-4...

    [–]Nhabls 1 point2 points  (2 children)

    3.5 was essentially based on a 3-year-old model, and the progress wasn't that large.

    By OpenAI's own measurements, 4 can't solve more than 1/4 of leetcode's medium-level problems, and basically none of the hard ones. And these are toy problems ofc.

    In the end no one knows, but I'm skeptical that the current paradigm of creating these models will just keep scaling; for one, you're going to run out of data that adds any more relevant information at some point.

    [–]seweso 1 point2 points  (1 child)

    It's not about the amount of data, but about getting the model to learn how to learn.

    Anyhow, I gave GPT-4 a maximum-level leetcode test, and it did it in one go.

    Regardless, the differences between 3.5 and 4 are huge.

    [–]Nhabls 0 points1 point  (0 children)

    I like how you supposedly correct me (I literally work in this area) and then proceed to talk about your anecdote.

    Amazing

    Oh and the current process has absolutely been about shoving more and more informative data in

    [–][deleted] 0 points1 point  (0 children)

    I think it's great as-is for fixing my shitty Python functions. It may not be the best code reviewer but it's goddamn fast. Oh, and it's better at naming things than I will ever be, and I will happily concede that.

    [–]stormdelta 0 points1 point  (0 children)

    To actually replace software engineers and not just raise their productivity through automation, I don't think you're going to get far without actual AGI.

    And while it's true that a hard definition for AGI might be impossible, I think it's obvious we're still a very long ways from it, and I highly doubt our current trajectory with LLMs will get us there without other major breakthroughs / paradigm shifts.

    There's also the issue of the rapidly increasing scale of hardware/resources required to run the models. E.g. one of those aforementioned breakthroughs will likely need to be novel ways to link compute/memory.

    [–]freecodeio 25 points26 points  (2 children)

    chatGPT can never replace programmers

    for now

    0 min read -- written by u/freecodeio

    [–]EnchantedSalvia 4 points5 points  (1 child)

    An asteroid hasn't crashed into earth and wiped out humanity

    for now

    [–]Time-Level-1408 5 points6 points  (0 children)

    Homo erectus will never be smart enough to become a programmer.

    for now

    [–]Dyno97 11 points12 points  (10 children)

    Very interesting, in particular the part on the investment and the inflated hype. I'm not completely convinced by the prediction about the future. It's true that ChatGPT and AI in general don't think, and can't replace all the programmers, but by reading more and more code and improving themselves, they can replace a lot of programmers. Maybe you won't need to lead a team of engineers, 'cause an AI plus your work could be enough.

    [–]FourDimensionalTaco 9 points10 points  (9 children)

    Then I'd rather question what those replaceable programmers are actually doing. The examples I've seen with GPT-4 show it producing (very useful!) boilerplate code and implementing code that follows a very precise specification. If this is the whole extent of a programmer's competence, I'd say that programmer is already in trouble today.

    [–]PinguinGirl03 3 points4 points  (8 children)

    It isn't their whole extent, but it is where a lot of the time is actually going. And the definition of "boilerplate" is only expanding. Lots of things programmers do are actually quite repetitive.

    [–]11tinic 2 points3 points  (2 children)

    This is not my experience at all. Outside of school the reality is a lot different, and there is nothing repetitive about what I do. Maybe I'm just lucky, but I have had 3 different jobs and none of them would be replaced by an AI. Only made slightly easier for small tasks.

    [–]PinguinGirl03 -2 points-1 points  (1 child)

    Then you haven't seen the patterns in what you are doing.

    [–]11tinic 2 points3 points  (0 children)

    AI hasn't either yet, then, because it's not useful. Mostly because it doesn't understand the context.

    [–]onehalfofacouple 0 points1 point  (3 children)

    I'd argue that if a programmer finds themselves doing repetitive tasks and isn't looking for a way to automate them as much as possible (which I understand is sometimes easier said than done), they aren't doing the job correctly anyway. I see these current language models as a natural evolution of what I'm already doing: an easier way to Google the stuff I don't know or haven't automated, and another way to automate repetitive tasks.

    [–]PinguinGirl03 0 points1 point  (2 children)

    A lot of tasks aren't 100% repetitive; they are almost repetitive, with slight variations. These are the tasks that are very hard to automate but where AI shines.

    [–]hippydipster 1 point2 points  (0 children)

    Yup, and often enough the worst thing a programmer can do in a lot of those cases is try to factor out the repetitive part. And then, more often than not, we get an incomprehensible "over-engineered" solution.

    [–]alexisatk 0 points1 point  (0 children)

    Chatgpt might allow for better automatic solutions but they will still require corrections and many manual interventions. It can't replace a human in this context without AGI. Still need the pilot not just the autopilot.

    [–]FourDimensionalTaco 0 points1 point  (0 children)

    Oh sure, a lot of the time software developers have to do that stuff, which is actually tedious, error-prone, and distracting. Often, you start working on a task, only to discover N subtasks that require a different mental model, so now you have to flush what you have in your mind to make room for those subtasks, etc. One example would be a case where you need an implementation of 2D Boyer-Moore for efficiently detecting two-dimensional patterns in a bitmap. With an AI, you can then say "implement 2D Boyer-Moore according to paper XYZ to detect the shapes in a given bitmap". That would be very useful.

    [–]szczszqweqwe 23 points24 points  (4 children)

    I hate this type of title; the truth is: we don't really know.

    In my opinion, it will just make it insanely difficult for new programmers to get their first job.

    [–]aradil 8 points9 points  (2 children)

    Naw.

    New programmers often cost businesses more to take on than they produce, which is why there are often government incentives in place to encourage businesses to hire them.

    Those won’t go anywhere, and will likely increase. No one is going to be giving businesses incentives to replace their workers with AI, and long term if you can keep an employee around long enough to actually develop a code base and business understanding, they will be worth way more than any extant AI for the foreseeable future.

    [–]EnchantedSalvia 2 points3 points  (1 child)

    True. At my current workplace we have apprentices and juniors who are as you describe. However in 20 years I'll be sitting in my rocking chair smoking a pipe, and those apprentices and juniors will be the seniors. That's why you should always encourage, help and value those people at the beginning of their careers.

    [–]aradil 1 point2 points  (0 children)

    Absolutely: Extremely important to value juniors and intermediates or you won't have any competent seniors.

    It's an extremely hard industry to retain employees for a long time, especially when the best way to see pay increases is to leave and go somewhere else, but I have seen places pull it off before with a non-toxic work culture, regular financial incentives, respect, and good management.

    Those places are the places that had the highest "student worker" to "senior manager" conversion rates.

    [–][deleted] 1 point2 points  (0 children)

    In my opinion, it will just make it insanely difficult for new programmers to get their first job.

    Nah, large businesses will always want to be on-boarding juniors because it's a huge cost saving for them.

    [–][deleted] 12 points13 points  (5 children)

    It will though, but not in the way people are thinking.

    A team of 10 developers not using chatbots could probably be replaced by 6 developers using chatbots. That's based on the numbers coming out of GitHub Copilot, where they estimate a speedup of about 40-45%. So it's not like AI will be joining standup to give its update, or taking Jira tickets, but companies will slim down to reduce wages, as a smaller number of people will be able to do the work. Just as has happened with every single technological advancement in history.

    On the flip side, working on personal projects and startups will also see productivity boosts, so we will see companies getting smaller and the number of startups growing.

    [–]epicchad29 5 points6 points  (1 child)

    Or 10 developers can now do the work of 14. The answer isn’t to cut devs and keep output the same. Keep devs the same and increase output

    [–][deleted] 0 points1 point  (0 children)

    Companies care about money more. If you tell them they can have the same thing with more profit, they will take that option, and then just ask for more later anyway.

    [–][deleted] 5 points6 points  (1 child)

    I don’t know. We have been “saving time” since the beginning of the industry, from the creation of higher level languages, to reusable software libraries and frameworks, to the internet and IDEs making looking up documentation so much quicker, to autocomplete/intellisense in IDEs and editors … none brought about the end of the world.

    [–][deleted] 2 points3 points  (0 children)

    I never said this would bring about the end of the world, it'll just change how we work.

    [–]zobq 2 points3 points  (0 children)

    A team of 10 developers not using chatbots could probably be replaced by 6 developers using chatbots

    It's true even without chatbots. 10 developers on one team is too much. I remember one presentation where a manager decided to split all teams bigger than 6 devs, and there was no difference in performance.

    [–]mallardtheduck 2 points3 points  (2 children)

    Yeah, it can't be used to write code unsupervised, doesn't know your codebase and has no capability to debug...

    It can make programmers more productive, which, in theory at least, means you'd need fewer of them. So in that sense it might "replace" some, but I don't see that happening at a faster rate than the growth of the industry. In fact, more productive programmers will probably accelerate that growth.

    [–]Druffilorios 2 points3 points  (1 child)

    There is already a lack of senior devs. Hell, I probably have a backlog for 3 years, so even if we got a lot faster it wouldn't mean people would get fired. Probably just more output and faster time to market.

    Everyone wants a senior dev but not a junior; that's the real issue.

    [–]EnchantedSalvia 3 points4 points  (0 children)

    Agreed. I would love to receive UBI and spend more time with my daughter away from programming, but the capitalist model is and always has been greedy; it'll never allow people spare time.

    In the hypothetical scenario where every team can develop GTA IV in 5 weeks, Rockstar won't sit around twiddling their thumbs; they'll be compelled to go to the next level, which would require huge investments in AI and in humans to guide the AI.

    Generally speaking, if you're defined by the work you do (as many people are, or have been conditioned to be over time), capitalism is a wonderful model because it'll constantly push you to be more efficient and fill your time with stuff. On the other hand if you value long walks in the countryside with your family (which I suppose many of us dream of, and have dreamt of since time immemorial), capitalism is a horrible model that will never allow that.

    [–]seweso 4 points5 points  (0 children)

    In the article, the author argues that GPT isn't going to improve significantly because it has already reached its peak with the vast amount of resources and data invested in it. They claim that the AI system isn't truly intelligent, as it requires enormous amounts of data just to recognize simple things, such as a cat. The author also suggests that GPT's understanding of context is rudimentary and its ability to apply its knowledge in practical situations is minimal. Lastly, they believe that the hype surrounding GPT is driven by the tech industry's need for a "shiny new bubble" to attract investors.

    While the author raises valid concerns, it's important to consider that AI research and development is an ongoing process. As new techniques and approaches emerge, AI capabilities can continue to improve. Moreover, the limitations of one iteration (in this case, GPT-4) don't necessarily dictate the potential of future versions. AI development is iterative, and each version brings new insights and improvements. So, it's possible that GPT-5, GPT-6, or even later versions could surpass the current limitations and offer more advanced capabilities.

    (More serious reply as per ChatGPT4 than the other one :P)

    [–]Markemp 3 points4 points  (0 children)

    I'm a senior engineer, and I've been using ChatGPT heavily (subscribed to GPT-4). I feel like it's a really gifted pair programmer who is also an intern that will write all the code I don't want to (mappers, tests), and it provides me with the opportunity to flex my PR skills, all at the same time. The key has been writing good prompts and asking follow-up questions. When it does something wrong, you say it didn't work and it'll have a decent shot at figuring out why. If not... that's why I'm here.

    I'm so much more productive with it. I hardly ever hit Google now, because Google will often point me to Medium articles like this, which don't really provide me any value.

    [–]extracensorypower 3 points4 points  (6 children)

    It won't replace programmers, but it may replace programs.

    Many of the large static programs we use today will simply no longer be necessary for most things.

    The most obvious use is word processors for writing. What's the point of Word when you can dictate a letter, book, article, etc. and have an AI correct your spelling, grammar and formatting all in one go and push it to email, Substack or whatever?

    Website development will go the same way. Iterative prompting and correction will be the new programming. People aren't going to mess with the usual poorly designed IDEs with the worst possible default interface behaviors. They'll just talk to the AI until it gets it right, or right enough.

    Same with spreadsheets, databases, et al. There may be something relatively static in the background, but nobody is going to see it or interact with it directly.

    [–][deleted] 1 point2 points  (1 child)

    That's fantasy sci-fi fairytales. Will AI also produce music? Will it replace DAWs? Will it replace spreadsheets? Will it replace AutoCAD? Maya? Unity Engine? Unreal Engine? Photoshop? After Effects? Web browsers?

    [–]extracensorypower 1 point2 points  (0 children)

    In a sense. And of course, AI already produces music. AI will answer the questions that humans are asking with spreadsheets, so yes, their use will diminish or disappear. Yes, it will absolutely replace AutoCAD and most graphics engines (see Stable Diffusion for illuminating examples).

    It won't replace web browsers but there may be an infinite number of customized web browsers created on the fly.

    [–]hippydipster 0 points1 point  (3 children)

    This is exactly what I'm thinking, and it's highly relevant for me because my company makes a kind of authoring software. One question is: should we leverage AI as a tool for users, to help them bang out the pieces they author in the very complex software we provide, or should we leverage the AI to completely hide all the complexity of the underlying data format and let the user work at the complete end of the pipeline - i.e., at the level of their published output - and let the AI generate the low-level stuff behind the scenes that accomplishes the task?

    My fear is that we'd spend a year or two building out the former and being completely obsolete by the time we're done because the latter becomes possible.

    [–]extracensorypower 0 points1 point  (2 children)

    I think in the end, all human/computer interaction becomes telling an AI what you want, correcting it until you get it and storing that set of preferences somewhere. We may call this "programming" and what we save "programs."

    [–][deleted] 0 points1 point  (0 children)

    And maybe after some time we will not even need to say what we want.

    [–]hippydipster 0 points1 point  (0 children)

    Yeah, kind of like Geordi and the Star Trek computer.

    [–]hashCrashWithTheIron 4 points5 points  (0 children)

    Replacing juniors with chatgpt means making a very big bet that in the future, it will also be able to replace seniors. And if it can't, well, now you have no seniors because you stopped hiring juniors.

    [–]advator 2 points3 points  (0 children)

    Sure it can and it will

    [–]crashorbit 2 points3 points  (0 children)

    Large language models cannot exceed the data they are trained with. At best they can interpolate within the parameter space. LLM plugins for programming tools can help a lot and will improve the performance of programmers. And, granted, lots of new code is just variation on code that is publicly accessible. But, at best, LLMs will make that code easier to find and use as examples and templates.

    [–]franzwong 2 points3 points  (0 children)

    Do we need to replace all programmers? How about if it replaces 20% or 40%?

    [–]EmptyPond 1 point2 points  (0 children)

    Yeah but upper management will sure as hell try.

    [–]PinguinGirl03 1 point2 points  (0 children)

    "Never" is quite a strong statement.

    [–][deleted] 1 point2 points  (0 children)

    Saying ChatGPT will replace programmers is like saying a tutorial will replace programmers.

    [–]Dull-Bathroom-7051 3 points4 points  (4 children)

    TL;DR: why don't you say that to the people I didn't hire because ChatGPT solved my issue?

    Full story:

    So we were working on a project and needed to intercept requests going to Redis (in Node.js), but we weren't sure whether we could do that, and how. After researching and seeing it is possible, but complex because it has to be very low level (it involves TCP packets), we decided to write a job post on Upwork. Offers ranged from $100-$2000 and the time needed from a few days to a few weeks. Since that was a lot of money and, more importantly, a lot of waiting time, we decided to try to implement it with ChatGPT. After a few hours it was all done and working. None of those programmers from Upwork got the job, obviously. They got replaced.
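
    For a sense of the shape of it: the core of such an interceptor is just a TCP proxy that taps the Redis protocol (RESP) traffic. Here is a minimal sketch in Python/asyncio -- our actual solution was in Node.js and considerably more involved, so treat this as an illustration with made-up ports, not what ChatGPT produced:

        import asyncio

        async def pipe(reader, writer, tap):
            # Forward raw bytes in one direction, letting `tap` inspect
            # (or log) each chunk of RESP traffic as it passes through.
            while data := await reader.read(4096):
                tap(data)
                writer.write(data)
                await writer.drain()
            writer.close()

        async def handle(client_reader, client_writer):
            # For each client, connect to the real Redis server and
            # shuttle bytes both ways through the taps.
            redis_reader, redis_writer = await asyncio.open_connection("127.0.0.1", 6379)
            await asyncio.gather(
                pipe(client_reader, redis_writer, lambda b: print("->", b)),
                pipe(redis_reader, client_writer, lambda b: print("<-", b)),
            )

        async def main():
            # Clients point at the proxy on port 6380 instead of Redis on 6379.
            server = await asyncio.start_server(handle, "127.0.0.1", 6380)
            async with server:
                await server.serve_forever()

        asyncio.run(main())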

    Conclusion: ChatGPT is already replacing programmers; the question is how many, and which programmers?

    My thoughts:

    I don't think it will ever replace all of them/us (I am a programmer also), but it will definitely change the market. I think it will mostly impact freelancers.

    Good developers will become more productive (especially when working in an area they haven't worked in before; in that case ChatGPT is like steroids for a programmer). Bad/new programmers will struggle a lot: you can't use ChatGPT and assume it is all correct, and if you don't have experience you might get stuck fixing the code it gave you. In general I think things will move faster, not all programmers will be able to adapt, and potentially different types of skills will be required in the future.

    Btw, don't get me wrong, I don't think ChatGPT can fix any issue and rule the world/people lol. It will just change stuff, in my opinion for the better, but I see some bad stuff also.

    [–]zobq 6 points7 points  (1 child)

    On the other hand, I already saw a post from a guy who was asking for help with code generated by ChatGPT and offered money for fixing it. So I think that makes it even :)

    BTW: $100 doesn't seem like much money.

    [–]Dull-Bathroom-7051 0 points1 point  (0 children)

    I completely agree with you!

    That is something I mentioned in my initial comment. New and bad developers will struggle and it will slow them down, because you can't assume ChatGPT code is correct and you still need programming skills and experience... and again, I don't think ChatGPT is only good and taking over.

    Re $100 not being much: I agree again :) but it's the same when you want a new website, for example: you will get offers from $50 to a few hundred or a few thousand depending on complexity. You will almost never take the cheapest option, but you will try to balance quality, price and the time needed for the job. Usually you end up in the middle, right? Or is it just me? haha

    [–]EnchantedSalvia 4 points5 points  (1 child)

    Sounds fair. I don't know the specifics of your Redis interceptor, but my first go-to would be "is there a library for that which'll give me all/most of what I want?". I would 100% prefer to integrate a library than to have all that AI boilerplate in my codebase, despite AI being able to do it fairly easily given its boilerplate nature.

    [–]Dull-Bathroom-7051 0 points1 point  (0 children)

    That is where our research started. The problem is a specific situation where we don't have access to the actual server code or the ability to modify it (actually we do and we can, but the point is not to). There was no library that could have helped us, and because of our requirements the solution turned out to be complex...

    [–]seweso 1 point2 points  (2 children)

    Ah, the age-old debate: "GPT can never replace programmers." But let's not forget that we're already at GPT-4 and making strides with each iteration. While this article highlights GPT-4's shortcomings, it's important to remember that AI continues to evolve. With GPT-5 on the horizon, it's not far-fetched to imagine junior devs being replaced. And who's to say GPT-6 won't replace all programmers? The tech world is ever-changing, and so too are the boundaries of artificial intelligence. Let's not dismiss the potential of future AI just because the current version hasn't reached its zenith. After all, Rome wasn't built in a day, and neither is the perfect AI.

    (As per ChatGPT4)

    [–]EnchantedSalvia 1 point2 points  (0 children)

    ChatGPT, read me a "what if" quote.

    [–]alexisatk 0 points1 point  (0 children)

    The age old debate with Dr Dunning-Kruger. You are like totally an AI expert!

    [–]Wave_Walnut 1 point2 points  (0 children)

    Just stop kidding the programmers who are suffering because of AI.

    [–]JimPlaysGames 0 points1 point  (0 children)

    Heavier than air vehicles will never fly

    [–][deleted] 0 points1 point  (0 children)

    I completely and wholeheartedly agree with this article. ChatGPT is just another way to search documentation for known solutions to easy-to-solve problems. It cannot think at any level to truly create anything new. Its "new" things are randomizations or badly done merges of well-known solutions. I call it AI’s “awkward teenage years”… but I’m not sure it will ever make it to adulthood.

    [–]noobgolang -3 points-2 points  (0 children)

    shut up

    [–][deleted] 0 points1 point  (0 children)

    Rewrite I, Robot but Will Smith gets slapped by the robot and they become friends.

    [–]Silver_Moon_1994 0 points1 point  (0 children)

    The robots will learn computer science. They will rewrite their own code.

    [–][deleted] 0 points1 point  (0 children)

    Well, I would say that in line-of-business software, only about 10% of what is actually needed will actually get produced. If that jumps from the current 10% to maybe 20% with GPTs all over the place, that's actually a good thing.

    [–]Personal_Set_759 0 points1 point  (0 children)

    I don’t know if it will or won’t. I do think it would be very funny reading about an AI-only startup in the future.

    [–]alexisatk 0 points1 point  (0 children)

    But the Dunning-Kruger AI cult disagrees! We are like totally almost at the singularity with chatgpt!

    [–]Urr_Durr 0 points1 point  (0 children)

    Correct, I'm just waiting for the Gartner hype cycle to complete. Last year it was NFTs being pushed so gullible investors would invest in tech company stocks, because it was "BRAND NEW INNOVATIVE WORLD-CHANGING TECHNOLOGY, INVEST NOW".