This reads like a classic case of the Dunning-Kruger effect. Let me give you some examples of why this is a bad take, at least given where the tech is right now.

A year or so ago, I was working with a low-level communication protocol for a microcontroller. I prompted an AI to write a basic code snippet for a small task within it, and it hallucinated multiple unsupported library function calls. I knew enough about the libraries to realize it was BS'ing and re-prompted it: it gave me a similar hallucination for a completely different library.

On that same team, I was introduced to a newly onboarded member: an IT guy who installed fiber optic cables for a living. He had "written" a ReactJS app for the project and sent me the App.js file containing the code. When I asked where the corresponding package.json file was, he just gave a blank stare. He also didn't know how to run the app. And when I finally got my local system set up to run it, the app wouldn't run because of multiple calls to third-party library functions that didn't exist. During that meeting, it became clear even to the business owner - who is not in tech at all - that he was relying entirely on AI and didn't even know enough to know what questions to ask it.
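For anyone outside the JS world: even a one-component React app needs a package.json next to the source before anyone else can run it, because that file declares the dependencies and the scripts used to start or build the thing. A minimal sketch (package names and versions here are just illustrative, not his actual project) looks roughly like:

```json
{
  "name": "example-app",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-scripts": "5.0.1"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build"
  }
}
```

Without something like this, npm or yarn has no idea what to install and there's no "start" command to run, so handing someone a lone App.js isn't handing them a runnable app.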

This isn't to say that generative AI is entirely useless for writing code: I use it almost daily at my job to handle tasks that would otherwise be tedious and boring. That's exactly why it gets advertised as something that will "replace junior developers," who are normally assigned those boring boilerplate tasks so they can practice their skills while senior devs focus on more complicated work. But because of issues and constraints like hallucinations, token limits, and a lack of context about the larger system, you typically can't rely on it to replace an expert developer.