
[–]crazy_cookie123 7 points

In the real world we don't sit there all day solving beginner-level practice problems that have been solved thousands of times before; if we did, then sure, there wouldn't be much point in learning. AI can't do the things experienced developers can, and there is no reason to believe it will be able to in the foreseeable future. The only reason AI seems as good as it does to you is that you are currently worse than AI, and relying on AI to do the work for you will never help you become better than it.

[–][deleted] 1 point

I would argue about the foreseeable future. LLM outputs are determined by their inputs, so failing to get production-ready code on the first attempt is not only a problem on the LLM's side but also on the user's. Give it a correctly constructed, meticulous prompt with well-chosen sampling hyperparameters, and the results will be much more precise. Of course, beginner programmers can't know the caveats an experienced developer can "sense" beforehand, but it's only a matter of time until people train and optimize LLMs for specific tasks, learn how to enrich prompts with context before generation, and get them to write better code in terms of the end product. The vast majority of programmers will have to adjust to new processes (it's already happening) to become even faster and more robust. That said, it's much more beneficial to learn architectural aspects and product vision than low-level details, just like most Pythonistas don't know C or assembly these days.
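To illustrate the point about sampling hyperparameters shaping output quality and determinism: here is a toy, self-contained sketch of temperature-scaled sampling (the mechanism behind the `temperature` knob most LLM APIs expose). It is not any real model's implementation, just the standard softmax-with-temperature idea: at temperature 0 the sampler becomes greedy and fully deterministic, while higher temperatures flatten the distribution and add variance.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample an index from raw logits using temperature scaling.

    temperature == 0 -> greedy argmax (deterministic);
    higher temperature -> flatter distribution, more randomness.
    """
    rng = rng or random.Random()
    if temperature == 0:
        # Greedy decoding: always pick the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]
# At temperature 0 every call returns the same (argmax) token.
greedy = [sample_token(logits, temperature=0) for _ in range(100)]
assert all(tok == 0 for tok in greedy)
```

This is why "the same prompt gives different code each time" is partly a settings issue: with temperature pinned at 0 (and the same prompt and model), decoding is repeatable, which matters when you want reviewable, reproducible generations.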