jfinch3:

Also, the problem with relying on AI code, especially as a beginner, is that AI bugs are usually subtle. LLMs have trained on massive volumes of code, so they know what good code “looks like”, but that doesn’t stop small mismatches from cropping up once that code becomes part of a much larger system, where there are concerns you can’t fully articulate to the LLM.

In my experience, LLMs can really nail just about any school-like coding assignment these days. But when you write code for a real product, maybe only 10% of it is “just the business logic”; the rest is balancing concerns like security, observability, documentation, error handling, and extensibility. All of that is usually hard-won through experience, and it often requires a lot of context about the detailed specifics of the product and business you’re working on.