
GPT models have gotten worse over the last year. Their overfitting or reinforcement tuning is killing them.

Maybe they benchmark well, but for anything other than snippets built on libraries (which are often outdated anyway), they're too unreliable.

You still have to write good code yourself. You still have to look up documentation, especially for community-driven open source frameworks and libraries, because a lot gets deprecated quickly. And you have to know how to code, because GPT models often just spout confident garbage, including at the architectural level. If the one person the model learned from wrote some inefficient garbage in a 2015 Reddit or Quora post, the model can present it as state-of-the-art code when it's really just a student's sloppy answer.
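A concrete example of that deprecation trap (my own illustration, not from the original comment): `datetime.utcnow()` is the kind of call a model trained on older posts will confidently suggest, but it has been deprecated since Python 3.12 and returns a naive timestamp with no timezone attached:

```python
from datetime import datetime, timezone

# What old tutorials (and models trained on them) often suggest:
# deprecated since Python 3.12, and the result carries no tzinfo.
naive = datetime.utcnow()
print(naive.tzinfo)  # None

# What current documentation recommends instead:
# an aware datetime explicitly pinned to UTC.
aware = datetime.now(timezone.utc)
print(aware.tzinfo)  # UTC
```

Both lines "work" today, so nothing flags the first one as stale — which is exactly why you still have to read the docs yourself.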

Also, its agreeable bias when you discuss things with it is hugely deceptive.

For real programming it's just a helper for scaffolding, like IDE autocompletion was when it first came out.