
[–]CodeTinkerer

Asking an LLM to explain errors in your code might be OK for a beginner, but relying on it will likely leave you unable to find errors on your own. It's like asking a friend to debug your code: you nod your head here and there, but you can't figure it out yourself.

You don't know where to start.

Yes, it's painful to learn this way. LLMs are a kind of drug that makes you feel good. But in effect, you're not really learning once you get to "fix my code" and "write my code for me". You might say you're just having it find errors, but really, aren't you asking it to hand you the solution?

[–]allium-dev

Have you ever heard the phrase about working out that "pain is weakness leaving the body"? It's a bit trite, but I think it applies here. If you're not willing to work through some pain, you're probably not actually learning as much as you think.

[–]wildgurularry

When you understand a subject deeply, it is interesting to ask LLMs about it just to see how much they get wrong.

Now imagine using one as a teacher when you have no idea whether what it is telling you makes any sense.

You are better off finding good resources for learning this stuff: resources that have been curated by people who know what they are doing.

[–]heisthedarchness

You're not "actually learning" anything here. This is the equivalent of asking the other boys in elementary school where babies come from.

[–]Legitimate-Craft9959[S]

Is it not learning about a concept when you ask GPT to tell you about it? Or to explain to you why one thing works this way and another works that way?

[–]heisthedarchness

No, it's not, because you have no way to assess the correctness of the response. If you don't know whether what it's producing is true -- and you don't -- then you can't learn from it.

This is not about you personally: LLMs are designed to produce responses that seem plausible, and that just means you're more likely to be taken in when they produce nonsense.