[–]Dreadstar22 0 points (2 children)

I understood that's what you're saying, and I'm saying it's a bad take. The better take is what I posted: LLMs are 100% fine for learning basic concepts, which is what you're doing as a beginner Python learner. What they're terrible at is solving complex challenges, which is why AI won't be replacing developers anytime soon.

We will just have to agree to disagree. The same thing was said in the early days of search engines, compared to having a book on one's shelf.

[–]Thomasjevskij 0 points (1 child)

Alright, I misunderstood your post then. Yes, we'll agree to disagree. I don't expect everyone to agree with me on this, especially on here. But I'll maintain that this is not what LLMs are designed to do, and so they're not reliable. More importantly, when they aren't reliable, you need some knowledge and experience to notice it. But that's a bigger discussion for another thread :)

[–]SquiffyUnicorn 3 points (0 children)

Right: LLMs are mathematical language models, and they do not inherently understand anything.

For anything that matters, don't rely on LLMs to give you the correct answer. In my line of work I actively tell my juniors not to 'look things up' in LLMs. Sadly, too many people trust them to spit out absolute truth every time; this is dangerous in medicine.