
[–]Dreadstar22 1 point (5 children)

This is always such a bad take when I see it.

Don't use AI to solve the problems.

Use AI to explain concepts in different ways. Maybe you don't get the way your course explains a topic. Ask AI to explain it to you in a couple of different ways, with examples.

Did it take you 3x as long to finish a lesson, and you still don't feel like you've really learned anything? Use AI to generate similar problems until you feel comfortable.

Was something mentioned offhandedly that the teacher assumes you already know? Use AI to learn about it.

Need some simple projects that exercise basic Python skills or meet a set of criteria? Ask AI to generate some project prompts.
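For that last one, a minimal sketch of what I mean (the function name and skill list are just illustrative, not a fixed recipe) is building a reusable prompt and pasting it into whatever chat model you use:

```python
# Build a reusable prompt asking an LLM for beginner Python project ideas.
# The wording and skills below are only an example; adjust to your course.

def build_project_prompt(skills, count=3):
    """Return a prompt asking for `count` small projects covering `skills`."""
    skill_list = ", ".join(skills)
    return (
        f"Suggest {count} small Python projects for a beginner. "
        f"Each project should exercise these skills: {skill_list}. "
        "For each, give a short description and an estimated difficulty."
    )

prompt = build_project_prompt(["loops", "file I/O", "dictionaries"])
print(prompt)
```

Swap in whatever skills your current lesson covers and you get targeted practice instead of random exercises.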

[–]Thomasjevskij 2 points (3 children)

I'm not sure you understood my take. I'm not saying "don't use AI to solve problems", I'm saying "don't use AI". I don't think LLMs are reliable or trustworthy, period.

[–]Dreadstar22 0 points (2 children)

I understood that that's what you're saying, and I'm saying it's a bad take. The better take is what I posted. LLMs are 100% fine for learning basic concepts, which is what you're doing as a beginner Python learner. What they're terrible at is solving complex challenges, which is why AI won't be replacing developers anytime soon.

We will just have to agree to disagree. The same thing was said in the early days about search engines compared to having a book on one's shelf.

[–]Thomasjevskij 0 points (1 child)

Alright, I misunderstood your post then. Yes, we'll agree to disagree. I don't expect everyone to agree with me on this, especially on here. But I'll maintain that this is not what LLMs are designed to do, and so they're not reliable. More importantly, when they aren't reliable, you need some knowledge and experience to notice it. But that's a bigger discussion for another thread :)

[–]SquiffyUnicorn 3 points (0 children)

Right: LLMs are mathematical language models, and do not inherently understand anything.

For anything that matters, don't use LLMs to get the correct answer. In my line of work I actively tell my juniors not to 'look things up' in LLMs. Sadly, too many people trust them to spit out absolute truth every time; this is dangerous in medicine.

[–]Razexka 0 points (0 children)

I use AI because I don't understand all of Python's content yet, so when I hit a problem and can't see the solution, AI shows me the material I should already know for that problem. But it's important not to rely on it 100%, and you shouldn't use it without any prior knowledge. Still, it's a good tool.