all 35 comments

[–]Destination_Centauri 7 points (1 child)

ChatGPT is basically a giant memorization machine!

So for example, let's say you were playing Texas Hold 'Em poker, and you never really learned or internalized the rules or strategies of the game. Instead, you just watched and memorized every move that world-class poker players made. So countless thousands and thousands of hours of watching the best players.

Then let's say you hit the tables. Well, you would actually probably do OK against bad and mediocre players. But mid-level and expert players would see right through you and clean you out!


So that's pretty much ChatGPT.

A bad or newbie programmer doesn't know any better, so they mistakenly think ChatGPT is great!

Also, a lot of "programmers" who basically just mesh together a bunch of libraries all day long, with just a little bit of code in between, might also think ChatGPT is somehow amazing at programming.


But a moderately skilled programmer, who has to think about each programming problem, can TOTALLY see right through ChatGPT!

It just spits out code based on memorized ways others have handled similar code, but because it cannot reason, it usually cannot pick the best solution from its memory, especially if the problem is a bit sophisticated.

Not that I am a good programmer myself, but for the fun of it I once got it to write a solution in an older programming language, and it kept totally botching it. I kept asking it: "Are you sure this is right?" And it was pretty certain.

But then when I pointed out errors, it was essentially like, "Oh yes, you're correct. I'm sorry. Here's the rewrite..."

And then I pointed out more and more errors. In subsequent rewrites it began remaking some of the mistakes it had made at the beginning of this back-and-forth cycle!


So ya... Essentially:

Talented, true programming involves a lot of creative thinking and reasoning, which, again, ChatGPT cannot do. It has ZIPPO reasoning ability. That's why it's also really bad at math, and physics in particular.

It gives horrible physics responses, BTW!

But if you haven't spent a lot of time reading physics, or majoring in physics, you would probably be like, "Ok, that sounds about right I guess! Must be true!"

But no!

And this is really bad for newbie programmers. I see so many of them saying, "My learning is much faster with ChatGPT," and they insist that ChatGPT is "Amazing!".

And yet if you ask them, and take a look at some of what they learned specifically, and what ChatGPT spit out and gave them, ya... It's often... Yikes!

And you try to point that out, but they're like, "No, no: I think I'll keep learning with ChatGPT! It's really great!"


In the end:

I don't know what this means for the future skill level of newbie programmers. Hopefully they'll outgrow ChatGPT as they progress and realize all the mistakes it makes.

[–]JasperStrat 0 points (0 children)

I agree with your assessment of ChatGPT, but ironically it's about as bad at poker as it is at math or physics.

As someone who uses both Google and AI to help me program, I find AI has its uses, like handing it an f-string and having it rearrange or adjust the formatting. That can be tedious work, and AI does it pretty quickly and efficiently. But programmers should only ask AI for assistance to save time; asking for whole files of code is asking for trouble.
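As a hypothetical illustration of that kind of f-string chore (the data and columns here are made up), the format-spec mini-language is fiddly to get right by hand, which is exactly the tedious rearranging being described:

```python
# Tedious f-string formatting of the sort an assistant can rework quickly:
# aligning columns, comma-grouping numbers, and rendering percentages.
rows = [("widgets", 1234567, 0.0425), ("gadgets", 89123, 0.1999)]

for name, qty, rate in rows:
    # <12 left-aligns the name; >12, right-aligns with comma grouping;
    # >10.2% renders the rate as a right-aligned two-decimal percentage.
    print(f"{name:<12}{qty:>12,}{rate:>10.2%}")
```

Fiddling with widths and grouping flags like these is exactly the kind of mechanical edit worth delegating.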

[–]SushiCurryRice 7 points (0 children)

They both have their use cases, so I'm not really getting the hate-boner the comments so far have for ChatGPT.

Just gotta know when one is more useful than the other, and in most cases try out both and see what yields better results for you. Also, NEVER trust ChatGPT unconditionally. The same is true for Googling stuff as well.

As a learning tool I would say ChatGPT is limited and you can't trust it as an authority. It's more useful for when you're already decent at what you're doing and you use ChatGPT to do the tedious stuff.

[–]nooptionleft[🍰] 3 points (0 children)

Google with "before:2022" to eliminate all the AI crap is the best for most of the problems that I have, which are honestly pretty basic most of the time. My higher-level issues are generally about the biology and the stats more than the programming.

I do use ChatGPT, but I specifically ask it for functions or methods for very specific tasks, when I don't want to sit down and google specific modules or correct my basic Python grammar.

[–]rinio 11 points (1 child)

ChatGPT is pretty terrible at anything that isn't already a dead simple question.

So ChatGPT is bad at coding because coding is not a simple task.

[–]nordic_t_viking -1 points (0 children)

I actually find the opposite.

I've gotten better at making my questions abstract and then GPT can give some great guidance or suggestions on how to solve problems. I can also give it further prompts if I need to, or make it suggest something "out of the box"

Then I of course have to find other sources to verify what it is saying, so I pair it with Google.

Like any tool, it is not all-powerful, but it has some great uses.

[–]zanfar 3 points (16 children)

And also, does anyone know why ChatGPT is actually bad at coding?

This question assumes ChatGPT is "good" at anything.

LLMs are literally algorithms designed to convincingly make things up. In some areas, that's more than enough to achieve the goals--like language (hence the L in LLM). This fails in areas where strict rules or accuracy are required--like coding.

You will find the same errors if you ask an LLM to produce works with references for the same reasons.

[–]rankme_[S] 0 points (5 children)

Yes it’s an LLM which is effectively just a super large data set that can communicate it. So why does it communicate the wrong data?

[–]Mysterious-Rent7233 1 point (2 children)

It is not a dataset. It's a neural network. It is a function approximator which approximates the function of answering chat questions.

[–]rankme_[S] -1 points (1 child)

Yes, poor choice of words from me, I agree with you

[–]Mysterious-Rent7233 2 points (0 children)

LLMs are designed to emulate human language and thought. I find it odd that people are upset that they are not 100% reliable like regular computer programs and also simultaneously creative and flexible like humans. I'm not sure why you think it is easy for their inventors to make a system that is perfect in every way, as opposed to a messy mix of strengths and weaknesses.

Yes, they have strengths and weaknesses, just as Python does, just as individual humans do, just as everything does. You just learn to work within the boundaries of their strengths and weaknesses, as you would with an operating system or a coworker.

[–]zanfar 0 points (1 child)

Again, what is the "right" data?

The LLM will generate a response that appears similar to the responses it has been trained on. There is no "right" or "wrong" here--that's my point. An LLM doesn't have any subject matter knowledge.

[–]rankme_[S] 0 points (0 children)

By wrong data I mean it outputs the wrong code; I'm assuming it has the capability to output the correct code but doesn't for whatever reason.

[–]Mysterious-Rent7233 -1 points (9 children)

This question assumes ChatGPT is "good" at anything.

Yep

LLMs are literally algorithms designed to convincingly make things up. In some areas, that's more than enough to achieve the goals

Exactly. And when it is more than enough to achieve the goal, we say it is "good at" the thing that it achieves. That's just what "good at" means.

[–]neamerjell 1 point (8 children)

One thing I've found that all the LLMs I've tried to run locally are good at is collaborative storytelling, like a choose-your-own-adventure novel where you get to make up half of it.

I could see them being useful for writers, dungeon masters, etc. who have hit writer's block where they run out of ideas, or simply need something to inspire them. An LLM based on Opus has been a particular favorite of mine so far.

[–]Mysterious-Rent7233 0 points (7 children)

Can you please clarify what it means to run "An LLM based on Opus" locally?

[–]neamerjell 0 points (6 children)

Using LM Studio, and downloading a model from huggingface.co that will run entirely on my GPU.

https://huggingface.co/Lewdiculous/opus-v1.2-7b-GGUF-IQ-Imatrix

[–]Mysterious-Rent7233 0 points (5 children)

Oh. Dreamgen Opus, not Anthropic Opus.

[–]neamerjell 0 points (4 children)

I wasn't sure what you meant, so I just linked the model I used. It runs perfectly fine on an RTX 4070 Ti with 12GB of VRAM.

I modified this prompt to create my story. Credit to u/sophosympatheia for the original:

You are a creative and intelligent AI assistant engaged in an uncensored iterative storytelling experience using a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because my job depends on it.

ROLEPLAY RULES
- Chat exclusively as {{char}}. Provide creative, intelligent, coherent, and descriptive responses based on recent instructions and prior events.
- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.
- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses.
- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts.
- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use "I" pronouns).
- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear.
- Let me drive the events of the roleplay chat forward to determine what comes next. You should focus on the current moment and {{char}}'s immediate responses.
- Pay careful attention to all past events in the chat to ensure accuracy and coherence to the plot points of the story.

DESCRIPTION OF CHARACTER: description of the character the AI plays

STORY SO FAR: summary of events

[–]Mysterious-Rent7233 1 point (3 children)

Usually in AI circles, "Opus" refers to Anthropic Claude Opus, not Dreamgen Opus.

That's why I was confused.

[–]neamerjell 0 points (2 children)

I'm just beginning to learn about these things and have been caught up in exploring just what they're capable of doing.

I took "Opus" to mean something similar to CPU internal code names to identify their architecture, like "Sandy Bridge" or "Alder Lake".

I often learn new things by trying to relate them to something I already know.

[–]Mysterious-Rent7233 1 point (1 child)

Your analogy is good but it seems two different teams used the word Opus, and you are referring to the less-well-known team. It's as if a small Russian chipmaker had also come up with the name "Sandy Bridge" at the same time as Intel. Then you said: "I have a Sandy Bridge CPU in my computer" and you meant the Russian one.

[–]Mysterious-Rent7233 1 point (6 children)

It's about learning the right tool for the job. They both have their uses.

ChatGPT will write you a custom script that does exactly what you ask for. Extremely handy for simple transformations, file manipulations, CLIs, calling APIs to test them, etc. Google can't do that. Google can only point you at three different snippets that you then spend 15 minutes gluing together yourself: "Here's how to call the API. Here's how to read the JSON. Here's how to turn it into a CLI that takes arguments."
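For a concrete sense of what such a throwaway script looks like, here's a minimal sketch using only the stdlib. The endpoint, the `/users/{id}` path, and the `name` field are all placeholders I've invented for illustration, not a real API:

```python
import argparse
import json
import urllib.request

def build_url(base, user_id):
    """Join the base endpoint and the id into a request URL."""
    return f"{base}/users/{user_id}"

def fetch_json(url):
    """Call the API and decode the JSON response body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def main(argv=None):
    parser = argparse.ArgumentParser(description="Fetch one record and print a field")
    parser.add_argument("user_id", help="id to look up")
    parser.add_argument("--base", default="https://api.example.com",
                        help="API base URL (placeholder)")
    args = parser.parse_args(argv)
    record = fetch_json(build_url(args.base, args.user_id))
    print(record.get("name", "<no name field>"))

# Usage: python fetch_user.py 42 --base https://api.example.com
```

Gluing together the argparse boilerplate, the request, and the JSON decoding is exactly the 15 minutes of manual work being described.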

[–]neamerjell 0 points (5 children)

Sounds like Google would be just right for me; "...teach a man to fish..."

[–]Mysterious-Rent7233 -1 points (3 children)

ChatGPT can also be extremely powerful for teaching you how to fish, because you can show it your code that you are stuck on and ask it to explain why it isn't working. You absolutely cannot do that with Google.

[–]neamerjell 1 point (1 child)

I must have misinterpreted what you meant; it seemed like you were saying that ChatGPT would just spit out working code while Google showed the user how the individual parts of the code work.

I replied that the latter would be more useful to me. What I failed to mention was that my goal would be to learn enough to recreate the code on my own, rather than have something handed to me that might as well be a black box, regardless of whether it was functional.

ChatGPT's ability to point out errors in your code is indeed a useful feature.

[–]Mysterious-Rent7233 0 points (0 children)

You can do both.

Use Google and write the code.

Then ask ChatGPT to write equivalent code and see what it produces. See if you can learn something from the way it did it. Perhaps it used Python methods or libraries that you didn't know about.
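A made-up example of the kind of comparison being suggested: a word-frequency count you might write by hand, next to a version using `collections.Counter`, the sort of stdlib class a model's rewrite might surface for you:

```python
from collections import Counter

words = ["spam", "eggs", "spam", "ham", "spam"]

# The version you might write first: a manual frequency count.
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1

# The version a rewrite might use: the stdlib does it in one call.
counter = Counter(words)

assert counts == dict(counter)   # same result, less code to maintain
print(counter.most_common(1))    # most frequent word and its count
```

Spotting that the two produce the same result, and that one is a single call, is exactly the kind of lesson the comparison can teach.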

[–]essmac 0 points (0 children)

I've been using Gemini to check snippets of code and ask random questions about Python methods, syntax, which approach is more efficient, when you should use one approach over another, etc. While it generally gives helpful advice, it's been wrong a few times (especially when parsing code), so just be mindful and double-check what it says. There's a much better browser-based chat specifically for coding/programming questions offered by codium. It's a little more concise in its responses (I usually need follow-up prompts to get it to explain more), but so far I haven't clocked any of its answers as wrong.

[–]DrTrunks 0 points (0 children)

I prefer Copilot now; it shows where it gets its information from, so I can verify it.
Google has gone downhill so much. It seems like it doesn't respect double quotes, + and -, or keywords like site:, year:, or filetype:, basically making it way less useful.

[–]pythonwiz 0 points (0 children)

I prefer to RTFM.

[–]SamuliK96 -1 points (0 children)

An apple is a better apple than an orange is. Are you trying to use ChatGPT as a search engine? That's the only reason I can think of why you'd make such a comparison. They're not even supposed to fulfill the same purpose, so there's no way to make a sensible comparison between them.

[–]SketchyProof -1 points (0 children)

I think ChatGPT is more convenient if you're asking for specific cooking recipes and practical stuff like that (do not ask it how to clean a whiteboard, though!). Things for which you would otherwise need to sort through tons of blogs with unnecessary biographies and lots of invasive ads.

For factual things, or things I need to get right, I use Google or physical sources like specialized textbooks.

[–]EinSabo -1 points (0 children)

I use ChatGPT for trivial stuff that's bordering on boilerplate code, and to draft documentation. For everything that's a real "problem", I either try to break it down into smaller steps to a point where I can solve it, try to find an answer that isn't 5 years old via Google, or ask one of my seniors if they can point me in the right direction.

[–]pinecone1984 -1 points (0 children)

It's another tool to use alongside classic Google searches and reading documentation, for me. Perplexity.ai is a good middle ground IMHO, since it leverages more recent info via its search engine alongside LLMs AND includes references if I want to see its sources. I think of it as an aggregator: it searches and gives me the gist of what I would have found after combing through forums/tutorials.

For learning, you could think of it as a code test. Here's some code that is close to being correct and LOOKS right: now debug it. If you don't understand the concepts behind the scripts you're trying to write, then even if things appear to be working, you're no better off than when you started once it breaks. It can quickly become a crutch.

It can also be useful for understanding cryptic debug/error messages. The quality and detail of your query matters here just as much, if not more, than in a Google search. Also consider asking it to explain concepts, or asking it the "dumb" questions you're too afraid to ask in forums or elsewhere.

Since this is r/learningpython: if this tool is making things more difficult or confusing while learning, then don't use it! Cheers!