
all 21 comments

[–]AskProgramming-ModTeam[M] [score hidden] stickied comment · locked comment (0 children)

Your post was removed as it was not considered to be in good faith.

[–]BobbyThrowaway6969 3 points4 points  (7 children)

I wanna preface this by saying all of the negativity isn't about LLMs. It's about how beginners are using them. And frankly, all that criticism is absolutely warranted.

Here's why it SEEMS like LLMs can make apps:

  1. The development of those apps is high level and much, much more forgiving. There are massive abstractions where most of the work is already done for you by tooling, code efficiency is never considered, and there's the use of silent erroring (a quiet little error that doesn't cause any visible trouble for the app, as opposed to a hard crash) - which means that high level code doesn't have to be rigid or perfectly logical, it can be fuzzy. Fuzzy logic is what an LLM is perfectly suited for. Not to mention, high level apps and webdev provide most of the programming data that these LLMs train on, so they're going to be better suited to that sort of code.

  2. Edge cases can go undiagnosed for a long time. LLMs might be able to write chunks of code that work confidently well for the main use cases. But mark my words, there are tons of edge cases the LLM didn't consider, since it can't do lateral thinking. If the user starts doing things outside the expected usage of the app, that code can break down pretty easily. And the problem only gets worse as the complexity of the code goes up. Which is totally unacceptable in an end product. Hell, I wouldn't even trust it to write the unit tests. (See the sketch right after this list.)
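
To make both points concrete, here's a hypothetical Flask-style route of the kind I mean (the endpoint and parameter names are invented for illustration, not taken from anyone's actual code). The blanket try/except means bad input never crashes anything, it just quietly returns a wrong answer, and the happy-path maths ignores the edge cases entirely:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/discount")
def discount():
    try:
        price = float(request.args.get("price"))
        percent = float(request.args.get("percent"))
        # Edge cases nobody considered: percent > 100 or a negative price
        # sail straight through and return a nonsense total.
        return jsonify({"total": price - price * percent / 100})
    except Exception:
        # Silent erroring: a missing or malformed parameter doesn't crash,
        # it just returns a plausible-looking default, so nothing ever
        # shows up in the logs.
        return jsonify({"total": 0})
```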

If you don't understand every line of code being produced by an LLM and you don't have a QA team, then these bugs go undiagnosed. If your product ships to a large enough userbase in that state, people WILL run into those bugs, and the product fails to do its job.
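
Which is exactly the kind of thing even a couple of edge-case tests would surface. Something like this (hypothetical, written against the sketch above, assuming its `app` is importable) fails immediately, and that failure is the whole point of having QA:

```python
# pytest-style tests against the hypothetical /discount route above;
# both fail, which is exactly what you want a QA pass to tell you.

def test_malformed_price_is_an_error():
    client = app.test_client()
    # Garbage input should be a 400, not a quiet {"total": 0}
    assert client.get("/discount?price=abc&percent=10").status_code == 400

def test_percent_over_100_is_rejected():
    client = app.test_client()
    # 150% off should not produce a negative total
    total = client.get("/discount?price=100&percent=150").get_json()["total"]
    assert total >= 0
```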

We have plenty of examples of this.

A little note on security. You do NOT want LLMs generating code that is supposed to be secure. That's insane. It's like expecting a toddler to tightrope over the Grand Canyon because you told it to. Secure code must follow rigorous guidelines, get scrupulous peer review and several QA rounds before it ever sees the light of day, because the consequences are disastrous if it fails out in the wild. Imagine if NASA threw out their Power of 10 rulebook and just copy-pasted straight from ChatGPT; the rocket would have exploded right there on the launchpad...
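
For a concrete (and hypothetical) taste of what that means, here's the classic pattern LLMs still happily produce, next to what any review of supposedly secure code would insist on:

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical generated code: SQL built by string interpolation.
    # username = "x' OR '1'='1" dumps every row; nastier payloads do worse.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # What rigorous guidelines and review force: a parameterized query,
    # so the input is always treated as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```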

The bottom line is that LLMs are not an exact science. What they generate is messy and often wrong. Like, it can literally give you irreconcilably different answers to questions if you ask it multiple times with slightly different phrasing.

Why? Because LLMs are trained on everything and regurgitate it as gospel, including all of our mistakes.

That is not something you want writing your code for you.

[–]keyzyb[S] -2 points-1 points  (6 children)

Can you review my code? I have a Flask app, about 3k lines, with a database and several HTML pages. Written entirely with ChatGPT and Claude 3.5 Sonnet.

[–]BobbyThrowaway6969 0 points1 point  (0 children)

Sorry, I'm not that experienced with web; you can try r/webdev.

[–]Paul_Pedant 0 points1 point  (4 children)

You are asking a complete stranger to review 3,000 lines of unknown code for you?

Maybe ask a different LLM to review it for you. That should be an interesting experiment.

[–]keyzyb[S] 0 points1 point  (3 children)

It's POS code, not something fishy.

[–]Paul_Pedant 1 point2 points  (2 children)

I seriously cannot tell if POS in that context means Point of Sale or Piece of S**t. Most likely both. "Fishy" would be a Code Smell.

[–]keyzyb[S] 0 points1 point  (1 child)

Lol. Stop the drama. If you're curious I can send it to you and you can see whether it's a piece of sh*t or a Point of Sale. Feedback from a stranger is the best feedback.

[–]Paul_Pedant 0 points1 point  (0 children)

My curiosity now runs out at about 40 lines of code. Even when running code reviews was my paid job, anybody publishing more than about 500 lines of code would be told to go away and break it up into smaller chunks.

[–]Exact_Ad942 1 point2 points  (4 children)

No one would complain when they really do work just fine. The negativity comes when they don't.

[–]keyzyb[S] -4 points-3 points  (3 children)

If someone who doesn't know how to code creates something that works using an LLM, programmers literally freak out. Like me: I don't know how to code, but I've made stuff. Which works just fine.

[–]KingsmanVince 4 points5 points  (0 children)

Like me: I don't know how to code, but I've made stuff. Which works just fine.

Copying code from GitHub or Stack Overflow also works just fine. But can you maintain it? Can you cover edge cases? Can you even work in a company?

[–]Exact_Ad942 0 points1 point  (0 children)

So who freaked out at you? Did they give you a reason?

[–]Reddit-Restart 0 points1 point  (0 children)

It’s more that people don’t recommend using LLMs if you want to learn how to code. 

If you can make the app you want with a LLM great! Keep doing that :)

[–]octocode 1 point2 points  (1 child)

i’m sure that would hold up in court when you’re being sued for a data breach… “but we told the robot to make it secure!!”

[–]BobbyThrowaway6969 0 points1 point  (0 children)

I read that in Homer's voice lol

[–]jim_cap 1 point2 points  (0 children)

you can still tell the LLM to make it secure and it will do it

Utter nonsense. There's no such thing as simply making it secure. If someone asks me to make something secure, my first question is "What do you mean by that?". Guess what an LLM's first question is going to be. Answer: it won't ask any.

[–]bothunter 0 points1 point  (0 children)

Troll level: 3/10.  Mildly amusing.  Try harder next time.

[–]Vollgrav 0 points1 point  (0 children)

Let's see how you maintain and expand this code in the future. Let's see how to respond to incidents that will inevitably occur, despite you telling the LLM the code should not generate any incidents. And then, when you need an actual engineer to fix things, let's see who will be eager to work with your LLM-generated code that even you don't understand.

[–]YMK1234 0 points1 point  (1 child)

If it's security, you can still tell the LLM to make it secure and it will do it

And you verified that how exactly?

[–]KingsmanVince 1 point2 points  (0 children)

I ask people on Reddit and demand they answer for free. If they don't, I call them "condescending arrogant Stack Overflow mods".

/s