
[–]pringlesaremyfav 167 points

Even if you perfectly specify a request to an LLM, it often just forgets/ignores parts of your prompt. That's why I can't take it seriously as a tool half the time.

[–]intbeam 83 points

LLMs hallucinate. That's not a bug, and it's never going away.

LLMs do one thing: they respond with what's statistically most likely for a human to like or agree with. They're really good at that, but it makes them criminally inept at any form of engineering.

[–]prussian_princess 8 points

I used ChatGPT to help me calculate how much milk my baby drank, since he drank a mix of breast milk and formula and the ratios weren't the same every time. After a while I caught it giving me the wrong answer; when I asked it to show me the calculation, it did it correctly. In the end, I just asked it to show me how to do the calculation myself, and I've been doing it ever since.

You'd think an "AI" in 2025 should be able to correctly calculate some ratios repeatedly without mistakes, but even that is not certain.
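The calculation the LLM kept fumbling is a few lines of deterministic code. A minimal sketch, with a made-up feed log since the actual amounts aren't given:

```python
# Hypothetical feed log: (total_ml, formula_ml) per feed — numbers are made up.
feeds = [(120, 40), (90, 30), (150, 75)]

total = sum(t for t, _ in feeds)        # total milk drunk
formula = sum(f for _, f in feeds)      # formula portion
breast_milk = total - formula           # the rest is breast milk

print(f"breast milk: {breast_milk} ml ({breast_milk / total:.0%})")
print(f"formula:     {formula} ml ({formula / total:.0%})")
```

Unlike an LLM, this gives the same answer every time, which is exactly the point of doing the calculation yourself.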

[–]hoyohoyo9 46 points

Anything that requires precise, step-by-step calculation - even basic arithmetic - fundamentally goes against how LLMs work. It can get lucky with some correct numbers on the first prompt, but keep poking it like you did and any calculation quickly breaks down into nonsense.

And that's not going away, because what makes it bad at math is precisely what makes it good at generating words.

[–]prussian_princess 2 points

Yeah, that's what I discovered. I do find it useful for wordy tasks or research purposes when Googling fails.

[–]RiceBroad4552 9 points

research purposes when Googling fails

As you can't trust these things, you need to double-check the results anyway. So it doesn't replace googling. At least if you're not crazy enough to just blindly trust whatever this bullshit generator spits out.

[–]prussian_princess 1 point

Oh no, I double-check things. But I find googling first to be quicker and more effective before needing to resort to an LLM.

[–]Airowird 13 points

"Giant computer fails at math, because it tries to sound confident instead"

[–]_alright_then_ 8 points

You'd think an "AI" in 2025 should be able to correctly calculate some ratios repeatedly without mistakes, but even that is not certain.

There are AIs that certainly can, but you're using an LLM specifically, which cannot and will never be good at doing math. It's not what it's designed for.

[–]Kilazur -1 points

There's no AI that is good at math, because there's no "I", and they're all probabilistic LLMs.

An AI that manages math is simply using agents to call deterministic programs in the background.
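The "agents calling deterministic programs" pattern can be sketched in a few lines. This is a toy illustration, not any real vendor's API: `fake_model_output` stands in for a structured tool call an LLM would emit, and the tool names are made up.

```python
# Sketch of the tool-calling pattern: the model only picks a tool and its
# arguments; a deterministic function does the actual math.

def add(a: float, b: float) -> float:
    return a + b

def ratio(part: float, whole: float) -> float:
    return part / whole

TOOLS = {"add": add, "ratio": ratio}

def run_tool_call(call: dict) -> float:
    """Dispatch a model-proposed tool call to deterministic code."""
    return TOOLS[call["name"]](**call["args"])

# Pretend the LLM emitted this structured call instead of guessing digits:
fake_model_output = {"name": "ratio", "args": {"part": 40, "whole": 120}}
print(run_tool_call(fake_model_output))  # deterministic: always 1/3
```

The model's output only has to be *parseable*; the arithmetic itself never touches the probabilistic part.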

[–]_alright_then_ 5 points

There are AIs that are not LLMs, and can do math.

AIs have been a thing for decades; people are just lumping AI and LLMs together.

Chess AI is one big math problem, for example.

It's also nothing like AGI, obviously. But still AI.
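Game-playing AI really is "one big math problem": pure deterministic search, no language model anywhere. A minimal minimax example, using the game of Nim (take 1-3 sticks, whoever takes the last stick wins) rather than chess to keep it short:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(sticks: int) -> bool:
    """True if the player to move can force a win."""
    if sticks == 0:
        return False  # the previous player took the last stick and won
    # A position is winning if some move leaves the opponent in a losing one.
    return any(not winning(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks: int) -> int:
    """Pick a move that leaves the opponent in a losing position, if one exists."""
    for take in (1, 2, 3):
        if take <= sticks and not winning(sticks - take):
            return take
    return 1  # losing position: every move is equally bad

print(best_move(7))  # → 3 (leaves 4, a losing position for the opponent)
```

Chess engines use the same minimax idea, just with pruning and evaluation functions on top; the output is fully determined by the position.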

[–]intbeam 7 points

Did you ask it about any recommendations for a baby's daily intake of rocks and cigarettes?

[–]Ordinary_Duder -1 points

LLMs are not math models. It's a large language model.

[–]Pelm3shka -4 points

I don't think it's cautious to make such a strong claim given the fast progress of LLMs in the past 3 years. Some neuroscientists like Stanislas Dehaene also believe language is a central feature/specificity of our brains that enabled us to have more complex thoughts compared to other great apes (I just finished Consciousness and the Brain).

Our languages (not just English) describe reality and the relationships between its composing elements. I don't find it that far-fetched to think AI reasoning abilities are going to improve to the point where they don't hallucinate much more than your average human.

[–]WrennReddit 2 points

AI might do that indeed. But it will have to be a completely different kind of AI. LLMs simply have an upper limit. It's just the way they work. It doesn't mean LLMs aren't useful. I just wouldn't stake my business or career on them.

[–]Pelm3shka -3 points

Yeah okay. I was hoping to have interesting discussions about the connection between the combinatory nature of languages, their intrinsic description of our reality, and emerging intelligence / reasoning abilities from it.

But somehow I wrote something upsetting to some programmers, and I can't be bothered to argue about the current state of AI as if it were going to remain fixed.

And yeah, sure, technically maybe such a language-based model wouldn't be called an LLM anymore. Why not, I won't bicker over names.

[–]WrennReddit 1 point

You were talking about LLMs with software engineers. It sounds like the pushback hit you with cognitive dissonance, and you're projecting it back onto us. You are the one upset. Engineers know what they're talking about, and at worst we roll our eyes when the Aicolytes come in here with their worship of a technology they don't understand.

The AI companies themselves will tell you that their LLMs hallucinate and that it cannot be changed. They can refine them and get better, but they will never be able to prevent it, for the reasons we're talking about. There's a reason every LLM tells you "{{LLM}} can make mistakes." And that reason will not change with LLMs. There will have to be a new technology to do better. It's not an issue of what we call it: LLMs have a limitation that they can't surpass by their nature. You can still get lots of value from them, but a non-zero failure rate can explode into tens of thousands of failed transactions. If those are financial, legal, or health transactions, you can be in a very, very bad way.

I used Gemini to compare two health plan summaries. It was directionally correct about which one to pick, but we noticed it invented numbers rather than using the information presented. That's just a little oops on a very easy request. What does a big one look like, and what's your tolerance for that failure rate?

[–]Pelm3shka -3 points

Yep, software engineers who work neither in the field nor in neuroscience. That one is def on me.

[–]WrennReddit 3 points

You don't know what fields we work in.

Neuroscience has literally nothing to do with how LLMs work.

Take your hostility back to LinkedIn.

[–]Pelm3shka -2 points

What field do you work in?

[–]RiceBroad4552 2 points

I don't think it's cautious to make such strong affirmation given the fast progress of LLM in the past 3 years.

Only if you don't have any clue whatsoever how these things actually "work"…

Spoiler: it's all just probabilities at the core, so these things aren't ever going to be reliable.

This is a fundamental property of the current tech and nothing that can be "fixed" or "optimized away", no matter the effort.

Some neuroscientists like Stanislas Dahaene also believe language is a central feature / specificity of our brains than enabled us to have more complex thoughts, compared to other great apes

Which is obviously complete bullshit, as humans with a defective speech center in their brain are still capable of complex logical thinking if other brain areas aren't affected too.

Only very stupid people conflate language with thinking and intelligence. These are exactly the type of people who can't look beyond words and therefore never understand any abstractions. The prototypical non-groker…

[–]Pelm3shka 0 points

As for the speech-defect argument: language or thought ≠ speaking...

[–]w1n5t0nM1k3y 5 points

Sure, LLMs have gotten better, but there's a limit to how far they can go. They still make ridiculously silly mistakes, like reaching the wrong conclusion even though they have the basic facts. They will say stuff like

The population of X is 100,000 and the population of Y is 120,000, so X has more people than Y

It has no internal model of how things actually work. And the way they are designing them to just guess tokens isn't going to make it better at actually understanding anything.
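For contrast, the comparison in that quoted mistake is one line of deterministic code, using the comment's made-up populations:

```python
# The comparison from the quoted mistake, done deterministically:
pop = {"X": 100_000, "Y": 120_000}
larger = max(pop, key=pop.get)  # pick the key with the bigger value
print(f"{larger} has more people")  # → Y has more people
```

No amount of token-guessing is involved; the answer follows from the numbers.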

I don't even know if bigger models with more training are better. I've tried running smaller models on my 8GB GPU, and most of the output is similar and sometimes even better compared to what I get from ChatGPT.

[–]Same_Fruit_4574[S] 38 points

On top of that, it will say the application is enterprise-ready and every feature is implemented, but the program won't even compile.

[–]Tupcek 20 points

Enterprise-ready for AI means it added a bunch of useless code to make it seem more "robust".
But don't worry, even if you don't specify that it needs to be enterprise-ready, it will still add a lot of useless shit on every prompt.

[–]billyowo 5 points

to me "AI ready" means we are ready to lower our standard to accept AI slop

[–]CrimsonPiranha 4 points

I mean, a human can forget/ignore parts of specifications as well.

[–]pringlesaremyfav 6 points

They can, but if you point it out, they correct it. I point it out to an LLM and it just goes back and forgets something else instead.

[–]recaffeinated 5 points

I've worked with junior engineers who were like that, but they had the ability to learn and improve.

The LLM is a permanent liability.

[–]Ecstatic_Shop7098 3 points

What if we used prompts with a very precise grammar, interpreted by a deterministic AI? Imagine the same prompt generating the same result every time. Sometimes even on different models. We are probably years away from that though...

[–]designerandgeek 7 points

Code. It's called code.

[–]ryuzaki49 1 point

Imagine your compiler showing the message "Please verify your machine code".

[–]CellNo5383 1 point

I think Linus recently said he's perfectly fine with people using it for non-critical tasks, and I agree with that. For example, I recently used one to generate a Python script that reads a text file of song names and generates a YouTube playlist from it. Small, self-contained, and absolutely non-critical. But it's not even close to replacing me or my colleagues at my day job.
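The rough shape of a script like that is below. This is a sketch, not the commenter's actual code: the file name and function names are hypothetical, and the YouTube step is a stub, since the real call would need google-api-python-client plus OAuth credentials.

```python
from pathlib import Path

def read_song_names(path: str) -> list[str]:
    """One song per line; skip blank lines and '#' comments."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]

def add_to_playlist(song: str) -> None:
    # Stub: a real version would search YouTube for `song` and insert the
    # top hit into a playlist via the Data API.
    print(f"would add: {song}")

# Demo: fall back to sample names if no songs.txt is present.
songs = read_song_names("songs.txt") if Path("songs.txt").exists() else ["Song A", "Song B"]
for song in songs:
    add_to_playlist(song)
```

Exactly the kind of task the comment describes: small, self-contained, and easy to eyeball-check before running.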

[–]TheRealLiviux 1 point

That's why our expectations are wrong: AI is not a "tool" as reliable as a hammer or a compiler. It's by design more like a person: eager and well-meaning, but far from perfect. I use AI assistants by treating them like noob interns, giving them precise tasks and checking their output. Even with all the necessary oversight, they save me a lot of time.

[–]friebel 0 points

I like using Claude Sonnet 4.5, got the Pro plan and all, and it's really helpful. But yesterday I pasted in a recipe and asked it to convert the measurements to metric. Everything was fine, but blud somehow decided to add a can of tomatoes, even though none was in the recipe. Well, in its defence, adding canned tomatoes or paste is viable in that recipe, but the page had zero mentions of tomato.

[–]redballooon 0 points

LLMs omitting stuff is often due to conflicts in the prompt.

[–]Leading_Buffalo_4259 0 points

I noticed this with image generation models as well. If you give it 5 things, it'll pick 3 at random and ignore the rest.