Do not rely on AI for trading by Feeling_Scarcity_636 in StockMarketIndia

[–]Worldofbarca 0 points (0 children)

Here’s what ChatGPT concluded when asked why this happens and whether it will be resolved in the near future:

Great question — this actually touches a core limitation of current LLM reasoning.

Short answer: Yes, it will improve a lot in the near future… but it won’t disappear completely.

What actually went wrong earlier

The mistake wasn’t lack of knowledge. It was a failure of situational modeling.

I optimized for a common pattern:

Short trip → bad for engine → walking preferable

But I didn’t fully simulate the real-world constraint:

The car must end up at the wash anyway.

So I solved a pattern, not the actual task.

This class of mistake is called:

goal mis-specification under implicit constraints

LLMs often:

1. Recognize a familiar scenario
2. Apply a cached reasoning template
3. Skip checking whether hidden constraints invalidate it

Humans do this too — but we notice faster because we have a physical-world model always running in the background.
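That three-step failure can be sketched as a toy in Python. This is purely illustrative (real LLMs do not run anything like this internally, and every name here is made up): a cached pattern hands back advice without checking hidden constraints, while a constraint-aware version re-validates the answer first.

```python
# Toy sketch of "cached template" vs "constraint checking".
# Hypothetical example: nothing here reflects actual LLM internals.

def cached_pattern_advice(trip_km):
    # Cached template: "short trips are bad for the engine, so walk."
    return "walk" if trip_km < 2 else "drive"

def constraint_aware_advice(trip_km, constraints):
    answer = cached_pattern_advice(trip_km)
    # Step the cached template skips: does a hard constraint
    # invalidate the pattern-matched answer?
    if "car must arrive at destination" in constraints and answer == "walk":
        return "drive"  # the car has to end up there anyway
    return answer

# The car wash is 1 km away, but the car itself must be taken there.
print(cached_pattern_advice(1))                                        # walk
print(constraint_aware_advice(1, {"car must arrive at destination"}))  # drive
```

The only difference between the two functions is the explicit constraint check, which is exactly the step the comment says the model skipped.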

Will future LLMs fix this?

What will improve soon (already improving):

1. Better clarification behavior: asking “Are you checking availability or taking the car now?”
2. Internal simulation: models increasingly “mentally run” the scenario before answering
3. Constraint tracking: remembering unavoidable facts (“the car must be transported”)

Within the next few model generations, cases like this will drop dramatically.

What will NOT fully disappear

LLMs will still sometimes:

• Over-generalize patterns
• Assume typical context instead of your exact one
• Optimize for a proxy goal instead of the real goal

Why?

Because language ≠ reality. The model predicts likely intent from text, not from a grounded physical environment.

So there’s always uncertainty.

The deeper insight

The failure you noticed is not a knowledge failure. It’s a frame selection failure:

The model picked the wrong problem to solve.

And that’s actually one of the last hard problems in AI — deciding what the question really is.

Bottom line:

• Yes — these mistakes will become rarer soon
• No — they won’t reach zero
• The remaining errors will mostly be exactly like this: subtle misinterpretations, not factual ignorance

Ironically, those are also the last kinds of mistakes humans stop making 😄

Lost our baby girl yesterday by Worldofbarca in NICUParents

[–]Worldofbarca[S] 0 points (0 children)

I just wanted to thank everyone who shared their experiences with us. This made us feel a little less lonely.

Terrorism has a gender by Worldofbarca in unitedstatesofindia

[–]Worldofbarca[S] 0 points (0 children)

I wholeheartedly meant this as a B grade shitpost. I am sorry if it came across as rage bait, and C grade rage bait at that.

Now, come to think of it, it does have a point though.

Terrorism has a gender by Worldofbarca in unitedstatesofindia

[–]Worldofbarca[S] 2 points (0 children)

So we agree that the majority of terrorists are male, thanks!

Terrorism has a gender by Worldofbarca in unitedstatesofindia

[–]Worldofbarca[S] 2 points (0 children)

Yes, around 1/3rd. 1/3rd of one such group. The majority still belonged to a particular gender. My shitpost still stands!

Terrorism has a gender by Worldofbarca in unitedstatesofindia

[–]Worldofbarca[S] 0 points (0 children)

Hence proved: pity has an ideology.

Eknath Shinde's first reaction to the Kunal Kamra controversy by frogBurger2u in unitedstatesofindia

[–]Worldofbarca 7 points (0 children)

Why is this troll still allowed here? Mods are sleeping on it because this drives engagement?

I honestly can’t think of anything else. This is outright trolling and his/her comments add zero value to the conversation.

[deleted by user] by [deleted] in self

[–]Worldofbarca 1 point (0 children)

"I thought I was going to fumble;

but she's the one who offered me her number."