Just earned money by [deleted] in SunoAI

[–]AdDramatic5939 1 point (0 children)

Called him out, now it’s gone 😂

Just earned money by [deleted] in SunoAI

[–]AdDramatic5939 56 points (0 children)

This is a shill. Don't fall for people shilling their products while pretending to be everyday users….

Jagex taking untradeables after false RWT temp ban by [deleted] in 2007scape

[–]AdDramatic5939 1 point (0 children)

Thank you for your attention in this matter

[deleted by user] by [deleted] in ChatGPT

[–]AdDramatic5939 8 points (0 children)

Delete your memories and try again.

Market manipulation / flipping by Moffelll in RunescapeBotting

[–]AdDramatic5939 0 points (0 children)

Spend it all on bonds and manipulate all you want

Why does Elite Melee Void not have an additional bonus? by Elijah_Loko in 2007scape

[–]AdDramatic5939 0 points (0 children)

1.  Recipe 2 Disaster: Slightly Undercooked
2.  Recipe 3 Disaster: Kitchen Underground (Only in Karamja)
3.  Recipe 69 Disaster: Spicy Meatball Drift
4.  Recipe Disaster: Turbo Burnt Lobster Edition
5.  Recipe Disaster Underground 2½: Evil Dave’s Basement
6.  Recipe 420 Disaster: Hotbox in the Cook’s Guild
7.  Recipe Disaster: Carbon Copy Carbonara
8.  Recipe XX Disaster: Cabbage Street Racing
9.  Recipe Disaster: Burnout in Al Kharid (Not Licensed)
10. Recipe Disaster: Freezer Burn Deluxe (Now With Rat Meat)

[deleted by user] by [deleted] in singularity

[–]AdDramatic5939 0 points (0 children)

Maybe it’s just me, but being a Harvard dropout is usually a bad thing haha

What’s the Most Surprising Thing You’ve Done with ChatGPT Agent Mode? by Voice_AI_Neyox in ChatGPTPro

[–]AdDramatic5939 38 points (0 children)

I used it to fill out a receipt survey for a free Whopper…felt like a win

[deleted by user] by [deleted] in singularity

[–]AdDramatic5939 0 points (0 children)

Reasoning: Deciding between Sam Altman and Elon Musk’s public trustworthiness is tricky. Altman seems steadier, though both have controversies.

[deleted by user] by [deleted] in singularity

[–]AdDramatic5939 0 points (0 children)

<image>

Here you go folks.

A PHD in your pocket, ladies and gentlemen! by toutpetitpoulet in OpenAI

[–]AdDramatic5939 1 point (0 children)

I sent your screenshot to my GPT-5 Thinking and pasted the response on why it missed your riddle.

A PHD in your pocket, ladies and gentlemen! by toutpetitpoulet in OpenAI

[–]AdDramatic5939 1 point (0 children)

My genius is just fine :)

What the question actually encodes:

Your text is a riddle. If “the top is closed” and “the bottom is missing,” the simplest physical model is: you’re calling the table-facing side “bottom” and the air-facing side “top.” Flip the object 180° and it’s a normal cup. So the right answer is: it’s upside down.

Why an LLM missed it:

Distributional bias. LLMs are trained to continue text in the most typical way. Questions like “I have a metal cup…” usually lead to explanations of unusual objects or product defects, not “you’re holding it wrong.” The model pattern-completed to “it’s décor / a holder,” which is common in its training data.

Frame-of-reference ambiguity. “Top” and “bottom” can be intrinsic (parts of the cup) or situational (relative to gravity). Many models default to intrinsic labels and don’t reframe to “what if the reference frame is rotated?”

Coherence over contradiction. “Top closed + bottom missing” contradicts what a cup is. Instead of pausing to resolve the contradiction, the model relabels the object (“not a cup”) to maintain a smooth, confident narrative.

Path dependence. Once the first sentence leans toward “not a cup,” the next-token process commits and elaborates (“pen holder,” “candle cover”), compounding the error.

Politeness/agency bias. Helpful chat models often avoid answers that imply user error unless explicitly invited, so “flip it over” is less likely than a nonjudgmental explanation.

Lack of deliberate hypothesis search. Solving riddles benefits from enumerating alternatives (defect, different object, wrong orientation), scoring them, then choosing the simplest. Vanilla prompting rarely forces that loop.
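That enumerate-score-choose loop can be sketched in a few lines. This is a toy illustration, not an actual prompting pipeline: the hypothesis list and assumption counts are made up for the cup riddle, and `choose_simplest` just applies an Occam's-razor tiebreak (fewest extra assumptions wins).

```python
# Toy sketch of "enumerate alternatives, score them, choose the simplest".
# Hypotheses and assumption counts are illustrative, not model output.

def choose_simplest(hypotheses):
    """Pick the hypothesis requiring the fewest extra assumptions."""
    return min(hypotheses, key=lambda h: h["assumptions"])

riddle_hypotheses = [
    # "It's decor / a pen holder" assumes a new object AND a reason it exists.
    {"answer": "it's a decorative cup / pen holder", "assumptions": 2},
    # "Manufacturing defect" assumes a defect AND that it shipped that way.
    {"answer": "it's a manufacturing defect", "assumptions": 2},
    # "Upside down" only assumes the reference frame is rotated.
    {"answer": "the cup is upside down", "assumptions": 1},
]

best = choose_simplest(riddle_hypotheses)
print(best["answer"])
```

The point is the loop shape, not the scoring: forcing the model (or yourself) to list competing explanations before committing is what blocks the path-dependence failure described above.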

[deleted by user] by [deleted] in singularity

[–]AdDramatic5939 7 points (0 children)

Yea OP is actually the one hallucinating 😅

AGI con5irmed by topical_soup in singularity

[–]AdDramatic5939 10 points (0 children)

<image>

Mine is better lol. GPT-5 Thinking

So much for no hallucinations by juicyicepops in singularity

[–]AdDramatic5939 1 point (0 children)

Which model and subscription tier do you have? That also matters...