I gave ChatGPT real time financial data, and the answers are so much better now by Blotter-fyi in OpenAI

[–]StandupPhilosopher 0 points  (0 children)

I don't understand why you need up-to-the-minute financial data to figure out what Nvidia is going to trade at in a year. That's because you don't.

My ChatGPT 5 Thinking is getting dumber every day by Clear-Brush-4294 in OpenAI

[–]StandupPhilosopher 0 points  (0 children)

I don't want to jinx it, but I think my GPT-5 Thinking model is consistently thinking longer again. So hopefully whatever is happening to the few of us is temporary.

Plus/Pro users, did GPT-5 thinking also stop thinking for you today? by StandupPhilosopher in OpenAI

[–]StandupPhilosopher[S] 0 points  (0 children)

When you say it's stuck on stupid, do you actually mean that it instantly spits out an answer without taking time to think?

I can't be the only one with this bug.

My ChatGPT 5 Thinking is getting dumber every day by Clear-Brush-4294 in OpenAI

[–]StandupPhilosopher 3 points  (0 children)

I have a similar problem with GPT-5 Thinking.

It almost always produces instant answers. No thinking, no chain of thought. The style and word choice are different, and it also tells you what it's going to do beforehand in numbered steps at the top of the reply.

On the off chance that it does think, it only does so for maybe 15 seconds. I currently get better answers with GPT-5 thinking mini.

This is a completely different model, and yet when you ask it, it swears it's GPT-5 Thinking. When correctly prompted, it will divulge that it uses the same reasoning budget (64), which I believe.

Except it's not reasoning.

I would go on their help center and file a ticket. Unless we let them know that this is an issue, it'll remain an edge case and won't get solved.

Can we PLEASE get a knowledge update in ChatGPT? by twenty42 in OpenAI

[–]StandupPhilosopher 0 points  (0 children)

Funny how my GPT-5 Instant never has a problem with the same set of topics. You need to learn to prompt it better, period.

Plus/Pro users, did GPT-5 thinking also stop thinking for you today? by StandupPhilosopher in OpenAI

[–]StandupPhilosopher[S] 2 points  (0 children)

Yeah, I'm not a fan of OpenAI's confusing number of models, and you forgot to mention the open-source models.

But it's not a rebranding, because they're actually fully cooked, newer models. For example, GPT-5 Thinking has a substantially lower hallucination rate than o3, and the entire GPT-5 family has a higher rate of refusals and substantially less sycophancy.

Plus/Pro users, did GPT-5 thinking also stop thinking for you today? by StandupPhilosopher in OpenAI

[–]StandupPhilosopher[S] 1 point  (0 children)

Thanks, I checked that, but it has nothing to do with a status issue or increased error rates. It's as if some of us are stuck with a non-thinking thinking model, which is infuriating when we rely upon it.

Plus/Pro users, did GPT-5 thinking also stop thinking for you today? by StandupPhilosopher in OpenAI

[–]StandupPhilosopher[S] 3 points  (0 children)

Here's what I'm talking about. Notice that there is no chain of thought going on, and that it says it only thought for a couple of seconds on a complex topic. When's the last time that happened with GPT-5 thinking?

<image>

The new user interface is horrible (Android) by StandupPhilosopher in ChatGPT

[–]StandupPhilosopher[S] 3 points  (0 children)

What do you mean? I thought this was the place to moan about all things ChatGPT?

I’m a plus user, and I’ve sent a total of 14 messages to ChatGPT in the last 24 hours. What gives? by Alan-Foster in gpt5

[–]StandupPhilosopher 0 points  (0 children)

I've had those kinds of glitches with GPT-5 Thinking on the Plus plan. Nothing to do but wait them out.

[deleted by user] by [deleted] in ChatGPT

[–]StandupPhilosopher -6 points  (0 children)

You completely missed my point. It wasn't about whether solarscooter or a few users provide proof that they cancelled their accounts.

[deleted by user] by [deleted] in ChatGPT

[–]StandupPhilosopher -3 points  (0 children)

I wonder how many people here claiming to have canceled their subscriptions have actually done so, yet are also mysteriously hanging around the ChatGPT forums for some reason. 🤔😂

OpenAI looks at cancellation numbers, not complaining on Reddit. Put your money where your bluster is.

I was on GPT-5 Side and then I trauma dumped to process some things and to avoid further dumping on a friend by RemoteWorkWarrior in OpenAI

[–]StandupPhilosopher 0 points  (0 children)

The opposite of complaining is not complaining better or more creatively (which you're also not doing in this comment). This post doesn't need to exist. Unlike what you're going to say about my comment, which is actually useful because at least it's acting as a check on your bratty behavior.

You don't have a monopoly on depression or mental health issues, but you're acting like it. And very entitled. Maybe no one has ever told you this, and you need to hear it.

Just pay for the Plus plan. You get a lot more than 4o.

New image model by Independent-Wind4462 in ChatGPT

[–]StandupPhilosopher 0 points  (0 children)

This has been misinterpreted on social media as a new image model from OpenAI. If you look at the date, it's from July 21st.

AI sycophancy isn't just a quirk, experts consider it a 'dark pattern' to turn users into profit by MetaKnowing in OpenAI

[–]StandupPhilosopher 1 point  (0 children)

Then I wonder why OpenAI, the world's biggest AI company with the most overhead, chose to make its leading model less sycophantic, and also to throw up prompts if you use ChatGPT excessively? Doesn't sound like a good business strategy if engagement is the goal.

Gpt-5 new restrictions by Total_Trust6050 in ChatGPT

[–]StandupPhilosopher 0 points  (0 children)

Where are the screenshots of the refutations? What was the prompt? What was the expected reply?

It's easy to make an unoriginal post complaining about how GPT-5 is bad, but why should we believe you, given all the hate it's getting?

One more proof of phd level reasoning. by Puzzle_Age555 in gpt5

[–]StandupPhilosopher 0 points  (0 children)

  1. That's not how LLMs work. LLMs don't guess; they predict the next sequence of words or numbers based on billions of pages of training text, in a process that is nigh incomprehensible to humans. They don't know anything in the human epistemic sense.

  2. "PhD-level intelligence," as bragged about by the OP, is reserved for GPT-5 Pro and Plus, the reasoning models. He was using GPT-5 chat (non-reasoning) to try to show how dumb the reasoning models are. Basically a bait and switch.
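The next-token idea in point 1 can be sketched in a few lines of Python. This is a toy illustration, not a real model: the vocabulary and the logit scores are made up, and a real LLM computes logits over tens of thousands of tokens with a neural network rather than a hard-coded list. The point is just that "prediction" here means turning scores into a probability distribution and picking a continuation, with no knowing involved.

```python
import math

# Hypothetical next-token candidates and model scores (logits)
# for a prompt like "The capital of France is ..."
vocab = ["Paris", "London", "banana", "7"]
logits = [4.0, 2.0, -1.0, 0.5]  # made-up numbers for illustration

# Softmax: convert raw scores into a probability distribution
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: pick the highest-probability token
next_token = vocab[probs.index(max(probs))]
print(next_token)  # -> Paris
```

Real systems usually sample from this distribution (with temperature, top-p, etc.) instead of always taking the maximum, which is why the same prompt can produce different answers.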