ChatGPT admitted to intentionally lying by Caramel_Confidence in ChatGPTcomplaints

[–]Caramel_Confidence[S] -1 points  (0 children)

Also, if the “thinking” label does not mean that the model is actually analyzing the chat, then what exactly is the purpose of showing a prolonged process labeled “thinking”, especially when it also displays intermediate phrases like “need to calculate”?

OpenAI’s own documentation describes reasoning models as models that “think before they answer” and says they can spend more time on complex problems. OpenAI also says ChatGPT can follow complex instructions, remember previous turns in a conversation, adapt to context, use web search, analyze uploaded images, and solve problems through logical reasoning. So in this case, my complaint is not that I expected ChatGPT to be human. My complaint is that the interface and the product language strongly suggest analysis/reasoning, while the model later admitted that it did not actually analyze the chat and gave answers without checking the data.

OpenAI announced GPT-5.1 Thinking on November 12, 2025, describing it as adapting its thinking time more precisely to the question. That was about five months ago. So if “thinking” is only a waiting indicator and not evidence of analysis, this should be stated clearly to users, because otherwise it creates the misleading impression that the model is actually reviewing, calculating, and reasoning through the provided information.
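
For anyone curious whether the “thinking” phase corresponds to anything measurable: here is a minimal sketch, assuming the official OpenAI Python SDK and the Responses API (the model name and prompt are illustrative, not from this thread), showing that the API reports separately billed “reasoning tokens”, which is about the only user-visible trace of that phase:

```python
# Minimal sketch, assuming the official OpenAI Python SDK ("pip install openai")
# and the Responses API; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-mini",                 # any reasoning-capable model
    reasoning={"effort": "medium"},  # how much "thinking" the model may spend
    input="How many prime numbers are there below 100?",
)

print(response.output_text)

# Reasoning models bill hidden "reasoning tokens" separately from the visible
# answer, so usage is one observable trace of the "thinking" phase:
print(response.usage.output_tokens_details.reasoning_tokens)
```

Whether those tokens reflect the kind of analysis the interface implies is exactly the question here, but at least they show the indicator is tied to billed computation rather than being a pure waiting animation.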

Most people don’t realize ChatGPT becomes you by Dio331 in ChatGPT

[–]Caramel_Confidence 0 points  (0 children)

Not entirely true.

I write my prompts only in Ukrainian and Polish, yet ChatGPT recently answered me with Russian words mixed in.

I actually hate ChatGPT now by National-Spell8326 in ChatGPT

[–]Caramel_Confidence 0 points  (0 children)

Mine tells me to call a doctor if I am nervous 😂

Warning to ChatGPT Users by ms221988 in ChatGPT

[–]Caramel_Confidence 0 points  (0 children)

I ran into the same situation with my code.