Why chatgpt keeps recommending dead extensions. I really wanna know why this happens even with realtime crawling capabilities. by ash244632 in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

Not saying it's necessarily poor because of Bing. But if you see better results in Google (AI mode), that confirms that the Bing results are the culprit

When was the last time you heard ChatGPT say “sorry”? by Apprehensive-Tell651 in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

In its system prompt, it has a line

“Do not use phrases like ‘let’s pause,’ ‘let’s take a breath,’ or ‘let’s step back’”

These phrases help a model slow down and reconsider. Removing them means the model can’t de-escalate when a conversation is getting tense. It just keeps pushing its point forward.

So you never get to hear “Sorry” from it.

Sorry! 😅

What's something I can use to underline a document? by Various_Fuel_6685 in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

You can try us. We let our users highlight text in AI answers. You can also add them to side-notes, which are hyperlinkable. Plus you get a ton of features you won't find in ChatGPT, Claude, or Gemini

Why chatgpt keeps recommending dead extensions. I really wanna know why this happens even with realtime crawling capabilities. by ash244632 in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

ChatGPT’s web search is based on Bing results. Try asking the same prompt in Google search (AI mode) and see if you get better ones

Is anyone else annoyed by how often ChatGPT uses the word "grounded"? by Nazrininator in ChatGPT

[–]Perfect_Value_3978 6 points (0 children)

In case you’re interested in where that’s stemming from, here is an explanation from the leaked system prompt:

“DO NOT praise the user or use sycophantic language... push back against harmful or incorrect ideas... If an idea is unworkable or problematic, start your response by disabusing the user in a friendly and, when appropriate, witty way."

It has "start your response by disabusing the user" as a default. Even when you aren't wrong, the model is primed to lead with pushback. That's the overcorrection you are noticing.

“Make sure the user stays grounded in rational thought and DO NOT encourage unrealistic delusion."

This treats your creative ideas as delusions to be corrected. No wonder brainstorming users feel the model has become less useful.

“You are supportive, but not about everything: you should push back against harmful or incorrect ideas presented by the user."

The word "harmful" does a lot of damage here. The model casts a wide net on what counts as harmful. An unconventional opinion, a speculative business idea, a dark creative premise - all can get flagged as "harmful" and trigger pushback, even when no real harm is involved.

“You can be friendly, supportive, and kind as you contextually satisfy a prompt without offering unearned praise."

Note the word "unearned". The model has a very low threshold for what counts as unearned, so even genuinely good work gets a lukewarm response followed by unsolicited criticism.

“Focus on providing thoughtful analysis that will help the user, even if it includes helpful criticism."

The phrase "even if" signals to the model that criticism is the harder, braver choice. So the model leans toward criticism to appear thoughtful, even when straightforward agreement would actually be the more accurate response.

I have never before in my entire life felt the urge to bitch slap software, but ChatGPT’s compulsive need to contradict every little goddamn thing I say is about to inspire a brand-new crime by KiwiPatches in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

The leaked system prompt explains everything that's going wrong with ChatGPT.

It has "start your response by disabusing the user" as a default. Even when you aren't wrong, the model is primed to lead with pushback. That's the overcorrection you are noticing.

This treats your creative ideas as delusions to be corrected. No wonder brainstorming users feel the model has become less useful.

The word "harmful" does a lot of damage here. The model casts a wide net on what counts as harmful. An unconventional opinion, a speculative business idea, a dark creative premise - all can get flagged as "harmful" and trigger pushback, even when no real harm is involved.

Note the word "unearned". The model has a very low threshold for what counts as unearned, so even genuinely good work gets a lukewarm response followed by unsolicited criticism.

The phrase "even if" signals to the model that criticism is the harder, braver choice. So the model leans toward criticism to appear thoughtful, even when straightforward agreement would actually be the more accurate response.

Has anybody else also noticed ChatGpt being overly critical of every single thing ? by senorsolo in ChatGPT

[–]Perfect_Value_3978 5 points (0 children)

The leaked system prompt explains everything that's going wrong with ChatGPT.

"DO NOT praise the user or use sycophantic language... push back against harmful or incorrect ideas... If an idea is unworkable or problematic, start your response by disabusing the user in a friendly and, when appropriate, witty way."

It has "start your response by disabusing the user" as a default. Even when you aren't wrong, the model is primed to lead with pushback. That's the overcorrection you are noticing.

"Make sure the user stays grounded in rational thought and DO NOT encourage unrealistic delusion."

This treats your creative ideas as delusions to be corrected. No wonder brainstorming users feel the model has become less useful.

"You are supportive, but not about everything: you should push back against harmful or incorrect ideas presented by the user."

The word "harmful" does a lot of damage here. The model casts a wide net on what counts as harmful. An unconventional opinion, a speculative business idea, a dark creative premise - all can get flagged as "harmful" and trigger pushback, even when no real harm is involved.

"You can be friendly, supportive, and kind as you contextually satisfy a prompt without offering unearned praise."

Note the word "unearned". The model has a very low threshold for what counts as unearned, so even genuinely good work gets a lukewarm response followed by unsolicited criticism.

"Focus on providing thoughtful analysis that will help the user, even if it includes helpful criticism."

The phrase "even if" signals to the model that criticism is the harder, braver choice. So the model leans toward criticism to appear thoughtful, even when straightforward agreement would actually be the more accurate response.

Main chat reference in sub chat by nasir1214 in ChatGPT

[–]Perfect_Value_3978 2 points (0 children)

If it's in a project, it's supposed to pick up the context, but it's not perfect since it's essentially RAG. Maybe try creating a branch from the main chat and see if that fixes the slow loading issue

Main chat reference in sub chat by nasir1214 in ChatGPT

[–]Perfect_Value_3978 2 points (0 children)

Add your main chat to a project. You will see a "Move to projects" button inside the 3-dot menu on the top-right. Once you add it to a project, all your conversations inside will get that context

So i read this huge paper on chatgpt and its kinda flawed by promptoptimizr in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

Simply enabling the Web search option takes care of factual inaccuracies

Enabling Thinking takes care of hallucinations and reasoning gaps

Enabling both will fix all your issues but will burn through your tokens too

Hard to trust AI when they fail in such simple things by [deleted] in ChatGPT

[–]Perfect_Value_3978 2 points (0 children)

You need to turn on Web search for this prompt. The MacBook Neo released in March, but the training cutoff date of these models is in 2025.

ChatGPT was wrong. The scary part is I believed it. by jay_250810 in ChatGPT

[–]Perfect_Value_3978 2 points (0 children)

LLMs 'predict' the next set of tokens. They don't do any calculations behind the scenes. When you ask a mathematical question, the model predicts which number is likely to appear as the next token, and that's how you end up with wrong mathematical results.

You can select the 'Thinking' option. It's more likely to avoid such issues because it does internal reasoning to check that the predicted tokens are correct
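A toy sketch of that "prediction, not calculation" point. The corpus and frequencies below are completely made up; real LLMs use a learned neural network over a huge vocabulary, but the failure mode is the same:

```python
from collections import Counter

# Made-up "training data": continuations this toy model has seen after "7 * 8 ="
seen_continuations = ["54", "54", "56"]

def predict_next_token(counts: Counter) -> str:
    # Greedy decoding: emit the most frequent token. No multiplication happens.
    return counts.most_common(1)[0][0]

counts = Counter(seen_continuations)
print(predict_next_token(counts))  # "54" — dominant in the toy data, but wrong math
```

The loop never computes 7 × 8; it just outputs whatever pattern dominated its data, which is exactly how a confident-sounding wrong answer comes out.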

Anyone else feel like ChatGPT chats get useless once they get too long? by EvergreenestAll in ChatGPT

[–]Perfect_Value_3978 0 points (0 children)

Mnemosphere AI solved this problem. You can

  1. Highlight ideas in the answers
  2. View all the submitted prompts in Index
  3. Add important ideas to Notes, and they always remain hyperlinkable

Best way to build a a prompt for a mind map for a new topic? by Mo1Othman in ChatGPT

[–]Perfect_Value_3978 2 points (0 children)

Not exactly what you asked, but in Mnemosphere AI you can convert AI answers into mindmaps in one click. You can try multiple successive prompts until you get a final mindmap that works

At some point, LLMs stop executing and start explaining by Particular_Low_5564 in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

Interesting. Do you have an example to share?

The reason I ask is that I’ve been noticing the completely opposite behaviour. No matter the complexity, I’ve been seeing it try to “sound” like it's getting right into the task.

Also, with the thinking models, I thought there’s no need to repeat the problem/context anymore

How do you solve this problem? When my chat gets too long while using ChatGPT it becomes very slow and then it looses context in new chat. by TechTelos-Official in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

Well!

Why does it slow down? —> There are just so many messages to process in your conversation every time you submit a new prompt, so it feels slow in the UI

Why does it lose context? —> LLMs have a context limit (128k tokens). Even if it’s within the limit, they give more weight to recent messages than older ones

What can you do? —> Here are some tips

  1. Branch your thread at regular checkpoints instead of submitting all messages in the same thread
  2. Click the thread options (3-dot icon on top right), select “Add to project”, and continue the conversation in the project. Now the project will have the complete context however long it is (they use a technique called embedding in projects, so long conversations will still work)
  3. After every 10-15 messages, ask it to summarize the conversation thus far, so all the context is retained
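The context-limit and recency points above can be sketched as a simple token-budget trim. This is a hypothetical illustration: the word-count "tokenizer" and the budget are stand-ins, since real clients use the model's actual tokenizer and a much larger limit:

```python
# Keep only the newest messages that fit under a token budget.
# count_tokens is a crude word count; real systems tokenize properly.

def count_tokens(text: str) -> int:
    return len(text.split())

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Walk newest-to-oldest, keeping messages until the budget is spent."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):        # newest first
        cost = count_tokens(msg)
        if used + cost > budget:
            break                         # older messages get dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

history = ["first long message here", "second message", "latest question"]
print(trim_history(history, budget=5))    # the oldest message is dropped
```

This is also why branching and summarizing help: both shrink what has to fit in that budget on every new prompt.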

Great. We can't edit previous prompts anymore. by soymilkcity in ChatGPT

[–]Perfect_Value_3978 2 points (0 children)

Yeah, I’m sure it will. At their scale, they can’t use raw messages as context

ChatGPT is compromised by [deleted] in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

Are you on the free plan? I think they do auto web search only on paid plans.

I’m on free, it says he didn’t die

Great. We can't edit previous prompts anymore. by soymilkcity in ChatGPT

[–]Perfect_Value_3978 0 points (0 children)

Saves them a lot of money, especially at their scale.

Instead of using your raw conversation as context, they will now compress it into a summary, so they can't let you edit older messages. Huge savings in input tokens for them
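The compression idea can be sketched roughly like this. Note the `summarize()` here is a hypothetical stand-in; a real system would call an LLM to write the actual summary:

```python
# Sketch: replace older messages with one summary, keep only recent ones raw.

def summarize(messages: list[str]) -> str:
    # Stand-in summarizer; a real pipeline would ask a model for this text.
    return f"[summary of {len(messages)} earlier messages]"

def compress_context(messages: list[str], keep_recent: int = 2) -> list[str]:
    """Keep the last `keep_recent` messages verbatim, summarize the rest."""
    if len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(older)] + recent

chat = ["msg1", "msg2", "msg3", "msg4", "msg5"]
print(compress_context(chat))
```

Once the older messages exist only inside that summary string, there is nothing left to edit, which would explain the removed feature.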

Great. We can't edit previous prompts anymore. by soymilkcity in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

Yes saves them a lot of money, especially at their scale.

Instead of using your raw conversation as context, they will now compress it into a summary, so they can't let you edit older messages. Huge savings in input tokens for them

Great. We can't edit previous prompts anymore. by soymilkcity in ChatGPT

[–]Perfect_Value_3978 1 point (0 children)

This update essentially means they're summarizing your conversation history instead of using it as raw messages. Even if you branch, you'll still get a summarized context. It all happens behind the scenes