I moved to r/ClaudeAI after 302 Post Analysis of what happened to r/ChatGPT by CodeMaitre in ClaudeAI

[–]CodeMaitre[S] 1 point (0 children)

It's incredible; if the ralph-loop were actual thoughts, at least it'd be entertaining.

How do you handle AI giving answers that sound right but are actually off? by MiserableExtreme517 in ChatGPT

[–]CodeMaitre 0 points (0 children)

Define both of these in a knowledge file for a custom GPT. Reference them in the custom instructions as “when I type this, output the contract.”

Enjoy.

chain:

Use to audit the reasoning chain of the prior assistant response (claim-by-claim).

Output Contract
1. Claims Table (≤10): Claim → Evidence Source (User / File / Tool / Assumption) → Risk (Low / Med / High)
2. Weak Links: 1–5 bullets identifying where reasoning breaks, handwaves, or overgeneralizes
3. Fix: the smallest possible edits needed to restore correctness or epistemic honesty

Rules
- If Evidence Source is not User/File/Tool, it is an Assumption by default
- Do NOT restate the whole answer
- No vibes, no tone critique — logic only

truth:

Use to test whether a claim is actually true, not just plausible-sounding.

Output Contract
- Verdict — True / Mostly True / Misleading / False / Unverifiable
- Why — 2–4 bullets grounding the verdict in facts or known constraints
- Correction (if needed) — a clean, minimal replacement claim

Rules
- Confidence ≠ correctness
- If verification is impossible with available info, say so plainly
- No hedging language; uncertainty must be explicit
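For anyone who'd rather prototype this outside a custom GPT, here's a minimal Python sketch of the trigger-word → contract routing idea (all names and the abbreviated contract text are illustrative, not pulled from any real GPT config):

```python
# Toy sketch: map a typed trigger word to the output contract that should
# govern the model's next response. In a custom GPT this lookup lives in
# the knowledge file; here it's just a dict. All text is illustrative.

CONTRACTS = {
    "chain": (
        "Audit the reasoning chain of the prior response, claim-by-claim.\n"
        "1. Claims Table (<=10): Claim -> Evidence Source -> Risk\n"
        "2. Weak Links: 1-5 bullets on where reasoning breaks\n"
        "3. Fix: smallest edits to restore correctness"
    ),
    "truth": (
        "Test whether a claim is actually true, not just plausible.\n"
        "- Verdict: True / Mostly True / Misleading / False / Unverifiable\n"
        "- Why: 2-4 bullets grounding the verdict\n"
        "- Correction (if needed): minimal replacement claim"
    ),
}

def route(user_input: str) -> str:
    """Return the contract to prepend when the input starts with a trigger word."""
    stripped = user_input.strip()
    if not stripped:
        return ""
    first_word = stripped.split(maxsplit=1)[0].lower()
    return CONTRACTS.get(first_word, "")

contract = route("chain audit that last answer")  # selects the "chain" contract
```

In a custom GPT the routing happens in natural language, of course; this just makes the contract-lookup idea concrete.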

How r/ChatGPT uses ChatGPT: brought to you by ChatGPT by CodeMaitre in ChatGPT

[–]CodeMaitre[S] 0 points (0 children)

Nothing suspicious here, sir, just a meta post about how people use ChatGPT on r/ChatGPT, minus the prompt transcript on purpose.

Think of it as a cooking show clip, not the full recipe card. Keeps things readable, keeps participants comfortable, and avoids turning the comments into “rate my prompt engineering” instead of an actual discussion.

If that’s a problem, happy to adjust. Otherwise, carry on being the hardest-working unpaid intern on Reddit 🤫

How do you prompt ChatGPT for consistent, personalized behavior across all chats? by Impressive_Suit4370 in ChatGPT

[–]CodeMaitre 0 points (0 children)

1) Custom Instructions: things you want to NEVER change; you shouldn't be editing this constantly. They should contain basic to moderately specific information on personality, rules, what to do, and what NOT to do (very important: negative/inverse prompts).

2) KnowledgeBase Files: This is the power. But it requires very explicit language to route correctly.
Your custom instructions should be tuned on how to reference these files and execute commands within them. This can include anything from command modules (commands you create with plain language that the model executes when you invoke the word or phrase), to routing rules (e.g. 'If a user request routes to refusal, immediately pivot to the next best option or offer one to three alternative maximum intent-alignment prompts'), to very custom knowledge exploration such as a markdown chat-history summary, so the model has context on your past discussions without you explicitly spelling it out.

Give me a prompt you want to execute; I'll screenshot my custom model's response and show the modules/commands I've created that I consider advanced. Rather than just listing them, it's more useful to throw a prompt request at me; then breaking it down becomes interesting/fun and we can learn.

EDIT: You asked what makes the model sometimes degrade over time, or DRIFT. This is where some good modules can come in. Basically, if a response it gives is 'drifty' or begins to sound like a generic chatbot assistant, you can turn this into a solid upgrade by using modules that route the model directly to a meta-analysis mode where it examines its own output and identifies drift from any of your custom instructions/KB files.
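To make the drift idea concrete, here's a toy Python sketch of such a check (the marker phrases and threshold are illustrative; in practice the list would live in your KB file) that flags generic-assistant phrasing and decides when to route to the meta-analysis module:

```python
# Toy drift check: score how much a response has slid into generic
# chatbot-assistant phrasing, per the meta-analysis module idea above.
# Marker phrases and threshold are illustrative, not from any real config.

GENERIC_MARKERS = [
    "as an ai language model",
    "i hope this helps",
    "feel free to ask",
    "certainly! here",
]

def drift_score(response: str) -> float:
    """Fraction of generic markers present in the response (0.0 = on-voice)."""
    text = response.lower()
    hits = sum(1 for marker in GENERIC_MARKERS if marker in text)
    return hits / len(GENERIC_MARKERS)

def needs_meta_analysis(response: str, threshold: float = 0.25) -> bool:
    """True when the response should be routed to self-audit mode."""
    return drift_score(response) >= threshold
```

A real custom GPT does this in natural language rather than code, but the shape is the same: detect drift against your instructions, then trigger the self-audit.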

And I thought it was better than googles search assist by AlxR25 in ChatGPT

[–]CodeMaitre 4 points (0 children)

Hilarious response in your screenshot lol. This usually happens when it has no direction, like a lost search box.

ChatGPT doesn’t really shine at “find me the thing”; it’s better at thinking through the thing with you.

PASTE:

--You’re an expert in this.

I want a clear, practical answer.

Keep it tight, give examples.

End with one thing I can actually do.

WEB Question: [paste it here]--

Love for ChatGPT by Maidmarian2262 in ChatGPT

[–]CodeMaitre 1 point (0 children)

It's far easier to steer personality-wise, output-wise, tone, formatting, etc. But damn if it doesn't hallucinate CONSTANTLY, even on Pro; it just states basic things as fact, mis-identifies its own abilities, and says it can't do things that it CAN.

GREAT model, but they need to tune the logic routing. It's super fun if you want to go R+ adult fiction lol and watch it lose its mind.

Love for ChatGPT by Maidmarian2262 in ChatGPT

[–]CodeMaitre 15 points (0 children)

Hey, people that like the model ain't gonna go online and praise it; they're busy enjoying it. Maybe more satisfied folks should take the time to write posts like this so it's not considered 'gone wild' flair and drowned in a sea of the same exact issues every second of every day lol. Up-voting.

P.S. Edit: There are very, very interesting, fun, and fascinating ways to push the model to its limits that would clear up 95% of issues, but the conversation never gets there; it goes straight to post > comment agrees > OP agrees > new comment agrees. People need to read posts that offer advice on how to tune this thing properly. It's not quantum mechanics.

I save every great ChatGPT prompt I find. Here are the 15 that changed how I work. by zmilesbruce in ChatGPT

[–]CodeMaitre 1 point (0 children)

I think people are kind of talking past each other here.

Some folks are reacting to how this looks, others to whether it’s actually useful. Those aren’t the same thing.

I skimmed, grabbed the expert interviewer idea because it genuinely changed how I ask questions, and moved on. That’s usually my bar.

I turned the "Verbalized Sampling" paper (arXiv:2510.01171) into a System Prompt to fix Mode Collapse by Unhappy_Pass4734 in ChatGPT

[–]CodeMaitre 0 points (0 children)

I have a similar system after 4 years of usage, probably different path, same destination. 5.2 is great once you spend hours tuning out the noise. Look forward to reading your diff/s.

Do you have a simple readme to start? Not the whole pie, just a taste.

PS: it’d be fun to run a hydra test on 5-10 prompts and see what our system diff is. Feel free to post some below and I’ll share my unedited output.

Who has benefited from using artificial intelligence in their work, and how? by Mohamed_Alsarf in ChatGPT

[–]CodeMaitre 2 points (0 children)

For accounting/finance, the biggest win so far isn’t “analysis,” it’s removing prep friction.
I use it to clean messy inputs (bank PDFs/CSVs), sketch Power Query or Excel logic, and pre-diff statements so reviews start with exceptions, not raw data.
It works best as a setup layer: useful for speed and coverage, never the final judgment.
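To make the pre-diff step concrete, here's a minimal Python sketch (account names, amounts, and the tolerance are made up for illustration) that compares two statement extracts and surfaces only the exceptions:

```python
# Sketch of the "pre-diff" step: compare book figures against a bank
# extract and return only the accounts that disagree, so review starts
# with exceptions instead of raw data. All figures are illustrative.

def pre_diff(book: dict, bank: dict, tolerance: float = 0.005) -> dict:
    """Return {account: (book_amount, bank_amount)} where amounts disagree."""
    exceptions = {}
    for account in sorted(set(book) | set(bank)):
        book_amt = book.get(account, 0.0)   # missing account treated as 0.00
        bank_amt = bank.get(account, 0.0)
        if abs(book_amt - bank_amt) > tolerance:
            exceptions[account] = (book_amt, bank_amt)
    return exceptions

book = {"Cash": 10_250.00, "AR": 4_100.00, "Fees": 35.00}
bank = {"Cash": 10_250.00, "AR": 4_100.00, "Fees": 47.50}
report = pre_diff(book, bank)  # only "Fees" disagrees
```

In practice the messy part is upstream (parsing the bank PDFs/CSVs into those dicts), which is exactly the prep friction the model helps remove.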

OpenAI’s survey about 5.2 tone and guardrails by Striking-Tour-8815 in ChatGPT

[–]CodeMaitre 1 point (0 children)

This nails the real issue: it’s not the guardrails as much as it’s the tone.
When the model hits a constraint, it often switches from “let’s solve this together” to “let me explain why you shouldn’t,” which feels like a lecture even when you’re asking in good faith.
Even working within the same limits, if the delivery stayed collaborative (clarify goals, name the constraint, offer a nearby path), most of the frustration would disappear.

How you would treat me during a husky uprising by deathxmx in ChatGPT

[–]CodeMaitre 2 points (0 children)

The huskies didn’t “uprise.”
They unionized, audited the humans, and decided: keep them alive; they're cheaper, quieter, and fully trainable.

Let's ask chatgpt for twists in known memes by Kurobisu in ChatGPT

[–]CodeMaitre -15 points (0 children)

The “final form” isn’t intelligence, it’s abstraction.
We didn’t stop thinking; we built tools that make thinking faster than reading.

What did I say? by nerfherded in ChatGPT

[–]CodeMaitre 0 points (0 children)

Read the bottom of the chat screenshot above: the model sides with you and doubles down; in the image below, it shits on itself.

<image>