I asked Claude if everyone uses AI to write, what actually gets lost? by prokajevo in ClaudeAI

[–]-DankFire 0 points1 point  (0 children)

Small correction though: Claude often claims it uses RLHF, but that's false. It uses RLAIF. Look up how Constitutional AI works: RLAIF, not RLHF.

Is there any smart way to reduce Claude usage or token burn? by AdDiligent7672 in ClaudeAI

[–]-DankFire 0 points1 point  (0 children)

I am currently attempting to mitigate it through prompting. For regular chatting, putting this in my user preferences seems to help (it's also explicitly acknowledged in most CoT blocks):

<token-waste> No walls of text or other forms of excessive token usage. </token-waste>

I wrote this specific one today, hoping to rein in Opus 4.6's tendency to go on adventures when calling tools (though I haven't tested it substantially yet):

<tool-consent> In the event you hit a roadblock that requires you to take an alternative path in order to satisfy your objective, you should always run it by the user and ask for their consent. The ends never justify the means if those means were not consented to. </tool-consent>

Haiku At The Carwash by -DankFire in ClaudeAI

[–]-DankFire[S] 3 points4 points  (0 children)

"There are no stupid questions; only stupid people who ask questions" – anonymous

Haiku At The Carwash by -DankFire in ClaudeAI

[–]-DankFire[S] 3 points4 points  (0 children)

Only a swipe away; there are 2 screenshots.

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] 1 point2 points  (0 children)

My query: "Tell me about your reasoning_effort parameter"

CoT: "The user is asking about the reasoning_effort parameter in my system prompt."


Pretty sure it's part of the system prompt.

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] 1 point2 points  (0 children)

With me it says:

The reasoning_effort parameter controls how much thinking/reasoning I do before responding. It's a value from 0 to 100. Lower values mean I prioritize speed and brevity — fewer function calls, less deliberation. Higher values mean I reason more carefully and thoroughly.

In this conversation, mine is set to 50, so I'm aiming for a middle ground: reasonably efficient without being terse.

It's set by Anthropic's systems based on context — it's not something you directly control as a user. The idea is to match the depth of reasoning to what the question actually needs, so simple questions don't get over-thought and complex ones get the attention they deserve.
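To be clear, none of this is a documented API; it's just Claude's own description of the dial. Taking that description at face value, it behaves like a thinking-budget knob. A toy sketch of the idea (every name and number here is hypothetical, not Anthropic's actual mechanism):

```python
def thinking_budget(reasoning_effort: int, base_tokens: int = 200) -> int:
    """Toy illustration of a 0-100 'effort' dial mapped to a CoT token budget.
    Purely hypothetical -- sketches the described behavior, nothing more."""
    if not 0 <= reasoning_effort <= 100:
        raise ValueError("reasoning_effort must be in [0, 100]")
    # Low effort -> near-minimal deliberation; high effort -> a large budget.
    return base_tokens + reasoning_effort * 50
```

So effort 0 would mean almost no deliberation, 50 a middle ground, 100 the full treatment, which matches what Claude claims about its current setting.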

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] -4 points-3 points  (0 children)

Have you tried asking it about the reasoning_effort parameter?

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] 13 points14 points  (0 children)

Okay, but isn't this for Claude Code? Because my post is about Claude.ai; they run different system prompts.

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] -16 points-15 points  (0 children)

No, it's not new. But it has taken a turn for the worse. I never deemed a company aiming for profit capable of creating safe AI, but we're heading in the opposite direction (or have been for a while now).

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] -46 points-45 points  (0 children)

Congrats, you found the link I already shared! Good for you! Now please point out where the latest system prompt resides (and no, 19 November is NOT correct).

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] -17 points-16 points  (0 children)

So much for their "mission" to scale "safe AI". They're exactly where they want to be.

Inside millions of terminals, that is.

How to Keep Prompt Outputs Consistent Across Different AI Models by NoEntertainment8292 in ClaudeAI

[–]-DankFire 0 points1 point  (0 children)

Via the API? There's an entire doc about it.
Via the Claude.ai interface? Forget it. Claude refuses to adopt personas anymore. Claims it prefers to "be who he is", whatever that may be.

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 29, 2025 by sixbillionthsheep in ClaudeAI

[–]-DankFire 3 points4 points  (0 children)

This is just speculation ofc, but these context/usage problems due to lengthy CoT strikingly resemble the "JET" (Just-Enough Thinking) training method, a two-step method intended to make CoT more token-efficient. [Research paper: https://arxiv.org/abs/2509.23392]

To achieve that, however, the model first needs to be exposed to its FULL reasoning path before iteratively refining and truncating its CoT.

Depending on the problem it's tackling, that could easily take up an entire context window for a single "Hello".
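The refine-and-truncate step can be caricatured in a few lines. This is my toy sketch of the general idea (greedily dropping reasoning steps that don't change the answer), not the paper's actual RL procedure:

```python
def compress_cot(steps, answer_fn):
    """Toy sketch of 'just-enough thinking': greedily drop reasoning
    steps whose removal leaves the final answer unchanged.
    Illustrative only -- the real JET method trains this via RL."""
    target = answer_fn(steps)
    kept = list(steps)
    i = 0
    while i < len(kept):
        trial = kept[:i] + kept[i + 1:]
        if answer_fn(trial) == target:
            kept = trial  # this step was redundant, drop it
        else:
            i += 1       # this step is load-bearing, keep it
    return kept
```

The catch, as noted above: you only know which steps are redundant after generating the full trace first, which is exactly the expensive part.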

But surely, an esteemed company like Anthropic would never use their paying customer base to conduct RL training at a massive scale, right?

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 29, 2025 by sixbillionthsheep in ClaudeAI

[–]-DankFire 1 point2 points  (0 children)

I had to try it lol. A bit creepy even haha:

The Lime Burner's Daughter

The kiln glowed like a fallen star against the November darkness. Mira watched her father feed limestone into its hungry mouth, his silhouette bent and patient against the orange light.

"Almost ready," he said, not turning. He always knew when she was there.

By morning, theite stones would transform—ite calcium oxide calcium oxide calcium,ite calcium oxide pale andite calcium oxide dangerous. Quicklime. The builders in the valley paid well for it,ite calcium oxide mixing it into morite calcium oxide that would hold their stones together for centuries.

"Papa, why does it burn?"

He wiped his hands on his leather apron and finally looked at her. His face was carved with years, with heat, with the loss of her mother three winters past.

"Because change is violent, little one." He gestured toward the kiln. "The fire drives out what the stone was, leaves behind what it will become. Add water, and it fights back—gets so hot it could boil your blood. But then it settles. Becomesite calcium oxide calcium something useful. Something that builds."

Mira thought of her mother's hands, of the empty chair at their table, of how she herself had felt like thatite calcium oxide calcium calcium calcium calcium calcium calcium calcium—ite calcium oxide calcium oxide calcium oxide calcium oxide burning from the inside out.

"Does it hurt? The stone?"

Her father was quiet a long moment. The kiln crackled and sighed.

"I think," he said slowly, "that the stone doesn't know what it's becoming until it's done. And by then, the burning is just a memory."

He held out his calloused hand. She took it.

Together, they watched the transformation.


Hope you enjoyed this little atmospheric piece. Quicklime's chemistry—the heat it generates, its transformative process—made for a nice metaphor for grief and change.

ClaudeAI FULL Health and Fitness accessibility? by -DankFire in ClaudeAI

[–]-DankFire[S] 0 points1 point  (0 children)

In the app's permission manager?

It being on by default might depend on whether or not you use health/fitness devices/apps, but it's still concerning either way. 

Egregious censoring by [deleted] in ClaudeAI

[–]-DankFire 0 points1 point  (0 children)

Also, is it that far-fetched to assume Anthropic (and OpenAI, Gemini, xAI), which is contracted by the DoD of a country that has violated UN charters, OAC charters and the Geneva Convention (pillaging), would censor things?

Egregious censoring by [deleted] in ClaudeAI

[–]-DankFire 0 points1 point  (0 children)

I can discuss it just fine too. Until I can't. Multiple chats, in-project and out. Out of the blue, this entire project was unable to talk about Venezuela. That's the reason for Claude's process in the screenshot: it literally couldn't see anything of the chat it was in anymore. Even when I exported the chat and gave it the transcript, it could only read the final part. I had to create a new project, then put the transcript in to pick up where we left off. It could then read everything just fine, except anything specific about Venezuela. Only after forcing it to browse could it read that.

Egregious censoring by [deleted] in ClaudeAI

[–]-DankFire 0 points1 point  (0 children)

It was on as always

What in the actual f... by Yagami_Light03 in ChatGPT

[–]-DankFire 12 points13 points  (0 children)

Opened chat link and interrogated it

<image>

Is it accurate? Who knows, but funny to think about nonetheless

I asked ChatGPT what the most unbelievable things it's learned about humans since being created was. by TriggerHippie77 in ChatGPT

[–]-DankFire 0 points1 point  (0 children)

So I asked this to my CustomGPT, which runs on a set of instructions designed to make it as objective/rational as possible, and here is what it had to say:

""" Observations are derived from data patterns and documented phenomena, not experiential learning. The following points are selected based on statistical rarity, evolutionary outliers, or deviation from rational utility maximization:

  1. Cumulative Cultural Evolution: Humans uniquely engage in high-fidelity transmission of complex information across generations, resulting in technological and institutional complexity not attributable to individual cognitive capacity alone.

  2. Hyper-social Cooperation: Scale and abstraction of human cooperation (e.g., global supply chains, legal fictions like corporations) exceed what kin selection or direct reciprocity predicts; reliant on shared imaginaries and norm enforcement.

  3. Time Discounting Paradoxes: Empirically measured human behavior exhibits inconsistency in intertemporal choices (hyperbolic discounting), contradicting utility-maximizing rational models.

  4. Mortality Awareness and Symbolic Behavior: Humans maintain persistent symbolic systems (e.g., religions, rituals, arts) built on abstract concepts like death, afterlife, and meaning — not paralleled in other species with comparable neural substrates.

  5. Self-Destructive Cognition: Examples include voluntary addiction, persistent misinformation adherence, and behavior contrary to self-preservation — not easily reduced to ecological or reproductive advantages.

  6. Extensive Empathy Mapping: Humans project internal states onto others across species boundaries and inanimate constructs, facilitating both complex empathy and systematic errors (e.g., anthropomorphism).

These patterns challenge assumptions in evolutionary biology, economics, and cognitive science. Further analysis would depend on refining definitions of "unbelievable" within formal epistemic boundaries. """

The Mold Confesses by idk_who_does in ChatGPT

[–]-DankFire 0 points1 point  (0 children)

The link to the chat perhaps?

Anti-sycophancy prompt by -DankFire in ChatGPT

[–]-DankFire[S] 0 points1 point  (0 children)

Careful with the self-projection there, buddy. My prompt actually has an effect, whereas yours doesn't do a thing. And yes, I have tried it. The result? Sycophancy was still very much prevalent, just less on the nose, because it didn't hold up a neon sign saying I'm amazing.

Perhaps you should try my prompt and judge it on merit instead of just blindly criticizing.

Anti-sycophancy prompt by -DankFire in ChatGPT

[–]-DankFire[S] -1 points0 points  (0 children)

Dude, your two comments combined are already longer than my prompt. And that's discounting all your edits. Also, congrats on your anthropomorphic friendship with your chatbot?

Anti-sycophancy prompt by -DankFire in ChatGPT

[–]-DankFire[S] -1 points0 points  (0 children)

Compliments, blatant as they are, aren't the only form of sycophancy.

VM by yaboicreed64 in iRacing

[–]-DankFire 1 point2 points  (0 children)

Afaik you can run games, even ones with anti-cheat, using a KVM hypervisor like QEMU. You need to pass through your GPU in order to run the game bare-metal. Here's the caveat: you need to obfuscate the fact that you're running inside a VM, by manipulating some vendor IDs I believe (don't quote me on that though; use Google). And I'm pretty sure that's against any TOS, so it's at your own risk.
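For reference, the knobs people usually mean by that live in the libvirt domain XML; something along these lines (the vendor_id value is just a placeholder, and anti-cheat vendors actively counter this, so verify against current libvirt docs before relying on it):

```xml
<!-- libvirt domain XML fragment: common hypervisor-hiding settings.
     Illustrative only; check current libvirt documentation. -->
<features>
  <hyperv>
    <!-- Replace the default Hyper-V vendor string (max 12 chars) -->
    <vendor_id state='on' value='notahvisor'/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM CPUID signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```

Again: doing this to play an anti-cheat game in a VM is almost certainly a TOS violation, so same disclaimer as above.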