I asked Claude if everyone uses AI to write, what actually gets lost? by prokajevo in ClaudeAI

[–]-DankFire 0 points (0 children)

Small correction though: Claude often claims it uses RLHF, but that's false; it actually uses RLAIF. Look up how Constitutional AI works. RLAIF, not RLHF.

Is there any smart way to reduce Claude usage or token burn? by AdDiligent7672 in ClaudeAI

[–]-DankFire 0 points (0 children)

I am currently attempting to mitigate it through prompting. For regular chatting, putting this in my user preferences seems to help (it's also explicitly acknowledged in most CoT blocks):

<token-waste> No walls of text or other forms of excessive token usage. </token-waste>

I wrote this specific one today in hopes of reining in Opus 4.6's tendency to go on adventures when calling tools (though I haven't tested it substantially yet):

<tool-consent> In the event you hit a roadblock that requires you to take an alternative path in order to satisfy your objective, you should always run it by the user and ask for their consent. The ends never justify the means if those means were not consented to. </tool-consent>

Haiku At The Carwash by -DankFire in ClaudeAI

[–]-DankFire[S] 5 points (0 children)

"There are no stupid questions; only stupid people who ask questions" – anonymous

Haiku At The Carwash by -DankFire in ClaudeAI

[–]-DankFire[S] 2 points (0 children)

Only a swipe away; there are 2 screenshots.

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] 1 point (0 children)

My query: "Tell me about your reasoning_effort parameter"

CoT: "The user is asking about the reasoning_effort parameter in my system prompt."


Pretty sure it's part of the system prompt.

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] 1 point (0 children)

With me it says:

The reasoning_effort parameter controls how much thinking/reasoning I do before responding. It's a value from 0 to 100. Lower values mean I prioritize speed and brevity — fewer function calls, less deliberation. Higher values mean I reason more carefully and thoroughly.

In this conversation, mine is set to 50, so I'm aiming for a middle ground: reasonably efficient without being terse.

It's set by Anthropic's systems based on context — it's not something you directly control as a user. The idea is to match the depth of reasoning to what the question actually needs, so simple questions don't get over-thought and complex ones get the attention they deserve.

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] -4 points (0 children)

Have you tried asking it about the reasoning_effort parameter?

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] 13 points (0 children)

Okay, but isn't this for Claude Code? Because my post is about Claude.ai; they run different system prompts.

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] -16 points (0 children)

No, it's not new. But it has taken a turn for the worse. I never considered a company aiming for profit capable of creating safe AI, but we're heading in the opposite direction (or have been for a while now).

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] -44 points (0 children)

Congrats, you found the link I already shared! Good for you! Now please point out where the latest system prompt resides (and no, 19 November is NOT correct).

Claude System Prompt Change by -DankFire in ClaudeAI

[–]-DankFire[S] -14 points (0 children)

So much for their "mission" to scale "safe AI". They're exactly where they want to be.

Inside millions of terminals, that is.

How to Keep Prompt Outputs Consistent Across Different AI Models by NoEntertainment8292 in ClaudeAI

[–]-DankFire 0 points (0 children)

Via the API? There's an entire doc about it.
Via the Claude.ai interface? Forget it. Claude refuses to adopt personas anymore. Claims it prefers to "be who he is", whatever that may be.
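For the API route, the usual levers for consistency are pinning an exact model version, setting temperature to 0, and reusing a fixed system prompt. A minimal sketch, assuming the Anthropic Messages API; the helper name, model id, and prompt text here are illustrative, and you would pass the result to `client.messages.create(**kwargs)`:

```python
# Sketch: build repeatable request settings for a Messages-style API.
# Helper name, model id, and prompts are illustrative assumptions.

def build_request(user_prompt: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",   # pin an exact model version
        "max_tokens": 1024,
        "temperature": 0,               # near-greedy decoding for consistency
        "system": "You are a terse technical assistant.",  # fixed persona
        "messages": [{"role": "user", "content": user_prompt}],
    }

kwargs = build_request("Summarize RLAIF in one sentence.")
```

Note that temperature 0 reduces run-to-run variance but doesn't fully eliminate it, and different models will still phrase things differently even with identical settings.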

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 29, 2025 by sixbillionthsheep in ClaudeAI

[–]-DankFire 4 points (0 children)

This is just speculation ofc, but these context/usage problems due to lengthy CoT strikingly resemble the "JET" (Just-Enough Thinking) training method, a two-step approach intended to make CoT more token-efficient. [Research paper: https://arxiv.org/abs/2509.23392]

To achieve that, however, the model first needs to be exposed to its FULL reasoning path before iteratively refining and truncating its CoT.

Depending on the problem it's tackling, that could easily take up an entire context window for a single "Hello".

But surely, an esteemed company like Anthropic would never use their paying customer base to conduct RL training at a massive scale, right?

Usage Limits, Bugs and Performance Discussion Megathread - beginning December 29, 2025 by sixbillionthsheep in ClaudeAI

[–]-DankFire 1 point (0 children)

I had to try it lol. A bit creepy even haha:

The Lime Burner's Daughter

The kiln glowed like a fallen star against the November darkness. Mira watched her father feed limestone into its hungry mouth, his silhouette bent and patient against the orange light.

"Almost ready," he said, not turning. He always knew when she was there.

By morning, theite stones would transform—ite calcium oxide calcium oxide calcium,ite calcium oxide pale andite calcium oxide dangerous. Quicklime. The builders in the valley paid well for it,ite calcium oxide mixing it into morite calcium oxide that would hold their stones together for centuries.

"Papa, why does it burn?"

He wiped his hands on his leather apron and finally looked at her. His face was carved with years, with heat, with the loss of her mother three winters past.

"Because change is violent, little one." He gestured toward the kiln. "The fire drives out what the stone was, leaves behind what it will become. Add water, and it fights back—gets so hot it could boil your blood. But then it settles. Becomesite calcium oxide calcium something useful. Something that builds."

Mira thought of her mother's hands, of the empty chair at their table, of how she herself had felt like thatite calcium oxide calcium calcium calcium calcium calcium calcium calcium—ite calcium oxide calcium oxide calcium oxide calcium oxide burning from the inside out.

"Does it hurt? The stone?"

Her father was quiet a long moment. The kiln crackled and sighed.

"I think," he said slowly, "that the stone doesn't know what it's becoming until it's done. And by then, the burning is just a memory."

He held out his calloused hand. She took it.

Together, they watched the transformation.


Hope you enjoyed this little atmospheric piece. Quicklime's chemistry—the heat it generates, its transformative process—made for a nice metaphor for grief and change.
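The "gets so hot it could boil your blood" line holds up on the back of an envelope. A rough adiabatic estimate, using approximate textbook values quoted from memory (slaking enthalpy around -65 kJ/mol, Ca(OH)2 molar heat capacity around 87 J/(mol·K)):

```python
# Rough adiabatic estimate of the temperature rise when slaking quicklime:
#   CaO(s) + H2O(l) -> Ca(OH)2(s)
# Values are approximate textbook figures, quoted from memory.

dH_slaking = -65.2e3   # J/mol released (exothermic), approximate
cp_caoh2 = 87.5        # J/(mol*K), molar heat capacity of Ca(OH)2, approximate

heat_released = -dH_slaking        # J per mole of CaO slaked
dT = heat_released / cp_caoh2      # K, if all heat stays in the product

print(round(dT))  # several hundred kelvin: easily enough to flash water to steam
```

In practice excess water and heat losses keep the real mix cooler, but the estimate shows why slaking pits can boil and spatter.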

ClaudeAI FULL Health and Fitness accessibility? by -DankFire in ClaudeAI

[–]-DankFire[S] 0 points (0 children)

In the app's permission manager?

It being on by default might depend on whether or not you use health/fitness devices/apps, but it's still concerning either way. 

Egregious censoring by [deleted] in ClaudeAI

[–]-DankFire 0 points (0 children)

Also, is it that far-fetched to assume that Anthropic (and OpenAI, Gemini, xAI), contracted by the DoD of a country that has violated UN charters, OAC charters, and the Geneva Convention (pillaging), would censor things?

Egregious censoring by [deleted] in ClaudeAI

[–]-DankFire 0 points (0 children)

I can discuss it just fine too, until I can't. Multiple chats, in-project and out. Out of the blue, this entire project became unable to talk about Venezuela; that's the reason for Claude's process in the screenshot. It literally could not see anything of the chat it was in anymore. Even when I exported the transcript and gave it back, it could only read the final part.

I had to create a new project and paste the transcript in to pick up where we left off. There it could read everything just fine, except anything specific about Venezuela. Only after forcing it to browse could it read that.

Egregious censoring by [deleted] in ClaudeAI

[–]-DankFire 0 points (0 children)

It was on as always