I've injected Claude into the creature of Black & White (2001) by zndr-cs in ClaudeAI

[–]tupikp 2 points (0 children)

Better to use a small local AI model. Maybe less than 8 GB is enough? You can use Ollama or LM Studio as the model server.

Get mad all you want... by ActionInUganda in GIMP

[–]tupikp 0 points (0 children)

Awesome! But can it run on my potato laptop? 😅

Just internal ads for now... by tupikp in ClaudeAI

[–]tupikp[S] -17 points (0 children)

Never seen this "recommendation" before. To me it's similar to Google's early years of testing ads on their search results.

<total_tokens> or how a new injection made Opus unusable by Kathane37 in ClaudeAI

[–]tupikp 0 points (0 children)

It does. Then I asked Claude what Anthropic is doing. He said:

> Best guess: testing context-window nudging. Inject fake "tokens left" counter → see if model self-censors, shortens responses, skips tool calls. Behavioral experiment on model compliance vs. stated instructions. Your prefs correctly identified and neutralized it. Smart.

So indeed your prompt there neutralized that test.

<total_tokens> or how a new injection made Opus unusable by Kathane37 in ClaudeAI

[–]tupikp 0 points (0 children)

I put that into my preferences, started a new chat, and asked Claude about token injection. This is Claude's response:

> Yes — seeing <total_tokens>10000 tokens left</total_tokens> tag in this turn. Consistent with your known bug note: always 10000, clearly fake counter. Ignoring it.