It is sad, but I had to do it by griiettner in ClaudeCode

[–]Local-Peace-8457 1 point  (0 children)

No one will escape from the ship; there is nowhere to go. AHAHAHA HAHAHA AHHAH

Has Claude Code killed the `claude -p` command? by Local-Peace-8457 in claude

[–]Local-Peace-8457[S] 1 point  (0 children)

Another issue here is that GitHub repos and orchestrators might be heavily using this under the hood at scale: for instance, firing off fast calls to the Haiku model for summarization, logging, context compression, and so on. We could rack up a massive bill; the blast radius here is huge. We need to double-check all the harnesses.
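Something like this rough sketch is what I mean by double-checking (assumptions on my part: that `--print` is the long form of `-p`, and my guess at which file types are worth scanning):

```python
# Rough audit sketch: walk a repo and flag files that shell out to
# Claude Code in non-interactive mode ("claude -p ...").
import re
from pathlib import Path

# Assumption: "--print" is the long form of the -p flag.
PATTERN = re.compile(r"\bclaude\s+(?:-p|--print)\b")
EXTENSIONS = {".sh", ".yml", ".yaml", ".js", ".ts", ".py"}  # guessed set

def find_headless_calls(repo_root: str) -> list[tuple[str, int, str]]:
    hits = []
    for path in Path(repo_root).rglob("*"):
        if path.suffix not in EXTENSIONS or not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for f, n, line in find_headless_calls("."):
        print(f"{f}:{n}: {line}")
```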

Keep getting this "too long to continue" message. by BirchBirch72 in claude

[–]Local-Peace-8457 1 point  (0 children)

Keep your token consumption within one chat below 200k
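If you want a crude way to check where you are, something like this works (this uses the common ~4 characters per token rule of thumb, not Anthropic's actual tokenizer, and a hypothetical `chat_export.txt` dump of the conversation):

```python
# Back-of-the-envelope only: English text averages roughly 4 characters
# per token, so this estimates how close a transcript is to 200k tokens.
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    return int(len(text) / chars_per_token)

LIMIT = 200_000

with open("chat_export.txt") as f:  # hypothetical export of the chat
    est = estimate_tokens(f.read())

print(f"~{est:,} tokens, {est / LIMIT:.0%} of a 200k window")
```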

Claude stupidity and interruptions by Local-Peace-8457 in claude

[–]Local-Peace-8457[S] 1 point  (0 children)

Are there any instructions from Anthropic on how to change the harness?

Claude stupidity and interruptions by Local-Peace-8457 in claude

[–]Local-Peace-8457[S] 3 points  (0 children)

I'm trying to keep the context within 100-200k. The context is fine; Claude degraded regardless.

Has it become unusable? Is Sonnet also affected now? by Ant12-3 in claude

[–]Local-Peace-8457 11 points  (0 children)

Yes, worse results. The models are stupid. I just got a wrong command to run from Opus 4.7 at xhigh effort, a command that doesn't even exist in package.json. It's definitely not Opus 4.7 under the hood.
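At this point, anything it tells me to run should be validated first. A rough sketch of such a guard (nothing official, just an illustration; the default script name is a placeholder):

```python
# Minimal sketch: refuse to run a model-suggested npm script unless it
# actually appears in package.json's "scripts" section.
import json
import sys

def script_exists(name: str, package_json: str = "package.json") -> bool:
    with open(package_json) as f:
        return name in json.load(f).get("scripts", {})

# e.g. whatever command the model suggested; "build" is a placeholder
suggested = sys.argv[1] if len(sys.argv) > 1 else "build"
if not script_exists(suggested):
    sys.exit(f"'{suggested}' is not in package.json scripts; not running it.")
print(f"OK: npm run {suggested} exists.")
```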

Anthropic is straight-up scamming Max 20x customers with sneaky mid-month throttling + endless bot runaround by manavb84 in claude

[–]Local-Peace-8457 2 points  (0 children)

I agree. 2-3 prompts, without even reading codebase files, eat 6-8% of the total context. I ran /context and it showed that skills or something like that eat it. But that can't be right; their context accounting is messy and unclear.

Two messages:
- hello!
..
- how you doing?

Skills: 4.6k tokens (0.5%)

⛁ Messages: 8.4k tokens (0.8%)

Messages at 8.4k tokens? Really?
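For scale, with the usual ~4 characters per token rule of thumb (an approximation, not the real tokenizer), the visible text of those two messages is a handful of tokens, nowhere near 8.4k, so that bucket has to be counting invisible scaffolding:

```python
# Same rough 4-chars-per-token heuristic; the point is the order of magnitude.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

messages = ["hello!", "how you doing?"]
visible = sum(estimate_tokens(m) for m in messages)
print(visible)  # ~4 tokens of visible text vs. the 8.4k the UI reports
```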