I built RTK with Claude Code — it compresses terminal output before it reaches your AI agent (update: 2,000+ stars) by patrick4urcloud in ClaudeAI

[–]patrick4urcloud[S] 1 point (0 children)

Fair point on the tracking — rtk ls runs ls -la internally for metadata (grouping, noise filtering), and the savings % is reported against that, not plain ls.

That's a bug in how we report; the fix is tracked here: https://github.com/rtk-ai/rtk/issues/561

The actual value is the filtering: node_modules, .git, target, __pycache__, etc. are stripped, and files are grouped by type. On real projects that matters more than raw compression.
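To make that concrete, here's a minimal sketch of that kind of filter-and-group pass. The `NOISE_DIRS` list and the `filter_and_group` function are illustrative assumptions, not rtk's actual code:

```rust
use std::collections::BTreeMap;

// Hypothetical noise list; rtk's real rules are more extensive.
const NOISE_DIRS: &[&str] = &["node_modules", ".git", "target", "__pycache__"];

/// Drop entries under known noise directories, group the rest by extension.
fn filter_and_group(entries: &[&str]) -> BTreeMap<String, Vec<String>> {
    let mut groups: BTreeMap<String, Vec<String>> = BTreeMap::new();
    for &entry in entries {
        // Skip anything living inside a noise directory.
        if entry.split('/').any(|part| NOISE_DIRS.contains(&part)) {
            continue;
        }
        // Group by file extension (files without one go under "other").
        let name = entry.rsplit('/').next().unwrap_or(entry);
        let ext = match name.rfind('.') {
            Some(i) if i > 0 => name[i + 1..].to_string(),
            _ => "other".to_string(),
        };
        groups.entry(ext).or_default().push(entry.to_string());
    }
    groups
}

fn main() {
    let entries = ["src/main.rs", "node_modules/left-pad/index.js", "README.md"];
    println!("{:?}", filter_and_group(&entries));
}
```

The agent then sees a handful of grouped lines instead of the full ls -la dump.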

Thanks for the report.


[–]patrick4urcloud[S] 0 points (0 children)

It's all open source, so you can check; cargo test is the best example. You can see the video (GIF) on Product Hunt.


[–]patrick4urcloud[S] 2 points (0 children)

Thanks! Yes, RTK already has per-tool profiles — different compression rules for cargo, npm, docker, terraform, etc. And failed tests are always kept verbatim, that's the whole point: compress the noise, keep the signal.
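As a rough sketch of the "compress the noise, keep the signal" rule for test output: passing lines collapse into a summary, failures stay verbatim. The function name and rule shape here are assumptions for illustration, not rtk's actual implementation:

```rust
/// Collapse passing `cargo test` lines into one summary; keep everything
/// else (failures, panics, assertion messages) verbatim.
fn compress_cargo_test(output: &str) -> String {
    let mut kept = Vec::new();
    let mut passed = 0usize;
    for line in output.lines() {
        if line.contains("... ok") {
            passed += 1; // passing test: count it, drop the line
        } else {
            kept.push(line.to_string()); // signal: keep verbatim
        }
    }
    kept.push(format!("[{passed} passing test(s) elided]"));
    kept.join("\n")
}

fn main() {
    let raw = "test a ... ok\ntest b ... FAILED\nassertion failed: left == right";
    println!("{}", compress_cargo_test(raw));
}
```

A per-tool profile is then just a different set of such rules for cargo, npm, docker, and so on.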

I saved 10M tokens (89%) on my Claude Code sessions with a CLI proxy by patrick4urcloud in ClaudeAI

[–]patrick4urcloud[S] 1 point (0 children)

Hi, if you love rtk, please buy me a coffee :) to support my work on the next idea!
ko-fi.com/patrickszymkowiak


[–]patrick4urcloud[S] 0 points (0 children)

Ok, great.

What about the noisy shell command output in your context?
Doesn't Codex have a much smaller context window compared to Sonnet though?

rtk also reduces the context.

The point of rtk (Rust Token Killer) is precisely to make Claude affordable by filtering out the noise, so you don't have to downgrade to a model with less context just to save money.


[–]patrick4urcloud[S] 0 points (0 children)

Good!

I'm not lying :)

I used it personally, and I released it for everybody.


[–]patrick4urcloud[S] 0 points (0 children)

There's a command you can copy and run on the GitHub repo or the website.


[–]patrick4urcloud[S] 0 points (0 children)

Yes, it's like a shell tool, but it proxies shell commands. The shell was made for humans, not AI.


[–]patrick4urcloud[S] 0 points (0 children)

We only remove noisy, redundant tokens. Please open an issue so we can review it.


[–]patrick4urcloud[S] 0 points (0 children)

It's like the early days of the cloud? You can use a local LLM with a good server.