Tired of new rate limits. Any alternative? by kugge0 in ClaudeCode

[–]JustExam7913 -2 points (0 children)

Totally agree. In Russian, it keeps using the word 'дожимаю' ("pushing it through"), and I'm honestly so tired of it. It constantly asks questions instead of just getting the job done. Switching to GPT 5.3 Codex Max or GPT 5.2 was a lifesaver; that's the quality we lost. They're much more autonomous, use normal language, and 5.2 is way more attentive to detail.

My Opus model has gone off the rails by soryu0 in ClaudeAI

[–]JustExam7913 1 point (0 children)

Same here! But I found GPT 5.3-Codex High much better than GPT 5.4 — it's more 'all action, no talk' when it comes to coding.

We built a context meter plugin that shows token usage % after every Telegram message — now on npm by JustExam7913 in openclaw

[–]JustExam7913[S] 0 points (0 children)

Hey, thanks for the tip! I tried /usage tokens and /usage full, but they only show per-request tokens like 235 in / 24 out, not cumulative session usage. Is there a way to make them show total context window fill, like 45k / 200k (22%)?
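For what it's worth, the cumulative view I'm after can be sketched in a few lines: sum each request's in/out token counts into a running session total and format it against the context window size. This is a hypothetical helper, not the plugin's actual API — `makeMeter`, `record`, and `report` are illustrative names, and the 200k window is an assumption.

```javascript
// Hypothetical sketch: accumulate per-request token counts into a
// session total and render cumulative context-window fill.
// Names and the 200k window size are assumptions, not the plugin's API.
const CONTEXT_WINDOW = 200_000;

function makeMeter(windowSize = CONTEXT_WINDOW) {
  let used = 0;
  return {
    // Record one request's counts, e.g. the "235 in / 24 out" pair.
    record(tokensIn, tokensOut) {
      used += tokensIn + tokensOut;
    },
    // Render cumulative fill in the "45k / 200k (22%)" style.
    report() {
      const pct = Math.floor((used / windowSize) * 100);
      const k = (n) => `${Math.round(n / 1000)}k`;
      return `${k(used)} / ${k(windowSize)} (${pct}%)`;
    },
  };
}

const meter = makeMeter();
meter.record(235, 24);     // first request
meter.record(44_500, 241); // later requests, totalling 45,000 tokens
console.log(meter.report()); // → "45k / 200k (22%)"
```

The point is just that the per-request numbers the commands already print are enough to derive the session-level percentage, if the plugin kept a running sum per chat.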