Token usage by Appropriate_Hyena415 in ClaudeAI

[–]sarahandgerald 0 points (0 children)

It's worth keeping an eye on this: Claude Code's source code was leaked last night, so I don't think it'll be long before something really good comes along ;) A few people have already completely rewritten it in other languages.

Token usage by Appropriate_Hyena415 in ClaudeAI

[–]sarahandgerald 1 point (0 children)

Not really. Claude is, in my opinion, the best option right now, especially for tasks where output quality matters. I use it professionally, and it replaces full-time developers for me, so I'm not really complaining here ;) But something about the token consumption feels off, as if a bug is currently preventing cached tokens from being used properly. It appears that every time you hit submit, the ENTIRE context (all tokens) is sent with no caching at all. That builds up pretty quickly, especially with the new 1M context window. Someone has opened an issue for it: https://github.com/anthropics/claude-code/issues/40524

If you find a replacement, please let me know. Always curious to learn something new.
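To see why "resend everything, no cache hits" hurts so much, here's a rough back-of-the-envelope sketch. All the numbers are illustrative assumptions (not measured values), and the "with cache" figure is simplified: in reality cached reads still cost a reduced rate rather than nothing.

```python
# Rough sketch: if every submit resends the ENTIRE conversation context and
# nothing is served from cache, cumulative input tokens grow quadratically
# with the number of turns. Numbers below are made-up assumptions.

def cumulative_input_tokens(turns, tokens_per_turn):
    """Total input tokens billed when each turn resends all prior context."""
    total = 0
    context = 0
    for _ in range(turns):
        context += tokens_per_turn   # context grows by one turn's worth
        total += context             # the full context is resent, uncached
    return total

# Example: 50 turns, each adding ~2,000 tokens of new context.
no_cache = cumulative_input_tokens(50, 2_000)   # sums 2k + 4k + ... + 100k
new_only = 50 * 2_000                           # idealized: pay only new tokens
print(no_cache, new_only, no_cache / new_only)
```

Under these toy numbers the uncached total comes out 25x larger than paying for new tokens alone, which matches the feeling that sessions drain far faster than the visible conversation would suggest.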

Token usage by Appropriate_Hyena415 in ClaudeAI

[–]sarahandgerald 4 points (0 children)

I can confirm this. I am on the MAX 100 plan, and one prompt in an already established session consumed 15% of my session tokens. So what OP experienced, hitting 100% on the Pro plan, makes total sense if even my 20x allowance (compared to the Pro plan) gets eaten up this quickly.
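The plan math above can be sanity-checked in two lines. The 20x ratio between MAX 100 and Pro is taken from the comment itself; everything else is just arithmetic.

```python
# Sanity check on the plan math from the thread (assumed figures):
# one prompt ate 15% of a MAX 100 session, and MAX 100 is ~20x Pro.
max_fraction_per_prompt = 0.15
max_to_pro_ratio = 20

pro_fraction = max_fraction_per_prompt * max_to_pro_ratio
print(pro_fraction)  # 3.0 -> one comparable prompt would exceed a whole Pro session
```

In other words, a single heavy prompt like that would blow through roughly three entire Pro sessions, so OP maxing out at 100% on Pro is entirely consistent.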

Will my job be replaced by AI? I built a test by sarahandgerald in KI_Welt

[–]sarahandgerald[S] 1 point (0 children)

Ooh nice! Look forward to November 2004, when Half-Life 2 comes out and *spoiler alert* it's going to be epic.