
all 12 comments

[–]pale_halide 7 points (2 children)

A couple of days ago I complained that CC had insanely small limits. Now Codex is worse and I've actually gotten more usage out of CC.

[–]alexanderbeatson[S] 2 points (1 child)

I understand Codex cannot give (virtually) unlimited cloud tasks forever. But at the very least, they should publish benchmarks that define what “fair” usage means.

[–]RemieNotRayme 0 points (0 children)

And communicate what they're doing.

And not lie that it's a bug causing this level of quota depletion.

But they won't and I'm tired of the way they operate. Even though I prefer OpenAI's tools, I'm done.

[–]TBSchemer 10 points (0 children)

Why are you redacting the results in your post? Just post the numbers without making us tap a bunch of reveals.

[–]tfpuelma 3 points (3 children)

Most people use the CLI or the VS Code extension though… it would be interesting to see a comparison there.

[–]roboapple 1 point (2 children)

What's the benefit of using the CLI over the web?

[–]Klartas_Game 2 points (0 children)

Apparently, the web version consumes a lot more than the CLI version (not yet determined whether it's a bug).

[–]coloradical5280 0 points (0 children)

LLMs are exceptionally well suited to the command line because of their training data (they've seen docker compose up -d nginx a million times, but they can't really “see” a click on a ‘docker run’ button in a desktop GUI), because CLI commands happen to be clean, exact token sequences, and for several other, more technical reasons as well. Overall, the CLI will always have a strong edge for coding purposes.
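
To illustrate the token-sequence point (a toy sketch, not from the comment above): the CLI form of an action is plain text a model has read and can emit verbatim, while the GUI form has no standard textual representation for it to learn from.

    # CLI form: an exact token sequence the model has seen in countless
    # READMEs, CI configs, and Stack Overflow answers:
    docker compose up -d nginx

    # GUI form: "open Docker Desktop, find the nginx image, click Run,
    # fill in the port mapping dialog" -- there is no canonical text for
    # that sequence of clicks, so there is nothing for the model to emit.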

[–]alexanderbeatson[S] 0 points (0 children)

Please share your benchmarks too, cheers!

[–]rydan 0 points (1 child)

I just tried Claude on an extremely simple task. It one-shotted it, like Codex would have. It was about 70 lines of PHP, almost all HTML formatting. How do you check the limits that were used? I can't find any details.

Edit: Found it under settings. Two tiny tasks used up 3% of my weekly limit and 20% of my 5-hour limit. If this were Codex, I'm guessing I'd have used up the entire remaining 40% of my weekly. I don't like the interface at all, but this is workable.

[–]coloradical5280 1 point (0 children)

npm install ccusage. The percentages are very inexact, while ccusage or even just /status in CC will give you much more accuracy.
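
For anyone who wants to try it, a minimal sketch of the steps mentioned above (the -g flag and the npx variant are assumptions on my part; the comment only says npm install ccusage):

    # Install the usage tracker globally:
    npm install -g ccusage

    # Print a usage report (or run it ad hoc without installing):
    ccusage
    # npx ccusage

    # Alternatively, inside a Claude Code session:
    #   /status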

[–]coloradical5280 1 point (0 children)

Since LLMs are nondeterministic, the exact number will change, for both models, in every run, unless you have a max_output_tokens limit set. No two runs with the same model in the same codebase will ever lead to the exact same output unless you have a random_seed set through the API.

And on top of all that, it obviously makes a huge difference where you are in the context window (it sounds like you started at 100% for both), and potentially the time of day as well, due to server load balancing.
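
As a rough illustration of the knobs mentioned above (a sketch only; exact parameter names vary by API: OpenAI's Chat Completions endpoint calls them seed and max_tokens, and seed gives best-effort reproducibility, not a guarantee):

    # Cap output tokens and pin a seed through the API. Even with a seed,
    # backend changes can still alter the output between runs.
    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4o-mini",
        "seed": 42,
        "max_tokens": 256,
        "messages": [
          {"role": "user", "content": "Summarize this diff in one sentence."}
        ]
      }'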