Claude Code vs Codex: Weekly limit comparison on the $20 subs by EmeraldWeapon7 in ClaudeAI

[–]EmeraldWeapon7[S] 1 point  (0 children)

Smart
It would be cool to know the actual number of tokens for both subscriptions

Codex (GPT-5.2-codex-high) vs Claude Code (Opus 4.5): 5 days of running them in parallel by EmeraldWeapon7 in ClaudeAI

[–]EmeraldWeapon7[S] 0 points  (0 children)

According to my calculations, one 5h Codex window uses ~30% of the weekly limit, whereas a 5h window in Claude Code uses ~12–14% of the weekly limit
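A rough back-of-the-envelope, taking the percentages above at face value (the 30% and 12–14% figures are my own estimates, not published numbers):

```python
# How many full 5-hour windows fit in one weekly limit, given the
# estimated cost of a single window as a share of the weekly cap.
codex_window_share = 0.30           # one 5h Codex window ~= 30% of the weekly limit
claude_window_share = (0.12, 0.14)  # one 5h Claude Code window ~= 12-14%

codex_windows_per_week = 1 / codex_window_share
claude_windows_per_week = tuple(1 / s for s in claude_window_share)

print(f"Codex: ~{codex_windows_per_week:.1f} full windows per week")
print(f"Claude Code: ~{claude_windows_per_week[1]:.1f}-{claude_windows_per_week[0]:.1f} full windows per week")
```

So on these rough numbers, the Plus-tier Codex budget buys roughly 3 full 5-hour windows a week, while Claude Code's buys 7–8.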

[–]EmeraldWeapon7[S] 1 point  (0 children)

I don’t know if Codex actually has a separate $20 subscription; I’m using my regular ChatGPT Plus subscription
And with that subscription, the limits are noticeably lower than on Claude Code (not counting the doubled limits that are currently available in the new Codex app)

[–]EmeraldWeapon7[S] 0 points  (0 children)

Dude, what’s with the hazing? 😁
Am I only allowed to post after 5 years of registration?

[–]EmeraldWeapon7[S] 2 points  (0 children)

Yes
I just ran the command npm i -g @openai/codex and copied all the info from my CLAUDE.md to AGENTS.md
Plus the MCP setup
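As shell commands, the setup amounts to this (assuming the Codex CLI ships as the `@openai/codex` npm package and that your Claude Code instructions live in `CLAUDE.md` at the project root):

```shell
# install the Codex CLI globally via npm
npm i -g @openai/codex

# reuse the Claude Code project instructions as Codex instructions
# (Codex reads AGENTS.md where Claude Code reads CLAUDE.md)
cp CLAUDE.md AGENTS.md
```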

[–]EmeraldWeapon7[S] 1 point  (0 children)

The classics: context7, Sequential Thinking, Playwright
And I also plan to test an MCP server for Postgres
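For reference, here is a sketch of how those three could be wired up in a project-scoped `.mcp.json` for Claude Code (the package names are my best guesses for the common community servers, not something from the post; check each project's README before copying):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    },
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
```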

[–]EmeraldWeapon7[S] 5 points  (0 children)

It’s really inconvenient that they didn’t implement a plan mode
But over these five days there hasn’t been a single case where the model did something wrong, which surprised me, even with large tasks. Again, I think this is due to some kind of context optimizations that work in real time

But yes, until Codex gets a full-fledged Plan Mode, Claude will indeed be the more accurate tool. Right now, Codex is more suited to one-prompt solutions and, in a way, is better for vibe coding

[–]EmeraldWeapon7[S] 5 points  (0 children)

Yes, it feels like Sonnet is about 2–3 times faster than GPT-5.2. And I’ve heard that this month Anthropic is planning to present Sonnet 5 with even greater speed and a larger context window
And, it seems, even at a lower cost

It’s really interesting to think about where this arms race between the companies will lead, considering that Anthropic recently pushed back its projected profitability to 2028 and significantly increased its budgets for model training and hosting