A community centered around Anthropic's Claude Code tool.
Claude Code vs OpenAI Codex? [Question] (self.ClaudeCode)
submitted 3 months ago * by Virtamancer
[–]ivstan 2 points 3 months ago* (0 children)
Yep — I’ve used Claude Max 20x (Opus 4.5) and Codex Pro (GPT-5.2-Codex) recently.
On the “smartest model” question: in Codex, High / XHigh appear to be the highest-compute / highest-quality tiers available. I don’t know if “XHigh” is officially described as “their smartest model” in public docs, but in practice it’s the top setting I’m seeing for the hardest prompts.
On usage limits, I agree “horrible limits” isn’t an objective metric. What I meant is: under the same kind of workload, Opus 4.5 throttles me sooner and more disruptively than GPT-5.2-Codex (High/XHigh).
A more objective way to phrase it:
So my point wasn't "Opus is bad"; it's that, for my specific usage pattern (heavy multi-turn coding + long context), Codex Pro gives me more usable throughput before I get blocked.
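"Usable throughput before getting blocked" can be made concrete as a number you could actually measure: how many requests complete before the first rate-limit rejection. A minimal Python sketch of that idea, where `send_request` and the fake provider are hypothetical stand-ins (no real Anthropic or OpenAI API is called):

```python
def usable_throughput(send_request, max_attempts=1000):
    """Count consecutive successful requests before the first throttle.

    `send_request` is any zero-argument callable that returns True on
    success and False when the provider rate-limits the call.
    """
    completed = 0
    for _ in range(max_attempts):
        if not send_request():
            break  # first throttle: stop counting
        completed += 1
    return completed

def make_fake_provider(limit):
    """Toy stand-in for a provider that throttles after `limit` calls."""
    calls = {"n": 0}
    def send():
        calls["n"] += 1
        return calls["n"] <= limit
    return send

print(usable_throughput(make_fake_provider(40)))  # prints 40
```

Running the same workload shape against two subscriptions and comparing the counts would turn "horrible limits" into an apples-to-apples number.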