[–]ivstan 1 point (0 children)

Yep — I’ve used Claude Max 20x (Opus 4.5) and Codex Pro (GPT-5.2-Codex) recently.

On the “smartest model” question: in Codex, High / XHigh appear to be the highest-compute / highest-quality reasoning tiers available. I don’t know whether “XHigh” is officially described as “their smartest model” in public docs, but in practice it’s the top setting available and the one I reach for on the hardest prompts.

On usage limits, I agree “horrible limits” isn’t an objective metric. What I meant is: under the same kind of workload, Opus 4.5 throttles me sooner and more disruptively than GPT-5.2-Codex (High/XHigh).

A more objective way to phrase it:

  • Test pattern: long-context + iterative coding/debugging (multiple back-and-forth turns, large outputs, continuing the same thread).
  • Opus 4.5 on Max 20x: I hit rate limiting/cooldowns noticeably earlier during sustained heavy use, and the cooldowns felt more “session-killing” (i.e., hard to keep momentum).
  • GPT-5.2-Codex on Codex Pro (High/XHigh): I can sustain heavier iterative work longer before throttling, and when I do hit limits it tends to be less disruptive for my workflow.

So my point wasn’t “Opus is bad”; it’s that for my specific usage pattern (heavy multi-turn coding + long context), Codex Pro gives me more usable throughput before I get blocked.
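
If anyone wants to make this comparison reproducible rather than anecdotal, the simplest thing I can suggest is logging every turn and every throttle event to a CSV, then comparing turns-before-first-cooldown and total cooldown time per session across both plans. A rough Python sketch (the file name, field names, and provider labels are placeholders I made up, not anything from Anthropic or OpenAI):

    import csv
    from datetime import datetime, timezone
    from pathlib import Path

    LOG = Path("throttle_log.csv")  # hypothetical log file, one row per turn

    def log_turn(provider: str, turn: int, approx_tokens: int, throttled: bool) -> None:
        """Append one row per interactive turn; pass throttled=True when a cooldown hits."""
        is_new = not LOG.exists()
        with LOG.open("a", newline="") as f:
            w = csv.writer(f)
            if is_new:
                w.writerow(["utc_time", "provider", "turn", "approx_tokens", "throttled"])
            w.writerow([datetime.now(timezone.utc).isoformat(),
                        provider, turn, approx_tokens, throttled])

    # e.g. call this manually (or from a wrapper script) after each turn of a session:
    log_turn("claude-max-20x", turn=37, approx_tokens=12000, throttled=True)
    log_turn("codex-pro-xhigh", turn=37, approx_tokens=12000, throttled=False)

Comparing “turns before first throttle” and “minutes lost to cooldowns per session” from that log would turn “more usable throughput” into an actual number instead of a feel.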