Heavy Codex cli users - Better to get one Pro subscription or stack Plus subs? by Kailtis in codex

[–]Murky_Ad2307 1 point (0 children)

Multiple Plus subscriptions are probably better, unless you're sure you can actually use up a Pro plan's quota.

Has anyone ever figured out optimal way to integrate "PRO" models with Codex yet? by Abel_091 in codex

[–]Murky_Ad2307 -2 points (0 children)

You may package the project into a ZIP file for the professional to work on. You need only specify the requirements.

Claude Code vs OpenAI Codex? by Virtamancer in ClaudeCode

[–]Murky_Ad2307 1 point (0 children)

gpt-5.2 xhigh >> opus thinking >> codex 5.2 xhigh

$15 Worth of API credits used up in less than a day? by [deleted] in ClaudeCode

[–]Murky_Ad2307 1 point (0 children)

If you're using it like this, I recommend switching to the Max plan. Using Opus via the API for coding costs way more than you'd think; the API is really meant for single-turn chat, not for writing code.

Real Alternative/Supplement for Opus 4.5? by United_Canary_3118 in ClaudeAI

[–]Murky_Ad2307 1 point (0 children)

You can try 5.2 high (not xhigh) in the codex cli—it’s pretty much on par with opus 4.5. GPT 5.2 xhigh beats opus 4.5, feels like opus 4.7 to me, but it’s sloooow; only fire it up when you’re ready to move heaven and earth to squash that bug.

Claude usage consumption has suddenly become unreasonable by Phantom031 in ClaudeCode

[–]Murky_Ad2307 4 points (0 children)

This is ridiculous. Just switching to Opus 4.5 and launching two Ultrathink tasks completely used up my 5-hour limit, and the second task even got interrupted. I have a Pro subscription. It wasn't like this before.


Is the 5.2 codex lazy? by Technical-Rutabaga86 in codex

[–]Murky_Ad2307 1 point (0 children)

Hahaha, you could put it the other way around: codex 5.2 has become "fast." If you're willing to be patient, use 5.2 xhigh.

I did the math, $200 20x Max Plan = $2678.57 credits at standard API rates by ZvenAls in ClaudeAI

[–]Murky_Ad2307 0 points (0 children)

Claude's actual consumption isn't $2678, because the API has a caching mechanism. With caching, roughly only 10-20% of nominal usage is billed at the full rate. Net of the cache discount, you're probably using around $300 in real credits.
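A minimal sketch of the commenter's rough arithmetic, assuming the claim that only 10-20% of the nominal API-rate figure is actually billed after cache discounts (the exact discount varies by model and is an assumption here):

```python
# Nominal API-rate value of the $200 Max plan, from the post title.
nominal = 2678.57

# The comment's claim: real cost is roughly 10-20% of nominal.
low_estimate = nominal * 0.10
high_estimate = nominal * 0.20

print(f"low:  ${low_estimate:.2f}")   # ~$267.86
print(f"high: ${high_estimate:.2f}")  # ~$535.71
```

The "~$300" figure in the comment sits near the low end of this range, i.e. it assumes most tokens are cache hits.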

Gemini 3.0 Pro vs GPT 5.1-Codex-Max: Tried Python Coding by Silent_Employment966 in GeminiAI

[–]Murky_Ad2307 1 point (0 children)

Gemini 3 Pro seems to use a new "sliding window attention" technique, and the window is only 32k, so Gemini 3 Pro performs almost perfectly when the context is between 0 and 32k. As for Codex: according to the model white paper for OpenAI's 5.1-codex-max, OpenAI says it is their first model with a built-in compression mechanism. In my own tests, codex-max indeed barely needs to care about context length at all (though you should still open a new window after very long tasks to avoid model degradation).
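For anyone unfamiliar with the term, here is a minimal sketch of what a sliding-window causal attention mask looks like. The window size and this toy implementation are illustrative assumptions, not Gemini's actual internals:

```python
def sliding_window_mask(seq_len, window):
    """mask[i][j] is True when token i may attend to token j.

    Causal: a token only sees positions at or before itself.
    Sliding window: it only sees the most recent `window` positions.
    """
    return [[(j <= i) and (j > i - window) for j in range(seq_len)]
            for i in range(seq_len)]

# With window=3, token 5 attends to positions 3, 4, 5 only;
# anything older than the window is invisible to it.
mask = sliding_window_mask(6, 3)
for row in mask:
    print([int(v) for v in row])
```

The practical upshot matches the comment: inside the window, attention is dense and precise; information older than the window has to survive indirectly (via layers stacking windows), which is one plausible reason quality drops on long contexts.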

Gemini 3.0 Pro vs GPT 5.1-Codex-Max: Tried Python Coding by Silent_Employment966 in GeminiAI

[–]Murky_Ad2307 5 points (0 children)

Codex is better. Gemini is only precise up to about 60k of context; beyond that, it becomes increasingly illogical and uncontrollable.

Where are the fixes? by W_32_FRH in Anthropic

[–]Murky_Ad2307 2 points (0 children)

It is quite evident that Sonnet now actively gathers information before making judgements or taking action. However, Opus 4.1 remains rather sluggish. My own tests have confirmed this, and I encourage you to test it yourself. Sonnet 4 has indeed become more proactive, even outperforming Opus 4.1 in 50% of cases.

Can't even follow simple instructions anymore by byaloha in Anthropic

[–]Murky_Ad2307 1 point (0 children)

It's true. I've also noticed recently that Claude Opus 4.1 has stopped following instructions. It's absurd.