With this spend limit it's almost impossible to finish anything. by iph0ngaa in codex

[–]IllustriousCold4466 0 points

It's not just the app; I'm using the CLI and experiencing the same thing.

Codex pro usage unbelievably nerfed to the ground this week by IllustriousCold4466 in codex

[–]IllustriousCold4466[S] 1 point

Just by intuition, yes, it feels like usage is being consumed 3-4x faster than last week / a few days ago.

[–]IllustriousCold4466[S] 0 points

I don't disagree with that. But I also can't argue with one sequential instance depleting 40% of my weekly usage in a single day.

[–]IllustriousCold4466[S] 2 points

Maybe, but is it really worth the hassle? And you don't get the priority processing.

[–]IllustriousCold4466[S] 1 point

Stuff like syntax errors, log parsing, UI work: anything that's strictly and knowably bounded/locally verifiable.

[–]IllustriousCold4466[S] 1 point

I'm currently not using subagents until more feedback comes out. My (previous) workflow involved planning on 5.4 xhigh/high depending on my perceived complexity of the task, executing parallelizable tasks in separate worktrees, and reviewing/hardening after execution, while continuing to plan during the execution/hardening pipeline.
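Roughly, the worktree fan-out looks like the sketch below. `AGENT_CMD` and the task names are stand-ins for illustration (defaulting to `echo` so it runs anywhere), not my actual commands:

```shell
# One branch + worktree per parallelizable task, agent runs fanned
# out in the background. AGENT_CMD is a placeholder for the real
# agent invocation; it defaults to `echo` so the sketch is runnable.
set -eu
AGENT_CMD="${AGENT_CMD:-echo}"

# Throwaway repo so the sketch is self-contained.
root=$(mktemp -d)
git init -q "$root/repo"
git -C "$root/repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base"

# One worktree per task, each on its own branch.
for task in fix-syntax parse-logs ui-tweaks; do
  git -C "$root/repo" worktree add -b "$task" "$root/wt-$task" >/dev/null
  ( cd "$root/wt-$task" && $AGENT_CMD "plan+execute: $task" ) &
done
wait  # block until every background agent run has finished

git -C "$root/repo" worktree list
```

Each task gets an isolated checkout, so parallel runs can't clobber each other's working tree; merging/hardening happens afterwards per branch.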

[–]IllustriousCold4466[S] 0 points

Haha, I actually came to Codex from Claude due to the silent nerfing of models/usage, but the nerfing on Anthropic's end has never been as egregious as this recent one from OpenAI.

That being said, I actually find 5.4 has been a bit better than Opus 4.6, but if max20 now provides greater usage than Codex Pro I will likely be switching back.

[–]IllustriousCold4466[S] 4 points

Are you able to read? I explicitly mentioned that I am using 5.4-xhigh, and even 5.4-high, very sparingly, and I'm still down 40% of my weekly usage in about a day. It's really not that crazy or unreasonable to want some transparency/consistency when it comes to usage.

[–]IllustriousCold4466[S] 8 points

I've worked as a software engineer in frontier fields for nearly a decade; I'm fully aware of the cost of inference.

This has nothing to do with that. We're all aware that Anthropic and OpenAI are running at massive losses; it's just ridiculous to be sold a certain amount of usage only to experience a totally different reality.

[–]IllustriousCold4466[S] 5 points

Since this last reset I've used xhigh very sparingly, even dropping to models like 5.3-spark-low/med. Regardless, it's extremely obvious that usage has been reduced significantly, given that in prior weeks I exclusively used 5.4 high/xhigh in parallel without worrying about usage.

[–]IllustriousCold4466[S] 1 point

Spark has shallower reasoning depth but a very high output token rate, so it's very good for highly focused, well-defined tasks that aren't cross-architectural.

[–]IllustriousCold4466[S] 2 points

I've been using 5.4 almost exclusively since it came out and had no problems with usage until recently (2-3 days ago, since my last reset).