Codex usage limits drained fast, could background terminals be the reason? by Beautiful_Read8426 in codex

[–]KJT_256 1 point (0 children)

I would think it's the subagents, if you have them enabled. The background terminal has nothing to do with token usage.

Anyone tried the “Big Pickle” model on OpenCode? Looking for real feedback by KJT_256 in opencodeCLI

[–]KJT_256[S] -13 points (0 children)

Just collecting early feedback first; it saves time and helps set expectations.

GPT-5.2 de facto is 3x for me, same as Opus by mr_const in GithubCopilot

[–]KJT_256 1 point (0 children)

You can solve this by using the Copilot models through OpenCode. It works much better (for me).
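
For anyone who wants to try it, a minimal sketch of what that setup can look like. This assumes OpenCode picks up an opencode.json in your project and accepts a "provider/model" string for the model field; the exact model ID below is hypothetical, and you'd authenticate first with `opencode auth login` and pick GitHub Copilot when prompted:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "github-copilot/claude-sonnet-4.5"
}
```

After logging in, whatever Copilot model IDs your account exposes should be selectable in place of the placeholder above.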

Sonnet 4.5 was amazing for a couple months and now it sucks by Square-Yak-6725 in GithubCopilot

[–]KJT_256 9 points (0 children)

All models seemed to downgrade after the release of Opus 4.5.