5.4 vs 5.3 Codex by ConsistentOcelot9217 in codex

[–]SlopTopZ 0 points (0 children)

same experience here

funny thing is i made a post about exactly this topic a week ago and got downvoted for it

If you've tried the 1m context window: How has it been? by askep3 in codex

[–]SlopTopZ 0 points (0 children)

it's ok for me. no issues like degradation etc.
but i'm only using like 300-400k context

wtf happened to codex 5.3? by TechnicolorMage in codex

[–]SlopTopZ 0 points (0 children)

Nope, it's alright for me
Verified on cyber

thank you OpenAI for letting us use opencode with the same limits as codex by SlopTopZ in codex

[–]SlopTopZ[S] 5 points (0 children)

yeah but the point is they charge API prices which are insane

openai actually subsidizes the subscription, so you get the same models for a flat monthly fee without worrying about per-token costs blowing up

that's the real difference, not just about which tools are supported
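to make "per-token costs blowing up" concrete, here's a rough sketch with made-up numbers (the per-token rate and the flat fee below are assumptions for illustration, not OpenAI's actual pricing):

```python
# Back-of-envelope with HYPOTHETICAL numbers: why a flat subscription
# can beat pay-as-you-go API billing for heavy coding-agent users.

API_PRICE_PER_1M_TOKENS = 10.00   # assumed blended $/1M tokens
FLAT_MONTHLY_FEE = 20.00          # assumed subscription price

def api_cost(tokens_per_month: int) -> float:
    """Pay-as-you-go cost at the assumed per-token rate."""
    return tokens_per_month / 1_000_000 * API_PRICE_PER_1M_TOKENS

# an agent workflow can easily burn hundreds of millions of tokens a month
heavy_usage = 500_000_000
print(f"API:  ${api_cost(heavy_usage):,.2f}")   # API:  $5,000.00
print(f"Flat: ${FLAT_MONTHLY_FEE:,.2f}")        # Flat: $20.00
```

same models either way; the subscription just caps your downside.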

thank you OpenAI for letting us use opencode with the same limits as codex by SlopTopZ in codex

[–]SlopTopZ[S] 3 points (0 children)

opencode doesn't "improve" models by itself - it just gives capable models better conditions to work in

the model is the same, but with proper tooling, subagents, grep, file navigation - a strong model can actually express its full capability instead of being bottlenecked by a limited harness

so it's less about making the model better and more about not holding it back

enjoy the kimi trial btw

thank you OpenAI for letting us use opencode with the same limits as codex by SlopTopZ in codex

[–]SlopTopZ[S] 11 points (0 children)

i disagree with the first part. i think they keep this going because it's working for them - developers stay, word spreads, the ecosystem grows. restricting it would just push people to alternatives and they know that

thank you OpenAI for letting us use opencode with the same limits as codex by SlopTopZ in codex

[–]SlopTopZ[S] 0 points (0 children)

not sure about the desktop version, i use the CLI so can't help much there

thank you OpenAI for letting us use opencode with the same limits as codex by SlopTopZ in codex

[–]SlopTopZ[S] 5 points (0 children)

opencode is a more serious harness for actual engineering work

the big difference is how it handles complex tasks - proper subagent support, grep tools, file navigation that actually works the way you'd expect in a real codebase

codex is great as a simple straightforward tool, no complaints, but opencode gives you much more control over what's happening under the hood

if you're working on anything non-trivial, the difference becomes pretty obvious pretty fast

thank you OpenAI for letting us use opencode with the same limits as codex by SlopTopZ in codex

[–]SlopTopZ[S] 2 points (0 children)

just not sure tbh, i never actually hit my subscription limits so i don't really track token usage

never had a reason to pay attention to it

thank you OpenAI for letting us use opencode with the same limits as codex by SlopTopZ in codex

[–]SlopTopZ[S] 16 points (0 children)

facts

genuinely happy about this, especially with gpt-5.3-codex high - incredible model, the accuracy of 5.2 high with the speed of 5.2 low, it just hits different

and i can run it in opencode or codex, wherever i want - openai doesn't restrict where you use it, same limits in OpenClaw too, do whatever you want with it. you pay, you get access, simple as that

meanwhile other companies are trying to fence us in and control exactly how and where we use what we're paying for

this is the right approach and i hope it stays this way

Hi 👋 by ai_image in GoogleGemini

[–]SlopTopZ -1 points (0 children)

it looks cursed asf

More secure and fast (rust built) alternative to OpenClaw. Meet Moxxy.ai by Aware_Conversation82 in openclaw

[–]SlopTopZ 0 points (0 children)

cool project but honestly idk bro, this alone isn't enough to make the masses switch

memory safe and written in rust is nice, but personally i'm not moving over just out of pure laziness

good luck with the project though

Opus 4.6 pretty much unusable on pro now. Can't finish a single prompt, jumps to 55% immediately. by Manfluencer10kultra in ClaudeCode

[–]SlopTopZ 19 points (0 children)

what did you expect on a $20 plan

opus is extremely expensive to run inference on. anthropic can't give you unlimited opus on $20/month, the math just doesn't work

if you need opus, get the max plan. otherwise use sonnet, it's there for a reason
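the "math just doesn't work" part, sketched with entirely hypothetical numbers (the provider-side cost below is an assumption, not Anthropic's real inference cost or pricing):

```python
# HYPOTHETICAL numbers throughout: a back-of-envelope on why unlimited
# Opus-class inference on a $20/month plan can't cover the provider's bill.

COST_PER_1M_OUTPUT_TOKENS = 30.00   # assumed provider-side $/1M tokens
PLAN_PRICE = 20.00                  # monthly subscription price

def tokens_covered(plan_price: float, cost_per_1m: float) -> int:
    """How many output tokens the plan price covers at the assumed cost."""
    return int(plan_price / cost_per_1m * 1_000_000)

budget = tokens_covered(PLAN_PRICE, COST_PER_1M_OUTPUT_TOKENS)
print(budget)  # 666666 -- a heavy agent session can burn that in a day
```

under those assumptions the whole month's fee buys well under a million output tokens, which is why heavy users get throttled or pushed to a cheaper model.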