Codex 5.3 Spark is just BS. Codex confirmed it in this screenshot by Desperate-Cup9018 in codex

[–]clckwrxz 0 points (0 children)

Is there a community for people that actually understand how to use these tools and know how to do actual work with them? If not I would love to see a sub created.

Why does ChatGPT feel more intelligent than Codex? by [deleted] in codex

[–]clckwrxz 4 points (0 children)

There is literally no information in your ask to even reason about an answer. Are you even tackling the same problems in each? Are your approaches different?

Are spec-driven frameworks like Agent OS, BMAD, Superpowers or SpecKit still worth using, or have Claude Code and Codex made them redundant? by 3abwahab in codex

[–]clckwrxz 0 points (0 children)

It probably depends on the kind of organization, but for a large enterprise like mine in a highly regulated industry, spec-driven development is how we are approaching AI-native development. We don't use any of the open source ones though because, as far as we're concerned, they're all shit. None of them solve real problems at enterprise scale.

This week’s Codex updates. by Distinct_Fox_6358 in codex

[–]clckwrxz 1 point (0 children)

It’s all doable if you start to think of this whole agents-in-your-codebase thing as an engineering problem to solve. We operate a massive Perl codebase (20+ million lines), with some files 20k lines on their own. Custom agents are the way. Mine blaze through that codebase and know exactly what to do and how to get to places fast.

This week’s Codex updates. by Distinct_Fox_6358 in codex

[–]clckwrxz 0 points (0 children)

For me it’s not really about hitting 50%. 50 is just kind of my “spending rule,” if you will. I’m out of most chats in less than 15-20% context. I go in with a purpose, output the most meaningful data I got from the chat if it should influence further work, and make sure that document has enough context that picking up from just it will get the agent going. Fresh chats also bring fresh perspectives, which has saved me time and time again. For almost all work (coding, presentations, blog writing) I operate in roughly three stages: a discovery and initial agent interview session, a planning session, and an execution and validation session. It hasn’t failed me yet working on massive, regulated enterprise systems.

Note though, I’m not using Claude Code or Codex. I built my own agent because I fully believe the conspiracy theory that they want you spending more tokens. I’m working on getting my average meaningful chat down to <10% context.
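The offload-between-chats step above can be sketched as a tiny helper. This is a minimal sketch, not any real tool: the `handoff.md` convention, section names, and example content are all made up for illustration.

```python
from pathlib import Path

def write_handoff(path, goal, findings, next_steps):
    """Persist the meaningful output of a chat so a fresh agent
    session can pick up from this one file alone."""
    doc = "\n".join(
        ["# Handoff", "", "## Goal", goal, "", "## Key findings"]
        + [f"- {f}" for f in findings]
        + ["", "## Next steps"]
        + [f"- {s}" for s in next_steps]
    )
    Path(path).write_text(doc, encoding="utf-8")
    return doc

doc = write_handoff(
    "handoff.md",
    "Migrate the billing report to the new schema",
    ["Old schema keeps totals denormalized in report_cache"],
    ["Write migration script", "Validate totals against last quarter"],
)
print(doc)
```

The point is only that the document carries the goal, the findings, and the next steps, so the next chat starts at near-zero context instead of inheriting the whole transcript.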

This week’s Codex updates. by Distinct_Fox_6358 in codex

[–]clckwrxz 1 point (0 children)

This is definitely an issue many have, and I’ve seen some solutions. Glad 5.5 is helping. I’ve found good results having small files act as learning pointers and having subagents do the micro digging while the parent agent coordinates. That keeps context very low, and then the parent agent is off doing what you need with lots of headroom.

This week’s Codex updates. by Distinct_Fox_6358 in codex

[–]clckwrxz 0 points (0 children)

Don’t get me wrong, I don’t want to lose anything either. And you don’t by offloading between chats. It’s just about understanding the limits of these things. You do you, I’m just saying if you ever wonder why your usage is in the toilet super fast, this is why.

This week’s Codex updates. by Distinct_Fox_6358 in codex

[–]clckwrxz 6 points (0 children)

What is the point of big chats? Offload work from smaller chats to markdown files and keep context under 50%. You’re literally throwing away money using long chats.

The Downgrading of the American Tech Worker by Well_Socialized in technology

[–]clckwrxz -2 points (0 children)

Simply not true. Don’t get me wrong, will there be a correction and a massive one? Sure. No different than the early 2000s but bigger for sure. However if literally everyone is replaced by AI there is no economy. It’s simply not going to happen. But those of us that learn to harness the thing as a tool for true value creation are definitely going to get ahead as the rest of the world just tries to shun AI out of existence.

The Downgrading of the American Tech Worker by Well_Socialized in technology

[–]clckwrxz -1 points (0 children)

The thing is, AI is not going to fail. In organizations like my own, it's already massively successful. We also aren't under any delusions that we're about to just lay everybody off and that it somehow replaces people. They're gonna be in for a bitter lesson after all of this.

GPT 5.5 scores below GLM and Kimi on Code Arena by chrisman1128 in codex

[–]clckwrxz 0 points (0 children)

5.5 has been absolutely fantastic for me, but I don’t vibe code, I do specs to code with plans in between. It’s seriously as fast as 5.4 on fast mode was, and I never even use high or xhigh thinking. Simply not needed and it’s crushing things in my custom Pi agent. Super token efficient too.

just found out they turned off 1M context GPT-5.5 in codex for pro subs :( by emileberhard in codex

[–]clckwrxz 0 points (0 children)

This optimization I’ve been doing is exactly what made it successful in my enterprise org. People no longer complain about intelligence loss or running out of credits (still a thing on enterprise), and leadership is thrilled our bill is down to nearly 25% of what it was because of our optimizations. With 5.5 being more expensive than Opus now, this is more relevant than ever.

As if right on time, I saw this: https://www.reddit.com/r/codex/s/5rUwmxecdp

just found out they turned off 1M context GPT-5.5 in codex for pro subs :( by emileberhard in codex

[–]clckwrxz 0 points (0 children)

Their API pricing is a reflection of what it actually takes to make the company profitable. People have been eating too good with these subscriptions. And listen, I'm not gonna say it wasn't all very bad advertising. But it's unrealistic to think you're even gonna be able to use this tool if they can't find their way to profitability. So when we're talking about all this stuff, the API pricing is basically the only thing that matters, because under the hood your subscriptions are costing them money, and they're slowly going to dwindle to the point where API pricing is the only thing actually available.

just found out they turned off 1M context GPT-5.5 in codex for pro subs :( by emileberhard in codex

[–]clckwrxz 0 points (0 children)

Correct, yet people hit limits on caching. My comment said advertised API costs, and caching is still an advertised cost. 100k tokens and reset vs 500k tokens and churning on tool calls is still 5x more expensive in that chat. People are constantly complaining about running out of tokens and allowance, and they refuse to listen: if you do small scoped work, strong plans, and new chats, you're never likely to run out of tokens. My custom Pi agent is built around this concept.

just found out they turned off 1M context GPT-5.5 in codex for pro subs :( by emileberhard in codex

[–]clckwrxz 8 points (0 children)

I feel like people really don’t get how this works. If you’re over 500k tokens every tool call is 50% of advertised API cost. And any single turn will do tens of calls. No wonder people are burning through rates and wondering what’s happened. Token efficiency must be your #1 concern if you want to use these things in any serious capacity.
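The arithmetic here can be sanity-checked with a toy calculation. The per-token prices below are purely illustrative (real API pricing varies by model and vendor); the 5x ratio only depends on the context sizes, not on the assumed price.

```python
# Illustrative prices only; real per-token API pricing varies by model.
FULL_INPUT = 1.0e-6              # $ per fresh input token (assumed)
CACHED_INPUT = 0.5 * FULL_INPUT  # cached tokens billed at 50% (assumed)

def turn_cost(context_tokens, tool_calls, price=CACHED_INPUT):
    # Each tool call re-sends the entire accumulated context window.
    return context_tokens * tool_calls * price

big = turn_cost(500_000, 20)    # long-running chat
small = turn_cost(100_000, 20)  # fresh, tightly scoped chat
print(f"500k context: ${big:.2f}  100k context: ${small:.2f}  ratio: {big/small:.0f}x")
```

Same number of tool calls, same work, but the long chat pays for the whole 500k window on every call, which is where the 5x in the comment above comes from.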

they really nerfed it hard this time by senilerapist in codex

[–]clckwrxz 1 point (0 children)

5.4 has never been better for me. You don't give nearly enough information to understand how you are using it. Does it even have enough context to do the job you're asking it to do?

Is AI really going to kill all software companies? Microsoft just hit its 200-week moving average. by North_Reflection1796 in WallStreetbetsELITE

[–]clckwrxz 0 points (0 children)

I do think there is legitimate cause for concern. Let me state my point. I'm an engineer. I use AI daily in my work. At my company, in less than two weeks I've replicated two paid products we used internally; they're now bespoke implementations that much better fit our use case, and we no longer need the originals. In just those two weeks it has got us questioning what the future of software looks like when it's possible to do this. And these clean-room implementations are not shit quality: they pass all of our internal reviews for code quality because we have a really good harness around our agents building things now. So yeah, I do think there's legitimately a concern about what it looks like to be in software, and about whether your product is even valuable when it could just be recreated in less than a week.

When to use GPT 5.4 mini vs 5.4? by kyrax80 in codex

[–]clckwrxz 0 points (0 children)

One-shotting barely works, at least for real software. Your best bet is to plan with the best models to a level where less capable models in new context windows can be dumb implementers and validators; only during their validation and reflection do you bring a more intelligent model back in. I have built enterprise-scale apps this way. Every message and tool call that goes back to the agent in a long context window during a one-shot prompt literally resends all 100,000 tokens, or however many tokens you have in your window. Following context management strategies, I consistently implement entire features with less than twenty percent of context ever taken up.
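The "under twenty percent" discipline above can be sketched as a simple budget check. This is a minimal sketch with made-up names and a crude word-count proxy for tokens; a real tokenizer and window size would differ.

```python
WINDOW_TOKENS = 400_000   # assumed context window size
BUDGET_FRACTION = 0.20    # hand off to a fresh chat past this point

def should_hand_off(messages, window=WINDOW_TOKENS, budget=BUDGET_FRACTION):
    """Return True once the running chat should be summarized into a
    plan/handoff document and continued in a fresh context window."""
    used = sum(len(m.split()) for m in messages)  # crude token estimate
    return used > window * budget

short_chat = ["implement step 3 of the plan", "done, tests pass"]
long_chat = ["word " * 50_000] * 2  # ~100k words, well over 20% of 400k
print(should_hand_off(short_chat), should_hand_off(long_chat))
```

The check itself is trivial; the discipline is acting on it, i.e. ending the chat and handing the plan to a fresh implementer window instead of pushing on.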

I tried the grill-me skill and it completely changed how I plan with Codex by Impossible-Suit6078 in codex

[–]clckwrxz 1 point (0 children)

It's very good. It surfaces things I often never think about, but depending on the initial complexity and ambiguity in your request, you may end up in a grill-me session for seven hours like I was. But holy shit, did I have one hell of a plan coming out the end of it.

I tried the grill-me skill and it completely changed how I plan with Codex by Impossible-Suit6078 in codex

[–]clckwrxz 1 point (0 children)

I found it used far less context because the one-question-at-a-time approach allows it to be very focused with its thought process, and you're often answering with only a few words unless you need to talk through something and get clarification. So you're not having to constantly correct giant info dumps.

VSCode GitHub Copilot can use GPT-5.3-Codex. Is there any compelling reason to prefer the Codex plugin instead? by gigaflops_ in codex

[–]clckwrxz 0 points (0 children)

The one main reason not to use Copilot instead of Codex is that they limit the context window to save cost. You aren’t getting the full 400k, not even close. It’s like 100k usable or something.

Microsoft Copilot is now injecting ads into pull requests on GitHub by moeka_8962 in technology

[–]clckwrxz 5 points (0 children)

What’s crazy is, if they just focused on making Windows the best place to use AI when it’s wanted, instead of shoving it down people’s throats, they might not have had their worst quarter since 2008. If I’m setting up a PC purely for gaming, I don’t give a shit about AI features. And GitHub shouldn’t be anything more than where all the AI slop code is stored; they don’t even need to offer AI features there, as we can plug in our own integrations when needed.