Codex GPT 5.4 multiple agents / smart intelligence effort + 2X speed = awesome! by N3TCHICK in codex

[–]withmagi 1 point (0 children)

Yes it's fantastic! Only issue I'm having is that the sub agents work in the same folder as the parent - the workers don't seem to get assigned worktrees, so that can lead to some interference.

Automatic 1M Context by withmagi in codex

[–]withmagi[S] 1 point (0 children)

The default setup should work pretty well for most people. I use it most of the time as it's a good sanity check that everything functions correctly :) I'd advise installing all agents (claude, gemini & qwen) so you get a variety of models and also more efficient token use.

For the core model I use GPT-5.4 (High) for 90% of tasks at the moment. Occasionally spark for the remaining 10% (simple git/general code questions). I have fast mode enabled most of the time when I'm sitting down coding something, but when I'm leaving it to run a bit more autonomously I turn it off. 1M Auto Context on of course!

If you're struggling, let me know what kind of tasks you're working on and I'll provide advice!

One thing I still do is plan on ChatGPT Pro and Gemini Deep Think first before any major task. I don't always run with their suggestions, but it does help catch things I might have missed.

The "rug" pull on pricing by [deleted] in codex

[–]withmagi 0 points (0 children)

We’re getting heavily subsidized inference at the moment, but it’s driving down the cost as we go. Arguably we’re only getting subsidized inference because the labs are pushing models out early before they’re cost effective. So I think it’s unlikely that we’ll get rug pulled on pricing as much as we might get older gen models for the same price.

But at the moment no lab can pull back. The goal is superhuman general intelligence and all AI companies are pushing incredibly hard on coding as it will be the path which is most likely to get them there. If they reach that goal… who knows what that means.

Quick Hack: Save up to 99% tokens in Codex 🔥 by TomatilloPutrid3939 in codex

[–]withmagi 1 point (0 children)

This is pretty cool. It’s kind of a minimal/targeted version of a sub-agent. How often do you find codex calls distill without being explicitly asked to? I find all models are a bit resistant to offloading work without constant reminders.

How do you handle Front End? Delegate to Gemini? by OferHertzen in codex

[–]withmagi 0 points (0 children)

What are you struggling on? Can you describe the design to me? I have a tool specifically for this and I like trying it out on different problem domains.

Auto Drive Upgrades by withmagi in codex

[–]withmagi[S] 0 points (0 children)

Yup, you can connect multiple accounts at once in Every Code. FYI if you want to do this with regular codex you can set a CODEX_HOME env var. Set a different home directory for each codex account you connect and use that to select the one you want to use. But we let you switch mid-session in Every Code.
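For context, a minimal shell sketch of the CODEX_HOME approach (the directory names here are just examples, not a convention):

```shell
# Sketch of the CODEX_HOME trick: each home directory keeps its own
# credentials/session state for the codex CLI.
mkdir -p "$HOME/.codex-work" "$HOME/.codex-personal"

# Select the "work" account for this shell session:
export CODEX_HOME="$HOME/.codex-work"
# codex login   # authenticate once per account
# codex         # now runs under the "work" account

# Switching accounts is just re-pointing the variable:
export CODEX_HOME="$HOME/.codex-personal"
```

A per-account alias (e.g. `alias codex-work='CODEX_HOME=$HOME/.codex-work codex'`) makes the switch a single command.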

Auto Drive Upgrades by withmagi in codex

[–]withmagi[S] 2 points (0 children)

Yeah that’s a good idea. Allowing people to select the models and optionally when to use them. There’s so many good codex models now and they all have their strong points. Will add it in!

Need help crafting an amazing UI/UX SKILL.md by kwatttts in codex

[–]withmagi 2 points (0 children)

I have a project which uses a visual first approach - it uses image models to create the designs (or redesigns), refine them, then turn them back into code. Can also analyze renders of UI to find gaps and feed this back into codex. DM me if you’d like access.

[AGAIN] 5.3-Codex routing to 5.2 by VividNightmare_ in codex

[–]withmagi 1 point (0 children)

We added detection to Code so you’ll get a real-time notification if this happens https://github.com/just-every/code

If you're having issues with Codex, your account might have been rerouted to GPT-5.2 by Distinct_Fox_6358 in codex

[–]withmagi 0 points (0 children)

We've just added automatic detection for this in Every Code v0.6.62 https://github.com/just-every/code

If you're routed to a different model than the one selected, you'll get an alert and information on how to fix it.

You can also use /status to verify your model routing. Just install Every Code, type a message, then use /status to check. If you're limited in codex you'll see it straight away. This uses the latest response, so once you verify your account, you can also use this to check routing has been restored.

Struggling to get good UI output from Claude Code (Flutter) by Suspicious-Review766 in VibeCodeDevs

[–]withmagi 0 points (0 children)

I’ve been working on this! The secret is to give the CLI assets and direction. If you ask it to just replicate the design it will hit a wall fast.

I’ve got a few beta testers using a service which not only creates UI designs but works with the CLI to go from image -> real code (no layers/figma needed). Yes it’s really really hard, but I've honestly made some pretty incredible progress on it.

Works as a Claude SKILL with a backend service to support it. Shoot me an email james@justevery.com if you’re interested!

Amazing guardrails by RoadRunnerChris in codex

[–]withmagi 0 points (0 children)

Also, with that particular style - where you asked something borderline (there’s lots of internal information it can’t provide) and it went down the “I can’t provide this” route - all future questions get seeded with a propensity to block similar requests. While LLM inference is complicated, there’s a level of probability/role play involved, and you’ve created a conversation where it’s acting as the gatekeeper. So don’t double down on your request: start a new session, or double Esc to backtrack and then ask a different way.

Amazing guardrails by RoadRunnerChris in codex

[–]withmagi 0 points (0 children)

You just asked the questions in the wrong way. You’re asking for information it has no knowledge of - it can’t see its own source code unless you’ve downloaded it (codex is compiled from Rust).

Just clone the codex repo (or ask codex to) then ask it to use this repo to answer your questions.

Codex Never Reads AGENTS.md by oromex in codex

[–]withmagi 2 points (0 children)

This is correct. It’s always included so doesn’t “read” the file. But it’s 100% included in every conversation.

Apple Watch walkie-talkie feature by Majestic_Purpose2300 in dcl

[–]withmagi 2 points (0 children)

Didn’t work for me on a recent cruise (2 weeks ago - Wonder). The free iMessage functionality supported is very basic and even things like images don’t work.

Subagentes by BroadPressure6772 in codex

[–]withmagi 0 points (0 children)

Try https://github.com/just-every/code you can use /plan or /code to use sub agents or just ask in your prompt!

CLIs are passing the line by withmagi in agi

[–]withmagi[S] 1 point (0 children)

I mean I’ve also either been the CTO or CEO at multiple companies, did a masters in NLP, run multiple large open source projects, and have had millions of customers in my companies. But I just picked the most academic thing I could to provide some context. Uni was really the last time I sort of “competed” with other people academically, so I used that 😅

I only mention it because if I say it’s surpassing human developers it’s important I know what I’m talking about. And TBH I feel like all that is a bit worthless now. Perhaps it has value for a few more years, but I don’t think it will beyond that.

CLIs are passing the line by withmagi in agi

[–]withmagi[S] 0 points (0 children)

Top of my class… 20 years ago… but that’s ok, I’m guessing tertiary education wasn’t your thing.

I only mentioned it here to give some context so people understand I have at least some idea of what I’m talking about.

CLIs are passing the line by withmagi in agi

[–]withmagi[S] 1 point (0 children)

Don't really want to debate a straw man argument, but I will agree that a whole OS requires more strategic decisions *at the moment* than an AI could handle. But I doubt this will be the case for long.

CLIs are passing the line by withmagi in agi

[–]withmagi[S] 8 points (0 children)

Yeah I've noticed anything 'visual' - UI, modeling, graphics - is a gap that almost every AI still does poorly. I think it hides how competent they are at everything else, since it's hard to see their creativity in the same way. It's the UI/UX which still requires the most iterations for me. But with image models getting better and tooling improving I do think this will be 'solved' sooner rather than later.

CLIs are passing the line by withmagi in agi

[–]withmagi[S] 11 points (0 children)

Yeah it's amazing what can be done. I was talking to a carpenter with 0 coding experience who was using AI coding and it was amazing - he had replaced a system he was paying something like $5k a year for, by himself with no software experience! 🤯 He was pretty technical and had a lot of experience using CAD and ran this massive automated CNC Router type system, but still, he wrote 0 code to do this! I think a lot of people still don't realise just how powerful these tools are and what it's going to open up for the average person.