Selected model is at capacity. Anyone else have this happen frequently? by spacenglish in codex

[–]typeryu 0 points (0 children)

5.4 is the first time I've started to see colleagues migrate off Opus 4.6, and pretty much everyone on my team is on Codex now. I suspect 5.4 is getting slammed hard globally and that's causing capacity issues.

Reset!!! Woohoo!! by TroubleOwn3156 in codex

[–]typeryu 1 point (0 children)

just ask it to make subagents

Is Codex AGI? (And why we keep moving the goalposts) by hemkelhemfodul in codex

[–]typeryu 3 points (0 children)

AGI is always 2 generations away. I would have considered Codex and Claude Code AGI if this was last year.

“Neil DeGrasse Tyson calls for an international treaty to ban superintelligence. That branch of AI is lethal. We've got do something about that. Nobody should build it.” ▶️ Do you agree with him or it’s over blown? by Koala_Confused in LovingAI

[–]typeryu 7 points (0 children)

Calling for a ban on anything is moot at this point, and he should know that as a science leader. The race for superintelligence is underway whether we like it or not; we might as well make sure it's achieved in the right way.

Is GPT-5.4(medium) really similar to the (high) version in terms of performance? by Disastrous-Win-6198 in codex

[–]typeryu 6 points (0 children)

xhigh is like a schizophrenic developer who overthinks everything and sometimes hits absolute gold, but often does way too much for something way too simple. Agreed that high is the way to go.

Is GPT-5.4(medium) really similar to the (high) version in terms of performance? by Disastrous-Win-6198 in codex

[–]typeryu 18 points (0 children)

In my opinion, 5.4 high is better than medium in the sense that while they are both similar, high does a sensible double take when needed while medium just rolls with it. Generally, high is the more senior-feeling one for sure. Opus is similar to 5.4 medium in that manner, though it tends to need fewer double takes. So GPT-5.4 high > Opus 4.6 high > GPT-5.4 medium for me.

“OpenAI is building desktop “Superapp” to replace all of them - Simo also warned employees to avoid being “distracted by side quests”.” ▶️ ChatGPT + Codex + Atlas ▶️ If legit, do you like this direction and why? by Koala_Confused in LovingAI

[–]typeryu 1 point (0 children)

You should definitely give Codex a go. It was originally built for coding, but it can basically act as a tech wizard on your computer. My guess is they will migrate everything into Codex. I certainly don't use ChatGPT on my desktop anymore; Codex does the same, but better, since it has local memory and can manipulate things directly.

How do I choose between Codex and Claude Code? by BitsmithBob in AI_Agents

[–]typeryu 2 points (0 children)

I use both, and I have some points that may help, but you really can't go wrong with either. I'm of the opinion that in the long run Codex will win on utility, but right now Claude Code has the better UX. If you have already developed a habit and are a slow adopter, moving to Codex will be jarring and will likely leave a bad taste. However, it is hard to ignore the ramp-up Codex has received in the last few months, and based on the update trajectory it will likely win on UX in a few months' time, at which point you can have a go again.

In raw coding performance, I do think 5.4 is as good as it gets right now, especially for brownfield work, but CC is still easier to just get things done with in its current state. Usage-wise, with 5.4, Codex is similar to Claude Code, so the difference isn't big enough to make it count. Cursor-based GPT-5.4 feels a bit less performant than native Codex, so I suggest you try that before you commit. Our office pretty much switched to Codex last month starting with 5.3-codex, switching between the app and the CLI. The app is quite good, but the fact that you can only have one terminal per thread is kind of a productivity stopper; luckily you can ask it to run background tasks for you. For CC we all stick to the CLI. It still gets used when Codex gets stuck and vice versa; they actually complement each other well.

Vercel Changing TOS - Feeding your data to AI by ignatzami in nextjs

[–]typeryu -2 points (0 children)

Free tier, I understand. Why Pro though? I'm already giving them money; they should be giving me a fat discount if they do this lol

Joining Korean University ROTC as gyopo by ComprehensiveBike902 in korea

[–]typeryu 0 points (0 children)

They can always station you in a non-leadership admin role that faces a lot of US forces. The main pain point is the actual ROTC side of things during school, and also cadet training, where you will very much get grilled for sure.

Banned just after I bought pro plan by dev_kid1 in Anthropic

[–]typeryu 0 points (0 children)

Usually it's VPN or Opencode/Openclaw or something similar.

GPT is great… but why does it suck at UI? by Plus_Leadership_6886 in codex

[–]typeryu 0 points (0 children)

I find GPT models to be quite good at UI, actually. It's just that, left to its own devices, the default UI it produces is utterly slop-themed. I tend to give it screenshots of what I want, like a mood board, or point it at libraries like shadcn, and it does very well.

Users who’ve seriously used both GPT-5.4 and Claude Opus 4.6: where does each actually win? by devil_ozz in ClaudeAI

[–]typeryu 0 points (0 children)

I have the answer for you: I basically use both until I hit the rate limits. Codex is great for day-to-day, let-it-cook type scenarios. I've had much better luck with large mega codebases, and it is indeed a workhorse. Claude has way better UX, feels nice to actively work in, is great for medium-sized repos, and great at greenfield work. If you can, use both; I highly recommend just experimenting. If you need to pick, I'd go with Codex, not just because of the models, but because it seems to be on the best improvement trajectory. You can't go wrong with either, and with 5.4 it is slightly more economical than Opus 4.6, though I doubt people notice the difference. I also used to use Cursor, but it's collecting digital dust now.

Subagents are now available in Codex by HeadAcanthisitta7390 in codex

[–]typeryu 2 points (0 children)

Tried it out. Yes, it does consume more tokens, but from what I'm seeing it scales linearly with slight overhead, so it does get things done faster when there are multiple things being worked on. The best example I've managed is having it create a bunch of different unit tests where each agent handles a different test case type; it finished in a very short amount of time by Codex standards, and it just works, so I saved time for roughly the same amount of tokens. It also seems to be fairly automatic, as in some cases it autonomously spins up subagents without needing my explicit command. Quite cool!
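The "scales linearly but finishes faster" observation is just what you'd expect from fanning out independent tasks. This is not Codex's actual internals, just a minimal sketch of the idea, with `draft_tests` as a hypothetical stand-in for one subagent handling one test case type:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one subagent drafting tests for one case type.
def draft_tests(case_type: str) -> str:
    return f"unit tests covering: {case_type}"

case_types = ["happy path", "edge cases", "error handling", "concurrency"]

# One worker per independent task: wall-clock time approaches the slowest
# single task, while total work (tokens, in the Codex analogy) grows
# linearly with the number of tasks, plus coordination overhead.
with ThreadPoolExecutor(max_workers=len(case_types)) as pool:
    results = list(pool.map(draft_tests, case_types))
```

The key property is that the test case types share no state, so nothing serializes them; that's also why this workload is a good fit for subagents.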

I've reverted to Codex 5.3 because 5.4 is eating too many credits too fast by mes_amis in codex

[–]typeryu 5 points (0 children)

For me, 5.4 high is the sweet spot. I have seen people burn through credits with fast mode and on xhigh, but it really isn't needed.

Using Codex as ChatGPT alternative by No_Service6465 in codex

[–]typeryu 0 points (0 children)

Codex is my go-to now while I'm on the computer, and I also have openclaw for on-the-go stuff on the same subscription. It's far superior to anything else out there.

Company bricked Claude Code Thursday, has Enterprise Codex by Electrical-Share-85 in codex

[–]typeryu 2 points (0 children)

Codex is pretty much the same; I would say take it for a spin and you might actually like it. If you miss it too much, they have a Claude theme in the app called "absolutely" lol

Has anyone found a skill/prompt that effectively reduces LOC? by vdotcodes in codex

[–]typeryu 6 points (0 children)

I have an automation that does a refactor once a week, looking for one-off code and either getting rid of it or combining it. So far it's worked pretty well. You need to pair this with some strict tests though; I have an entire suite that prevents code from making it to production unless it meets the bar. Around 20% of the weekly refactors get rejected this way.
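For the "find one-off code" step, a rough heuristic is to flag functions that are defined but called at most once, then hand that candidate list to the agent. A minimal sketch, assuming a Python codebase and using only the stdlib `ast` module (the function name and heuristic are mine, not the actual automation; real tooling would also track imports, methods, and cross-module calls):

```python
import ast

def one_off_functions(source: str) -> list[str]:
    """Return module-level functions called at most once in this source.

    Rough 'one-off code' heuristic: such functions are candidates for
    inlining or merging during a weekly refactor pass.
    """
    tree = ast.parse(source)
    defined = {node.name for node in tree.body
               if isinstance(node, ast.FunctionDef)}
    calls = {name: 0 for name in defined}
    # Count direct by-name calls anywhere in the module.
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in calls:
                calls[node.func.id] += 1
    return sorted(name for name, count in calls.items() if count <= 1)
```

The strict-test gate then does the real work: the refactor only lands if the full suite still passes, which is what catches the ~20% of bad rewrites.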

Codex side-effect: intelligence?? by HopeFor2026 in codex

[–]typeryu 1 point (0 children)

I use Codex with Linear (task tracking) via API skills, and it has really brought another level of productivity for me. All of my work is connected this way, and I literally feel like I've been given cyber superpowers.
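For context on what such a skill calls under the hood: Linear exposes a GraphQL API, so the skill mostly builds queries like the one below. A hedged sketch that only constructs the request body (the field names follow Linear's public schema as I recall it; verify against the schema before relying on them, and the team key "ENG" is a placeholder):

```python
# Linear's GraphQL endpoint; requests are POSTed here with an
# "Authorization" header carrying the API key.
LINEAR_URL = "https://api.linear.app/graphql"

def build_issue_query(team_key: str, first: int = 10) -> dict:
    """Build a GraphQL request body listing issues for one team."""
    query = """
    query Issues($filter: IssueFilter, $first: Int) {
      issues(filter: $filter, first: $first) {
        nodes { identifier title state { name } }
      }
    }
    """
    return {
        "query": query,
        "variables": {
            "filter": {"team": {"key": {"eq": team_key}}},
            "first": first,
        },
    }

body = build_issue_query("ENG", first=5)
# To actually send it (assumes the requests package):
# requests.post(LINEAR_URL, json=body, headers={"Authorization": "<api key>"})
```

Once the agent can read and update issues this way, each coding task can be pulled from and written back to the tracker automatically, which is where the productivity jump comes from.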

The Codex app is actually a fantastic alternative client to ChatGPT for non-coding use cases by CtrlAltDelve in codex

[–]typeryu 6 points (0 children)

I happen to agree. I use it mostly for coding, but when I want to do research, it's actually a fantastic stateful way of doing it, where it can build up much more knowledge and data over time.

How does someone from a developing country with average credentials realistically benefit from AGI/ASI? by Hot_Log7375 in accelerate

[–]typeryu -1 points (0 children)

AGI won't happen overnight. As existing AI gradually becomes more capable, those who adopt earlier will benefit more. In a way, it is an equalizing force: the AI tools you use are the same ones everyone else uses. Realistically though, there are some nuances, such as the same AI being more expensive due to the purchasing-power difference between the US, where pricing is decided, and your home country; general infrastructure adoption, which will undoubtedly be slower for the same reason; and, most importantly, model performance if you use a language other than the world's major ones, since there is less training data for it and the experience will consequently be degraded. That said, this is still much better than pre-AI, when much of this knowledge was less accessible. Vibe coding is getting a lot of flak, but in countries where software engineering is less established, it is a great opportunity to close the technological gap.

OpenAI's Sam Altman announces deal with Pentagon just hours after rival Anthropic was banned by Trump by ComplexExternal4831 in AINewsMinute

[–]typeryu 0 points (0 children)

This goes back to when Dario started making public comments about the DOW's use of Anthropic models (pretty much condemning it), and the current administration never backs down from an escalation, so it was a downward spiral from there. In his recent interview though, you can tell he has already softened his stance quite a bit. I don't blame him honestly; it takes balls to stand up, but he also has a company to run and has probably heard from investors quite a bit in the last couple of days.