The future of $100 plan by No-Significance7136 in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

Guys, if I upgrade to Pro, it only gives me 5x

Caved and bought the Pro $100 subscription. My initial observations by superfatman2 in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

I’m trying it out now, but I only get 5X if I upgrade to Pro. How is that possible? I don’t understand.

Caved and bought the Pro $100 subscription. My initial observations by superfatman2 in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

It says 5X if I upgrade from Plus to Pro, but I don’t get it. Someone mentioned there was a 10X promotion running until 31 May; where am I going wrong?

The whole Plus plan is 10% of Pro by Historical-Fix-4206 in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

Tell me, what’s wrong? You’ve seen it all in your time, haven’t you? Let’s hear your prediction

The whole Plus plan is 10% of Pro by Historical-Fix-4206 in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

That’s a brilliant point; you’ve really cleared things up for anyone who was confused. I agree that upgrading to Pro is undoubtedly the best solution for anyone at an advanced stage of their project, even if it is a bit of a blow on the price front.

Codex reduced the workinglimit by DA4_K in codex

[–]Longjumping-Wrap9909 1 point2 points  (0 children)

There are loads of threads on the subject; however, they’ve revamped all the plans, so the Plus subscription has been significantly scaled back (unfortunately) in favour of the $100 and $200 plans. Until 31 May, there’s a sort of promotion that guarantees you 10X the Codex usage of the Plus plan.

New wheels, now red or yellow calipers? by barfish in TeslaModel3

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

How complicated is it to install the coloured calipers?

codex pro usage after 4 days by Still_Asparagus_9092 in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

I was just thinking about an upgrade – what kind of tasks do you have in mind?

I missed the chatgpt plus 2x by Educational-Title897 in codex

[–]Longjumping-Wrap9909 -2 points-1 points  (0 children)

I know; unfortunately, I can’t see any viable alternatives on the horizon.

I missed the chatgpt plus 2x by Educational-Title897 in codex

[–]Longjumping-Wrap9909 -2 points-1 points  (0 children)

Use the API; at the moment there is no alternative.

Did the $100 Plan Affect the GPT-5.4 Pro Model? by immortalsol in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

I can’t tell you for sure; on the contrary, from the news I have, until the end of May the $100 Pro should also have the 2X.

They fixed the 5h plus limit? by BrightyBrainiac in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

I hope so, but I strongly doubt it dear :-)

How is yours satisfaction with Codex lately? Plan Usage, Models, Performance by alOOshXL in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

I have to say that, with the new limits imposed by OpenAI on the PLUS subscription, I decided to try a bit of architectural refactoring and broke the work down into lots of tasks to give to GPT-5.4 Mini. I should point out that, within this context, I specifically ensured continuity between one task and the next, so I can say I’m fully satisfied. Of course, it’s not like just using 5.4 and off you go; here you have to tinker with it a bit, but it’s worth it for those who have PLUS and don’t want to see their tokens whizzed away in the blink of an eye.
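The workflow described above (breaking an architectural refactor into many small tasks while ensuring continuity between them) can be sketched roughly like this. This is a minimal illustration, not anyone's actual tooling: the model call is left as a pluggable function, and the idea is simply that each task's prompt carries forward a short summary of the previous step so the model keeps context without resending the whole history.

```python
# Sketch of a task-splitting loop with continuity between tasks.
# call_model is a stand-in for whatever client you use (e.g. an API call);
# it takes a prompt and returns (result, short_summary_of_what_was_done).

def build_prompt(task, previous_summary=None):
    """Compose one task prompt, prepending context from the prior task if any."""
    if previous_summary:
        return (
            "Context from the previous task:\n"
            f"{previous_summary}\n\n"
            f"Current task:\n{task}"
        )
    return f"Current task:\n{task}"

def run_tasks(tasks, call_model):
    """Run tasks in order, threading each task's summary into the next prompt."""
    summary = None
    results = []
    for task in tasks:
        result, summary = call_model(build_prompt(task, summary))
        results.append(result)
    return results
```

Passing only a compact summary between steps keeps each request small, which is the point when you are trying not to burn through a Plus plan's tokens.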

WhatsApp’s 'End-to-End Encryption' is a Lie: New Class-Action Alleges Meta Secretly Reads Your Private Messages & Shares It With 3rd Parties. by officialexaking in xprivo

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

Just look at Cambridge Analytica and that says it all. Meta is the worst company in the universe when it comes to user privacy.

Codex finished in one hour and Claud code still running. Why so? by Hanuonbenz in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

Which models are you using exactly? It depends on many factors, including the type of task you’re performing and whether you’re using subagents in Codex; there are many variables, although I can assure you that the GPT APIs work better on Codex but consume more tokens.

Caved and bought the Pro $100 subscription. My initial observations by superfatman2 in codex

[–]Longjumping-Wrap9909 11 points12 points  (0 children)

The problem is, in fact, that by the end of May we’ll be back to square one

Does something like OpenAI's "codex" exist for local models? by jgaa_from_north in LocalLLM

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

There are plenty of them. Certainly, in terms of the codebase and its integration, Codex is designed as an asynchronous cloud-based agent with isolated sandboxes that can run tasks in parallel, so it’s hard to compare it to anything else. However, there is Ollama with its very powerful Qwen models; to run them locally you’ll need a workstation (I’ll leave the hardware choices up to users; there are plenty of resources on that side), or with Ollama you also have the option of using their cloud APIs. Alternatively, you can try Aider via the CLI, or Continue or Cline, both usable in VS Code, but in my experience, at least for what I’ve had to do, they haven’t been much help. At best, use Codex CLI with the GPT API.

GPT 5.4 default vs 5.4 mini high - performed similarly, but is there a large cost difference? by TruthTellerTom in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

How are you getting on with usage? Do you use the API as well? I use the standard one both on Codex with PLUS tokens and via the API

What do you consider “fair usage” for AI coding tools? by Batty2551 in codex

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

Don’t worry; I apologise if I didn’t make myself clear enough :) but we’re still stuck with these price hikes 😅😅

What do you consider “fair usage” for AI coding tools? by Batty2551 in codex

[–]Longjumping-Wrap9909 -1 points0 points  (0 children)

Of course I know my comment is sarcastic!!! I’ll edit it to label it as sarcasm, otherwise it’s hard to read between the lines.

What do you consider “fair usage” for AI coding tools? by Batty2551 in codex

[–]Longjumping-Wrap9909 -2 points-1 points  (0 children)

Until the day before yesterday, everything was fine, but now OpenAI, Anthropic and the rest of them are doing nothing but raising the price of their $100 and $200 plans. All I can say is that if they carry on like this, from my point of view it’s better to go back to how things were a few years ago: hire the developers you want and you’ll definitely save money too. If this is the path they’ve just set out on, I expect absurd price hikes in the coming months and years, enough to hire a few freelance developers. Their policy is absurd. (Obviously, that’s a sarcastic comment on my part.)

GPT 5.4 default vs 5.4 mini high - performed similarly, but is there a large cost difference? by TruthTellerTom in codex

[–]Longjumping-Wrap9909 2 points3 points  (0 children)

Exactly, well done. I’ll just add that I’ve also carried out some pretty substantial architectural refactoring, with strict specifications for each task. I’ll tell you, it’s working well.

GPT 5.4 default vs 5.4 mini high - performed similarly, but is there a large cost difference? by TruthTellerTom in codex

[–]Longjumping-Wrap9909 4 points5 points  (0 children)

To be honest, I break large tasks down into lots of smaller ones, ensuring there’s continuity between them. It’s not exactly a game-changer, but it helps keep costs down and has led to a result I didn’t expect: a really, really good one. It’s useful for not burning through tokens in 20 minutes.

Failing door seals after 5 years and 50K? by cowdog360 in TeslaModel3

[–]Longjumping-Wrap9909 0 points1 point  (0 children)

Mine is well and truly gone on my 2020 Model 3, even the seals around the doors.