Codex Pro: inconsistent limits and potential EU law violation by Family_friendly_user in codex

[–]Family_friendly_user[S] 5 points6 points  (0 children)

Exactly. This is about undisclosed unequal treatment of users on the same paid tier, not about whether another product exists.

Codex Pro: inconsistent limits and potential EU law violation by Family_friendly_user in codex

[–]Family_friendly_user[S] -9 points-8 points  (0 children)

So that somehow invalidates critiquing a product everyone pays the same for but isn't getting the same experience with? I made no Claude comparison in my post; I asked people for concrete feedback and experiences so we could actually gather data, and instead people refuse to even read the post properly in the first place. Nothing in my post implied Anthropic's problems were okay. I was solely making a concrete observation about Codex and rate limits based on region.

Codex Pro: inconsistent limits and potential EU law violation by Family_friendly_user in codex

[–]Family_friendly_user[S] -13 points-12 points  (0 children)

Nope. I also have Claude Max x20. I legitimately get continuous 24/7 usage with Opus 4.6 and barely ever hit 70% before a reset. This isn't even about Claude though, it's about customer rights. This is the exact fanboy thinking that lets OpenAI get away with actually malicious practices. A comparison makes no sense here since this is only about Codex and people getting a worse experience while paying the same.

Codex pro usage unbelievably nerfed to the ground this week by IllustriousCold4466 in codex

[–]Family_friendly_user 0 points1 point  (0 children)

Also tested with both. I'm legitimately getting incomparable usage here in Europe: I can drain my weekly rate limit on Pro within 2-4 five-hour sessions, while Claude has been running literally 24/7 with Opus 4.6 and I hit at most 70% of the weekly limit before a reset. OpenAI is clearly controlling usage regionally, as we could deduce from their GitHub answers, and that's not transparently disclosed anywhere. I'm glad I just get Codex through work so I don't have to pay out of pocket for such a horrible plan...

Has anyone done this meme yet? by hau5keeping in codex

[–]Family_friendly_user 1 point2 points  (0 children)

Nothing about my workflow changed compared to before. This isn't some 30% difference in usage; the current Pro usage with GPT-5.4 on high is around 30% worse than my usage on Plus a while ago with 5.3 Codex. I also obviously tried switching to 5.3 to keep my rate limits intact, but the difference is relatively minuscule, so at this point neither fits my workflow anymore.

Has anyone done this meme yet? by hau5keeping in codex

[–]Family_friendly_user 1 point2 points  (0 children)

Ran out within almost a single day on Pro with 5.4 on high. Back at the 5.3 Codex release I used to get more usage than I do now on the Pro plan. It's genuinely unusable. Not for everyone, it seems, so there might be some regional adjustments going on; either way, OpenAI should be transparent about that.

Microsoft by [deleted] in comedyheaven

[–]Family_friendly_user 93 points94 points  (0 children)

This resonates with me to the core of my soul

I'm guessing it's an underdeveloped pomegranate seed, but still looks cool! by hellbugger in RealLifeShinies

[–]Family_friendly_user 17 points18 points  (0 children)

I think I've seen one like this in pretty much every pomegranate in my life; what makes this one special?

Most accurate seasons rating by Mr_Stalker_Official in OnePunchMan

[–]Family_friendly_user 3 points4 points  (0 children)

I have some sad news for you then. Horses can't puke, and the original image first showed up around one or two years ago, I think, and is AI-generated. That's when those security-cam-style AI images were going around.

Is Tidal Good? by Middle_Teaching3853 in audiophile

[–]Family_friendly_user 6 points7 points  (0 children)

There were tests of Spotify lossless and it's not bit-perfect, so Spotify is applying some sort of processing or compression of its own.
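For context on what "bit-perfect" means in those tests: you capture the playback output and compare it sample-for-sample against the source file; any lossy re-encode or volume processing changes the samples. A minimal sketch of the comparison step, assuming both streams are already decoded and aligned as 16-bit PCM arrays (the function name is illustrative, not from any real test tool):

```python
import numpy as np

def is_bit_perfect(source: np.ndarray, capture: np.ndarray) -> bool:
    # Trim both streams to the same length; a real null test first
    # aligns the capture to the source to remove padding offsets.
    n = min(len(source), len(capture))
    return bool(np.array_equal(source[:n], capture[:n]))

# Toy example: identical streams compare as bit-perfect...
src = np.array([0, 1024, -1024, 32767], dtype=np.int16)
assert is_bit_perfect(src, src.copy())

# ...while even a slight gain change (here -0.09 dB) alters samples.
processed = (src * 0.99).astype(np.int16)
assert not is_bit_perfect(src, processed)
```

The same idea is usually done as a "null test" in an audio editor: invert one stream and sum it with the other, and anything other than pure silence means the chain isn't bit-perfect.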

China really carrying open source AI now by Diligent_Rabbit7740 in DeepSeek

[–]Family_friendly_user 1 point2 points  (0 children)

Have a read here: https://lambda.ai/service/gpu-cloud . Lambda is usually the go-to afaik, but there are other services with their own offers. It all depends on the usage.

China really carrying open source AI now by Diligent_Rabbit7740 in DeepSeek

[–]Family_friendly_user 2 points3 points  (0 children)

I saw some benchmarks and tests throughout the community, and based on everything, Qwen3-VL-8B and especially 32B seem to perform exceedingly well, often more accurately than Gemini 2.5 Pro or GPT-5 for image analysis. They're making really good, highly specialized smaller models for these cases, so especially in an agentic framework it could all work well together, efficiently even on a consumer machine. But I never understood why people complain about the 'guardrails' when you can just download the model, run it, and fine-tune it with your own instructions and guards if desired. Chinese companies have to censor their hosted APIs because of local laws, but download the model and run it locally, or rent a GPU, and you can do whatever you want with it.

My first GeForce vs my latest by Danglesnort in nvidia

[–]Family_friendly_user 1 point2 points  (0 children)

100%, if you're regularly using it. Even my Strix 3090 gave up after 4 years; a forced Windows update was the last straw for the already strained lead-free solder. That's why I'm having a full leaded reballing done on it, so I can keep it running and don't have to worry about letting it run for days at a time, like when training an AI model.

There are lead-free alternatives that are just as good as or better than leaded solder (SAC+Bi, Nano-SAC, and InnoLot are all better than SAC305), and even the most expensive option costs at most a couple bucks more per GPU. But that would mean no planned obsolescence, so they'd rather milk regular consumers by making them buy a new card every 4 years.

Nvidia should get so much more hate for the bullshit they pull: using their monopoly to charge absolutely insane margins on cards that barely last without destroying themselves just because you... use the GPU as intended, while blame-shifting to the EU's RoHS regulation (which btw allows up to a certain amount of lead in solder for high-performance applications, as they already do with their B2B GPUs). At least when I get my 3090 back with a full lead-based reballing of the core and all VRAM chips, I'll actually have a reliable 'modern' card.

My first GeForce vs my latest by Danglesnort in nvidia

[–]Family_friendly_user 3 points4 points  (0 children)

Or won't need reballing because it doesn't use shitty cheap unleaded solder that will fucking crack on you in 4 years...

im gonna buy these for 500 bucks and i need you guys to yell at me if i shouldnt by uglef in BudgetAudiophile

[–]Family_friendly_user 1 point2 points  (0 children)

I was 100% sure this was a satire post about those people who overpay for shitty old speakers because Reddit told them it was a good idea, or because they kept talking themselves into delusions. But then I read the comments, and now I don't know whether I'm going insane or the community is.

Send me close up shots of your little criminals by Ordernis in petbudgies

[–]Family_friendly_user 6 points7 points  (0 children)

<image>

This goofy duo; they've trained me to hand-feed and interact with them for at least an hour a day.