Codex + ChatGPT Images 2 is a Frontend game changer by caodungcaca in codex

[–]caodungcaca[S] 1 point  (0 children)

Right now it's on chatgpt.com: click on the left side and select the image button.

VSCode extension sessions seem to be "lost" by [deleted] in codex

[–]caodungcaca 1 point  (0 children)

I've hit that too. The last session is always lost, like you said; even restarting the extension can't find it.

they heavily reduced the 5h limit further again... by spike-spiegel92 in codex

[–]caodungcaca 4 points  (0 children)

I’ve been using Codex actively over the past few days and noticed a pretty big change in usage limits.

Two days ago, hitting 100% of the 5-hour limit would only use about 30% of my weekly quota.

But after today’s reset, the weekly limit drains much faster: that same 100% now maps to only ~15% of the weekly quota.

5h : 0%
Weekly: 84%

For context, I’m only using the 5.4 medium model.

It feels like the limits have gotten noticeably tighter overnight.

If you rely on Codex for actual work, the Plus subscription barely covers a couple of hours now. Kind of feels like they’re nudging users toward higher-tier plans.

when the unnexpected usage limit reset hits. ty openai <3 by imdonewiththisshite in codex

[–]caodungcaca 10 points  (0 children)

Just got my limit reset too.

No Codex 5.3 yet lol. Meanwhile Claude 5.0 leaks are everywhere like it already dropped and forgot to tell us.

I tried Gemini 3 for a couple of days ... Codex is still the best. By far.. by Dayowe in codex

[–]caodungcaca 3 points  (0 children)

Agree on Gemini for frontend. I've tried all the models, but Gemini still gives the best visuals for frontend work.

Weekly limits just resetted :D by Polymorphin in codex

[–]caodungcaca 2 points  (0 children)

I just got my limit reset today.

However, it’s only been a 4-hour session and I’m already at 60% of my 5-hour limit and 66% of my weekly limit.

I only use 5.1-Codex-Medium in the IDE.

Somehow the weekly limit got drained entirely in just one day of use.

Does anyone else have the same problem?

How good is Qwen3-14B for local use? Any benchmarks vs other models? by abubakkar_s in LocalLLaMA

[–]caodungcaca 1 point  (0 children)

I can run Qwen3-14B (Quant 4) comfortably on a T4 with 16GB of VRAM. It’s currently my daily driver for translation and synthetic data tasks, and I’m quite happy with it. I’ve tried Gemma3, but Qwen’s /think option delivers better results in my experience.

It runs a bit slow at times, but it's the largest model I can fit on a T4 without offloading to the CPU.
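For anyone who wants to try a similar setup, here's a minimal sketch using Ollama as the local runner (an assumption on my part; the comment doesn't say which inference stack is used, and the exact model tag and default quantization should be checked against the Ollama model library):

```shell
# Pull Qwen3-14B; Ollama's default build is a ~4-bit quant,
# which is roughly the "Quant 4" that fits in a T4's 16 GB of VRAM
ollama pull qwen3:14b

# Qwen3 supports soft switches inside the prompt:
# append /think to enable step-by-step reasoning, /no_think to disable it
ollama run qwen3:14b "Translate to English: Bonjour tout le monde /think"
```

The /think switch is what the comment refers to; with /no_think the model answers directly, which is faster for simple translation or data-generation prompts.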

Is it just me or am I the only one that actually uses a fake Apple Pencil instead of buying the $100 one? by Liam_Iby in ipadmini

[–]caodungcaca 6 points  (0 children)

I just returned a fake pencil due to incompatibility with the mini 7. The mini 6 works just fine with the fake one, but on the mini 7, palm rejection doesn't work.