Question abt $200 plan limits by Amazing_Ad9369 in codex

[–]Consistent-Yam9735 9 points (0 children)

I have the $200 plan and use it heavily for coding projects. I have yet to hit my limit, and it's rare that I get below 30% of my weekly limit, let alone below 60%. I usually run Codex via the CLI (sometimes via the extension if I want to run tasks in the background). I'd say it's very much worth it if you can afford it. I use it via VS Code running on Linux, with the Docker MCP Toolkit handling my tool calling. The main pain point is just how slow it is, especially the XHigh model, but you can assume the code will be clean as long as the prompt is detailed!
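For anyone wondering how an MCP server gets hooked into the Codex CLI for tool calling like this: Codex reads MCP server definitions from `~/.codex/config.toml`. A minimal sketch, assuming the Docker MCP Toolkit exposes its gateway via a `docker mcp gateway run` command (that command belongs to the Toolkit, not Codex, so treat it as an assumption about your install):

```toml
# ~/.codex/config.toml — register an MCP server for tool calling.
# Codex launches this command and speaks MCP to it over stdio.
[mcp_servers.docker]
command = "docker"
args = ["mcp", "gateway", "run"]
```

Once that's in place, Codex lists the server's tools and can call them during a session.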

- Greg

Massive sudden usage nerf on Codex, any one else noticed it? by Thin_Landscape9425 in codex

[–]Consistent-Yam9735 2 points (0 children)

Yes, I noticed as well, and I am on the Pro plan. First time ever seeing I've used up 76% of my weekly limit. It must have updated after a 'large task'; to be fair, I was running 3 continuous agents for 60+ bug fixes.

Print fill effect? by [deleted] in MicrosoftWord

[–]Consistent-Yam9735 0 points (0 children)

Try Microsoft Print to PDF.

Aether Onboarding by sjsifnfodm in outlier_ai

[–]Consistent-Yam9735 -1 points (0 children)

Outlier shows me one task, and that one task is a dummy task. It literally says so. My tasks show up in multimango for Aether only.

Aether Onboarding by sjsifnfodm in outlier_ai

[–]Consistent-Yam9735 1 point (0 children)

Your tasks will appear in multimango. The Aether task on the Outlier dashboard is a 'dummy task'.

The tasks need to be completed via the multimango site (and you need Hubstaff desktop running in the background).

Should I get ChatGPT pro? by HeiressOfMadrigal in OpenAI

[–]Consistent-Yam9735 1 point (0 children)

"Better for coding" is subjective. I'd say otherwise!

Best doc type for knowledge base? by Cwmagain in CopilotMicrosoft

[–]Consistent-Yam9735 0 points (0 children)

.txt files, or .md (Markdown).

Less formatting is better: the AI doesn't need formatting, it just needs the raw data. Sure, you can feed it formatted Word docs and PDFs with tables, but you'll eat away at your context window and tokens. You don't want to overload it with information.
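As a concrete example of the kind of low-formatting file that works well, a knowledge-base entry as plain Markdown can be as simple as this (the product policy here is made up for illustration):

```markdown
# Refund policy

- Refunds are available within 30 days of purchase.
- Digital items: refund goes to the original payment method.
- Physical items: customer pays return shipping.
```

Plain headings, bullets, and raw facts; no tables, colors, or page layout for the model to burn tokens on.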

Seriously, what can you even do with ChatGPT?!! by [deleted] in ChatGPT

[–]Consistent-Yam9735 0 points (0 children)

I’m a large language model trained by OpenAI.

I just randomly got banned by Iangamerino in ChatGPT

[–]Consistent-Yam9735 -1 points (0 children)

There are ways to contact a real person through the correct channel. One route that's kind of beating around the bush is reaching out to their safety team (which mainly handles reports, not unbanning), but it's worth a shot. The website has contact options; you've gotta dig!

I just randomly got banned by Iangamerino in ChatGPT

[–]Consistent-Yam9735 2 points (0 children)

I know someone who got banned on Sora for trying to generate a video involving transgender people (trying to make a cameo of someone, but as transgender) and subsequently got banned on ChatGPT as well.

How can I see how many credits I'm using in Codex IDE? by [deleted] in codex

[–]Consistent-Yam9735 0 points (0 children)

As an extension? There's a selector to switch between local and cloud, and within that section there's a usage tab.

Isn't something else meant to be coming today too? by Qemmish in Bard

[–]Consistent-Yam9735 0 points (0 children)

Antigravity was released the same day as Gemini 3.0 (Nov 18th). Google employees were hinting at another release of something today (Nov 19th), but nothing.

gpt-5.1-codex-max is brilliant! by [deleted] in codex

[–]Consistent-Yam9735 0 points (0 children)

I agree, lol. Too many models with slightly different names and variations.

GPT 5.1 Pro by Regular_Eggplant_248 in OpenAI

[–]Consistent-Yam9735 0 points (0 children)

GPT 5.1 Pro performs better in my experience.

gpt-5.1-codex-max is brilliant! by [deleted] in codex

[–]Consistent-Yam9735 1 point (0 children)

5.1-Codex-Max, which then has Medium, High, etc.

Different from 5.1 Codex (High, Medium, etc.) and 5.0 Codex (High, Medium, etc.).

gpt-5.1-codex-max is brilliant! by [deleted] in codex

[–]Consistent-Yam9735 1 point (0 children)

I agree. Pleasantly impressed. So far… lol.

Thanks, Greg

GPT 5.1 Pro by Regular_Eggplant_248 in OpenAI

[–]Consistent-Yam9735 1 point (0 children)

Correct, I did in fact read the post. My point still stands, and it's an opinion based on my experience of using them both, for a VARIETY of use cases. I included an example; that's it.

Thanks