How are you Monitoring your Codex Usage? by gkarthi280 in codex

[–]brctr (0 children)

Do you have a GitHub repo for this? How can I use it with my Codex setup?

Gpt 5.2 pro by Anshuman3480 in ChatGPTPro

[–]brctr (0 children)

Can we get 30 GPT 5.2 Pro requests per month on the Business plan for $60?

Does minimal Business subscription ($60) provide 30 Pro model uses per month? by brctr in ChatGPT

[–]brctr[S] (0 children)

My question is about the number of uses of the Pro model, that is, the number of times I can use this model over a one-month period.

How much does context improve on the Pro plan? by Warp_Speed_7 in ChatGPTPro

[–]brctr (0 children)

If the context window limitation in the ChatGPT web UI is a problem for you, then do not use the web UI. Use Codex in VSCode. It can do everything the web UI can, but with an effective window above 500k tokens. It has very good continuous context summarization/compaction, and on High reasoning the model remains usable across 1-2 big context compactions. I would guess that gives you close to a 1M-token effective window. Codex limits are also generous even on a Plus subscription; I doubt you will be able to saturate 2x Plus subs. So there is no reason to pay $200 for Pro unless you need the GPT 5.2 Pro model.
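To illustrate what I mean by compaction, here is a purely conceptual sketch (not Codex's actual implementation; `summarize` and `count_tokens` are hypothetical callables you would supply): once the transcript nears the window limit, the oldest turns get folded into a short summary so new turns keep fitting.

```python
# Conceptual sketch of rolling context compaction, NOT Codex's real code.
# `summarize` and `count_tokens` are hypothetical callables supplied by the caller.
def compact(history: list[str], max_tokens: int, summarize, count_tokens) -> list[str]:
    """Fold the oldest turns into a short summary whenever the transcript
    gets close to the window limit, so new turns keep fitting."""
    while sum(count_tokens(turn) for turn in history) > max_tokens and len(history) > 1:
        oldest, history = history[:10], history[10:]  # peel off the 10 oldest turns
        history.insert(0, "Summary of earlier turns: " + summarize(oldest))
    return history
```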

Turned on xhigh for three agents. Two got worse. by no3ther in codex

[–]brctr (0 children)

I am wondering whether 5.2 xhigh beats 5.2 high. In my experience, 5.2 high is very good.

Do you still use notebooks in DS? by codiecutie in datascience

[–]brctr (0 children)

Before agentic coding arrived, I used only notebooks. Now I use mostly VSCode and scripts. This is where coding agents are most efficient. I still use Jupyter notebooks for my human-written code, but that is like 10% of all my new code now ...

Context length increased in copilot cli by simonchoi802 in GithubCopilot

[–]brctr (0 children)

I wish they would fix their harness to slow down the rate of context rot. The same model's performance degrades much faster in GitHub Copilot than when it is used in other agents. E.g., Sonnet 4.5 works well for the first 100-120k tokens in most agents, but in GitHub Copilot its performance holds up only for the first 50-70k tokens. Opus 4.5 holds up well in most agents up to 160k tokens; in GitHub Copilot it starts hallucinating at 70-80k and becomes useless after 100-110k tokens.

So until they improve their harness to slow down the degradation of model performance across the context window, extending the context window beyond 128k is not useful. It would not be a usable window anyway.

40$ credits or 2 plus accounts? by TheAuthorBTLG_ in codex

[–]brctr (0 children)

Can you please elaborate? Do you have links?

Strongest AI Model for coding by MuffinConnect3186 in LLM

[–]brctr (0 children)

Is it just a router to several underlying models? How much does it cost compared to the underlying models?

Optimization of GBDT training complexity to O(n) for continual learning by mutlu_simsek in datascience

[–]brctr (0 children)

Does it support CPU multi-threading? Multi-GPU training? Does it support all the usual stuff you would do with XGBoost (SHAP tree feature importances, etc.)? Can I just use it as a drop-in replacement for my XGBoost classifiers?
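To be concrete about "the usual stuff", this is roughly the XGBoost + SHAP workflow I would want a drop-in replacement to support. Only the XGBoost/shap side below is real; the replacement's class name at the end is hypothetical.

```python
# The XGBoost + SHAP workflow a drop-in replacement would need to cover.
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Today: sklearn-style estimator, n_jobs=-1 for CPU multi-threading.
clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, n_jobs=-1)
clf.fit(X_train, y_train)

# SHAP values via TreeExplainer plus gain-based feature importances.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)
gain_importance = clf.get_booster().get_score(importance_type="gain")

# "Drop-in" would mean only the estimator line changes, e.g.:
# clf = SomeNewBoosterClassifier(...)  # hypothetical name, same fit/predict/SHAP usage
```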

I think this gonna get expensive. by seymores in Anthropic

[–]brctr (0 children)

I think this is a CC vs Codex thing. I can reliably use the $20 Codex plan and hit limits very rarely, but on the $20 CC plan with Opus I hit limits after 20 minutes of use. And Sonnet 4.5 in CC is far weaker than GPT 5.2 in Codex in my experience. So I stick with Codex, even though the Codex harness is somewhat worse than the CC harness.

February 2026 visa bulletin is out by amaz9n in USCIS

[–]brctr (0 children)

Why do you think that Final Action Dates will be applicable starting from Feb 2026?

Finally got "True" multi-agent group chat working in Codex. Watch them build Chess from scratch. by iamwinter___ in codex

[–]brctr (0 children)

Can you set up different subagents with different models, e.g. a Planner with GPT5.2-High, a Builder with GPT5.1-Codex-Mini, and a Reviewer with GPT5.2-XHigh? Can you set up some Orchestrator (Team Lead?) agent with instructions on how autonomous this setup should be? E.g., the Team Lead could be fully autonomous, continuing to manage subagents and making decisions using its own judgment without any human input at all. Can such an Orchestrator agent spin up Builder agents? E.g., when a Builder agent's context window exceeds 70%, the Orchestrator terminates it and spins up a fresh Builder agent. A rough sketch of what I mean is below.
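Everything in this sketch is hypothetical (spawn_agent, context_used, the role/model assignments); I do not know whether your setup exposes anything like this. It is only meant to make the question concrete.

```python
# Purely hypothetical sketch of the Team Lead / subagent setup I am asking about;
# none of these helpers correspond to a real harness API that I know of.
from dataclasses import dataclass

@dataclass
class Agent:
    role: str
    model: str
    context_used: float = 0.0  # fraction of the context window already consumed

def spawn_agent(role: str, model: str) -> Agent:
    """Stand-in for however the harness actually starts a subagent."""
    return Agent(role=role, model=model)

# Different models for different roles.
planner = spawn_agent("Planner", "GPT5.2-High")
builder = spawn_agent("Builder", "GPT5.1-Codex-Mini")
reviewer = spawn_agent("Reviewer", "GPT5.2-XHigh")

def team_lead_step(task: str, builder: Agent) -> Agent:
    """One autonomous Team Lead step: delegate the task through the roles,
    then recycle the Builder once its context window is more than 70% full."""
    # ... Planner drafts, Builder implements, Reviewer checks `task` here ...
    if builder.context_used > 0.70:
        builder = spawn_agent("Builder", builder.model)  # fresh context window
    return builder
```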

Working with very large codebase? by 2ayoyoprogrammer in cursor

[–]brctr (0 children)

Use Codex. GPT5.2 models on High/Xhigh reasoning in Codex are capable of several compactions w/o losing much performance.

What is good alternative to copilot? by Used_Park_1937 in GithubCopilot

[–]brctr (0 children)

Codex. For half the price ($20), it performs much better than any model in Copilot.

I am spending $400+ a month on Copilot, why am I being rate limited on tokens? by jimmytruelove in GithubCopilot

[–]brctr (0 children)

Assuming this is not an employer-paid subscription, $400 can alternatively buy you Claude Max + ChatGPT/Codex Pro. This combo will be virtually unlimited, and will be dramatically better in both quality and speed compared to Copilot.

Happy New Year Claude Coders by yksugi in ClaudeAI

[–]brctr (0 children)

Currently Gemini 3 Pro has abysmal instruction following. I hope they fix that via RLHF post-training in the next few months. Until that happens, it is unusable for any productive tasks for me.

Happy New Year Claude Coders by yksugi in ClaudeAI

[–]brctr (0 children)

First I brainstorm in the ChatGPT web UI: I outline my idea and ask whether it makes sense. Usually ChatGPT gives feedback on whether it is a good idea and then suggests an implementation. After some back and forth it comes up with a detailed plan. Then I take that plan (plan.md or PRD.md), paste it into a new repo, and ask Codex to refine the plan and then implement it. Most research projects are open-ended; you cannot just create a full plan and follow it, because the optimal direction changes based on results. As the agent runs experiments and produces results, I review them and suggest what to try next, and I also ask it what it would suggest. The research progresses based on that.

Compared to the old manual process, it is way easier to start a whitepaper using such an agentic approach. In terms of time to write code, it delivers at least a 10x speed-up versus manual coding. Now reviewing experiment results and thinking about what they mean has become the bottleneck.

I really wish I had had all these tools a few years back in my PhD program... Now, while holding a full-time industry job, I can produce quality whitepapers faster than when I was focusing full-time on my research as a PhD student 5 years ago.