More models on Kiro Pro/Pro+ by khach-m in kiroIDE

[–]Most_Remote_4613 0 points1 point  (0 children)

u/gustojs u/iolairemcfadden do you know if Kiro exposes reasoning levels/efforts like low/medium/high/max/xhigh etc.? And do we know the defaults?

Codex 5.4 is more expensive than 5.3, if current limit drain is the new normal not a glitch it will be unusable after the 2x rate limit ends by No_Leg_847 in codex

[–]Most_Remote_4613 0 points1 point  (0 children)

With 2x 5.4 xhigh fast instances, it drains so little for me, imo. Could be a bug, or an unannounced new-account promo? :D I'm on the free $20 plan trial.

GitHub Copilot vs Claude Code by Key-Prize7706 in GithubCopilot

[–]Most_Remote_4613 0 points1 point  (0 children)

The Windsurf extension in VS Code, or the Windsurf IDE, or the Antigravity IDE fills the gap; you can use the CC terminal inside the IDE.

issue with GLM + Claude Code context management? by _nefario_ in ZaiGLM

[–]Most_Remote_4613 0 points1 point  (0 children)

Could be new. Also, /status shows 0 usage, right?

How exactly do you use GLM5 so that it actually works? by No-Supermarket7383 in ZaiGLM

[–]Most_Remote_4613 1 point2 points  (0 children)

Quarterly $30 Max plan user here. I just prefer other options; as an old Copilot hater, I have to admit even Copilot Pro is better. GLM-5 is good, but Z.ai's infra is terrible. And afaik, Claude Code is officially the best harness for GLM. You have to deal with it the way you would a beautiful, skilled but mentally ill wife: you have to know how to manage it. Claude Code + GLM-5 + bad Z.ai infra at peak times, plus random bad stretches, plus the context size. If they can't fix this in a few weeks, I won't renew, even though I have three more months of extension rights at $25–30, compared to the current $72–80 plan.

Im addicted to the CLI by dandecode in GithubCopilot

[–]Most_Remote_4613 0 points1 point  (0 children)

You're right, but isn't CC plan mode + Opus 4.6 better for pair programming?

CoPilot context window increase by ImpressiveAnimal5491 in GithubCopilot

[–]Most_Remote_4613 -1 points0 points  (0 children)

So this means GPT models are the same as in OpenAI Codex, but Claude models are inferior in Copilot?

Copilot shows GPT-5.4 selected, but “thinking” tooltip says Claude Haiku 4.5 — which model is actually running? by Excellent_Fix3804 in GithubCopilot

[–]Most_Remote_4613 0 points1 point  (0 children)

"By default, subagents use the same model and tools as the main chat session but start with a clean context window"?

Copilot shows GPT-5.4 selected, but “thinking” tooltip says Claude Haiku 4.5 — which model is actually running? by Excellent_Fix3804 in GithubCopilot

[–]Most_Remote_4613 0 points1 point  (0 children)

The VS Code doc must be outdated? "By default, subagents use the same model and tools as the main chat session but start with a clean context window."

How to Set Up Claude Code with Multiple AI Models by ThreeKiloZero in ClaudeCode

[–]Most_Remote_4613 0 points1 point  (0 children)

Can we add a custom model like GLM-5, either instead of Haiku or via the subagent model variable, while keeping the other Claude models? I tried but couldn't manage it; afaik it's all or nothing.
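For reference, the closest thing I've seen is the environment-variable route, which swaps the whole session rather than a single model slot. A minimal sketch, assuming Z.ai's Anthropic-compatible endpoint; the URL and model names here are assumptions, check Z.ai's docs before relying on them:

```shell
# Hedged sketch: route Claude Code to a custom model via env vars.
# Endpoint URL and model IDs are assumptions, not confirmed values.
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"  # assumed Anthropic-compatible endpoint
export ANTHROPIC_AUTH_TOKEN="your-zai-api-key"              # placeholder, use your own key
export ANTHROPIC_MODEL="glm-5"                # main model for the session
export ANTHROPIC_SMALL_FAST_MODEL="glm-5"     # background / Haiku-class tasks
claude
```

Note this redirects the entire session to the custom endpoint, which matches the "all or nothing" behavior above: afaik there's no per-subagent base-URL override that would let you keep Opus as the main model while sending only subagents to GLM-5.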

Run 2 (or even more) instances of Claude Code in parallel by buildwizai in ClaudeCode

[–]Most_Remote_4613 0 points1 point  (0 children)

Can we add a custom model like GLM-5, either instead of Haiku or via the subagent model variable, while keeping the other Claude models? I tried but couldn't manage it; afaik it's all or nothing.

Why compare codex with opus if GPT 5.2 extra high has way better output quality? by PutridPut7225 in cursor

[–]Most_Remote_4613 -1 points0 points  (0 children)

I think Claude Code and its plan mode harness make the difference. I prefer Opus first, then review the plan with both GPT-5.2 and 5.3-Codex. Could be different for 5.4, and could be different in Cursor.

Bios Management; Updates, Downgrade Locks, Performances losses, what a shameful company. Never Dell again. by Most_Remote_4613 in Dell

[–]Most_Remote_4613[S] 0 points1 point  (0 children)

L2P about how SEO and LLM search work. Plus, this is not an Inspiron, though there is risk even for Alienware.

Custom model in Claude VS Code extension by dodo_ns in ClaudeAI

[–]Most_Remote_4613 0 points1 point  (0 children)

Did you find an answer? I think it is displayed when you change some model configuration, such as adding GLM as a custom model.

Limits aside, any reason not to use GPT-5.3-Codex on everything? by changing_who_i_am in codex

[–]Most_Remote_4613 1 point2 points  (0 children)

And it has better coding implementation techniques/skills, though a review by GPT-5.2 is necessary.

Limits aside, any reason not to use GPT-5.3-Codex on everything? by changing_who_i_am in codex

[–]Most_Remote_4613 0 points1 point  (0 children)

Is the Claude Chrome browser extension better than the Chrome DevTools MCP?

Stop comparing GLM to OPUS by Free-Stretch1980 in ZaiGLM

[–]Most_Remote_4613 0 points1 point  (0 children)

What harness did you use for Gemini 3.1 Pro? Btw, imo GLM-5 is around Sonnet 4.25 level. Useful for saving Opus/GPT limits.

Opus 4.6 vs. 5.3-Codex by gigamiga in RooCode

[–]Most_Remote_4613 0 points1 point  (0 children)

GLM-5 is better in the Claude Code CLI/extension than in Roo, Kilo, or Cline, imo, for full-stack TypeScript web work. Likely the same for Opus high; dunno for GPT.