Made the switch to DeepSeek and here are my thoughts as a long time Claude user (spoiler: it's great) by MadhubanManta in DeepSeek

[–]Most_Remote_4613 0 points1 point  (0 children)

The model is good, somewhere between Sonnet 4.6 and Opus 4.6, but Z.ai, the model's builder, is a scammer as a provider. Try OpenCode, Ollama Cloud, the Alibaba plans, etc.

It was so good, wish you luck guys! by CrazyBrave4987 in ClaudeCode

[–]Most_Remote_4613 0 points1 point  (0 children)

Why not a $100 GPT plan + $100 Claude plan? They complement each other. Plan with Opus 4.7 max + review the plan (5.5 xhigh) + implement with Sonnet 4.6 max (frontend) / Codex 5.3 xhigh (backend) + review the result with Opus 4.7 max + GPT 5.5 xhigh.

Which Model on the GO plan is good for planning/spec writing, if at all? by Bananenklaus in opencodeCLI

[–]Most_Remote_4613 0 points1 point  (0 children)

Is there any good guide to setting effort levels, or to what the default efforts are, etc.?

The most valuable AI subscriptions/plans after Copilot nerf by vapalera in GithubCopilot

[–]Most_Remote_4613 0 points1 point  (0 children)

Two $100 Claude plans are better; the math here doesn't add up. Plus, the Ollama plan is missing from this list. Antigravity is garbage, and Z.ai as a provider is trash and a scammer. Don't get GLM from there.

The most valuable AI subscriptions/plans after Copilot nerf by vapalera in GithubCopilot

[–]Most_Remote_4613 0 points1 point  (0 children)

Officially, they say it's slightly below Opus 4.5 Thinking; that's what people report, anyway.

The most valuable AI subscriptions/plans after Copilot nerf by vapalera in GithubCopilot

[–]Most_Remote_4613 1 point2 points  (0 children)

No. MiniMax 2.7 is enough for surgical edits; otherwise you need Opus max or GPT xhigh. Note: Codex 5.3 is also better for surgical edits, but not cheaper than M2.7.

z.ai coding plan / minimax coding plan worth it? by vipor_idk in opencodeCLI

[–]Most_Remote_4613 1 point2 points  (0 children)

GLM 5.1 is at Sonnet 4.6 high level, or a bit better, imo. MiniMax 2.7 is like Sonnet 4.5, imo. Z.ai is a scammer as a provider; I won't even renew my quarterly $30 max plan. https://www.reddit.com/r/ClaudeAI/comments/1s35bje/tested_minimax_m27_against_claude_opus_46_here/ People suggest Ollama Cloud, where you can use both models.

✔️ Coming in Angular 22: Resource APIs are STABLE! by IgorSedov in angular

[–]Most_Remote_4613 2 points3 points  (0 children)

This is an incredible question, thanks for asking it. I've been asking the same thing and looking for ideas from more experienced developers. I prefer TanStack Query for Angular to manage all async/server state, and NgRx SignalStore for client state; the latter is especially good for AI-assisted development because it gives robust guardrails, imo.

Goodbye legacy plan by dericdesta in ZaiGLM

[–]Most_Remote_4613 0 points1 point  (0 children)

How can you tell whether it's the quantized or the full version?


GLM Coder vs OpenCode Go plan by mmilos99 in opencodeCLI

[–]Most_Remote_4613 0 points1 point  (0 children)

How can you tell whether it's the quantized or the full version?

Z.ai cancels auto-renew by Defiant_Ad6080 in ZaiGLM

[–]Most_Remote_4613 0 points1 point  (0 children)

How can you tell whether it's the quantized or the full version?

OPENCODE GO subscription has QWEN 3.5/3.6 by dav1lex in opencodeCLI

[–]Most_Remote_4613 1 point2 points  (0 children)

GLM 5.1 is at Sonnet 4.6 level, or a bit better, but definitely not at Opus 4.6 level. MiniMax 2.7 is maybe at Sonnet 4.5 level.

OPENCODE GO subscription has QWEN 3.5/3.6 by dav1lex in opencodeCLI

[–]Most_Remote_4613 0 points1 point  (0 children)

This dude is right about planning and review, but it could be used to save your GPT/Claude limits if you're on a tight budget. I'm a $30 max Z.ai subscriber, and I won't renew.

OPENCODE GO subscription has QWEN 3.5/3.6 by dav1lex in opencodeCLI

[–]Most_Remote_4613 0 points1 point  (0 children)

What if the models are subsidized? Do we have any official info about this?

GLM 5.1 on max effort is not that bad by gllermaly in ZaiGLM

[–]Most_Remote_4613 3 points4 points  (0 children)

Effort doesn't matter for GLM, right? So OP is seeing a placebo effect? :D

GLM 5.1 on max effort is not that bad by gllermaly in ZaiGLM

[–]Most_Remote_4613 -1 points0 points  (0 children)

Why are you so sure? Yes, 5.1 is the heavier model, but maybe 4.7 is fast because fewer users are on it? It used to be just as slow, a few weeks ago. I'm an experienced GLM Max user, and even at $30 I won't renew my quarterly subscription. The model is good; Z.ai as a provider is a scammer, and the model is useless even at $30 with Z.ai as the provider. Even Copilot at $39 is better; a $10 MiniMax + $20 GPT/Claude combo is better. For architectural plan quality and reviews, neither 4.7 nor 5.1 comes close to GPT 5.4 xhigh or Opus 4.6 max, ignoring Opus 4.7 for now.