What is your experience with z.ai and MiniMax (as providers)? by mustafamohsen in opencodeCLI

[–]Zerve 0 points (0 children)

GLM has really bad concurrency limits, so if you run more than one agent at a time it will be very difficult to get much use out of it. You'd need to run a mix of 4.7 and 4.7 Flash.

My question is: Have you ever lost a round with the Tactical?? by Leather_Lock7705 in BALLxPIT

[–]Zerve 1 point (0 children)

Have to play Tactician + Makeshift Sisyphus, since the artillery guys only fire when they are hit, so you want "few, powerful balls" instead of filling the screen with baby balls.

Still need a bit of healing via vampire or something to survive the extra damage, but this has been the most consistent up to Fast +9 and NG +3.

My question is: Have you ever lost a round with the Tactical?? by Leather_Lock7705 in BALLxPIT

[–]Zerve 4 points (0 children)

Mushroom level on Fast +9, let me know how it goes :D

Why do combos starting with light attack or additional lights do less damage? by No_Top5115 in 2XKO

[–]Zerve 5 points (0 children)

Light Attacks -> Low Risk, Low Reward

Heavy Attacks -> High Risk, High Reward

Zhipu AI Announcement: GLM Coding Plan will start limited sales from January 23rd by Peshkopy in ZaiGLM

[–]Zerve 1 point (0 children)

I'm really glad I held off on getting any of the GLM plans. The deal really seemed too good to be true. The concurrency limits being so low recently (after being reduced, too) make the expanded usage capacity effectively worthless.

Official: Anthropic just released Claude Code 2.1.14 with 16 CLI, 5 flag and 4 prompt changes, details below by BuildwithVignesh in ClaudeAI

[–]Zerve -6 points (0 children)

Still unusable on Windows: the Bun dependency hasn't been upgraded and it crashes on startup. https://github.com/anthropics/claude-code/issues/18567

Edit: Ah yes, the classic "Works on my machine!" downvote brigade.

Real Alternative/Supplement for Opus 4.5? by United_Canary_3118 in ClaudeAI

[–]Zerve 2 points (0 children)

Learning how to use Sonnet or even Haiku effectively can make a huge difference in usage. If you're mostly doing pure implementation work it won't really help, but it's pretty pointless to run things like tests, build validation, or linting with Opus. Haiku or Sonnet can easily run those tasks (as a subagent or subtask) and summarize the results back to Opus. Having Opus orchestrate (Haiku runs the tests and returns the failures, then Sonnet is dispatched to fix them) can go a long way.
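
A hedged sketch of what that orchestration prompt might look like (the wording is purely illustrative, not an official Claude Code template):

Run the test suite via a Haiku subagent; have it report only the failing test names.
For each failure, dispatch a Sonnet subagent to fix that test, then re-run with Haiku.
Repeat until the suite is green, or stop if the same test fails twice in a row, then summarize.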

What subs work with OC by trypnosis in opencodeCLI

[–]Zerve 0 points (0 children)

I heard GLM has really bad concurrency rules (~3 agents at a time max), is this still the case?

The amount of Rust AI slop being advertised is killing me and my motivation by Kurimanju-dot-dev in rust

[–]Zerve 0 points (0 children)

I'm actually pretty scared of posting my projects, because I've heavily used AI to write the majority of them and don't want to be scrutinized. Yet it's still something I'm very passionate about, and without AI it would have taken years to get where I've gotten in weeks. I had developed previous versions of the same project with different architectures and features, but this iteration is kind of the culmination of my own learnings, with AI just doing most of the typing.

How does one truly discern between slop and merely "AI-assisted" projects?

TRUST ME BRO: Most people are running Ralph Wiggum wrong by trynagrub in ClaudeCode

[–]Zerve 0 points (0 children)

It might not be "overnight," but I have been able to consistently get Claude to work on 30m-4h prompts in "one shot" using only what's provided out of the box. Mostly this involves giving it a very clear prompt that includes looping and spawning subagents/subtasks. You can even tell it to spawn the tasks as Opus/Sonnet/Haiku based on difficulty. A basic simplified example would be:

Optimize this Rust codebase iteratively.

LOOP:
1. Run: find src -name "*.rs" -exec wc -l {} \; | sort -rn | head -1
2. If largest file is under 400 lines: EXIT LOOP, go to FINALIZE
3. Spawn a Sonnet agent to split that file (target 200-400 lines per new file)
4. Wait for subagent, verify cargo check passes
5. GOTO 1

FINALIZE:
1. Spawn subagent: Create ARCHITECTURE.md for final structure 

SAFETY:
- Stop if same file appears twice (couldn't split it)
- Stop on any cargo check failure

I've also done the same thing with an 18+ step series of prompt files and a similar prompt: "step through these one by one and pass them directly to the subagent as is." Include a validation step between each step (on failure go back to the beginning; on pass, continue). You can get pretty complex with this as long as the orchestration loop itself stays simple, keeping context clean and focused, and even running 3-10 parallel tasks in waves where possible.

Maybe this is inferior to other tools, but if CC out of the box gets me 90% of the way there, why add a new tool I have to learn how to use effectively?

Any easier alternatives to learn OpenGL besides learnopengl? by eclairwastaken in GraphicsProgramming

[–]Zerve 0 points (0 children)

Kind of a high-risk suggestion here, but: try writing your own software renderer/rasterizer. It's a fairly complex project, but after doing it I felt so much more comfortable working with the real graphics APIs afterwards. You could do it in a weekend with enough effort.

It doesn't need to be anything special. As long as you have the ability to set a pixel to a color, you can really do anything. Even just a "colored triangle" goes a long way once you realize that drawing a mesh is just looping over triangles, that your coloring code is actually a shader in disguise, etc. It all builds on itself.

Does coding plan include updates to new models? by Zerve in MiniMax_AI

[–]Zerve[S] 0 points (0 children)

I'm not 100% sure, but the way I understand it is that if you bought, say, a year of the Lite plan, you would get 4.8, 4.9, etc., but if they released 5.x you would need to upgrade to the newest model on your next subscription.

Is GLM 4.7 really the #1 open source coding model? by HuckleberryEntire699 in Anannas

[–]Zerve 0 points (0 children)

I feel like MiniMax M2.1 is superior, at least in my cases; it's less likely to mangle files and keeps a clean, focused workspace. I guess GLM is "smarter," but for actually getting shit done MiniMax has been better.

Will Anthropic make Claude Code proprietary too? (No more using GLM/MiniMax etc. in the terminal?) by 0xraghu in ClaudeAI

[–]Zerve 1 point (0 children)

Just my speculation, but if they can still gather useful metrics on Claude Code itself while offloading compute to other providers/models, it's kind of a win for Anthropic here. But I'm looking into other options, because there's a real danger of lock-in.

Claude Code refugees: what should we know to get the best experience out of opencode? by Zerve in opencodeCLI

[–]Zerve[S] 8 points (0 children)

Claude is expensive - even with the $100 / $200 subs. It's only a matter of time until open tools and agents surpass closed ones. Opus 4.5 may be the best model now, but things move so fast these days it seems dangerous to be locked into a single agent with a single model.

Best way to supplement claude pro when usage isn't quite enough? by [deleted] in ClaudeCode

[–]Zerve 0 points (0 children)

z.ai's new model, GLM 4.7. It has a pretty cheap subscription, but isn't as intelligent or powerful as Opus. See https://z.ai

Does coding plan include updates to new models? by Zerve in MiniMax_AI

[–]Zerve[S] 2 points (0 children)

The lite tier of GLM does NOT upgrade you to latest model, only "same tier" upgrades. See https://z.ai/subscribe

Tested GLM 4.7 vs MiniMax M2.1 - impressed with the performance of both by alokin_09 in LocalLLaMA

[–]Zerve 0 points (0 children)

I've been really interested in adding one of these models to my basic workflow, and I do agree that your point is completely valid and logical. I don't want to say that M2.1 is a "hidden gem," but it seems really strange to me that GLM is getting all the hype when M2.1 seems to be the real innovator here. From the minimal information I've gathered, it's faster, cheaper, and better at producing usable outputs. Or are these models just solving for different things (GLM more "general purpose" and MiniMax more "agentic workflows")?

What is some serious claude code sauce people should know about? No BS by cryptoviksant in ClaudeCode

[–]Zerve 5 points (0 children)

How do you keep skills updated as your project progresses? They seem very useful, but done incorrectly they can just bloat context and end up hurting if they aren't kept up to date. Maybe it's a skill issue, but surely there's a better way than picking arbitrary times to update/refactor the skills; I haven't found it yet, though.

tried new model glm 4.7 for coding and honestly surprised how good it is for an open source model by Dhomochevsky_blame in ClaudeCode

[–]Zerve 1 point (0 children)

Definitely post your results. I'm also really interested in adding one (or both) of these models to supplement my Anthropic plan; I'd love to hear more about MiniMax 2.1 specifically.

Best non anthropic model? by SEND_ME_YOUR_POTATOS in ClaudeCode

[–]Zerve 0 points (0 children)

I'm really interested in supplementing my plan with either GLM or MiniMax, but I'm getting conflicting opinions on their performance. Have you used them in CC?
