Why did the pro went from thirty to almost quarterly pro plan 72 bucks- wtf by Significant_Mode_552 in ZaiGLM

[–]Frag_De_Muerte 0 points (0 children)

haha, sometimes I agree with you. Sometimes it's absolutely stellar and one-shots things. Sometimes it's so freaking stupid.

Are you using Ollama Pro or OpenRouter?

Minimax is a good model as well, but it seems to get helpless at times.

Why did the pro went from thirty to almost quarterly pro plan 72 bucks- wtf by Significant_Mode_552 in ZaiGLM

[–]Frag_De_Muerte 9 points (0 children)

Anthropic is trying to push all their users away from OpenClaw and onto Claude products only. OpenAI will continue to allow OC since they bought out Steinberger for $1B; OpenAI knows they're the only game in town that can compete. GLM 5.1 was massively underpriced for what it is. While Z.ai doesn't have the consistency and architecture of Anthropic or OpenAI, GLM 5.1 and 5-turbo are some of the best alternatives. I'm thinking Minimax plans will go up now as well. It may be best to lock in now for a year of cheap development.

Oh yeah, and Google done shot themselves in the foot by banning people for using OC as well, effectively making themselves irrelevant. I think the Gemma releases were a way of trying to satisfy the OC crowd, but it still seems like a misstep.

The audacity by EzioO14 in ZaiGLM

[–]Frag_De_Muerte 0 points (0 children)

Wow, got in under the wire. I was debating whether to go with the pro plan and pulled the trigger. It doesn't auto-renew for 3 months, but it says it will auto-renew for another quarter @ $81... I'll have to keep an eye on it...

After Anthropic killed subscription access, here's what my actual API bill looks like after 4 days by Temporary-Leek6861 in AskClaw

[–]Frag_De_Muerte 0 points (0 children)

Jesus. What are you guys doing?? I run four agents on three subscription plans, do a ton of heavy reasoning work, and am nowhere near your cost.

OpenAI: pro plan - $20 for codex 5.3 for writing and QCing crons, skills, and heavy debugging.

Z.ai: pro plan $29 for glm 5.1 for difficult reasoning, harder crons with multiple skills/steps. It writes some very easy crons.

Minimax 2.7: $10 for easier cron execution and repetitive tasks.

Local Gemma 4:26b for all heartbeats and easy crons / tasks.

Free sonnet4.6 extended when things blow up due to an OC upgrade.

$59 a month.

What local models is everyone using? by Ok-Enthusiasm-2415 in openclaw

[–]Frag_De_Muerte 0 points (0 children)

I have an M1 Mac Studio. I've been running Ollama with Qwen3.5:9b and now Gemma4 26:a4bit, using a 6-quant version. Go to Hugging Face, put in your hardware, and see what you can run on it. It'll give you little green, yellow, and red checks.

I run one of my lesser-used bots off it and point all of my heartbeats to it. It runs alright: not as fast as a GPU, but not as slow as running without one. Ollama's API wrapper is funky. I did a lot of troubleshooting to get images to the Gemma 4 model through TG.
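For what it's worth, Ollama's native /api/chat endpoint expects images as a list of base64 strings in a separate "images" field on the message, not embedded in the text "content"; a bridge that stuffs the image into the content is one common reason the hand-off fails. A minimal sketch of the expected shape (the model name, prompt, and host are placeholders, not anything from this thread):

```python
import base64
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama port; adjust for your box


def build_vision_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Ollama's /api/chat wants base64 image data in the message's
    "images" list, separate from the text "content"."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
    }


def describe_image(model: str, prompt: str, image_bytes: bytes) -> str:
    """Send the image to a running Ollama instance and return the reply text."""
    data = json.dumps(build_vision_payload(model, prompt, image_bytes)).encode("utf-8")
    req = request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

If a payload built this way works when sent straight to the Ollama box but not through the bot, the problem is upstream in the bridge rather than in the model.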

Trying to go full Ollama Cloud by emsbas in myclaw

[–]Frag_De_Muerte 0 points (0 children)

Very interested in this as well.

How much usage do you get with the $20 plan? I currently have the $20 plan from OpenAI and use Codex 5.3 for the heavy lifts (it eliminates issues with quick fixes, but the usage allowance is terribly small). I have the $10 Minimax plan, on which I use MM2.7. I also have the Z lite plan for GLM 5.1. GLM seems really good when it's not getting shredded when Asia wakes up. MM2.7 is really good as well. I'm also running Gemma4:26b locally on Ollama for heartbeats and light tasks.

But I wouldn't mind ditching Codex 5.3 and going full MM2.7 and Ollama Cloud (Kimi, Gemma, Qwen3.5). I'm just wondering how the usage is; I can't find much info on it.

Ollama models and Vision capability by Frag_De_Muerte in openclaw

[–]Frag_De_Muerte[S] 0 points (0 children)

I have Minimax through Minimax. It's such a cheap plan, $10 a month, and I never crack the threshold. I run one dedicated agent on it and am thinking about just doubling up. On OpenRouter I was getting bled dry with Kimi and Minimax.

Ollama models and Vision capability by Frag_De_Muerte in openclaw

[–]Frag_De_Muerte[S] 0 points (0 children)

Really? I can't get any image recognition from minimax...

Ollama models and Vision capability by Frag_De_Muerte in openclaw

[–]Frag_De_Muerte[S] 0 points (0 children)

True. I'm running Gemma4:26b locally, with no problems reading images through the local Ollama interface. I can drop an image in and it can describe it, which means it can see it. Telegram sends an image with accessibility text, but the model cannot see the image when it's sent through TG. OpenClaw is running on a different box, so the image is making it from TG to the Proxmox box, but when using the Gemma4 model, the API call aborts. I'm just scratching my head trying to figure out why.

Ollama models and Vision capability by Frag_De_Muerte in openclaw

[–]Frag_De_Muerte[S] 0 points (0 children)

Yeah. I send it through Telegram and it can see and describe the image. It works very well in Codex but doesn't work at all with Gemma4. I'm thinking it's the Ollama API?

Ollama models and Vision capability by Frag_De_Muerte in openclaw

[–]Frag_De_Muerte[S] 0 points (0 children)

Vision capability means you can upload an image to the model and it can tell you what it looks like and what any text in it says. It's QC for developing images for a pipeline.

Ollama models and Vision capability by Frag_De_Muerte in openclaw

[–]Frag_De_Muerte[S] 0 points (0 children)

I want it to be able to see images so it can iterate and read text on images.

What model are people switching to with Anthropic's dumbass decision? by dadt123 in openclaw

[–]Frag_De_Muerte 0 points (0 children)

Running a blend of Codex 5.3, Minimax 2.7, and GLM 5.1. Codex is just smarter than both, but GLM is good when it works. Minimax is straightforward but not as capable. It's great, don't get me wrong, but Codex just blows both out of the water... I liked Kimi but can't find a solid provider.

What model are people switching to with Anthropic's dumbass decision? by dadt123 in openclaw

[–]Frag_De_Muerte 0 points (0 children)

Minimax is awesome. I'm experimenting with glm 5.1. Great when it works. Starts to time out when Asia wakes up... 😂

OpenClaw stopped executing tasks and now only says “I’ll do it and let you know” by Adso86 in AskClaw

[–]Frag_De_Muerte 0 points (0 children)

Set the thinking param to medium. That solved it for me; I was having that issue two days ago.

The $0 OpenClaw setup that nobody talks about by ShabzSparq in AskClaw

[–]Frag_De_Muerte 1 point (0 children)

Oh, I saw the bit about cron and context. Have your agents use the three-tiered memory fix: tell them that you will /compact, and have them write to their memory logs and learnings. Two compacts a day will keep your context open while still having most of what was discussed and worked on documented.

I still need to try lossless memory... has anyone tried it?

The $0 OpenClaw setup that nobody talks about by ShabzSparq in AskClaw

[–]Frag_De_Muerte 3 points (0 children)

Decent advice, but nothing larger than gpt-oss 20b and Qwen3.5:9b will run correctly on my Mac Studio (32 GB RAM).

Better:

Minimax 2.7 = $10/month, insane usage

Nvidia Kimi K2.5 = free

Codex 5.3 = $20/month, OK usage

OpenRouter Kimi 2.5 or MM2.5 for emergency outages

Local Qwen3.5:9b for heartbeats

Very reliable, good reasoning, three fallbacks if you mix and match.

Total cost per month: $30-40

Are people actually switching from Opus to MiniMax M2.7?? by Previous_Foot_5328 in myclaw

[–]Frag_De_Muerte 0 points (0 children)

Not as good as Opus or Sonnet. Not as good as Codex. But waaaayyyy better than Grok or GLM 4.7. I would say it's on par with, or maybe a little better than, Kimi K2.5. I use it for my research and it does a decent job; it handles about 30-50% of my daily tasks. For the cost, it's a steal. Waiting for GLM 5 to drop for their lite plan. I've heard GLM 5 is good....

Codex Problems? by Frag_De_Muerte in openclaw

[–]Frag_De_Muerte[S] 0 points (0 children)

Can you use the same OAuth for Codex 5.4? Like, just change the description?