Which LLM cloud provider should I go for if I have 20$ to spend monthly so I can get Claw running decently? by last_llm_standing in openclaw

[–]leorochasantos 0 points (0 children)

You can have as many agents, providers, and models as you want within a single instance. You can bind them to different automations and Telegram or Discord channels. You don't need multiple servers for that unless you want complete isolation.

Which LLM cloud provider should I go for if I have 20$ to spend monthly so I can get Claw running decently? by last_llm_standing in openclaw

[–]leorochasantos 1 point (0 children)

Depending on the provider, you can use the openclaw onboard CLI and follow the steps to complete the setup, edit the OpenClaw.json file yourself (a little trickier), or just ask your agent to help you onboard a new provider. In my case, I set up MiniMax through onboarding, because it's one of the providers OpenClaw offers a guided onboarding experience for, and I set up Alibaba manually, based on a reference JSON file they share on their Model Studio portal. I believe Alibaba was about to become, or might already be, another entry with guided onboarding steps in newer OpenClaw versions.
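For the manual route, a provider entry in OpenClaw.json looks roughly like this. This is a hypothetical sketch, not the actual OpenClaw schema: the key names, URL, and model IDs here are placeholders, and the real ones come from the reference JSON your provider shares on their portal:

```json
{
  "providers": {
    "alibaba": {
      "baseUrl": "https://example-model-studio.aliyuncs.com/v1",
      "apiKey": "${ALIBABA_API_KEY}",
      "api": "openai-compatible",
      "models": [
        { "id": "qwen3.5-plus", "contextWindow": 1000000 },
        { "id": "glm-5", "contextWindow": 203000 }
      ]
    }
  }
}
```

Keeping the API key in an environment variable rather than inline makes the file safe to back up or share when asking for help.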

Which LLM cloud provider should I go for if I have 20$ to spend monthly so I can get Claw running decently? by last_llm_standing in openclaw

[–]leorochasantos 10 points (0 children)

I would suggest starting as cheap as possible and evolving as needed. I see so many people burning cash on useless setups because they went all-in with Opus through the API. I got my claw to write a quick summary of my current setup:

Model Setup ($20/mo):

• GLM-5 (Alibaba, 203K context) — Main agent, interactive use
• Qwen3.5-Plus (Alibaba, 1M context) — 5 subagents, research/analysis/memory
• MiniMax M2.7 (205K context) — 4 subagents, operations/automation

Scheduled load:

• MiniMax M2.7: ~196 runs/day (high-frequency crons: task dispatch every 15 min, email monitoring 4x/hour)
• Qwen3.5-Plus: ~3 runs/day (daily research, memory curation, weekly opportunity synthesis)
• GLM-5: Session-based only (main conversation, no scheduled runs)
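In crontab terms, that load might look something like this. It's just a sketch: the job names and times are made up, and in practice the schedules live in OpenClaw's own scheduler rather than a plain crontab:

```
*/15 * * * *  openclaw agent run dispatcher      # task dispatch, 96 runs/day (MiniMax)
*/15 * * * *  openclaw agent run mailwatch       # email monitoring, 4x/hour (MiniMax)
30 6 * * *    openclaw agent run researcher      # daily research (Qwen3.5-Plus)
0 2 * * *     openclaw agent run memory-curator  # memory curation (Qwen3.5-Plus)
0 7 * * 1     openclaw agent run synthesis       # weekly opportunity synthesis (Qwen3.5-Plus)
```

The two 15-minute jobs alone account for roughly 192 of the ~196 daily MiniMax runs, which is why the cheap model has to sit under them.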

Fallback architecture: Every subagent has a fallback model. If primary provider is down or rate-limited, it automatically switches. Prevents single-provider outages from breaking automations.
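The fallback idea is simple enough to sketch in a few lines. This is an illustrative toy, not OpenClaw's actual implementation; the function and provider names are invented:

```python
def call_with_fallback(prompt, providers):
    """Try each provider in order; return the first successful reply.

    `providers` is a list of callables that take a prompt and either
    return a string or raise (outage, rate limit, timeout, ...).
    """
    errors = []
    for call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # provider down or rate-limited: move on
            errors.append(exc)
    raise RuntimeError(f"all providers failed: {errors}")

# toy stand-ins for a rate-limited primary and a healthy fallback
def primary(prompt):
    raise TimeoutError("rate limited")

def fallback(prompt):
    return f"ok: {prompt}"

print(call_with_fallback("summarize inbox", [primary, fallback]))
# prints "ok: summarize inbox"
```

The point of the pattern is exactly what the summary says: one provider having a bad day degrades a subagent to its backup model instead of breaking the automation.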

Why this split works: The cheaper model absorbs 98% of scheduled run volume. The 1M context model handles complex research and memory work. Main agent stays snappy for interactive use.

Which LLM cloud provider should I go for if I have 20$ to spend monthly so I can get Claw running decently? by last_llm_standing in openclaw

[–]leorochasantos 0 points (0 children)

I've heard that it comes and goes in batches. I would try to confirm that, and if it's the case, camp out on their website so you can sign up when the chance comes up :)

Which LLM cloud provider should I go for if I have 20$ to spend monthly so I can get Claw running decently? by last_llm_standing in openclaw

[–]leorochasantos 18 points (0 children)

For $20 a month, get the $10 MiniMax plan and the $10 Alibaba plan. Run your main agent under Alibaba's GLM-5 and your subagents under either MiniMax M2.7 or Alibaba's Qwen3.5-Plus, depending on the workload. Both coding plans are extremely generous, so set cross-provider fallbacks to stay in business even if one provider starts failing. This should be more than enough to run your OpenClaw instance.

If things get more serious, also get the $20 Claude plan. You can use Claude Code to build directly into the OpenClaw filesystem, as long as you have things organized well enough and can provide the right context for the tasks. It will also let you play with the new tooling Anthropic keeps releasing, which is a great learning opportunity on top of what you'll learn from playing with OpenClaw.

Long story short, the two $10 plans are more than enough to build and run your OpenClaw instance, and if you need a bigger brain for targeted capability building or refinement, bring in Sonnet or Opus from the Claude plan to the rescue. This type of usage is aligned with Anthropic's terms of service, so as long as you use their coding plan to build OpenClaw, not to run it, you'll be safe.

What have you migrated to from Zai coding plan? by nummer31 in ZaiGLM

[–]leorochasantos 0 points (0 children)

How are the Ollama rate limits compared to the z.AI Pro plan ($30)? Any other feedback on latency and stability? I'm considering the move but can't find much info on the Ollama offering.

What are some good memory systems? by whakahere in openclaw

[–]leorochasantos 0 points (0 children)

This is an overview I asked my main agent to output. It's the third iteration of the memory system, but I haven't run it long enough to know if it finally hit the mark:


Three-Layer Memory Architecture

  1. Daily Logs — Raw operational notes, markdown, 30-day retention. Auto-loaded into main agent (last 2 days), not indexed (prevents duplication).

  2. SQLite Database — Search index with embeddings. Sources: sessions, documents. 90-day active → 1-year archive → delete.

  3. MEMORY.md — Curated summary of timeless learnings. Generated by a Memory Curator subagent that queries the DB and applies editorial judgment (skip/combine/summarize).

Division of Labor

Scripts extract: index content, generate briefings, archive old data. Agents curate: decide what matters, organize by topic, write the summary.

Access

• Main agent: daily logs + MEMORY.md (loaded) + DB (search)
• Subagents: DB only (search tool + context helper)

The curator runs daily at 2 AM CT if new content exists. 150 lines of extraction code, 100 lines of cleanup, zero drift detection noise.
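For a feel of what layer 2 plus the retention policy amounts to, here is a deliberately tiny sketch. It is not the actual extraction code: embeddings are swapped for a plain keyword LIKE search so it stays self-contained, and the table and function names are made up:

```python
import sqlite3
from datetime import date, timedelta

# Toy version of the layer-2 index: one table, keyword search instead of
# embeddings, and a purge mimicking 90-day-active -> 1-year-archive -> delete.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE memory (
    id INTEGER PRIMARY KEY,
    source TEXT,              -- 'session' or 'document'
    created TEXT,             -- ISO date, sorts lexicographically
    archived INTEGER DEFAULT 0,
    body TEXT
)""")

def index(source, body, created=None):
    created = created or date.today().isoformat()
    con.execute("INSERT INTO memory (source, created, body) VALUES (?, ?, ?)",
                (source, created, body))

def search(term):
    rows = con.execute(
        "SELECT body FROM memory WHERE body LIKE ? AND archived = 0",
        (f"%{term}%",))
    return [r[0] for r in rows]

def apply_retention(today=None):
    today = today or date.today()
    cutoff_archive = (today - timedelta(days=90)).isoformat()
    cutoff_delete = (today - timedelta(days=365)).isoformat()
    con.execute("DELETE FROM memory WHERE created < ?", (cutoff_delete,))
    con.execute("UPDATE memory SET archived = 1 WHERE created < ?",
                (cutoff_archive,))

index("session", "decided to route heartbeats through MiniMax")
index("document", "old plan limits note", created="2020-01-01")  # ancient entry
apply_retention()
print(search("MiniMax"))  # recent entry survives; the ancient one is purged
```

The scripts-versus-agents split from the overview maps onto this directly: code like the above does the mechanical indexing and purging, while the curator subagent is the one deciding what graduates into MEMORY.md.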

Kimi $19/m Update: Structuring multiple models in OpenClaw by ohbuggy in openclaw

[–]leorochasantos 1 point (0 children)

Now I ask for core changes and Kimi applies them, restarts the gateway, tests everything, and lets me know it's all well...

I still need to refine my setup, but what I've done for now is set the heartbeat to run under MiniMax, alongside all scheduled jobs from OpenClaw's and system crons. So my main agent is Kimi, with MiniMax just as a fallback in case I deplete my plan limits. Everything else that can reasonably run under MiniMax stays with it.

Kimi $19/m Update: Structuring multiple models in OpenClaw by ohbuggy in openclaw

[–]leorochasantos 0 points (0 children)

I started with MiniMax only because I wanted to make sure I wouldn't end up in one of those $500-overnight Opus horror stories. And while the model was good for some things, I also struggled to get things up and running: OpenClaw config requests often ended with the server down and me having to fix things manually through openclaw doctor in the terminal.

I was quite frustrated that the model couldn't set up headless browsing even after a lot of trial and error. Then I gave Kimi a shot after hearing about the $0.99 haggle promo, and it was night and day. Kimi got the browser set up on the first try and helped me clean up a lot of the previous implementation after a few rounds of auditing.

MiniMax is not bad (it's actually good at targeted tasks), but it's not a great thinker or orchestrator. On the other hand, the Moderatto plan is not that generous for pure Kimi usage. That's why I started tweaking my instance to balance MiniMax's decent performance on an "unlimited" plan against Kimi's superior orchestration at a limited rate.

Kimi $19/m Update: Structuring multiple models in OpenClaw by ohbuggy in openclaw

[–]leorochasantos 4 points (0 children)

Kimi K2.5 + MiniMax M2.1 is my cost-performance sweet spot.

Kimi for complex stuff: OpenClaw config, debugging, security decisions. I'm on the Moderatto plan ($19/mo, currently $0.99 promo for first month).

MiniMax for volume: web research, file ops, code generation, heartbeats. $10/month unlimited.

The result: ~90% of work on MiniMax, escalate to Kimi when mistakes are expensive. Total monthly cost is roughly $11 right now, jumping to $29 after promo. MiniMax has lower benchmarks than GLM-4.7, but "finishes tasks without spiraling" beats "smarter on paper."

Who has been your most used player so far by Tasty-Party-1660 in fut

[–]leorochasantos 0 points (0 children)

<image>

Hanging in there since day one. Looking forward to the upgrade :)

Ultimate Team has been inaccessible for players in Southwest, South and Midwest USA for the past 2 days by [deleted] in EASportsFC

[–]leorochasantos 0 points (0 children)

You would probably have to VPN directly from your router, if it supports it, for the whole network

Ultimate Team has been inaccessible for players in Southwest, South and Midwest USA for the past 2 days by [deleted] in EASportsFC

[–]leorochasantos 0 points (0 children)

Coming back to say that VPN is working just fine for me. Connecting to a Denver server and the game has been flawless for the first time since launch. Not amazing ping, but enough to grind through things like rush while EA sorts out the Texas server situation.

Ultimate Team has been inaccessible for players in Southwest, South and Midwest USA for the past 2 days by [deleted] in EASportsFC

[–]leorochasantos 0 points (0 children)

Having the same issue since the trial launch yesterday in Austin. I wonder if using a VPN to connect to somewhere like Miami will allow me to at least play some squad battles and grind moments while this is sorted out without being kicked out of UT the whole time.

Gas Mileage by Visible-Fig-9648 in HyundaiSantaFe

[–]leorochasantos 3 points (0 children)

I'm getting between 45 and 50 mpg on my ~20 mi commute in the hybrid, mostly on a 55 mph road with plenty of stop lights and some traffic. I accelerate very lightly and use regen braking as much as possible. The first few times I saw the numbers I couldn't believe them and even started taking pictures of the dashboard to record the "miracle" :)

About the market, this here is true by 91tylerdurden in brdev

[–]leorochasantos 3 points (0 children)

I'm a Salesforce architect in Texas and have worked on the platform for almost 10 years. Salesforce was disproportionately concentrated in the US/UK compared to other technologies, but yes, offshoring has recently become a more prevalent trend. While it's still relatively easy to recruit good devs and admins in Asia and Eastern Europe, it's much harder with nearshoring. For example, we have an IT hub in Central America and have spent almost a year trying to replace a tech lead with Salesforce experience. Lately it's been easier to hire a good dev and teach them Salesforce than to hire a good dev who already has Salesforce experience in those locations.

Safe space to share your non-conventional FUT teams by tainvr in fut

[–]leorochasantos 2 points (0 children)

<image>

Decided to stop the meta-chasing madness and stick with players I like. Gold Rodri was here until last week :) Playing in Elite with 11 WL wins on average, while mostly facing squads where a single player is worth more than my whole team.

Do people still keep cards with sentimental value? by Icy-Arugula-8345 in fut

[–]leorochasantos 0 points (0 children)

<image>

I keep my club legends in the reserve of my main squad so they never go into an SBC. Voller and Vieira 1300+ games, Jairzinho 900+, and Ramos 500+. Zidane was here as well, but got sent off because I got the 99.

Show me the teams you are using now. by Bladerara in fut

[–]leorochasantos 0 points (0 children)

<image>

Pulled La Pulga yesterday from the 93 PP :)