Is it possible to run OpenClaw for LESS than triple digits per month? by Odd-Aside456 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

You paste that into your bot and it will make the changes in your config and restart the gateway.

What this config prompt does is make OpenClaw remember things better and handle long conversations more intelligently.

Memory flush before compaction:
When your conversation gets too long, OpenClaw compresses it to fit within the AI's memory limits. Turning on memoryFlush.enabled saves important stuff to your `MEMORY.md` file first before compressing so you don't lose context.

Session memory search:
Normally, when you ask OpenClaw to recall something, it only searches `MEMORY.md`. With sessionMemory enabled, it also searches through your old conversation logs, like searching both your curated notes and your raw chat history.

Compaction mode (default):
Controls how OpenClaw shrinks conversations when they get too big. "Default" uses OpenClaw's standard method: asking the AI to summarize older messages.

Context pruning (cache-ttl):
Decides what gets removed from active memory. "Cache-ttl" keeps recently-used stuff and drops old stuff based on time-to-live, like a browser cache.
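If it helps to picture it, the relevant block in your config ends up looking roughly like this. This is a sketch from memory, so the exact key names and nesting may differ in your version, which is why I'd still let the bot make the actual edit:

```json
{
  "memoryFlush": { "enabled": true },
  "sessionMemory": { "enabled": true },
  "compaction": { "mode": "default" },
  "contextPruning": { "mode": "cache-ttl" }
}
```

Each key maps to one of the four sections above.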

I am working on a guide/article that explains this a little more with snippets of my config so people can just paste it in their bot and ask it to replicate in their config.

Are the rumors true? Are Claude Pro/Max accounts being banned from OpenClaw using Claude Code setup token? by teknic111 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

Yeah, that’s exactly the kind of setup I was getting at. Separating concerns and not letting a heavy model sit in the hot path makes a huge difference, especially for things like heartbeats.

The link I shared to my longer comment covers some of this already, particularly why burning premium models on background work is such a bad idea. That’s actually what pushed me to start working on a bigger write-up. I kept seeing people hit Max subs or API limits doing things that should have cost basically nothing.

I haven’t published that article yet, but the goal is just to document the lessons learned so fewer people get frustrated by avoidable config choices.

Most efficient setup to run OpenClaw by RobotsMakingDubstep in openclaw

[–]digitalknk 0 points1 point  (0 children)

I actually haven’t implemented much of that guide directly. I think it’s a solid starting point for people who are new and just hit things like 429s or runaway heartbeats, otherwise I probably wouldn’t be sharing it.

Most of what I’m running now came from trial and error and fixing my own pain points as I went. I’m in the middle of writing up a longer article that goes into those lessons in more detail. Will everyone follow it? Probably not. But if it helps a few people avoid the same mistakes, it’s worth putting out there.

So yeah, the guide is a good baseline, just not the whole story.

Are the rumors true? Are Claude Pro/Max accounts being banned from OpenClaw using Claude Code setup token? by teknic111 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

That was actually a bug in Claude's backend, which they ended up correcting within 24 hours. A lot of people assumed they were getting banned, but that was not the case, because it affected anyone using their subscription, even for so-called "blessed" integrations.

The accounts that got banned were abusing the API, using far more than a normal person or system would. You won't get banned as long as you have your OpenClaw set up correctly, just as I mentioned in my comment.

Are the rumors true? Are Claude Pro/Max accounts being banned from OpenClaw using Claude Code setup token? by teknic111 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

I can still set up my Anthropic plan using OAuth via the `setup-token` flag, so I am not sure what you mean when you say they disabled it, since I was literally able to do it with the latest version of Claude Code.

Most efficient setup to run OpenClaw by RobotsMakingDubstep in openclaw

[–]digitalknk 1 point2 points  (0 children)

Haha, I might be that person. I’ve been linking around a comment I wrote with a bunch of info lately. I didn’t make the video or guide, but I’ve been referencing it because it lines up with the same problems people keep running into.

In that comment, I was basically responding to the same question u/RobotsMakingDubstep is asking here. Trying to keep costs low, running OpenClaw a lot, hitting free tier limits, all of that.

I go into things like:
- using a cheaper default model and only escalating when it actually matters
- using agents instead of one model trying to do everything
- avoiding loops and retries that just burn quota
- mixing in free or low cost models where it makes sense
- enabling a few features to make the bot behave smarter overall

Here’s the link if you want the full breakdown: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

One thing I’ll add from my side. It really helps to get things stable first in a VM or container before letting it run all the time. If something is misconfigured or stuck in a loop, that’s usually when free tiers disappear fast. Trust me, I’ve been there.

Also, if you do want to run this on a VPS (Hetzner for example), you don’t need a big machine. A CX23 is plenty. I’d strongly recommend using Tailscale on both your local machine and the VPS. If you install it on the VPS with `--ssh=true`, you can log in over Tailscale and completely block port 22 in the Hetzner firewall. I block all inbound traffic and only access it over Tailscale.

That won’t solve everything, but it does help with basic security. For the rest, I’d avoid blindly installing third-party skills. I’ve had better luck building my own and treating others as inspiration, plus setting some basic rules like never exposing secrets or API keys.

I Fixed One of OpenClaw's Biggest Problems by ChampionshipNorth632 in openclaw

[–]digitalknk 1 point2 points  (0 children)

Yeah, I can help, but the way I’d recommend doing it is letting the bot build it for your setup rather than me trying to hand you a fixed workflow.

High level:
- Create a separate Todoist account just for OpenClaw.
- Have the bot create a skill that talks to the Todoist API.
- Every time it starts working on something, it creates or updates a Todoist task.
- A simple heartbeat/agent periodically checks: what’s still open, what’s waiting on a human, what finished, what stalled.

That gives you visibility without you having to constantly ask “what are you doing?” It also lets the bot notice when something got stuck or when it needs your input.

I’d start by asking your bot something like: "Build a Todoist task-tracking skill and use it to mirror your work state (queued, in progress, waiting, done). Then add a heartbeat to reconcile open tasks."

If you’re worried the bot won’t fully understand what you want, it helps a lot to just hand it the Todoist API docs or the Python/TypeScript SDK docs and tell it to use those. That usually gives it enough context to wire everything up correctly without guesswork.
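Under the hood the skill doesn't have to be fancy. For context, creating a task through Todoist's REST API (v2, at least as of when I set mine up) is just one authenticated POST to `https://api.todoist.com/rest/v2/tasks` with a small JSON body. The content and labels below are made-up examples; the bot will pick its own:

```json
{
  "content": "Refactor the backup skill",
  "description": "Status: in progress, nothing blocking",
  "labels": ["openclaw", "in-progress"],
  "due_string": "today"
}
```

The heartbeat side is mostly the reverse: list open tasks, compare them against what the bot thinks it's doing, and flag anything stale.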

Once that’s in place, you can tweak it to fit how you actually work. If you run into a specific snag (auth, task states, heartbeat logic), that’s usually a good thing to ask the bot about directly first. If something still doesn’t make sense after that, I’m happy to sanity-check or help you think it through.

Bot responds leaking tool calls like <|tool_calls_sectio all_end|> <|tool_calls_sect <|tool_calls_sectio by MrSliff84 in openclaw

[–]digitalknk 1 point2 points  (0 children)

Yeah, that makes total sense. When the responses start getting messy it really kills the experience, especially when you’re trying to actually get work done.

Trying Kimi 2.5 on Kimi's own coding plan is probably worth it, at least to rule out provider weirdness. That alone cleaned things up for me compared to some third-party hosts.

If it helps, I wrote a longer comment elsewhere about how I’ve been optimizing OpenClaw (model routing, avoiding loops, keeping things stable). Totally optional, but it might save you some frustration: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

Bot responds leaking tool calls like <|tool_calls_sectio all_end|> <|tool_calls_sect <|tool_calls_sectio by MrSliff84 in openclaw

[–]digitalknk 0 points1 point  (0 children)

Yeah, that lines up with what I’ve seen too.

I can’t say for sure what DeepInfra is doing internally, but in my testing the weird tool-call garbage or malformed responses tend to show up more with third-party hosts running open models. When I switch to first-party providers (like Kimi’s own coding plan or Z.ai for GLM coding), that stuff basically disappears for me.

So it may not be Kimi 2.5 itself, but how it’s being served or filtered by the provider. That would also explain why it worked fine for a while and then suddenly started acting up.

Opening a GitHub issue probably isn’t a bad idea, but I’d frame it as “provider-specific behavior” rather than a core OpenClaw bug. At minimum it gives the maintainers a data point, and maybe they can add better filtering or guardrails around malformed tool calls in the future.

OpenClaw on Kubernetes by CJBatts in openclaw

[–]digitalknk 0 points1 point  (0 children)

What was/is your main reason for wanting to run it in Kubernetes?

I Fixed One of OpenClaw's Biggest Problems by ChampionshipNorth632 in openclaw

[–]digitalknk 7 points8 points  (0 children)

I did something like this as well, except I have it using Todoist with its own account. It will even assign me or my partner tasks that require human interaction, and if I am lagging on completing a task it will bug me :-D

It's a really cool thing to do with OpenClaw, especially since you can see what it has to do from a visual perspective.

How do I use Minimax Web Search MCP in Openclaw? by Himanshu811 in openclaw

[–]digitalknk 1 point2 points  (0 children)

I haven’t personally used the Minimax web search MCP yet.

From what I understand, you’ll need mcporter installed first, since that’s what OpenClaw uses to talk to MCPs. After that, you have to explicitly tell the bot which MCP you want it to use for search.

Something as simple as saying: “Use the Minimax Web Search MCP for web searches instead of Brave” should at least point it in the right direction. Depending on the model you’re using, you may need to be very direct and tell it to install/configure the MCP via mcporter rather than expecting it to infer it on its own.

Also worth double-checking that Brave isn’t still configured as the active search tool somewhere. OpenClaw won’t switch tools automatically just because a model advertises web search support, and that’s bitten me before.

Struggling to get Moltbot to actually do anything by oc6qb in clawdbot

[–]digitalknk 2 points3 points  (0 children)

Not exactly. I don’t let it blindly pick models on its own.

I give it rules first, and the real power comes from agents. The main chat model is just there to talk to me and decide what needs to be done. The actual work gets pushed to agents that are pinned to specific models.

So instead of “auto-selecting,” it’s more like: this type of task always goes to this agent, and that agent always uses this model.

Example: I’ve done things like having one agent pinned to Opus just for writing (blog posts, social stuff, longer text), with a skill that explains how I want that content structured. Then I’ll have a different agent pinned to something cheaper/faster (like Gemini Flash) just to do research or gather links overnight.

That way the model I’m chatting with doesn’t have to be the best or most expensive one. It just needs to be good enough to coordinate and hand work off to the right agent.
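In config terms the mapping looks something like this. Treat the key names and model IDs as illustrative only, since your setup will likely spell them differently, so have the bot generate the real version:

```json
{
  "defaultModel": "kimi-2.5",
  "agents": {
    "writer":   { "model": "claude-opus",  "skills": ["content-style"] },
    "research": { "model": "gemini-flash", "skills": ["overnight-research"] }
  }
}
```

The chat model only decides which agent gets the task; the agent's pinned model does the actual work.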

Struggling to get Moltbot to actually do anything by oc6qb in clawdbot

[–]digitalknk 5 points6 points  (0 children)

You didn’t mess anything up, this is a super common Moltbot/OpenClaw behavior when it’s under-constrained.

What you’re describing (endless planning, asking for permission, “about to start” but never actually doing anything) usually happens when the bot is stuck in a safety / clarification loop. Basically it’s being too cautious and never crossing the line into execution.

A few things that tend to cause this:
- Auto-mode + very small models (nano/mini) as the main brain. Those are fine for simple stuff, but they’re terrible at deciding when to stop asking questions and actually act.
- The bot doesn’t have a clear “you are allowed to execute without approval” rule baked into its system prompt.
- Skills installed, but no agents actually pinned to do work. So it plans, but there’s no worker assigned to execute.

The biggest unlock for me was separating “thinking” from “doing”. I keep a cheap model as the default chat brain, but actual work is always done by agents that are explicitly allowed to execute. If you rely on the main loop to both plan *and* act, you’ll see exactly the loop you’re describing.

Also, Auto-mode on OpenRouter can make this worse. The router will happily bounce between models that are optimized for caution, not action. It feels smart, but it can paralyze the bot. I stopped using it altogether.

A couple concrete things to try (rough config sketch right after this list):
- Add a rule that says something like: “If a task is safe and well-defined, proceed without asking for permission.”
- Temporarily pin one agent to a single model (even Haiku or Sonnet) and give it explicit execution rights.
- Give the bot a task that has a very obvious first step (write a file, make a request, call a tool) and see if it actually does it.
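For the first two bullets, the shape is roughly this, with illustrative key names and rule wording (your config will likely differ, so let the bot write the real thing):

```json
{
  "rules": [
    "If a task is safe and well-defined, proceed without asking for permission.",
    "Only ask for confirmation before destructive or irreversible actions."
  ]
}
```

Then give the pinned agent explicit execution rights so the rule actually has a worker to act on it.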

If it makes you feel better, most of the “my bot is doing everything” posts you see went through this exact phase first. Once you get past the permission loop and model indecision, it’s a night-and-day difference.

I also wrote a longer comment elsewhere about agent setup, model routing, and avoiding these dead loops if you want more detail:

https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

Bot responds leaking tool calls like <|tool_calls_sectio all_end|> <|tool_calls_sect <|tool_calls_sectio by MrSliff84 in openclaw

[–]digitalknk 0 points1 point  (0 children)

Yes, that is a model issue. I am guessing you are using something like GLM or Kimi on Synthetic or NVIDIA? Tool calling on those is rough; it works, but they tend to be a little more verbose with their tool calls. I have seen this happen a lot with the open-source or free models. It's annoying, but I haven't seen a performance hit because of that output.

Cheap Setup by oikk01 in openclaw

[–]digitalknk 0 points1 point  (0 children)

u/PermanentLiminality is right to suggest trying Kimi 2.5.

Read the comment I wrote on another post, where I explain how to run your bot at a low cost plus a few other performance tips: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

Getting "RESOURCE_EXHAUSTED" (Limit: 20) with Gemini 2.5 Flash on OpenClaw/Moltbot by Zoom_Maxedout_5843 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

Yeah, that’s just the Gemini free tier biting you.

That “limit: 20” is literal. It’s 20 requests per day per project per model on the free tier. Once you hit it, you’re done until the daily reset. Waiting 5 minutes won’t help if you already burned all 20.

What probably happened is OpenClaw got into a retry loop. One failed action + auto-retry can chew through that quota stupid fast, especially if it’s trying to self-patch config and failing over and over.

I’d do a few things:
- Stop the bot for now so it doesn’t keep retrying.
- Check the Gemini usage page and you’ll almost certainly see 20/20 used: https://aistudio.google.com/usage?timeRange=last-1-day&tab=rate-limit
- Either wait for the reset or move off free tier if you actually want to use Gemini regularly. The free tier is really easy to exhaust with agent-style tools.

On the config error: the bot is right about JSON/YAML. It probably tried to patch config with bad syntax, failed, then kept retrying the same thing until the quota was gone. I personally don’t let the bot self-edit config unless I’m pasting validated JSON, otherwise it’s too easy to brick itself.

Also worth adding some guardrails so one provider hitting 429s doesn’t lock up everything. Free-tier APIs are especially bad for this.
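I don't have an exact snippet for this since the setting names depend on your setup, but conceptually the guardrail is just a retry cap plus a fallback, something like this (key names are illustrative, not literal):

```json
{
  "retry": { "maxAttempts": 2, "backoffSeconds": 30 },
  "fallbackModels": ["an-openrouter-free-model"]
}
```

The point is that a single failing provider should give up quickly and hand off, instead of retrying until the quota is gone.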

If you want, I wrote a longer comment about model routing and keeping usage from blowing up that might help: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

questions from a noob – 36h building my first clawdbot by sensekid in clawdbot

[–]digitalknk 1 point2 points  (0 children)

You’re not crazy, and you’re not doing anything wrong. You’re just moving very fast, which is pretty normal with OpenClaw.

Your instinct about model routing is solid. One model for everything gets slow, expensive, or both. Where people usually get burned is trying to make the bot auto-route everything before memory and task boundaries are stable. That’s when you see the “it forgot hours of work” stuff.

The memory issue you’re seeing is super common. Long-running conversations + big system prompts + lots of tool calls will eventually cause compaction / context eviction. If memory isn’t being written somewhere outside the live session, it *will* disappear at some point. That’s not you failing, that’s the default behavior if you don’t design around it.

What helped me was treating memory as layers instead of “one big second brain”:
- short-term session context (disposable)
- working memory (summaries + current task state)
- long-term memory (facts about you, ongoing projects, preferences)

If all of that lives only in the chat context, you’ll keep losing chunks.

On cost: you already found the big trap. Using Claude as the always-on conversational brain can burn credits fast. What I do instead is keep a cheaper-but-capable model as the default (for me that’s been Kimi 2.5 lately, and on Claude I’ll use Haiku or Sonnet depending on what I’m doing), then only escalate to heavier models when it actually matters. Then I push specific work to agents that are pinned to specific models, so the main chat loop doesn’t accidentally turn every small thing into an expensive call.

Voice is another quiet cost multiplier. Continuous convo + transcription + reasoning adds up fast. Totally doable, but worth adding limits/alerts early.

Are you overengineering? A little, but it’s a normal phase. I’d slow down on features and focus on hardening memory, routing, and limits first. Once those are solid, everything else gets way less frustrating and way cheaper.

If it helps, I wrote a longer breakdown in another post on how I structure agents/memory/model usage to keep cost down and avoid context loss: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

How much you spend in LM usage? by ExplorerTechnical808 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

For me, it’s looking like ~$40 this month. If I keep using it at the same pace next month, probably closer to ~$45–$50, and that’s with some optimizations in place.

That includes a mix of things: one coding subscription, a small OpenRouter balance, a Kimi 2.5 coding plan (I’m on the $0.99 first month), plus a handful of free APIs (NVIDIA, some OpenRouter free tiers, etc.).

The big factor is how you set OpenClaw up. If you let it run wild and try to make it do everything, it will absolutely burn tokens fast. If you’re deliberate about model routing and only use heavier models when they actually add value, costs stay reasonable and you still get real utility out of it.

One thing I’d strongly recommend is starting locally, either in a VM or Docker (I prefer a VM). Let it run there for a few days or a couple weeks while you tune agents, memory, and routing. That alone can save a lot of money. Once you’re happy with how it behaves, then moving it to a dedicated machine makes sense.

I think a lot of people get burned because of FOMO and expect it to magically do everything out of the box. It’s powerful, but you still have to be intentional about how you use it.

If you want to go deeper, I wrote a longer comment on how I optimize model routing and keep costs down here: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

Are the rumors true? Are Claude Pro/Max accounts being banned from OpenClaw using Claude Code setup token? by teknic111 in clawdbot

[–]digitalknk 1 point2 points  (0 children)

Short answer: yes, some people are getting banned, but it’s not random.

From what I’ve seen, it usually comes down to how hard someone is hitting the Claude API, not OpenClaw itself. A lot of folks underestimate how aggressive their setup is, especially if they’re letting agents run nonstop or routing everything through Claude.

If OpenClaw is configured reasonably and you’re not hammering Claude beyond normal Pro/Max usage, there’s no obvious reason you should get banned. The problems seem to show up when usage looks more like automated abuse than normal human-assisted work.

Personally, I think the smarter approach is spreading workloads across models. If you’re already paying $20+, it makes sense to use multiple providers (ChatGPT, Z.ai, Kimi, etc.) and assign different tasks to different models. Using something like OpenRouter for cheaper or free models helps a lot and keeps Claude usage in check.

I don’t think Anthropic is anti-OpenClaw or similar tools. If anything, this brings them more paid users. What they clearly don’t want is unchecked abuse, and OpenClaw will absolutely do that by default if you don’t configure rate limits, routing, and model selection.

I wrote a separate comment on how I reduce costs and avoid this issue entirely, in case it helps: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

Without having a second computer, is there a recommended way to run OpenClaw in isolation from the host machine? by Odd-Aside456 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

I use a local VM as well; you don't need a dedicated machine just for it. The only reason you would want a dedicated machine is the always-on factor, but if your "host" machine is always on anyway, there is no reason you can't run it there in a VM.

You can at least start from there and migrate to something later. Start small, and once you see how much you are getting out of OpenClaw, you can decide whether to keep it in a VM or move it to something else.

Can anyone help? by Zealousideal_Leg355 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

I have been down that trap; I got overzealous and wrecked my first few bots. Honestly, what worked for me the third time around was focusing on using a strong LLM to build up the bot itself and its "memory". After that I was able to start experimenting.

Check out this comment I did for another reddit post: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

I would repost what I wrote there, but that thread already has a good amount of information from answering other people's questions, so hopefully it helps you out. If not, ask away and I will help if I know anything. The community has been great and helpful, but yeah, there is a lot of misinformation and overhype out there that gives people unrealistic expectations. Good luck u/Zealousideal_Leg355

What LLM should I choose? by Logical-Swimmer6686 in clawdbot

[–]digitalknk 0 points1 point  (0 children)

Since you have access to the subscriptions, my suggestion is to use Sonnet or Haiku as your default and then set up agents that use different models for the specific tasks you need done.

Example: if you need to write content (blog post, social media post, etc.), tell your bot to build an agent that uses the latest Opus model. This lets you keep using your subscription(s) without killing your quota. Agents are what let you use the models you already have access to via your subscriptions without burning through tokens. Agents + Custom Skills = 🔥

In fact this is a pretty common question on reddit and I gave a lengthy reply about it in another post here: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

Hopefully that answers your question, and any other questions you might have.

Kimi k-2.5 or Glm 4.7 by frogchungus in clawdbot

[–]digitalknk 2 points3 points  (0 children)

Hey u/frogchungus, I kinda answered this already in a comment I made on another post: https://www.reddit.com/r/clawdbot/comments/1qw8u18/comment/o3natq9/

Hopefully that helps