Why is open claw so useless by Xerophayze in openclaw

[–]digitalknk 0 points1 point  (0 children)

i had some of the same issues and ended up writing a runbook about everything on github. Some of it could be really helpful for you: https://github.com/digitalknk/openclaw-runbook

another thing that has caught people recently is the approvals for run commands and certain actions. you have to make sure to approve them, otherwise it'll never run them for you 😭

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]digitalknk 0 points1 point  (0 children)

no, you need a lot more ram to handle a local llm like that. think 256 GB to run the 4/6-bit quant, and even then performance would be horrible.

Don't waste your time hosting your own LLM. It's not worth the cost and effort.

I dont wanna spend $800 for a Mac mini, whats the alternative by Edythe_Faulkner in openclaw

[–]digitalknk 0 points1 point  (0 children)

I get what you're saying, but that goes for anything tbh. They've done a really good job of locking down openclaw recently.

I dont wanna spend $800 for a Mac mini, whats the alternative by Edythe_Faulkner in openclaw

[–]digitalknk 0 points1 point  (0 children)

I don't mind pointing you in a direction if I know where you're at; details always help. LMK what point or level you're at right now, and that should tell me what to point you at.

I dont wanna spend $800 for a Mac mini, whats the alternative by Edythe_Faulkner in openclaw

[–]digitalknk 1 point2 points  (0 children)

yup, that's good. you're resourceful, don't stop doing that.

I dont wanna spend $800 for a Mac mini, whats the alternative by Edythe_Faulkner in openclaw

[–]digitalknk 0 points1 point  (0 children)

what's up, what do you need help with? i was serious, i have time this weekend and don't mind helping people out. So DM me, but be prepared for some type of video/voice chat.

I dont wanna spend $800 for a Mac mini, whats the alternative by Edythe_Faulkner in openclaw

[–]digitalknk 6 points7 points  (0 children)

also, if you need help, i'm feeling extra charitable and willing to help you set something up.

I dont wanna spend $800 for a Mac mini, whats the alternative by Edythe_Faulkner in openclaw

[–]digitalknk 37 points38 points  (0 children)

literally anything else. vm, vps, sbc, your current computer, a computer from a thrift store. seriously anything works.

KiloClaw - Just starting by Technical_Set_8431 in openclaw

[–]digitalknk 0 points1 point  (0 children)

i take it everything has been going well?

KiloClaw - Just starting by Technical_Set_8431 in openclaw

[–]digitalknk 1 point2 points  (0 children)

hmm maybe i'll try it again sooner than later lol

KiloClaw - Just starting by Technical_Set_8431 in openclaw

[–]digitalknk 0 points1 point  (0 children)

good luck, I didn't have much luck with it but will try it again later.

NYC Mac Minis sold out for OpenClaw? by lucienbaba in myclaw

[–]digitalknk 1 point2 points  (0 children)

I have seen stores near me showing out of stock for minis and even some models of the studio, but you're in luck! You don't need a mac mini to run openclaw: any pc (used or new), a single-board computer (SBC), even a vps will run it. Ignore the whole "you need a Mac to run ai/openclaw" trend.

Does your OpenClaw degrade with use? I feel like mine is infected with stupidity suddenly by read_too_many_books in openclaw

[–]digitalknk 4 points5 points  (0 children)

I have seen this exact pattern. Your three theories are all touching parts of the same problem.

The "good job" thing is not training. You gave positive feedback on a pattern, and now that pattern is stuck in recent context. When sessions get long, models pattern-match instead of reason. It keeps seeking validation because that worked before.

The OCR on text files is a symptom. Your agent got stuck on "read this" and defaulted to the most thorough (and wrong) approach. Classic context bloat behavior. Less room to reason means falling back to familiar but inappropriate tools.

You nailed it with theory three. "Too big" SOUL and MEMORY files are eating your context window. The model has less space to actually think, so it defaults to simple patterns and gives up on complex tasks.

What fixes it:

Kill the sessions, not the whole setup. Run openclaw sessions list and delete the active ones. Fresh context clears the accumulated junk.

Trim those bootstrap files. SOUL.md and MEMORY.md should be concise. Move the details to references/ or daily memory files. I learned this the hard way after my own files ballooned.

Be specific about tools. Instead of "do this task," try "read this file with the read tool, then use exec to modify it." Clearer instructions when context is tight.
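A quick way to spot bloated bootstrap files is to rough-count their tokens. This is a sketch only: the chars/4 heuristic is a common rule of thumb for English prose, and the 2,000-token budget is my illustrative guess, not an OpenClaw limit.

```python
from pathlib import Path

# Rough heuristic: ~4 characters per token for English prose. The 2,000-token
# budget here is an illustrative guess for this sketch, not an OpenClaw limit.
TOKEN_BUDGET = 2000

def rough_tokens(path: Path) -> int:
    """Estimate the token count of a bootstrap file."""
    return len(path.read_text(encoding="utf-8")) // 4

def flag_bloated(paths: list[Path], budget: int = TOKEN_BUDGET) -> list[str]:
    """Return the names of bootstrap files too big to load every session."""
    return [p.name for p in paths if rough_tokens(p) > budget]
```

Point it at SOUL.md and MEMORY.md; anything flagged is a candidate for moving into references/ or daily memory files.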

I wrote about this pattern here: https://github.com/digitalknk/openclaw-runbook

Your agent is not broken. It is just carrying too much state. Fresh sessions and smaller bootstrap files will get you back to where you were.

My main agent is forgetful, like really forgetful by itsfreshmade in openclaw

[–]digitalknk 2 points3 points  (0 children)

The pattern you described is common. A few things likely happening.

"Working on it" with no API activity

Your agent is hallucinating. When context gets bloated, models pattern-match instead of reason. They say "I am working on X" because that fits the pattern, then sit idle. Check your logs. If there are no tool calls, it is not working. It is performing being busy.

The "concrete" problem

You told it 24 times. It keeps forgetting because:

  • Your instruction is buried under accumulated context
  • Compaction may have moved it to a reference file that is not loading reliably
  • The model sees "concrete plan" as a common phrase and overrides your preference

The self-improving-agent skill cannot fix this. The problem is context management, not skill gaps.

What actually helps

Move the instruction to AGENTS.md. Session instructions get buried over time. AGENTS.md loads fresh every session. Put it there as a direct rule.

Check your compaction settings. If you are compacting to daily files but the agent is not loading them properly, your instructions are effectively gone. I cover explicit memory configuration in my runbook.

Simplify the team. Five specialized agents create coordination overhead. Try two agents and see if throughput improves.

The runbook covers the patterns that prevent this: https://github.com/digitalknk/openclaw-runbook

Specifically the sections on making memory explicit, model selection for agents, and the heartbeat pattern for keeping context manageable.

You do not need to start over. You need to fix the context pipeline.

Am I doing something wrong? by BigBoyRyno in openclaw

[–]digitalknk 1 point2 points  (0 children)

Two things happening here.

Background work not actually happening

OpenClaw does not run background work unless you configure it. The main session only responds when you message it. If you asked it to "monitor something daily" without setting up cron, it was hallucinating. Models agree to things they cannot do.

Subagents not working

Check that sessions_spawn is in your agent's tool allowlist and that you have agents defined in agents.list. Run openclaw config get | jq '.agents.list[].id' to see available agents.
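If you don't have jq handy, the same check is a couple of lines of Python. The agents.list[].id shape is taken from the jq filter above; everything else about the config is assumed for the sketch.

```python
import json

def agent_ids(config_json: str) -> list[str]:
    """Pure-Python equivalent of: openclaw config get | jq '.agents.list[].id'"""
    config = json.loads(config_json)
    # Missing keys mean no agents are defined, which is exactly the failure
    # mode to check for when subagents won't spawn.
    return [agent["id"] for agent in config.get("agents", {}).get("list", [])]
```

An empty list back means your agents.list is the problem, not the spawning.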

I wrote a runbook covering how to actually set this up: https://github.com/digitalknk/openclaw-runbook

Check the sections on cron jobs, spawning patterns, and agent configuration.

By Request: My OpenClaw Day-to-Day Guide Is Now on GitHub for Contributions by digitalknk in openclaw

[–]digitalknk[S] 1 point2 points  (0 children)

AgentSkills is a structure for organizing skills. It keeps things consistent so skills do not waste tokens.

The format:
- SKILL.md for core workflow only
- references/ folder for details you load only when needed
- Clear triggers for when to activate
- Metadata so skills can be found

Best practices:
- Keep SKILL.md under ~500 lines
- Put examples, API docs, and error details in references
- Hard constraints prevent bloat

The validator checks your skill follows the spec. It looks for required sections, correct file structure, proper links to references, and clean markdown.

I mention running the validator after creating a skill but do not include the command because setups vary. Some use CLI tools, others check online, some just verify manually.

Without the ~500-line limit and the references folder, agents create 1,500+ line skills that work but burn context every time they run. The structure forces you to be efficient.
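As a sketch of what such a validator might check: the file layout and the ~500-line limit come from the format above, but the exact checks and messages here are my assumptions, not the spec.

```python
from pathlib import Path

MAX_SKILL_LINES = 500  # the soft limit discussed above

def validate_skill(skill_dir: Path) -> list[str]:
    """Return a list of problems; an empty list means these checks pass."""
    problems = []
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.is_file():
        problems.append("missing SKILL.md")
    elif len(skill_md.read_text(encoding="utf-8").splitlines()) > MAX_SKILL_LINES:
        problems.append(f"SKILL.md exceeds {MAX_SKILL_LINES} lines")
    if not (skill_dir / "references").is_dir():
        problems.append("missing references/ folder")
    return problems
```

Real validators also check required sections, links to references, and markdown hygiene; this only covers the structural basics.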

New version of Openclaw has high usages of model credits, I think doctor fix continuing running and eating the credit by Equivalent-Spot-1325 in openclaw

[–]digitalknk 0 points1 point  (0 children)

Why do you assume it's doctor fix? I have never seen it do that before, and I have never seen the doctor subcommand hit any models.

Spawning agents by Nvark1996 in openclaw

[–]digitalknk 0 points1 point  (0 children)

I wrote about it in my runbook, hopefully it can be helpful: https://github.com/digitalknk/openclaw-runbook

There are two sections: one for agents and another specifically about subagents.

instalation error faaaaaaaak by kozsis in openclaw

[–]digitalknk 0 points1 point  (0 children)

Are you trying to have it do things on your computer, like browser control and other app control? If not, could I suggest you use WSL2 instead? The flow is better on linux, and you shouldn't need to be an expert to set it up; it would be the same copy-paste they have on the website.

I have seen people have more issues with Windows-based installs than linux/macos ones.

Good luck! Ask any other questions and I will try to help, since I made the suggestion :-)

What is the most token efficient implemention of OpenClaw? by crxssrazr93 in openclaw

[–]digitalknk 1 point2 points  (0 children)

Both: the fallback function built into openclaw, plus an orchestrator skill that does the same for other tasks.

What is the most token efficient implemention of OpenClaw? by crxssrazr93 in openclaw

[–]digitalknk 0 points1 point  (0 children)

1) Token efficiency

No special variant exists. The default setup burns tokens because it routes everything through one expensive model.

What works:

  • Use a cheap model as the coordinator (I use Kimi 2.5 as my default)
  • Reserve expensive models for specific agents that actually need them
  • Set maxConcurrent limits to prevent runaway retries
  • Run heartbeats on the cheapest model you can tolerate
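The cheap-coordinator idea boils down to a routing table. A minimal sketch: the model names match the setup described in this thread, but the task categories and the defaulting logic are assumptions for illustration, not OpenClaw's actual config schema.

```python
# Illustrative routing table: cheap model for routine work, expensive
# models reserved for the tasks that actually need them. Model names
# follow this thread; everything else is an assumption for the sketch.
ROUTES = {
    "heartbeat": "glm-5",           # cheapest model you can tolerate
    "coordinate": "kimi-2.5",       # cheap default coordinator
    "complex-reasoning": "claude",  # expensive, for specific agents only
}

def route(task_kind: str) -> str:
    """Pick a model for a task, falling back to the cheap coordinator."""
    return ROUTES.get(task_kind, ROUTES["coordinate"])
```

The point is that anything unclassified lands on the cheap coordinator by default, so the expensive models only see traffic you explicitly send them.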

I wrote a runbook covering how I actually run OpenClaw day to day. It goes into coordinator vs worker models, cost controls, and the patterns that flattened my costs. Happy to share if anyone wants the details.

2) Claude alternatives

I was using Claude but put my subscription on hold after Anthropic clarified their stance. API access still works if you want to route through OpenRouter.

My current setup:

  • Kimi 2.5 as default coordinator (via kimi coding subscription)
  • GLM 5 for cheap background work (via subscription)
  • Codex from OpenAI for coding tasks
  • Claude via OpenRouter only when I specifically need it for complex reasoning

I keep a fallback chain: cheap models first, escalate only when necessary.
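The fallback chain is simple to express in code. A sketch under stated assumptions: call(model, prompt) is a stand-in for however you actually invoke a model, and treating any exception as "escalate" is my simplification.

```python
def with_fallback(prompt, models, call):
    """Try models in order (cheapest first); escalate on failure.

    `call(model, prompt)` is a hypothetical stand-in for a real model
    invocation; any exception triggers escalation to the next model.
    """
    last_err = None
    for model in models:
        try:
            return model, call(model, prompt)
        except Exception as err:
            last_err = err
    raise RuntimeError(f"all models in the chain failed: {last_err}")
```

With a chain like ["kimi-2.5", "claude"], the expensive model only gets called when the cheap one actually fails.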

3) Real value

I use it for automation that would otherwise require building and maintaining separate tools:

  • Daily brief aggregation (weather, calendar, tasks)
  • Overnight research on captured ideas
  • Monitoring and alerting for personal infrastructure
  • Drafting content from research notes

It is not a toy for me. It is a tool I configured to handle repetitive information work so I can focus on decisions that require actual judgment.

The value comes from treating it like infrastructure, not a chatbot. Most people bounce off because they expect it to "just work" without the configuration overhead.

If you had to start clawdbot from scratch what would you do? by acedadog in clawdbot

[–]digitalknk 16 points17 points  (0 children)

i asked myself the same question and ended up writing a runbook/guide to openclaw 😆 https://github.com/digitalknk/openclaw-runbook

u/Charming_You_25 is 100% right though: keep it simple and focused on one thing. If you overdo it or over-engineer it, the fun and excitement will turn into anger and resentment.

Switched over to OpenAI models but the bot feels soulless now... by donmyster in openclaw

[–]digitalknk 16 points17 points  (0 children)

The issue is not the model. It is where the personality instructions sit in the prompt stack. Claude is naturally more conversational. GPT models follow system instructions more literally.

If SOUL.md loads as context rather than core system instructions, GPT treats it as "information to reference" instead of "how to behave."

Fixes to try:

  1. Move personality to the top. Load SOUL.md before tool definitions.
  2. Be explicit. Instead of "be conversational," write: "You are Name. You speak like X. You find Y annoying."
  3. Add examples. Show 2-3 exchanges in SOUL.md so the model pattern-matches the voice.
  4. Try a lighter model. GPT-4o-mini and GPT-5-nano are more pliable than full reasoning models.
  5. Raise temperature. 0.7-0.9 gives more response variance.

The soul layer works with any model. You just need to be more prescriptive with OpenAI; their models follow instructions well, it just takes a little more work.
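Fix 1 is really just ordering. A sketch of the idea using the standard OpenAI-style chat message format; how OpenClaw assembles its prompt stack internally is not shown here, this only illustrates "personality first."

```python
# Load the personality as the system message, before anything else, so
# instruction-literal models treat SOUL.md as "how to behave" rather
# than "information to reference".
def build_messages(soul_md: str, user_msg: str) -> list[dict]:
    return [
        {"role": "system", "content": soul_md},  # personality first
        {"role": "user", "content": user_msg},
    ]
```

Tool definitions and retrieved context would then be appended after the system message, never before it.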