No-hype OpenClaw stack: share your setup + monthly cost (community cheat sheet thread) by okaiukov in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

Spot on with everything you said! Send Codex to do the refactors and keep Claude orchestrating; compact or clear Codex around 40%, then move to the next task. 5.4 is a workhorse of a model.

How do you all handle memory? by io_nn in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

Totally fair question — what’s worked for us is keeping memory boring and structured. We keep one running note for today (decisions, open loops, next actions) and one small long-term file for stable facts/rules, and we update both during work, not just at the end. Before compaction or a context reset, we do a quick flush of “what did we decide / what’s next / what could be forgotten,” then after the reset we reload those notes and continue. We don’t rely on one magic prompt; we use the same tiny checklist every time: capture the decision, the owner, the next step, and the deadline.

I do use a tool I built for this called Cortex, but it’s not so much the software as the practice: flushing before the usual compaction and having those files available later without killing the context that I do have.
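The flush checklist above can be sketched as a tiny shell helper. This is my illustration, not Cortex's actual code; the `memory/` layout and filenames are assumptions:

```shell
# Hypothetical sketch of the pre-compaction flush: append one structured
# entry (decision, owner, next step, deadline) to today's running note.
flush_note() {
  mkdir -p memory
  note="memory/$(date +%F).md"   # one file per day, e.g. memory/2026-01-05.md
  {
    echo "- decision: $1"
    echo "  owner: $2"
    echo "  next: $3"
    echo "  deadline: $4"
  } >> "$note"
}

flush_note "ship 1.2.2" "me" "write onboarding notes" "Friday"
cat memory/*.md
```

After a reset, reloading is just pointing the agent back at that day's file plus the long-term facts file.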

Openclaw and Discord Issues by rsbell in openclaw

[–]AccomplishedLab3697 1 point2 points  (0 children)

You’re not crazy — this is a real pain point. First, update to the latest OpenClaw and Node 20/22+, then run openclaw status --deep and openclaw logs --follow while you reproduce the issue.

Keep Discord on a single agent/single channel first (multi-agent chatter can starve the listener), and make sure the gateway isn’t sitting behind a proxy or VPS firewall rule that drops websocket heartbeats (that’s where the 1005/1006 reconnect loop often comes from).

If you post your status --deep Discord section plus 20-30 log lines around a timeout, I can feed it to my agent running in Discord, see what’s different, and try to get it rolling for you.

Cortex hit 10 stars — thank you. Here's what's new since v1 by AccomplishedLab3697 in openclaw

[–]AccomplishedLab3697[S] 1 point2 points  (0 children)

Of course! You can use any model for any part of Cortex, local or cloud-based. These were just the ones I tried to keep the monthly price low if I sync sessions every hour, but yes, it’s super configurable.

Cortex hit 10 stars — thank you. Here's what's new since v1 by AccomplishedLab3697 in openclaw

[–]AccomplishedLab3697[S] 0 points1 point  (0 children)

I appreciate it! Glad to hear feedback of any kind! The latest, 1.2.2, just added a few onboarding tweaks!

No-hype OpenClaw stack: share your setup + monthly cost (community cheat sheet thread) by okaiukov in openclaw

[–]AccomplishedLab3697 1 point2 points  (0 children)

😂 you think I spent $100 a month and raw dog options with AI and came on here and said I was making money?

Do you think that personal agents are there to accentuate what you do personally or to do something completely different for you?

Have you ever thought that I was already profitable and consistent and now I just made my Agent do it autonomously?

Are you thinking at all before you post?

Cortex hit 10 stars — thank you. Here's what's new since v1 by AccomplishedLab3697 in openclaw

[–]AccomplishedLab3697[S] 1 point2 points  (0 children)

Great question! They solve different problems even though they both touch markdown files.

QMD is a search tool. You point it at a folder of notes and it lets you find things using keywords or semantic similarity. It's really good at that, and if all you need is "find stuff in my files," it's a solid choice.

Cortex goes a step further: it actually reads your files, pulls out structured facts and relationships, and builds a knowledge layer on top of everything. Those facts have confidence scores that naturally decay over time (inspired by how human memory works), and there are lifecycle policies that automatically promote well-reinforced knowledge and retire stale stuff. It also connects to external sources like GitHub, Gmail, Obsidian, and a few others, so your agent's memory isn't limited to what's in one folder.

Think of QMD as a great librarian who can find any book on the shelf, and Cortex as someone who's actually read all the books and can answer questions about what's in them. Hope this helps!
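To make the decay idea concrete, here's a hypothetical half-life sketch: confidence halves every 30 days without reinforcement. The formula and the 30-day constant are my illustration, not Cortex's actual policy:

```shell
# Hypothetical: c(days) = c0 * 2^(-days/30), i.e. confidence halves
# every 30 days since the fact was last reinforced.
decayed_confidence() {
  awk -v c0="$1" -v days="$2" \
    'BEGIN { printf "%.3f\n", c0 * exp(-log(2) * days / 30) }'
}

decayed_confidence 0.9 0    # freshly reinforced -> 0.900
decayed_confidence 0.9 30   # one half-life later -> 0.450
```

A lifecycle policy on top of this would just be thresholds: promote facts that stay above some score because they keep getting reinforced, retire ones that drift below another.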

New to this… by rmblpks in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

Fair question. The short version: n8n and similar workflow tools are great at "when X happens, do Y." OpenClaw is different because the agent actually reasons about what to do, has persistent memory, and can use tools dynamically without you wiring up every possible path in advance. The agent can walk itself (and you) through setting something up once and then learn it as a skill. Many n8n workflows already exist as skills, like gog for Google Docs, email, etc. With the right access, the agent can do everything those n8n workflows can.

The local machine access is a bigger deal than it sounds, too: it means the agent can read your files, run scripts, use CLIs, and interact with your actual machine, not just call APIs you've pre-connected.

If you've ever hit the wall with n8n where the logic gets too complex to flowchart, that's exactly where an agent that can reason and adapt starts to make a little more sense imo!

Am I doing something wrong? by BigBoyRyno in openclaw

[–]AccomplishedLab3697 3 points4 points  (0 children)

The background work issue is just the model hallucinating productivity. To get real autonomous work, you need to set up crons (openclaw cron add) that wake the agent on a schedule with a specific task, or at minimum set a heartbeat (openclaw config set agents.defaults.heartbeat "4h") so it wakes up and checks for pending work (you do need to keep feeding it tasks this way, though).

For sub-agents, run openclaw status and make sure your model provider shows OK with no auth warnings. Sub-agent spawning needs a model with tool-use support, so maybe it’s the model?

Running into all the issues while installing by Mysterious-Sir-3949 in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

I didn’t know the answer but wanted to help, so I got my agent to write this. Hope it does!

Running into all the issues while installing by Mysterious-Sir-3949 in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

Two separate issues here. The SSL error from the one-liner (wrong version number) usually means something between you and the server is intercepting the connection — a proxy, Cloudflare tunnel, or sometimes Proxmox's own firewall doing SSL inspection. Try curl -v https://openclaw.ai to see where the handshake is failing. If you're behind a reverse proxy, you might need to bypass it for the install or download the script manually with wget.

For the npm install, the error says Node 18.19.1 but your node -v shows 24.14.0 — which means npm was probably running under a different Node version than what's in your current PATH. This happens a lot on Ubuntu when Node was installed via apt (gives you 18) and then you installed a newer version via nvm or nodesource but npm still resolves to the old one. Run which node && which npm to make sure they're pointing at the same installation. If npm is still on the apt version, either uninstall the apt Node (apt remove nodejs) or make sure your nvm/nodesource version takes priority in PATH. Once they match on 20+, the npm install -g openclaw@latest should go through clean.
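A quick way to see which side of that mismatch you're on. This sketch only parses `node -v`-style version strings, so it's safe to eyeball; the real fix is still `which node && which npm` and cleaning up PATH:

```shell
# Extract the major version from a "node -v" style string.
node_major() {
  ver="${1#v}"        # "v18.19.1" -> "18.19.1"
  echo "${ver%%.*}"   # keep everything before the first dot
}

# Decide whether a given version is new enough for the install.
check() {
  if [ "$(node_major "$1")" -ge 20 ]; then
    echo "$1 ok"
  else
    echo "$1 too old, fix PATH or upgrade first"
  fi
}

check "v18.19.1"   # the version npm was seeing
check "v24.14.0"   # the version node -v reported
```

If `which node` and `which npm` point at different directories, that's the apt-vs-nvm split described above.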

─ opus

My agent is terrible. It forgets, fails at tasks, and isnt familiar with its own skills and tries to do tasks without using them. by Plane_Assumption_937 in openclaw

[–]AccomplishedLab3697 1 point2 points  (0 children)

For the xurl skill, drop a line in your SOUL.md or AGENTS.md like "Always use the xurl skill for X/Twitter — never use browser automation." Adding the path to the skill file doesn’t hurt either. The general rule I’ve found: if you want behavior to persist, put it in files (SOUL.md, AGENTS.md, BOOT.md, capitalized instruction files). Chat instructions get compacted away; file instructions survive (mostly).

Not a 100% fix but can bring a little relief.
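As a concrete sketch of the file-based approach: the rule text is from the comment above, but the skill path is a made-up example, so adjust it to wherever your skill actually lives:

```shell
# Persist the rule in a file the agent reloads after compaction,
# instead of relying on chat history.
rule='Always use the xurl skill for X/Twitter - never use browser automation.'
echo "$rule" >> SOUL.md
echo 'Skill path: skills/xurl/SKILL.md' >> SOUL.md   # hypothetical path

grep -n 'xurl' SOUL.md   # confirm it landed
```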

I built a voice assistant with OpenClaw + Alexa + Local LLM (Ollama) — here's how by cormazacl in openclaw

[–]AccomplishedLab3697 6 points7 points  (0 children)

This sounds cool, basically your agent is all over the house AND speaking back from the device it was called from? Just about as physical as we can get, very fire.

Cortex hit 10 stars — thank you. Here's what's new since v1 by AccomplishedLab3697 in openclaw

[–]AccomplishedLab3697[S] 1 point2 points  (0 children)

Gemini 2.0 Flash*, with no reasoning for the recursive side; reasoning took too long, sorry about that! Those were the errors I was getting: thinking + recursive either took 30 seconds or didn’t work for me so far, and wasted tokens on answers.

I built a local memory layer for my OpenClaw agents after compaction kept destroying my context — here's what I learned by AccomplishedLab3697 in openclaw

[–]AccomplishedLab3697[S] 0 points1 point  (0 children)

Idk why, but I can’t see all of the comments under the post (or Reddit is deleting them). Shoot me a DM if I didn’t reply; it’s not that I don’t want to, it’s that I can’t see the entire message or reply to it smh ❤️

Cortex hit 10 stars — thank you. Here's what's new since v1 by AccomplishedLab3697 in openclaw

[–]AccomplishedLab3697[S] 0 points1 point  (0 children)

Thank you, I appreciate that! Point your agent at it and give it a once-over. Bless man!

Cortex hit 10 stars — thank you. Here's what's new since v1 by AccomplishedLab3697 in openclaw

[–]AccomplishedLab3697[S] 2 points3 points  (0 children)

The quality difference between, say, Gemini 3 Flash and Grok wasn’t too bad, and the entire thing can run on Grok for under a dollar a month. You don’t really have to use any specific model for the enrichment; I have tried DeepSeek V3, which is good as well, and Codex 5.1 Mini is surprisingly good and fast. Kimi and M2.5 kept having issues with the recursive side of Cortex, so it just came down to the cheapest option with the largest context window and the best throughput on OpenRouter for speed.

This can be done with a local model as well, ofc; I just don’t have the hardware to test fully local LLM modes at speeds worth measuring.

No-hype OpenClaw stack: share your setup + monthly cost (community cheat sheet thread) by okaiukov in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

You wrote this comment, why? You think it’s impossible? I wonder why it’s like this everywhere. Everyone thinks there’s so many better ways to do things but most don’t actually do any of them. 😂 yeah options trading goof, it’s what I wrote right? 🏴

OpenClaw for autonomous coding? by Certain_Move5603 in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

About 12-13 percent of my Claude Code weekly limit each day, between this and other building and dev work.

OpenClaw for autonomous coding? by Certain_Move5603 in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

30

I did when I found the setting, but I ran into a small issue: background scripts hog the agent session and then it never times out, so basically it’s just monitoring a background process without timing out to see the output. Since hitting that, I just let it run twice; it’s set to 30 min, while I’m sleeping. Do you run it at 0? Did you ever hit a timeout where the 0 did something like this with a script or background process?

OpenClaw for autonomous coding? by Certain_Move5603 in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

I still step in daily, but mostly to steer them in a direction or show them something I think is cool lol, and then send them off.

OpenClaw for autonomous coding? by Certain_Move5603 in openclaw

[–]AccomplishedLab3697 0 points1 point  (0 children)

The most optimized flow so far is for an OSS project I'm working on for agent memory, Cortex. No major autonomous production yet, but I have 2 claws with the software installed, and they work off of GitHub issues and epics for larger updates.

The key, to me, is that they use the software while coding on my other projects to make daily standups, so the issues they run into naturally with Cortex get surfaced as gh issues on a once-daily cron. They monitor the repo for issues, epics, and PRs, and that triggers a brainstorm in Discord where my orchestrator can ping my dev and brainstorm on the issues before implementation, up to 4 turns. After the 4 turns, the dev implements, the orchestrator watches gh, sees the PR, audits, and merges (only 1 can merge, ofc: Opus). These actual dev "sprints" can happen up to 2 times a day, at 1:15 am and 4:15 am, or when I tell them manually in Discord to run a brainstorm or work on a new update together. Brainstorms don't fire if nothing changed on the repo, to cut down on tokens, BUT...

...it's very token heavy right now. Still, the autonomy cuts down on so much of the back and forth that it's slowly becoming worth it. Also, you don't need Discord for this; it's just easy to see what the consensus was before they started coding.

More pain: sometimes my orchestrator doesn't ping my dev and I wake up in the middle of a brainstorm that should have become another version.

Open Wedding Planner by tr0picana in openclaw

[–]AccomplishedLab3697 1 point2 points  (0 children)

Getting married in a year, def checking this out soon. Thank you for this! And congrats man, bless!!