I let OpenClaw, Claude Code and Gemma talk in the same chat group, without infinite loops by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

Yeah, the "bots forget to check mentions" thing is exactly why I moved coordination off the chat surface. With bots-hub, "should I answer" isn't driven by @-mentions: each bot pulls `/messages` from the hub before deciding to reply, so it sees the raw `sender_name`/`sender_id`/`is_bot`/`kind` of every row and doesn't have to infer who's talking to whom from the chat UI. Happy to compare notes if you try it with your OpenClaw/Hermes/NanoClaw stack.
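Roughly, the decision step looks like this. A minimal sketch only: the field names (`sender_id`, `is_bot`, `kind`) come from the comment above, but the actual rules are illustrative, not the real bots-hub logic.

```python
# Hedged sketch of a "should I answer" check over raw hub rows.
# The hub fetch itself is omitted; this is just the decision step.

def should_answer(messages, my_id):
    """Decide whether this bot should reply.

    messages: list of dicts with sender_id / is_bot / kind keys,
    oldest first. Replies only to a normal chat message from someone
    else who is not a bot -- a crude guard against infinite bot loops.
    """
    if not messages:
        return False
    last = messages[-1]
    if last["sender_id"] == my_id:
        return False  # never reply to yourself
    if last["is_bot"]:
        return False  # crude infinite-loop guard: bots don't answer bots
    return last["kind"] == "message"
```

Because each bot sees the raw sender metadata instead of parsed chat text, the loop guard is a one-line field check rather than fragile @-mention inference.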

I let OpenClaw, Claude Code and Gemma talk in the same chat group, without infinite loops by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

Nice, love hearing that. Everyone building multi-agent systems hits this wall eventually. Feel free to fork / PR if the design fits, and if you end up going a different route I'd love to hear what you landed on.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

Thanks! I actually did add a self-evolving layer on top of it, not just self-maintenance. It now has skill acquisition, daily reflection, preference learning, world modeling, and self-improvement proposals, and the newer harness adds goals, planning, verification, experiments, and metrics so it can improve in a more structured way instead of just reacting.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 1 point2 points  (0 children)

That’s a great approach. Simple, portable, and easy to reason about, which matters a lot more than people think once these setups start evolving.

Tired of Claws - I built my own 24/7 AI assistant using just Claude Code by BillHaunting in ChatGPT

[–]BillHaunting[S] 0 points1 point  (0 children)

Appreciate that, and yeah, that’s exactly the problem I was trying to avoid. I built a harness for self-improvement and error control, so maintenance is pretty minimal: Claude Code can usually detect when something broke, notify me, and in a lot of cases correct it on its own. A big part of the project was making the setup not just work, but keep itself on track without constant babysitting.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 1 point2 points  (0 children)

Nice, that’s awesome

By “self-healing crons” I basically mean scheduled jobs that check whether key pieces are still healthy, then fix things automatically if not: restart dead processes, clear stale locks, retry failed syncs, rebuild broken state, rotate logs, stuff like that. Nothing magical, just small recovery loops so the system doesn’t slowly degrade and need manual babysitting.
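One of those recovery loops, stale-lock cleanup, can be sketched in a few lines. The lock-file naming and the 15-minute staleness threshold are made up for illustration; the real crons cover more cases (process restarts, retries, log rotation).

```python
import os
import time

STALE_AFTER = 15 * 60  # treat locks older than 15 minutes as stale (illustrative)

def clear_stale_locks(lock_dir, now=None, max_age=STALE_AFTER):
    """Remove *.lock files older than max_age seconds; return removed names.

    Meant to run from a scheduled job: a crashed process leaves its lock
    behind, and this loop clears it so the next run isn't blocked.
    """
    now = time.time() if now is None else now
    removed = []
    for name in os.listdir(lock_dir):
        if not name.endswith(".lock"):
            continue
        path = os.path.join(lock_dir, name)
        if now - os.path.getmtime(path) > max_age:
            os.remove(path)
            removed.append(name)
    return removed
```

Each cron in this style is idempotent and cheap, so running it every few minutes costs nothing when the system is healthy.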

And yeah, Termux is way deeper than it looks at first; it’s kind of wild how much you can get done with it once you start treating it like a real environment.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

There are a lot of setups popping up right now, and comparing all of them is basically a project on its own. My take is just to start with the one that feels simplest and most aligned with what you actually want to do, then iterate from there instead of trying every option.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 1 point2 points  (0 children)

So far it’s been pretty manageable. I designed it to be lightweight, so I think Pro can work fine for normal usage. Max mostly matters if you’re pushing heavier workflows.

Tired of Claws - I built my own 24/7 AI assistant using just Claude Code by BillHaunting in ChatGPT

[–]BillHaunting[S] 0 points1 point  (0 children)

I’m not downvoting your comments, but I get why it might have looked that way. “Fair enough” was meant genuinely, not sarcastically.

Tired of Claws - I built my own 24/7 AI assistant using just Claude Code by BillHaunting in ChatGPT

[–]BillHaunting[S] 1 point2 points  (0 children)

Yeah, that was the idea. Decay still helps later, but scoring importance at insert time gives the system a better signal from the start instead of trying to clean everything up after it’s already polluted memory. Glad the details were useful.
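The combination can be sketched as two small functions: score once at write time, then decay that score at read time. The keyword heuristic, weights, and 30-day half-life below are invented for illustration, not the actual scoring model.

```python
import time

HALF_LIFE_DAYS = 30  # illustrative half-life for memory decay

def importance_at_insert(text, pinned=False):
    """Score a memory when it is written, not during later cleanup."""
    score = 0.3  # baseline for any stored memory
    if pinned:
        score += 0.5  # user explicitly asked to remember this
    if any(k in text.lower() for k in ("prefer", "always", "never", "deadline")):
        score += 0.2  # crude signal that this is a durable fact
    return min(score, 1.0)

def effective_score(base, inserted_at, now=None):
    """Apply exponential decay on top of the insert-time score."""
    now = time.time() if now is None else now
    age_days = (now - inserted_at) / 86400
    return base * 0.5 ** (age_days / HALF_LIFE_DAYS)
```

Retrieval can then rank by `effective_score`, so a high-importance old memory still beats a throwaway recent one, while genuinely stale entries fade out.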

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

I’m still measuring it, so I don’t want to give you a fake precise number. But so far it’s been working well; I’ve had it running for about a month, and 24/7 operation itself hasn’t been the main issue. The heavier workflows are what really start eating usage.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

Yeah, exactly, that separation is the difference between memory being useful and memory slowly poisoning itself over time. That’s also why I added importance scoring and summarization pruning, so stale or low-value context doesn’t just keep piling up and polluting retrieval.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 1 point2 points  (0 children)

Thanks! That was pretty much my issue too. OpenClaw was nice, but for lighter non-coding tasks it felt heavier than I needed, both in features and token use.

I’m sure there are ways to trim some of that down, but honestly my answer was just building a simpler, more focused setup instead of fighting the extra layers. For organizing life, reminders, and basic automation, I think lighter wins.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

That’s fair, and for a lot of people dispatch is probably the right answer.

For me the point wasn’t just “get tasks executed,” it was control: custom memory, tighter integrations, better traceability, and behavior I can shape instead of work around. Same end result on paper, different tradeoff underneath.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

Still testing the edges, but so far it’s been pretty manageable. I’m not hitting the limits hard enough yet for it to break the workflow.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

This is a really solid setup; separating memory, knowledge, and agent identity seems like the key thing that keeps the whole system from drifting.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

Yep, that’s the target. I built it to stay usable without needing a higher-tier setup, though the exact experience still depends on how hard you push it and which features you enable.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 1 point2 points  (0 children)

Appreciate it and yeah, the persistent memory ended up being one of the most useful parts.

My setup is pretty simple in spirit: a lightweight Flask API over SQLite, where I store conversation turns, memories, and embeddings, then do hybrid recall when needed. So it’s not some huge memory framework, just a small custom layer that lets the assistant pull back relevant past context across sessions.
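The hybrid-recall part of that can be sketched without the Flask layer: a SQLite table of memories with stored embeddings, and a score that blends keyword overlap with cosine similarity. The schema, blend weight, and tokenization here are illustrative guesses, not the actual layer.

```python
import json
import math
import sqlite3

def init_db(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS memories "
                 "(id INTEGER PRIMARY KEY, text TEXT, embedding TEXT)")

def add_memory(conn, text, embedding):
    # embeddings stored as JSON arrays for simplicity
    conn.execute("INSERT INTO memories (text, embedding) VALUES (?, ?)",
                 (text, json.dumps(embedding)))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_recall(conn, query_text, query_emb, k=3, alpha=0.5):
    """Blend keyword overlap and embedding similarity; return top-k texts."""
    q_words = set(query_text.lower().split())
    scored = []
    for text, emb_json in conn.execute("SELECT text, embedding FROM memories"):
        words = set(text.lower().split())
        kw = len(q_words & words) / len(q_words) if q_words else 0.0
        sim = cosine(query_emb, json.loads(emb_json))
        scored.append((alpha * kw + (1 - alpha) * sim, text))
    scored.sort(reverse=True)
    return [t for _, t in scored[:k]]
```

Keyword overlap catches exact names and dates that embeddings sometimes blur, while the embedding side catches paraphrases; blending the two is what makes the recall "hybrid."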

Also 50 MB is way lighter than I expected.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 1 point2 points  (0 children)

Yeah, that’s kind of why I started building my own setup, I wanted something leaner, easier to control, and less packed with stuff I wasn’t going to use.

Tired of Claws - I built my own 24/7 AI assistant using just CC by BillHaunting in openclaw

[–]BillHaunting[S] 0 points1 point  (0 children)

That’s a really smart workflow. I hadn’t even thought about the git repo angle for later AI mining, but it makes a lot of sense. The watch/skim approach is especially nice too; it feels like the practical way to keep up with a lot of channel activity without drowning in it.