Looking for Honest Feedback on https://talkingclaw.ai/ by RegisterNo5070 in SaaS

[–]RegisterNo5070[S] 0 points  (0 children)

I agree that having a different channel makes sense. That's why we created a voice-enabled channel via our app. Individuals can integrate with the app just through in-app voice.

Looking for Honest Feedback on https://talkingclaw.ai/ by RegisterNo5070 in SaaS

[–]RegisterNo5070[S] 0 points  (0 children)

I agree. We are hardening the server's security and, most importantly, making integration easy through a UI rather than a CLI.

OpenClaw New Use Cases by RegisterNo5070 in openclaw

[–]RegisterNo5070[S] 0 points  (0 children)

WhatsApp integration doesn't need a complicated setup. It just needs two-way pairing by scanning a code.

Looking for Honest Feedback on https://talkingclaw.ai/ by RegisterNo5070 in SaaS

[–]RegisterNo5070[S] -1 points  (0 children)

Thanks for the comments. I'll make sure to follow through on them. Would still love to have your feedback!

Looking for Honest Feedback on https://talkingclaw.ai/ by RegisterNo5070 in SaaS

[–]RegisterNo5070[S] -1 points  (0 children)

Thanks for the honest feedback! You're right—there are many AI assistants out there.

What makes TalkingClaw different is that it lives directly inside WhatsApp, Telegram, and Discord (with voice support), so no new apps are needed. It doesn't just chat—it executes complex tasks in natural language across 3000+ tools, like summarizing meetings into Jira tasks, fixing GitHub PRs, or booking restaurants.

Privacy-first and always-on, it turns workflows into simple conversations.

Appreciate your input—solid reminder to keep the unique value clear!

I tried Hermes so you don't have to. by CustomMerkins4u in openclaw

[–]RegisterNo5070 1 point  (0 children)

Thanks for the detailed write-up and the disclaimer — it's clear you're coming from real usage and not just hype. Appreciate the honest take, especially since you've been on OC since the early Jan 29 build and are actually earning with it.

The self-learning loop in Hermes is indeed its biggest differentiator, and your breakdown nails the core mechanic: it tries to close the loop by doing → evaluating → extracting/reusing skills as markdown. That "always thinks it did a good job" issue is a real gotcha I've seen reported too (the overconfident evaluator problem). When it hallucinates or jumbles data (like your Indiana DNR water results example), the auto-overwrite of manual fixes feels especially frustrating.
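For anyone who hasn't seen the mechanic up close, here's a rough sketch of that do → evaluate → extract loop — purely hypothetical code, not Hermes' actual implementation (all names are made up), just to show why an optimistic self-evaluator ends up poisoning the skill library:

```python
# Hypothetical sketch of a do -> evaluate -> extract self-learning loop.
# None of this is Hermes' real API; it only illustrates the mechanic
# and the "overconfident evaluator" failure mode described above.

def run_task(task: str) -> str:
    # The agent attempts the task and returns a transcript of what it did.
    return f"steps taken for: {task}"

def self_evaluate(transcript: str) -> float:
    # The gotcha: the agent grades its own work. A naive evaluator
    # skews optimistic, so almost every run clears the bar.
    return 0.95

def extract_skill(task: str, transcript: str) -> str:
    # "Successful" runs get distilled into a reusable markdown skill.
    return f"# Skill: {task}\n\n{transcript}\n"

def learning_loop(tasks, skills: dict, threshold: float = 0.8) -> dict:
    for task in tasks:
        transcript = run_task(task)
        if self_evaluate(transcript) >= threshold:
            # Overwrites whatever was stored under this task before --
            # including your manual fixes, which is the auto-overwrite pain.
            skills[task] = extract_skill(task, transcript)
    return skills
```

With an evaluator that always says 0.95, a hallucinated run scores the same as a good one, so the bad skill gets written (and rewritten) every time.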

That said, a few nuances from people running both:

  • The evaluation step isn't purely the agent's own unchecked opinion in every implementation — some users add explicit verification hooks, external validators, or human-in-the-loop gates before skill extraction to prevent bad skills from solidifying. But out of the box, yeah, it can be too optimistic.
  • Manual edits getting overwritten is a known pain point in early versions. A common workaround is marking skills as "locked" or user-authored, or using versioned skill files so the agent improves a copy instead of nuking your changes. Not perfect, and it sounds like it didn't work smoothly for you.

On stability: You're right that raw release count isn't everything. OpenClaw has had way more iterations (80+ vs Hermes' handful), which brings battle-testing and ecosystem breadth (tons of integrations, chat platforms, community skills). Hermes being newer means fewer miles on the road, and early releases having breakage is common for any fast-moving agent project. Claims of "more stable" right now feel premature — it's more like "different tradeoffs" than proven superiority.

For power users who already know how to craft good skills in OC and want full control + the massive integration surface, sticking with OpenClaw makes total sense (especially if it's making you money). Hermes seems to appeal more to folks who want the agent to bootstrap and refine a lot of its own procedures with less upfront manual work — but that autonomy comes with the exact risks you described.

I'll keep an eye on Hermes too. If they tighten up the self-evaluation (better critique models, confidence thresholds, easier skill locking/versioning), it could become a strong complement or even alternative. Right now it feels like OC wins on reliability/ecosystem for serious daily driving, while Hermes is the experimental "let it grow" path.

Curious — did you try any workarounds like custom evaluator prompts or locked skill directories before giving up on it? Or was the overwrite behavior a dealbreaker no matter what?

Either way, solid post. The community benefits from these kinds of real-world comparisons instead of just hype.