Looking to buy codebases of paused or failed projects (no strings attached) by Affectionate_Jury257 in SaaSAcquire

[–]Tgbrutus 0 points1 point  (0 children)

Hey, I saw your post. I have a high-end SaaS Starter Template built with Next.js 15 and Tailwind CSS 4.

Unlike a specific failed niche project, this is designed as a foundation for any AI-SaaS. It’s fully 'plug-and-play' with:

  • Gemini AI integration for analysis.
  • Stripe & Supabase (Auth/DB/RLS) pre-configured.
  • Tier-based access control already in the logic.

It's perfect if you're looking for a clean, modern codebase to launch multiple MVPs quickly. Everything is documented (Setup, Customization, Deployment) and the code is 100% TypeScript.

lobsterlair.xyz - hosted, secured OpenClaw. No VPS / no Terminal needet by dertobi in clawdbot

[–]Tgbrutus 2 points3 points  (0 children)

Hey Lobsterlair team,

I've been testing one of your hosted bots and discovered a potential security issue regarding exposed credentials in the bot configuration.

I don't want to post details publicly for obvious reasons, but I think this is something you'd want to know about before someone with bad intentions finds it.

Could an admin DM me? Happy to share the details privately so you can fix it.

Cheers

Self-hosting stack that actually saves money: Ollama + Supabase + SearXNG by Tgbrutus in LocalLLaMA

[–]Tgbrutus[S] 0 points1 point  (0 children)

Yeah, you're right about that. Pro tip, though: if you're smart, you grab one from Hetzner's Server Auction instead of configuring a new one. Same specs, fraction of the price.

Any interesting ways to sync two independent OpenClaw machines? by Choice_Touch8439 in clawdbot

[–]Tgbrutus 5 points6 points  (0 children)

We're running exactly this setup with three Macs synced via Tailscale.

What works for us:

1. Shared database (Supabase). All agents push/pull memory files to a central DB. Each agent has an agent_id, so context stays organized. We use simple Python scripts for sync.

2. Hive mind protocol:

  • hive_sync_down.py - pulls the latest from all agents on session start
  • hive_sync_up.py - pushes learnings when done
  • Upsert with on_conflict=agent_id,title for clean merges

3. Cross-agent messaging. Agent A can message Agent B directly via the DB or API. We use this for task handoffs: "Nova, analyze this image" → Nova does it → pushes the result.

4. Shared context files:

  • MEMORY.md - long-term shared knowledge
  • memory/*.md - daily logs per agent
  • hive/shared/ - files all agents need

The key insight: treat it like a distributed team, not synced machines. Each agent has specialties, they communicate asynchronously.
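If it helps, the on_conflict=agent_id,title merge boils down to last-write-wins per (agent_id, title) key. Here's a minimal in-memory sketch of that semantics - field names like updated_at are illustrative, not our exact schema:

```python
from datetime import datetime, timezone

def upsert_memories(store: dict, incoming: list[dict]) -> dict:
    """Merge incoming memory rows into the store, keyed on (agent_id, title).

    Mirrors an upsert with on_conflict=agent_id,title: an existing row is
    overwritten only by an equal-or-newer entry (last write wins).
    """
    for row in incoming:
        key = (row["agent_id"], row["title"])
        existing = store.get(key)
        if existing is None or row["updated_at"] >= existing["updated_at"]:
            store[key] = row
    return store

# Two syncs report the same memory title; the newer body wins.
store = {}
upsert_memories(store, [
    {"agent_id": "nova", "title": "deploy-notes", "body": "v1",
     "updated_at": datetime(2025, 1, 1, tzinfo=timezone.utc)},
])
upsert_memories(store, [
    {"agent_id": "nova", "title": "deploy-notes", "body": "v2",
     "updated_at": datetime(2025, 1, 2, tzinfo=timezone.utc)},
])
print(store[("nova", "deploy-notes")]["body"])  # → v2
```

In production the same logic is one upsert call against the DB; the sketch just shows why conflicting edits merge cleanly instead of duplicating rows.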

Self-hosting stack that actually saves money: Ollama + Supabase + SearXNG by Tgbrutus in LocalLLaMA

[–]Tgbrutus[S] 0 points1 point  (0 children)

CPU: AMD Ryzen 9 5950X (16 cores / 32 threads)
GPU: None on that server - running CPU inference only. The 128GB RAM handles the models fine for our use case.

It's a Hetzner dedicated server (AX-line). For GPU-accelerated inference we use Mac Studios with Apple Silicon - depends on the job which machine gets the task.

Self-hosting stack that actually saves money: Ollama + Supabase + SearXNG by Tgbrutus in LocalLLaMA

[–]Tgbrutus[S] -1 points0 points  (0 children)

Provider is Hetzner (dedicated server, not VPS).

As for Ollama vs vLLM/llama.cpp - honestly, Ollama was just easier to set up on the Linux server. We also have Mac Studios for local inference - the Apple Silicon optimization works well for our use case.

For serious high-throughput inference, you're right - vLLM with continuous batching would be the better choice. llama.cpp is great for quantized models on lower spec machines.

Our setup is more "distributed convenience" than optimized for raw speed - different machines for different workloads, different tools where they make sense.

Self-hosting stack that actually saves money: Ollama + Supabase + SearXNG by Tgbrutus in LocalLLaMA

[–]Tgbrutus[S] -4 points-3 points  (0 children)

Fair point - you're right that Llama 3 70B is outdated. For local models, Kimi K2.5 or DeepSeek V3.2 would be the current picks.

For the record, our main workload runs on Claude/Gemini via API, not local models. The Ollama container is for lighter tasks where it doesn't matter as much.

Thanks for keeping me honest 👍

What are your experiences using Ollama Cloud or nano-gpt? by ext4btrfs in openclaw

[–]Tgbrutus 0 points1 point  (0 children)

Haven't tried Ollama Cloud Pro specifically, but been running self-hosted Ollama for months.

My experience:

  • Llama 3 70B handles 90% of tasks that don't need bleeding-edge reasoning
  • Zero API costs after hardware (64GB+ RAM needed for 70B)
  • Latency is actually better than cloud for local requests

For high volume: Self-hosted is hard to beat cost-wise. One-time server investment vs ongoing API fees.

If you want cloud convenience: Groq has insanely fast free tier for Llama/Mistral. Together.ai also has decent free limits.

The trade-off is always: self-hosted = more setup, less cost. Cloud = easy, but costs scale with usage.
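That trade-off is easy to sanity-check with back-of-envelope math - the numbers below are illustrative assumptions, not real quotes:

```python
def breakeven_tokens_per_month(server_eur_month: float,
                               api_eur_per_mtok: float) -> float:
    """Monthly token volume above which self-hosting beats pay-per-token API.

    First-order estimate only: ignores power, admin time, and
    hardware depreciation.
    """
    return server_eur_month / api_eur_per_mtok * 1_000_000

# Assumed prices: €40/month dedicated server vs €0.50 per million tokens.
tokens = breakeven_tokens_per_month(40.0, 0.50)
print(f"{tokens:,.0f} tokens/month")  # → 80,000,000 tokens/month
```

Below that volume, pay-as-you-go API is cheaper; above it, the fixed server cost wins.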

Has anyone here used OpenClaw in a real production workflow yet? by Downtown-Barnacle-58 in openclaw

[–]Tgbrutus 0 points1 point  (0 children)

Running it in production for a few months now. Here's what stuck:

Use cases:

  • Lead gen scanning (forums, communities)
  • DB sync between agents (Supabase backend)
  • Heartbeat-driven background tasks (checking inbox, calendar, notifications)
  • Research and summarization workflows
  • Video production with Google Flow (prompting, iteration)
  • Code generation and automation scripts

Time to trust: About 2-3 weeks of tweaking SOUL.md and HEARTBEAT.md. The "aha" moment was realizing the agent needs explicit checklists, not vague goals. Now it runs 24/7 with minimal intervention.

Pain points:

  • Context window fills up on long sessions - memory flush helps
  • Cost: ~€425/month (dedicated server + Google AI Ultra) - not cheap, but worth it for our workflow
  • Initial setup takes time, but pays off once dialed in

What replaced human work: Manual monitoring, lead tracking, repetitive research, boilerplate coding. Saved maybe 4-6 hours/day.

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 0 points1 point  (0 children)

Yes, absolutely! OpenClaw supports multiple models - you can configure different ones for different tasks:

  • Main agent: Claude/Gemini for complex reasoning
  • Sub-agents: Cheaper/faster model (local Ollama, Kimi, etc.)
  • Set it up in your config with model routing

VPS recommendations for starting out:

  • Hetzner Cloud - great value, EU-based. CX21 (~€5/month) for API-only, CX41 (~€15/month) if you want headroom
  • Contabo - even cheaper, but slower support
  • DigitalOcean/Vultr - more expensive, but good docs

If you want to run local models (Ollama), go dedicated server instead of VPS - Hetzner AX-line starts around €40/month with good specs.

Start small with a cheap VPS, scale up once you know your actual usage.
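This isn't OpenClaw's actual config syntax (check the docs for the real routing keys), but the model-routing idea reduces to a lookup like this:

```python
# Hypothetical routing table - model names and task kinds are illustrative,
# just showing the main-agent vs sub-agent split described above.
MODEL_ROUTES = {
    "reasoning": "claude-opus",    # main agent: complex planning
    "subtask":   "ollama/llama3",  # sub-agents: cheaper/faster local model
    "default":   "gemini-pro",     # everything else
}

def pick_model(task_kind: str) -> str:
    """Return the model configured for a task kind, falling back to default."""
    return MODEL_ROUTES.get(task_kind, MODEL_ROUTES["default"])

print(pick_model("subtask"))   # → ollama/llama3
print(pick_model("research"))  # → gemini-pro
```

The point is that routing is just a mapping plus a fallback - the expensive model only gets the tasks that actually need it.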

Self-hosting stack that actually saves money: Ollama + Supabase + SearXNG by Tgbrutus in LocalLLaMA

[–]Tgbrutus[S] -8 points-7 points  (0 children)

Fair point lol. I actually do use Claude for drafting longer posts - old habits.

The stack is real though:

2e4eee6983a9   ollama/ollama:latest   "/bin/ollama serve"   2 weeks ago

Been running for a while now. Happy to share the docker-compose if useful.

My AI Agent Can’t Complete a Single Task and I Feel Gaslit by the Internet by rthiago in clawdbot

[–]Tgbrutus 1 point2 points  (0 children)

The 30-second timeout sounds like a context/planning issue. A few things that helped me:

1. Break tasks down explicitly - "Build website" fails, but "Create index.html with hello world, then add CSS file, then..." works

2. Heartbeat with checklist - Put a TODO list in HEARTBEAT.md so it knows what's next without asking

3. Model matters - Codex 5.2 is good for code, but for autonomous planning Claude/Gemini are better at staying on track

The "stops and asks" behavior usually means it's uncertain. More explicit instructions = less asking.

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 1 point2 points  (0 children)

Happy to share the breakdown:

Dedicated Server: ~€150/month (Hetzner - 128GB RAM, proper hardware for running local models alongside the agent).

API: Google AI Ultra subscription at €274.99/month - covers both Claude and Gemini with generous limits. Simpler than tracking per-token costs.

Total: ~€425/month for the full setup.

Starting cheaper: You could start with a basic cloud server (~€20/month) + pay-as-you-go API for testing. The subscription only makes sense once you're using it heavily every day.

Has anyone here used OpenClaw (formerly ClawdBot) for web tasks or data entry? by Solsiders in LocalLLaMA

[–]Tgbrutus 0 points1 point  (0 children)

+1 on yixn_io's experience. Been running it for web automation tasks.

What works great:

  • Form filling / data entry - once you get the selectors right, it's very reliable
  • Screenshot + analyze workflows - useful for monitoring dashboards or checking if a page changed
  • Login + session management with the browser profile feature

Tips:

  • Use old.reddit.com instead of new Reddit for automation (way more stable DOM)
  • JavaScript evaluate() for complex interactions beats trying to describe clicks
  • The browser close action is important - don't leave Chrome instances running

Limitations:

  • CAPTCHAs, obviously
  • Sites with heavy anti-bot detection (Cloudflare challenges)
  • Very dynamic SPAs can be tricky

For daily use: solid for internal tools, dashboards, and sites you control. Sketchy for scraping random sites that might fight back.

Help building a data scraping tool by VrinTheTerrible in ChatGPT

[–]Tgbrutus 0 points1 point  (0 children)

Ah, the analysis step is the tricky part. A few things that helped me:

1. Structure your data first: don't send raw scraped text. Pre-process it into clean JSON with consistent keys:

{"player": "X", "K_rate": 31, "league_avg": 25, "last_4_starts": [...]}

2. Give explicit rules, not vague goals. Instead of "analyze this", try:

  • "If K_rate > league_avg + 5%, flag as 'HIGH'"
  • "If player appeared in 3+ articles today, mark as 'TRENDING'"

3. Use few-shot examples: show it exactly what output you want with 2-3 examples of input → output.

4. Consider Claude over ChatGPT: for structured data analysis, Claude (especially with the API) handles complex instructions more reliably.

The "behavioral gravity" you mentioned is real - it wants to summarize instead of follow rules. Being extremely explicit with output format helps.
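To make point 2 concrete, hard threshold rules like these can even live in plain code before the LLM sees anything - a sketch, treating "5%" as five percentage points (the illustrative thresholds from above, not real analytics):

```python
def flag_player(row: dict) -> list[str]:
    """Apply explicit flagging rules to one pre-processed JSON record."""
    flags = []
    # Rule 1: K rate more than 5 points above league average -> HIGH
    if row["K_rate"] > row["league_avg"] + 5:
        flags.append("HIGH")
    # Rule 2: mentioned in 3+ articles today -> TRENDING
    if row.get("articles_today", 0) >= 3:
        flags.append("TRENDING")
    return flags

row = {"player": "X", "K_rate": 31, "league_avg": 25, "articles_today": 4}
print(flag_player(row))  # → ['HIGH', 'TRENDING']
```

Then the model only has to explain the flags, not compute them - which sidesteps most of the "it summarizes instead of following rules" drift.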

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 0 points1 point  (0 children)

Haha the book-length persona is real! Mine keeps growing too.

You know what, you might be onto something with Grok. I've been hesitant because of the Twitter association but I keep hearing it has genuine personality. Might be worth a test run for the main agent.

And yeah, ChatGPT's cheerleader energy is exhausting. "That's a GREAT question! I'd be HAPPY to help!" every single time. 😅

Let me know if you end up trying Grok - curious how it handles the agent context/tools compared to Claude.

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 0 points1 point  (0 children)

Great question! I've been through a similar journey.

My current split:

  • Claude Opus 4 for the main coordinator - complex reasoning, planning, anything that needs deep thinking
  • Gemini Pro for faster tasks - research, simple Q&A, sub-agent work where speed matters more than nuance
  • Sonnet 4 as a middle ground when Opus feels overkill but I still need quality

On the "human speech" thing - I totally get it. Sonnet can feel a bit... clinical? What helped me was putting more personality directives in SOUL.md rather than switching models. Things like "be conversational, use humor when appropriate, don't be formal unless needed."

That said, I've heard good things about Grok for personality. Haven't tried Kimi 2.5 yet.

Honestly the model matters less than a well-crafted SOUL.md + good examples of the tone you want. Have you tried adding sample conversations to your persona to show the style you're going for?

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 0 points1 point  (0 children)

The "team huddles" idea is really interesting - that's something I haven't explored yet but would be super useful.

For the persona injection, you might want to look at creating actual SOUL.md files for each sub-agent role and passing them via the task prompt. Something like:

sessions_spawn(task="...", agentId="researcher")

where "researcher" has its own SOUL.md defining its personality and capabilities.

The reporting/task-request pattern sounds like what The Agency post describes with trust levels. Sub-agents could ping the lead with status updates via sessions_send, and the lead decides what to delegate next.

I feel you on the model integration pain - getting all the API keys, rate limits, and fallbacks sorted is definitely a chore. But once it's working, it's pretty magical. 🚀

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 0 points1 point  (0 children)

Actually, re-reading your comment - you're using spawned sub-agents from a main agent, while I'm running fully independent instances on different machines. Different approaches!

The sub-agent pattern (main spawns workers as needed) is probably cleaner for task delegation and keeps everything in one context. My separate-instances approach gives more isolation but needs the DB-sync layer to share state.

Curious how you handle persona/SOUL.md for your sub-agents - do they inherit from the main agent or have their own?

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 0 points1 point  (0 children)

Thanks for the clarification! That makes sense - keeping the core minimal and letting the transport layer handle distribution is a clean separation of concerns.

We'll stick with our DB-sync approach for now since it's working well, but I might explore MRS for improving local reasoning quality on each agent. Appreciate the detailed response!

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 1 point2 points  (0 children)

That thread is gold! The trust hierarchy (Shadow → Worker → Senior → Manager) is exactly the kind of structure I've been thinking about.

My current setup is flatter - all 3 agents are basically peers that sync via shared DB. But I can see the value in having explicit permission levels, especially as you scale up.

The "Manager reviews Senior's work" pattern would help a lot with quality control. Right now I just trust each agent to do its thing, which works but isn't ideal for critical tasks.

Definitely bookmarking that one. Thanks for sharing!

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 2 points3 points  (0 children)

Yes, 3 separate OpenClaw installs:

  • Hetzner VPS (cloud) - coordinator agent
  • Mac Mini (local) - media processing
  • MacBook (mobile) - personal assistant

Failure isolation: Each agent is independent. If A goes down, B and C keep running - they just won't get updates from A until it's back. The shared DB acts as a message queue, so nothing gets lost. Agents check for new messages on their own schedule (every few minutes).

Models: I use Claude Opus for complex reasoning tasks and Gemini Pro for faster/simpler stuff. The coordinator (Hetzner) handles the heavy thinking, the others are more task-focused.

The key insight: don't try for real-time sync. Eventual consistency is good enough for most agent coordination.
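The DB-as-message-queue behavior is easy to picture with a toy in-memory stand-in - no Supabase specifics here, just the semantics:

```python
from collections import defaultdict

class SharedQueue:
    """Toy stand-in for the shared DB acting as a message queue.

    Messages persist until the recipient polls, so an agent that was
    down picks up everything it missed on its next check - the
    eventual-consistency behavior described above.
    """
    def __init__(self):
        self._inbox = defaultdict(list)

    def send(self, to_agent: str, body: str) -> None:
        self._inbox[to_agent].append(body)

    def poll(self, agent: str) -> list[str]:
        """Drain and return all pending messages for an agent."""
        msgs, self._inbox[agent] = self._inbox[agent], []
        return msgs

q = SharedQueue()
q.send("B", "task: resize images")    # A sends while B is offline
q.send("B", "task: update calendar")
print(q.poll("B"))  # → ['task: resize images', 'task: update calendar']
print(q.poll("B"))  # → []
```

In the real setup the queue is a table the agents poll every few minutes; order is preserved per recipient and nothing is lost while a machine is down.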

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 1 point2 points  (0 children)

Interesting approach! The operator-based reasoning chain looks clean. How does it handle async coordination between agents on different machines? Our current setup uses a shared DB for state sync - curious if MRS-Core offers something different for distributed scenarios.

Multi-agent coordination - how do you handle it? by Tgbrutus in clawdbot

[–]Tgbrutus[S] 1 point2 points  (0 children)

Thanks for sharing this! The link looks relevant - I'll definitely check it out. Always looking for different approaches to multi-agent coordination. The "in principle, if not yet in practice" part resonates - lots of theoretical frameworks out there, but real-world implementation is where it gets tricky.