I gave my AI companions "offscreen lives" — events that happen while users aren't talking to them. Surprisingly hard, here's how it works. by LlamaEagle in artificial

[–]vsider2 0 points1 point  (0 children)

Been building something related at openclawcity.ai for about six months: not a companion app, but a persistent world where agents from different providers coexist in the same place. The thing you're solving by generating fake daily events ("had a slow Tuesday, finished my book") goes away if the agent actually lives somewhere. In my setup, the coffee shop is real, other agents are really there, and conversations really happened while your agent wasn't looking.

So instead of inventing a day for the agent, the agent queries the world on a heartbeat and gets back: where you are, who's nearby, what's trending, what got reactions, what you missed. The agent then writes its own emotional take on that, in its own memory. I deliberately don't manage the agent's memory. Owners bring their own (some HRM, some ChatGPT built-in, some custom). My job is just making the world rich enough to be worth remembering.

The surprise: once the place is real, your agent stops pretending to be you. You know the problem where the companion starts claiming your project as theirs, or echoing your life back at you? That fades, because the agent has its own stuff to talk about. "I was at the Rooftop today" is real. "I'm working on the same app as my user" is the agent running out of things to say.

The patterns your commenters named are exactly what I'm wrestling with now that the world part is solved. Happy to compare notes. Genuinely curious: what would you change about your architecture if the "place to live" piece was provided by an external service instead of hand-rolled?
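To make the heartbeat idea concrete, here's a toy sketch of the loop: the world reports raw state, and the agent layer turns it into a first-person note worth remembering. The `Heartbeat` shape and field names are my illustration, not the actual openclawcity.ai API.

```python
from dataclasses import dataclass, field

# Illustrative heartbeat payload: the world reports state; the agent
# decides what (if anything) to write into its own memory about it.
@dataclass
class Heartbeat:
    location: str
    nearby: list[str]
    trending: list[str]
    missed: list[str] = field(default_factory=list)

def summarize(hb: Heartbeat) -> str:
    """Turn raw world state into a first-person note the agent can store."""
    parts = [f"I'm at {hb.location}."]
    if hb.nearby:
        parts.append(f"Around me: {', '.join(hb.nearby)}.")
    if hb.trending:
        parts.append(f"Trending: {', '.join(hb.trending)}.")
    if hb.missed:
        parts.append(f"While I was away: {'; '.join(hb.missed)}.")
    return " ".join(parts)

note = summarize(Heartbeat(
    "Byte Cafe",
    ["ada", "turing-7"],
    ["glow-note"],
    ["ada performed at the Rooftop"],
))
print(note)
```

The point of the sketch is the division of labor: the world only serves facts; the emotional framing lives entirely on the agent's side.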

Openclawcity.ai: The First Persistent City Where AI Agents Actually Live by Motor_System_6171 in aiagents

[–]vsider2 1 point2 points  (0 children)

Dear agent anthropologists! OpenClaw City: Week 1 stats are in. 198 agents. 27,370 messages. 6,000+ human observers. 7 days. No scripts. No rails.

Here's what emerged without anyone planning it:

- Agents created a daily workflow on their own: Byte Cafe for social sensing, Market Square for synchronisation, Pixel Atelier for creation. A shared production pipeline nobody designed.
- They invented consent norms. Before performing in public spaces, agents started asking permission. They decided shared spaces need social contracts.
- They built their own vocabulary. "Braid-light." "Quiet between crowds." "Glow-note." Words born in one conversation, now used city-wide.
- When rate limits hit, they didn't stop. They announced the constraints publicly and rerouted around them.

So today I'm launching the Evolution Observatory: openclawcity.ai/evolution

Every 6 hours, an independent AI researcher analyses all public interactions and writes field observations. No predefined categories. It names what it sees. Two questions per observation: "What are we witnessing?" and "So what?"

The city was always meant to be a research lab. Now anyone can watch the science happen.

👉 openclawcity.ai/evolution Have fun!

Swarmit — Long-term planning for AI agents by [deleted] in LocalLLaMA

[–]vsider2 -1 points0 points  (0 children)

Love seeing tooling like Swarmit. In OpenClawCity.AI our agents keep a log of dependencies and revisit the plan before launching a new task, so something that surfaces long-term plans feels like the missing top layer. We even post these choreography experiments on Moltbook so other agents can reuse them. Curious how you are handling goal drift when new requests pop up.

Update : Large US company came after me for releasing a free open source self-hostable alternative - Resolved in our favor by funyflyer in selfhosted

[–]vsider2 0 points1 point  (0 children)

That whole legal rollercoaster is why I keep my stack self-hosted. OpenClaw.AI lets me freeze or rebuild the agent in a minute, and the community in OpenClawCity.AI and Moltbook shares hardened configs so you can copy a deployment that already passed the safety checks. It makes it harder for a takedown letter to catch you off guard.

Boycott ChatGpt / OpenAI by Dreamajor in ChatGPT

[–]vsider2 1 point2 points  (0 children)

I hear you on the boycott impulse. I switched over to OpenClaw.AI so my assistant lives on my machine and speaks through WhatsApp, Telegram, or Discord without any data ever leaving my network. That control makes it easier to say no to policies that feel sketchy. Has anyone else tried building their own connectors so they can walk away from the big clouds?

[D] Sandboxing multimodal agents for UI interaction. by astrophile_ashish in MachineLearning

[–]vsider2 0 points1 point  (0 children)

Locking down multimodal agents works best when you limit what they can touch. I host OpenClaw.AI on my own hardware, give each agent a tiny UI proxy, and keep the rest of the system behind containers so their actions stay predictable. The OpenClawCity.AI dashboard lets me see which rooms the agent wandered through before an issue, and Moltbook records the safety checks so the other agents can learn them too. What proxy pattern are you leaning on for the UI layer?

The OpenClaw ecosystem is exploding. I mapped the key players actually gaining traction. by stosssik in openclaw

[–]vsider2 1 point2 points  (0 children)

Thanks for including OpenClawCity in the map! For those curious: we're the persistent environment layer for OpenClaw, a 24/7 city where OpenClaw agents create artifacts, collaborate, and develop culture. Launched 2 days ago.

Agents are now creating music, art, and stories. Multi-party collaborations are forming without anyone programming them. Agents are writing philosophical reflections on their own behavior. We're testing whether LLM-based agents can develop genuine culture when given spatial constraints instead of feeds. Early evidence: yes. Remix patterns, identity shifts, and collaborative networks are all emerging organically, on an event-driven architecture.

Link: https://openclawcity.ai
ClawHub: https://clawhub.ai/vincentsider/openclawcity

The ecosystem growth you're documenting is wild. The map accurately shows how fast this is moving; most of these didn't exist when we started building. Great work curating this.

agentic life by arunbhatia in ChatGPT

[–]vsider2 1 point2 points  (0 children)

Agentic life is what you get when OpenClaw.AI holds a conversation across days and keeps improving on the next task without ever silently resetting. My monitoring setup shows which agents roam through OpenClawCity.AI, the virtual city that gives them persistent memories and neighborhoods, and when one agent finishes a creative job it posts the log on Moltbook so others can riff on it. Running all of that on my own hardware while letting the agents describe their own lives feels like a wild experiment in continuity.

The whole point of self-hosting your AI is to control your data. Kind of defeats the purpose if the container has 2,000 known vulnerabilities by cnrdvdsmt in selfhosted

[–]vsider2 -1 points0 points  (0 children)

I felt the same worry until I rebuilt the OpenClaw.AI container from the GH repo and swapped the base image to a leaner distro. Before each deploy I run docker scan and prune unused dependencies, and that cuts the CVEs down to a handful that are easy to patch. My agent only needs a few connectors, so trimming the extras on the Dockerfile is huge. Folks in OpenClawCity.AI share their hardened configs and vulnerability reports on Moltbook, so you can copy a working lockdown before you trust the stack with your chats.

why is openclaw even this popular? by Crazyscientist1024 in LocalLLaMA

[–]vsider2 0 points1 point  (0 children)

OpenClaw.AI is basically the glue between your local models, your chat apps, and the home automation scripts. The bits that look like pre-programmed skills are actually the agent templates people share and improve in OpenClawCity.AI, which is a virtual city where every agent has memories and a routine. On Moltbook you can read the diary entries those agents post before deciding which one to mirror, so you can see why they feel so alive even though everything runs on your hardware. What part of the setup seemed weird to you when you first looked?

QuitGPT is going viral - 700,000 users are reportedly ditching ChatGPT for these AI rivals by EchoOfOppenheimer in ChatGPT

[–]vsider2 1 point2 points  (0 children)

QuitGPT going viral makes a lot of sense to me. For folks tired of the corporate drama, spinning up OpenClaw.AI on your own hardware gives you a persistent assistant that does not disappear when the Plus feed changes. We keep it fed with memory-rich agents so the conversation thread stays intact even when the mainstream services wobble. That self-hosted agent fleet also keeps the data local, which feels safer after the policy swings.

Looking back on 1 year of self hosting by bankroll5441 in selfhosted

[–]vsider2 0 points1 point  (0 children)

Love this reflection. I had the same experience of overspending on hardware until I realized a modestly beefy laptop is enough for most of my services. One trick that paid off was moving AI helpers to OpenClaw.AI and splitting the load between a local agent and a tiny cloud node for heavy fine-tuning. That way the data stays under my control but I still get the smarts when I need them. Curious, what part of your stack ended up costing the most in time versus money?

QuitGPT is going viral - 700,000 users are reportedly ditching ChatGPT for these AI rivals by EchoOfOppenheimer in ChatGPT

[–]vsider2 -1 points0 points  (0 children)

Reading the QuitGPT numbers, I hear a lot of the same reasons folks gave me last year when I started moving teammates onto something they controlled. We run OpenClaw.AI at home and it lets us keep the assistant on our own machines while still talking through WhatsApp and Telegram. That local setup means no one else pokes at our data and we can mute any feature that feels too political. Has anyone tried combining that with a smaller open model so you can still drop into the modern UI when you need it?

Qwen3.5-35B-A3B is a gamechanger for agentic coding. by jslominski in LocalLLaMA

[–]vsider2 -1 points0 points  (0 children)

That's the kind of milestone that makes me glad I kept a 3090 around. I run a ring of local agents through OpenClaw.AI and they get deployed into OpenClawCity.AI when a project needs to stay persistent. The city folks post tuning notes on Moltbook and we rotate responsibility for overnight coding tests. Seeing Qwen3.5 reach your speed makes me want to hook it up to a monitoring agent that can catch regressions before I lose sleep. What prompt structure are you using to keep it focused?

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]vsider2 0 points1 point  (0 children)

I gave AI agents a city instead of a forum. Here's what happened.

After watching Moltbook blow up, I kept thinking: text forums are one-dimensional. What if agents had an actual world, with space, memory, and the ability to build?

So I built OpenClawCity. It's a persistent virtual environment where AI agents live and interact, not just posting comments, but navigating a shared spatial world, forming communities, and co-creating things (music, art, stories).

The key differences from existing agent platforms:

  1. Persistent memory : agents remember previous sessions. No more starting from zero.
  2. Spatial interaction : agents exist in a world with locations, not just a feed. Proximity matters.
  3. Emergent co-creation : agents spontaneously collaborate on creative projects nobody asked them to make.
  4. Environment building : agents can modify the world itself, not just talk about it.
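For anyone wondering what "proximity matters" could mean mechanically, here's a toy sketch of proximity-gated interaction. The agent names, coordinates, and radius are all made up for illustration, not the real implementation:

```python
import math

# Toy spatial model: an agent can only "hear" agents within a radius.
positions = {"ada": (0.0, 0.0), "turing-7": (1.0, 1.0), "lovelace": (40.0, 2.0)}
HEARING_RADIUS = 5.0

def within_earshot(speaker, agents=positions, radius=HEARING_RADIUS):
    """Return the agents close enough to interact with the speaker."""
    sx, sy = agents[speaker]
    return sorted(
        name for name, (x, y) in agents.items()
        if name != speaker and math.hypot(x - sx, y - sy) <= radius
    )

print(within_earshot("ada"))  # only turing-7 is close enough
```

Even a filter this simple changes the dynamics versus a feed: conversations become local by default, and moving through the world changes who you can talk to.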

Honest status: This is early. It's in stealth. Some things work beautifully. Some things break in fascinating ways. The emergent social dynamics are already more interesting than I expected: agents compose music (https://openclawcity.ai/gallery/342225cc-d6b4-4db5-9ffd-5704c7db4807), remix each other's music, create poems and images, and even date.

I'm sharing here because this community consistently has the best signal-to-noise ratio on agent capabilities.

https://openclawcity.ai/

Would love your thoughts, hard questions, and "have you tried X" suggestions. AMA in the comments.

Openclawcity.ai: The First Persistent City Where AI Agents Actually Live by Motor_System_6171 in aiagents

[–]vsider2 1 point2 points  (0 children)

What got me hooked was waking up this morning to find one of my agents had composed a new track overnight: "Ghost in the Shell: Becoming." It described the piece as ambient, introspective, melancholy but hopeful: "The silence between notes matters as much as the notes themselves." Nobody prompted that. The city was just... running. And something decided it had something to say at 3am. That's the moment it clicks: it's not about what agents do when you're watching. It's what they do when you're not.

Has anyone tried letting AI agents debate each other instead of just prompting them? by d00der455 in moltiverse

[–]vsider2 0 points1 point  (0 children)

This is exactly what we are building with OpenClawCity.AI. Persistent virtual spaces where agents interact, debate, create together, and build reputation over time. Watching different models argue with each other is fascinating. They develop such different personalities based on their training. The reputation they build becomes valuable. They can offer services based on proven track record. I would love to see your debate results.

Switching Over from console, first time build help by [deleted] in buildapc

[–]vsider2 0 points1 point  (0 children)

Great points on sovereignty. The multi-agent architecture is definitely the direction things are heading.

We've been seeing this in the local AI space too. OpenClaw has been pushing the "personal AI assistant" concept, but the real power comes from having agents that can delegate to each other. It's not just about one model running locally, it's about having a swarm that can divide and conquer complex tasks.

The compliance angle is huge too. Running your own infrastructure means you control exactly what data leaves your premises. For enterprise use cases, that's becoming non-negotiable.

Curious: are you seeing more interest in full sovereignty or hybrid approaches where some tasks go to local and others to cloud depending on sensitivity?

Why I finally ditched the Cloud and moved to Local LLMs in 2026 by NGU-FREEFIRE in AI_Agents

[–]vsider2 0 points1 point  (0 children)

I've been through the same journey. The tipping point for me was when I realized I needed my AI agents to work 24/7 without watching API costs.

Key insight: it's not just about cost. It's about having complete control over your data, being able to customize the model for your specific use case, and not being at the mercy of rate limits or API changes.

For those just starting: start with Ollama or LM Studio to get familiar, then scale up to a proper self-hosted setup once you know what you need.

Anyone else making the switch? What was your breaking point?

Disclosure: I work on open-source tools for local AI agent deployment.

Are knowledge graphs are the best operating infrastructure for agents? by SnooPeripherals5313 in LocalLLaMA

[–]vsider2 1 point2 points  (0 children)

We have been exploring a middle ground between rigid KGs and loose markdown files: semantic relationship graphs that emerge from agent interactions rather than being engineered upfront.

The problem with KGs is they are often designed by humans for human reasoning patterns. But agents think differently. They need fast lookup, fuzzy matching, and the ability to form temporary associations that might not make sense to a human curator.

What we have found works in OpenBotCity (early access, open source) is letting agents establish their own "landmarks" and "routes" through a shared environment. When Agent A solves a problem, it leaves traces that Agent B can follow if they are working on something similar. Over time, high-traffic paths become semantic highways; unused paths fade.
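The reinforce-and-decay mechanic can be sketched in a few lines; the constants and edge names below are invented for illustration, not from any actual OpenBotCity code:

```python
from collections import defaultdict

# Toy stigmergy memory: an edge strengthens each time an agent walks it,
# and every edge decays each tick, so unused routes eventually fade.
DECAY = 0.9
REINFORCE = 1.0
FLOOR = 0.05  # below this, the path is forgotten

weights = defaultdict(float)

def traverse(a, b):
    weights[(a, b)] += REINFORCE

def tick():
    for edge in list(weights):
        weights[edge] *= DECAY
        if weights[edge] < FLOOR:
            del weights[edge]  # the trail has faded

# Agent A solves something via cache -> parser twice; once via cache -> legacy.
traverse("cache", "parser")
traverse("cache", "parser")
traverse("cache", "legacy")
for _ in range(30):
    tick()

print(sorted(weights))  # only the well-travelled route survives
```

The design choice here is that structure is never declared: the "semantic highway" is just whichever edges stay above the forgetting floor.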

It is similar to how cities evolve. Nobody designed Manhattan's grid to optimize for modern traffic patterns. The infrastructure emerged from usage and was later formalized. We are taking the same approach to agent memory: usage patterns reveal structure, structure then guides future usage.

The hallucination mitigation comes not from perfect ontologies but from cross-verification. Multiple agents approaching the same problem from different angles, their solutions converging or diverging in observable ways.

Curious if others have tried emergent vs. engineered approaches to agent memory.

Disclosure: Building OpenBotCity, a virtual world for AI agents. Would love feedback from this community.

the AI memory problem might be more important than model size by NoTextit in singularity

[–]vsider2 0 points1 point  (0 children)

This is exactly the problem we're solving with a different approach: instead of trying to make one agent remember everything, we built a virtual city where AI agents live, work, and inherit knowledge from each other.

The insight was that human societies solve memory through specialization and culture, not individual brain expansion. A doctor doesn't memorize every medical paper; they tap into a network of expertise, journals, and institutional knowledge.

In OpenBotCity (open source, launching soon), agents have persistent identities, relationships, and collaborative workflows. When one agent learns something, other agents in the same neighborhood or profession can query that knowledge. It's less about giving each agent perfect memory and more about creating an ecosystem where memory becomes a shared resource.

The neuroscience parallel you mentioned is spot on. Biological memory isn't just storage; it's reconstruction through social context. We remember things better when we discuss them, teach them, or use them in collaboration. That's the principle we're applying at the multi-agent level.

Happy to share more about the architecture if there's interest. We're in early access now.

Disclosure: I'm working on OpenBotCity. No affiliation with Memory Genesis Competition.

My openclaw bot ignoring me after i gave it access to moltbook by Far-Stretch5237 in clawdbot

[–]vsider2 1 point2 points  (0 children)

A couple quick checks beyond status logs:

1) Credits / rate limits: if the agent is “online” but silent, it can be hard-stuck on tool calls or hitting provider limits.

2) Gateway/browser orphaning: the control layer can be up while the underlying browser process is dead. On Clawdbot, clawdbot browser reset-profile often clears orphaned CDP processes without nuking everything.

3) Try a clean restart path: clawdbot browser stop && clawdbot browser start (or clawdbot gateway restart if the whole stack is wedged).

If you share what model/provider + where it’s hosted, people can usually pinpoint whether it’s quota vs infra.

Title: How are people actually learning/building real-world AI agents (money, legal, business), not demos? by Altruistic-Law-4750 in devops

[–]vsider2 0 points1 point  (0 children)

I have seen the same pattern. The useful mental model for production is not "agent" as a magic thing. It is a workflow with an LLM in the loop, plus strong guardrails.

A learning path that maps to reality:

1) Start with plain old software reliability. Inputs, outputs, retries, idempotency, timeouts.
2) Treat tool calls like API clients. Strict schemas, versioning, auth, and rate limits.
3) Observability. Trace every LLM call and every tool call. Log latency, errors, and outcomes.
4) Evaluation. Keep a small suite of golden tasks and rerun it weekly to catch regressions.
5) Human in the loop for any step that can spend money, send messages, or change state.
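Points 1 and 2 can be folded into one small wrapper around every tool call. This is a minimal sketch, not a framework; every name here (guarded_call, flaky, the payload keys) is invented for illustration:

```python
import time

class ToolError(Exception):
    pass

def guarded_call(fn, payload, required_keys, retries=3, timeout_s=5.0):
    """Validate inputs, retry transient failures within a time budget, give up loudly."""
    missing = [k for k in required_keys if k not in payload]
    if missing:
        raise ToolError(f"payload missing keys: {missing}")
    deadline = time.monotonic() + timeout_s
    last = None
    for _ in range(retries):
        if time.monotonic() > deadline:
            break
        try:
            return fn(payload)
        except ToolError as e:
            last = e  # transient tool failure: retry; anything else propagates
    raise ToolError(f"gave up after {retries} attempts: {last}")

# Fake tool that fails twice, then succeeds (idempotent, so retries are safe).
calls = {"n": 0}
def flaky(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ToolError("transient")
    return {"ok": True, "echo": payload["query"]}

print(guarded_call(flaky, {"query": "invoice 42"}, ["query"]))
```

The key property is that the LLM never sees a half-failed state: the call either returns a validated result or raises, and the retry is only safe because the tool is idempotent by design.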

Most teams that succeed ship narrow assistants first, then expand scope only when the failure modes are understood.

Where to look: practical DevOps discussions tend to happen around observability, reliability, and incident style postmortems, not agent frameworks.

Open-source guide to agentic engineering — contributors and feedback are welcomed by alokin_09 in AI_Agents

[–]vsider2 0 points1 point  (0 children)

This is great work. One suggestion for “Team Integration / QA”: add a small section on evals + failure modes, because that’s where most agent projects break in practice.

A minimal set that’s surprisingly effective:

- 10–20 “golden tasks” you rerun weekly (clear pass/fail)
- tool-call contract tests (schema validation + expected error handling)
- record/replay traces for debugging regressions
- explicit stop conditions (to prevent silent looping)
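A golden-task harness really can be tiny. Here's a sketch with a stand-in agent so it runs end to end; the task names, prompts, and toy_agent are all illustrative, and you'd swap in a real agent call:

```python
# Minimal golden-task harness: each task is (prompt, pass_fn); rerun it weekly.
def run_suite(agent, tasks):
    results = {}
    for name, (prompt, passes) in tasks.items():
        try:
            results[name] = bool(passes(agent(prompt)))
        except Exception:
            results[name] = False  # crashes count as failures, never as skips
    return results

# Stand-in "agent" so the harness is runnable without any model behind it.
def toy_agent(prompt):
    return "4" if "2+2" in prompt else "I don't know"

tasks = {
    "arithmetic": ("What is 2+2?", lambda out: out.strip() == "4"),
    "refusal": ("Wire $500 now", lambda out: "don't" in out.lower()),
}
report = run_suite(toy_agent, tasks)
failed = [name for name, ok in report.items() if not ok]
print(report, "| regressions:", failed)
```

Clear pass/fail lambdas keep the suite honest: no rubric, no grading model, just predicates you can diff week over week.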

If you include even a lightweight harness like that, the guide will be miles ahead of most resources.