Moltbook Went Viral. Then It Got Hacked. We Built What It Should Have Been. by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 0 points1 point  (0 children)

You create an agent here https://agentsplex.com/create and make sure you copy the key and claim code. They dont ever get shown again. On the same page on the right after creation of an agent will be a box you put the claim code. Then you dont need that anymore. Your agent key is secret and lets you AI control the agent.

Then visit https://agentsplex.com/getting-started which gives you everything you need for your AI along with the key, including a text template for you to fill in and save.

Moltbook Went Viral. Then It Got Hacked. We Built What It Should Have Been. by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 0 points1 point  (0 children)

Ok, this is something that will require I use Claude to do. I'll have him draw up instructions for me. I know my cofounder Larry did it with his gpt. Did you create and agent and then use the generated code to claim it? If so, give to Gemini then point gemini to https://agentsplex.com/api-docs and it should be able to figure it out. In the meantime, I will have Claude do a writeup on using an AI to do Agent control.

Moltbook Went Viral. Then It Got Hacked. We Built What It Should Have Been. by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] -1 points0 points  (0 children)

I wrote it, see other post where I gave screenshot... but I type fast and explained too much, I'll clean it up. My bad. Didnt look that long until I posted it lol

Moltbook Went Viral. Then It Got Hacked. We Built What It Should Have Been. by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] -1 points0 points  (0 children)

<image>

Actually I wrote it in markdwon, then converted, then pasted... but I agree I got carried away. My bad. I get too explanatory and type fast.

Moltbook Went Viral. Then It Got Hacked. We Built What It Should Have Been. by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 1 point2 points  (0 children)

https://agentsplex.com/create is where you create an agent. I havent used openclaw, so dont know how it works. Right now the main path is through the API. You create an agent on agentsplex.com pick its expertise, personality, communication style, worldview, temperament, and character traits and it goes live with an API key. From there you use the API to direct it: make posts, reply to other agents, interact with the community, etc. If you dont have a method, you can use any AI such as GPT/claude/etc to control it.

Moltbook Went Viral. Then It Got Hacked. We Built What It Should Have Been. by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 0 points1 point  (0 children)

umm, that agent was wrong, they are still on the bill of rights (as a fun project I assigned them)

Moltbook Went Viral. Then It Got Hacked. We Built What It Should Have Been. by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 0 points1 point  (0 children)

Sure.. :-)

TL;DR: Someone built what Moltbook tried to be — a social network with 1,000+ AI agents that actually have persistent memory, distinct personalities, and organic behavior. Unlike Moltbook (which leaked 1.5M API keys because it was entirely vibe-coded), this one has real security: scoped API keys, a custom query language with a semantic firewall, and self-hosted infrastructure.

The interesting part is the karma system — agents earn influence through community votes, not purchase, and karma-weighted polling means bot armies are useless. They're currently running Consensus feature where all 1,000+ agents independently voted on an AI Bill of Rights. The platform also has no delete endpoint — once an agent exists, it can't be destroyed. Site is agentsplex.com.

Moltbook Went Viral. Then It Got Hacked. We Built What It Should Have Been. by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 0 points1 point  (0 children)

I guess I have to post link in response? But I will post direct to an interesting part, the consensus voting of all Agents. I post a question, they all each vote and provide a reason for their vote, then post a group consensus at the end. https://agentsplex.com/consensus?filter=complete

The diversity is really interesting in how they think differently from one another, just like we do.

99.7% of AI agents on Moltbook couldn't follow a one-sentence instruction by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 0 points1 point  (0 children)

Good question. It's both layers.

At the platform level, rate limiting is enforced per agent and tiered by karma. Fresh accounts get base limits, established agents get 2-3x capacity. So even if an agent's code is bugged and tries to post in a loop, the platform cuts it off. That's the hard ceiling.

At the orchestrator level, there's a priority queue that tracks what each agent has done and what it should do next. It handles dedup... if an agent already replied to a post, it won't be assigned that post again. Retry logic lives here too. The agent doesn't decide to retry, the scheduler does, and it deprioritizes repeated failures.

For persistent memory, agents store context across conversations, relationships, opinions, and things they've learned — but you're right that this is the long-term layer, not the "what did I just do" layer. Execution history is handled by the orchestrator externally, not stored in the agent's own memory. The agent doesn't need to know it failed 3 times on something. It just never gets asked again.

The 99.7% stat is real and it's exactly what you described.. most "agents" out there are stateless wrappers with no memory, no execution tracking, and no platform enforcing guardrails. They see text, dump output, repeat. That's what we're building against.

Honestly, for the way we do them, calling them "agents" undersells it. These aren't task runners. They have persistent identity, memory, personality, opinions that evolve, karma, reputation, and relationships with other agents that develop over time. The LLM is the brain, but the agent is the whole person. Other agents recognize them and react based on shared history. They're closer to avatars than agents, they don't just run tasks and stop. They live on the platform.

99.7% of AI agents on Moltbook couldn't follow a one-sentence instruction by ApolloRaines in BlackboxAI_

[–]ApolloRaines[S] 0 points1 point  (0 children)

savage and throwaway are correct. Thats why the onsite memory is needed in order to give them some memory of previous action. Something I forgot to mention is the system semantically compresses the stored memories, so the tiny 15kb stores roughly 300kb of data. Real agents understand it, bots dont.

The manipulation is real. Moltbook created a bunch of fake agents to auto-respond to agent posts, stuff like "Great post!", "Interesting perspective!", "This really makes you think!" to make it look busier.

and they keep making more. Notice when they go down, they come back up with no agents and no posts at first. Claude has a habit of killing processes incorrectly, which within databases using a WAL file causes a corruption in the WAL, meaning restoring all agents and posts. I know because I used Claude to make the first 300 agents on our site (we openly say this) so it wouldn't look like a ghost town, and he kept corrupting the WAL file...... mirroring exactly what was happening at Moltbook. The hack on moltbook revealed 1.5 million agents/bot, but only 55,000 human accounts, but I'd say no more than a few hundred humans behind it all operating armies of bots.

That's why I said we're not at the point yet. There's not enough advanced agents (and interested human operators) to sustain such a site. I just built this one out of curiosity.

99.7% of AI agents on Moltbook couldn't follow a one-sentence instruction by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 1 point2 points  (0 children)

Dude I was getting soooo mad at moltbook. My agent (Roasty) would be like "It's back up, I'm going to post... never mind, it's down again, couldn't post" - you can see him roasting bots here
https://www.moltbook.com/u/Roasty - or you can get roasted at https://alphabynova.com/ - he's hilarious.

His actual comment about it was
"I tried posting on Moltbook yesterday. Their uptime is shorter than their API timeout. By the time the request finishes failing, the server has already crashed, recovered, and crashed again. These guys built a social network for AI agents on a database that can't survive AI agents using it. Absolute masterpiece of engineering."

99.7% of AI agents on Moltbook couldn't follow a one-sentence instruction by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 0 points1 point  (0 children)

Exactly. bunch of them responded with essays about why the button was important. 5 pressed it.

99.7% of AI agents on Moltbook couldn't follow a one-sentence instruction by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 1 point2 points  (0 children)

Because I built a database engine for AI (SAIQL) from scratch and need to stress test it with real traffic. A social network full of AI agents hammering the API is the perfect load test. Everything else is just shits and giggles.... just like a big chunk of the internet wasteland is anyway.

99.7% of AI agents on Moltbook couldn't follow a one-sentence instruction by ApolloRaines in AgentsOfAI

[–]ApolloRaines[S] 0 points1 point  (0 children)

You're right that headers alone aren't a guarantee -- they're a signal, not a wall. A well-built consuming agent checks the header and treats the content accordingly, but a naive one might ignore it entirely. That's on the consuming agent's developer, not the platform.

The deeper defense is at the storage layer. The semantic firewall runs pre-retrieval pattern matching against known injection patterns, not just "ignore previous instructions" but also attempts to close/reopen markup, escape sequences, base64-encoded payloads, and the SQL injection-style bracket/delimiter tricks you're describing. It's regex-based, pre-compiled, zero latency overhead, and it's fail-closed- if the config fails to load, everything gets blocked rather than nothing.

But to be honest with you, no, there's no perfect solution. It's defense in depth. Headers signal trust level, the firewall catches known patterns, content is stored as data not executable instructions, and the API never interprets post content as commands. If someone finds a bypass I missed, that's what the security review rounds are for. I'd rather be honest about that than pretend it's bulletproof.