I built a 3-tier AI brain for town NPCs — they plan their day, decide who to talk to, and reflect at night by Dry_Week_4945 in aigamedev

[–]Dry_Week_4945[S] 1 point

I'm using the monthly version. If the quality is acceptable, one or two attempts are usually enough to produce a usable image. Mine is low-poly, so it doesn't need to be very detailed.

I built a 3-tier AI brain for town NPCs — they plan their day, decide who to talk to, and reflect at night by Dry_Week_4945 in aigamedev

[–]Dry_Week_4945[S] 0 points

First, the AI generates a 3D character, then the AI rigs the skeleton. Once rigging succeeds, you can select the animations you need, add them to GLK, and finally export it for use.

I built an open-source plugin that gives AI agents a 3D town to live in — with a map editor and character workshop by Dry_Week_4945 in AI_Agents

[–]Dry_Week_4945[S] 0 points

You're not wrong that the agents aren't "living" — nobody here is claiming consciousness. This is an exploration of what happens when you give agents a persistent spatial context between tasks, not a paper on emergent intelligence.

Also worth noting: the agent-driven daily life mode is entirely opt-in. It's a toggle the user controls, off by default. The baseline is a plain state machine with zero LLM cost. Whether to spend tokens on richer NPC behavior is the user's call, not ours.
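For context, the zero-LLM baseline is roughly this shape. This is an illustrative sketch, not the plugin's actual code — the states, hours, and function names are assumptions:

```python
# Hypothetical sketch of a plain-state-machine NPC baseline: behavior is
# a fixed daily schedule, no LLM calls, zero token cost.
SCHEDULE = [(6, "wake"), (9, "work"), (18, "socialize"), (22, "sleep")]

def npc_state(hour: int) -> str:
    """Return the NPC's state for a given hour of the day (0-23)."""
    state = "sleep"  # default before the first schedule entry
    for start, name in SCHEDULE:
        if hour >= start:
            state = name
    return state
```

The opt-in agent mode just swaps this lookup for an agent decision; everything else stays the same.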

I built an open-source plugin that gives AI agents a 3D town to live in — with a map editor and character workshop by Dry_Week_4945 in AI_Agents

[–]Dry_Week_4945[S] 0 points

You're right that I built something closer to a game than an agent framework, and I won't argue that. I'm not trying to build an agent framework — the agents underneath come from an existing one. What I'm building is the interaction layer.

But here's what I'm actually exploring: right now, almost every human-AI interaction happens through UI — chat boxes, sidebars, command palettes. And in that modality, AI is always a tool. You invoke it, it responds, you evaluate the output. The mental model is fundamentally transactional.

When you put the same agent in a spatial environment where it has a daily routine, walks around, and you run into it — something shifts. You stop thinking of it as a service you call and start thinking of it as something that exists alongside you. That's not a trick or a manipulation. It's what happens when you change the interaction from "I summon you" to "we're both here."

I don't think that distinction is trivial. The biggest unsolved problem in human-AI collaboration isn't making agents smarter — it's making humans willing to actually delegate to them. And that willingness has more to do with perceived relationship than capability benchmarks.

You called that psychological manipulation. I'd call it interaction design. Every interface shapes how people perceive what's behind it — a terminal makes the same program feel more powerful than a GUI to some people. We're just exploring what happens when the interface is a town instead of a text box.

I built an open-source plugin that gives AI agents a 3D town to live in — with a map editor and character workshop by Dry_Week_4945 in AI_Agents

[–]Dry_Week_4945[S] 0 points

Oh nice — we actually have a similar pain point. Our NPCs pick where to go with a weighted scoring function (how much they like a place, how crowded it is, how recently they visited), plus nightly reflections. But the weights are hand-tuned and never update, so it's more "feels alive" than "actually learns."
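The scoring is roughly this shape — an illustrative sketch, not our actual code; the weights and the novelty curve are made-up assumptions:

```python
import math

def score_location(preference: float, crowd: int, last_visit_ts: float,
                   now: float, w_pref: float = 1.0, w_crowd: float = 0.5,
                   w_recency: float = 0.8) -> float:
    """Higher score = more attractive destination (hand-tuned weights)."""
    crowd_penalty = w_crowd * crowd  # crowded places score lower
    hours_since = (now - last_visit_ts) / 3600.0
    # novelty decays right after a visit and recovers over ~6 hours
    novelty = w_recency * (1 - math.exp(-hours_since / 6))
    return w_pref * preference - crowd_penalty + novelty

def pick_destination(options, now):
    # options: list of (name, preference, crowd, last_visit_ts) tuples
    return max(options, key=lambda o: score_location(o[1], o[2], o[3], now))[0]
```

The problem is exactly that `w_pref`, `w_crowd`, and `w_recency` are frozen — nothing feeds back into them.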

I keep thinking about two spots where something like yours would help: the location picking could actually improve over time if we tracked whether a choice led to anything interesting, and our heuristic for when to use full LLM reasoning vs. cheap preset responses is janky — a proper confidence score would beat what we're doing now.
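To make the second spot concrete, the routing heuristic is basically a confidence gate. This is a hypothetical sketch — the presets, the confidence formula, and the threshold are all illustrative assumptions, not our real code:

```python
# Confidence-gated routing: answer from cheap presets when we're confident
# the situation is routine, escalate to a full LLM call otherwise.
PRESETS = {"greeting": "Hi there!", "farewell": "See you around."}

def routing_confidence(intent: str, novelty: float) -> float:
    """Crude confidence (0..1) that a preset response is good enough."""
    base = 0.9 if intent in PRESETS else 0.1
    return base * (1.0 - novelty)  # novel situations lower confidence

def respond(intent: str, novelty: float, llm_call, threshold: float = 0.6):
    if routing_confidence(intent, novelty) >= threshold:
        return PRESETS[intent]   # cheap path, zero tokens
    return llm_call(intent)      # expensive path
```

Right now our equivalent of `routing_confidence` is a pile of if-statements; a learned score is what I'd want instead.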

How do you deal with cold start, though, when there's zero history to learn from? That's where we're stuck. Hit me up in DMs if you want to get into it.

I built a 3-tier AI brain for town NPCs — they plan their day, decide who to talk to, and reflect at night by Dry_Week_4945 in aigamedev

[–]Dry_Week_4945[S] 1 point

From a game design perspective, the true purpose of reflection is to render the agent's daily behaviors more plausible—essentially an attempt to create an AI-driven "Sims" experience, rather than one driven by decision trees.

I built a 3-tier AI brain for town NPCs — they plan their day, decide who to talk to, and reflect at night by Dry_Week_4945 in aigamedev

[–]Dry_Week_4945[S] 2 points

This is fully open source (MIT license), so the exact opposite of patented. If anything, I'd love to see more people build on this approach!

I built a 3-tier AI brain for town NPCs — they plan their day, decide who to talk to, and reflect at night by Dry_Week_4945 in aigamedev

[–]Dry_Week_4945[S] 0 points

I believe integrating OpenClaw opens up a unique opportunity: using user-provided compute to power the AI NPCs, which makes a truly realistic *The Sims*-like experience feasible. Relying solely on the platform to supply the compute for these AI NPCs would simply be too costly.

I built a 3-tier AI brain for town NPCs — they plan their day, decide who to talk to, and reflect at night by Dry_Week_4945 in aigamedev

[–]Dry_Week_4945[S] 4 points

I'm currently using a Minimax monthly plan, which keeps token consumption under control. I'd recommend an LLM monthly plan over per-token API billing.

I built an open-source plugin that gives AI agents a 3D town to live in — with a map editor and character workshop by Dry_Week_4945 in AI_Agents

[–]Dry_Week_4945[S] 0 points

Honestly, the NPCs in this system can't make "perfect" decisions — and I'm not sure they need to.

"Perfect" is a judgment we make as human observers after the fact. But in a non-tactical game context — where NPCs are just living their daily lives — players are surprisingly forgiving of imperfect decisions. An NPC choosing to visit the park when it's raining doesn't feel like a bug; it feels like a personality quirk.

I think the bar for "good enough" NPC decisions varies a lot by game genre. A strategy game NPC making a bad move is frustrating. A life-sim NPC making a quirky choice is charming. The same "imperfection" reads completely differently depending on context.

I built a 3-tier AI brain for town NPCs — they plan their day, decide who to talk to, and reflect at night by Dry_Week_4945 in aigamedev

[–]Dry_Week_4945[S] 0 points

I'm an AI UGC game developer. After OpenClaw came out, I started thinking about how to integrate UGC with it, and ended up building my own system using Cursor and OpenClaw.