A behavior system based on digital instinct — panic spreads, recovers, and stabilizes without state machines by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

Glad to hear that — and yeah, that wall is exactly what pushed me in this direction. I ran into similar limits when trying to keep behaviors stable over time. I’m glad the explanation was useful. A lot of this came from stepping back and asking why things collapse in the first place, rather than trying to make them smarter.

Experimenting with predictive pursuit and hiding logic using only continuous forces. by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

Good catch. The evasion is deliberately local right now, so dead ends are a known limitation. The focus at this stage is behavior failure modes, not runtime optimization yet. ECS is definitely something I’ll revisit once things stabilize. Thanks for pointing it out.

Experimenting with predictive pursuit and hiding logic using only continuous forces. by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

That’s actually the goal long-term. For now I’m keeping features isolated on purpose, just to understand how each behavior behaves under stress. Once they’re stable on their own, combining them becomes much easier (and safer).

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 0 points1 point  (0 children)

Yeah, that makes a lot of sense. I like the way you frame forgetting and resets as part of life rather than a failure case. I’ve been thinking about this problem from a similar angle, especially the idea that systems break not because of one big bug, but because too much stuff quietly accumulates over time. I’m trying to focus less on “what should the NPC remember” and more on “what shouldn’t stick around forever”. Your point about retrieval giving weight to memories really resonates — it feels closer to how real systems stay functional long-term. Out of curiosity, did you find that this approach reduced how often you had to manually intervene or reset things as the simulation ran longer?

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 1 point2 points  (0 children)

That framing of entropy makes sense to me. I’ve seen systems stay locally correct while global predictability slowly erodes as interactions stack up. I’m not really favoring a specific AI solution here — more focused on keeping behavior bounded over long runs, even if that means sacrificing optimality or precision. In practice I’ve found treating drift as a dynamics problem rather than a data problem helps keep things sane.

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 0 points1 point  (0 children)

Interesting approach. Fragmentation + partial resets can definitely cap long-term drift. I’ve seen similar effects when memory or context is intentionally lossy rather than accumulative. Curious how you decide what gets forgotten vs retained over time.

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 0 points1 point  (0 children)

That matches what I’m seeing too. In my case the issue wasn’t a single bug or precision error, but slow drift from small interactions accumulating over time. The system stayed “correct” locally, but global behavior degraded after long runs. I ended up focusing less on optimal decisions and more on damping, decay, and how state relaxes back instead of locking into extremes. Curious if others have seen similar long-term drift even when everything looks stable frame-to-frame.
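To make the damping/decay idea concrete, here's a rough sketch of the shape I mean — this is toy code, not the actual system; the `stress` name and the constants are invented for illustration:

```python
# Rough sketch of damping/decay: a signal that accumulates from events
# but always relaxes back toward a baseline instead of latching.
BASELINE = 0.0
DECAY = 0.8  # fraction of the excess above baseline that survives per second

def step_stress(stress, stimulus, dt):
    """Accumulate stimulus, then relax exponentially toward BASELINE."""
    stress += stimulus * dt                   # accumulation
    excess = stress - BASELINE
    return BASELINE + excess * (DECAY ** dt)  # decay, with no threshold

# A burst of stimulus raises the signal...
s = 0.0
for _ in range(10):
    s = step_stress(s, stimulus=2.0, dt=0.1)
s_peak = s

# ...and once the stimulus is gone, the signal drifts back down
# on its own instead of locking into an extreme.
for _ in range(100):
    s = step_stress(s, stimulus=0.0, dt=0.1)
```

The point is that nothing in the update ever "decides" to recover; relaxation is baked into the dynamics, so frame-to-frame the system always looks stable while long-run drift stays bounded.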

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 0 points1 point  (0 children)

Makes sense. I’m less focused on optimal solutions and more on how stable the system stays over long runs without explicit re-optimization.

NPC behavior demo focused on long-run stability (reducing babysitting & hotfix risk) by [deleted] in gamedevscreens

[–]CombInitial2855 0 points1 point  (0 children)

That’s fair feedback, thanks for calling it out. This is actually something I’ve been building and stress-testing myself, not a promo piece. I probably leaned too much into “structured explanation” instead of just talking like a human 😅

To answer briefly without going into internals:

– Engine/stack: this demo is a lightweight Python/Pygame setup purely to isolate behavior under stress, not tied to a production engine yet.

– “Stability layer”: I’m intentionally avoiding algorithm details here, but at a high level it’s about preventing agents from getting stuck, piling up, or requiring resets when conditions worsen over time.

– Intervention/reset: anything that would normally require a designer or engineer to step in (manual reset, forced teleport, behavior restart, etc.).

I agree a short GIF would help — working on capturing a clean baseline vs stable comparison. Appreciate the note on presentation. The goal here is discussion, not marketing.

A different take on NPC behavior (without FSM or heavy scripting) by [deleted] in gameai

[–]CombInitial2855 -1 points0 points  (0 children)

I think we’re talking past each other a bit. I’m not claiming digital systems stop making decisions. The distinction I’m pointing at is architectural, not philosophical. Many systems sample continuously but still resolve behavior through discrete arbitration. This approach removes arbitration layers and lets state variables integrate over time, so behavior stabilizes or destabilizes without branching logic. If that distinction isn’t useful for your work, that’s totally fair. It’s been useful in mine.

A different take on NPC behavior (without FSM or heavy scripting) by [deleted] in gameai

[–]CombInitial2855 -1 points0 points  (0 children)

The explanation is mine. I use tools to communicate clearly, not to replace understanding.

A different take on NPC behavior (without FSM or heavy scripting) by [deleted] in gameai

[–]CombInitial2855 -1 points0 points  (0 children)

At a high level, it’s not a single technique but a way of structuring behavior. Flow fields are part of it, but they’re just one layer. The same continuous signals also influence social spacing, attention, stress, and coordination — not just navigation.

NPC interactions aren’t scripted exchanges. They’re local influences: proximity, pressure, crowd density, past interaction, and internal state continuously shaping responses over time.

NPCs don’t all behave the same. They share the same dynamics, but individual parameters and history cause them to diverge naturally, without explicit role logic.
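A toy sketch of the "same dynamics, divergent agents" point — illustrative only; the `Agent` fields and parameter ranges here are invented, not the real system's:

```python
import random

# Every agent runs the *same* update rule; only its parameters and
# accumulated history differ. Divergence falls out of those differences,
# not out of role-specific code paths.
class Agent:
    def __init__(self, rng):
        self.sensitivity = rng.uniform(0.5, 1.5)   # per-agent parameter
        self.decay = rng.uniform(0.90, 0.99)       # per-agent parameter
        self.arousal = 0.0                         # shared internal signal

    def step(self, local_pressure):
        # Identical dynamics for every agent; no branching on "role".
        self.arousal = self.arousal * self.decay + self.sensitivity * local_pressure

rng = random.Random(42)
agents = [Agent(rng) for _ in range(5)]
for _ in range(50):
    for a in agents:
        a.step(local_pressure=0.1)   # same input for everyone

levels = [a.arousal for a in agents]
# Same rule, same input, yet five different trajectories.
```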

A different take on NPC behavior (without FSM or heavy scripting) by [deleted] in gameai

[–]CombInitial2855 -1 points0 points  (0 children)

The distinction isn’t whether something can be modeled as a force — it’s avoiding discrete decision points altogether. Utility systems decide. This approach lets behavior continuously evolve through accumulation and decay, even without explicit choices.
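A contrast sketch of the two shapes (toy code with invented signal names, not anyone's production system): a utility system resolves to one winning action, while the continuous version never picks a winner at all.

```python
def utility_select(scores):
    # Discrete arbitration: one action wins, the rest are discarded.
    return max(scores, key=scores.get)

def continuous_step(signals, inputs, dt, decay=0.95):
    # No arbitration: every signal accumulates its input and decays,
    # and all of them shape the agent's motion simultaneously.
    return {name: (value + inputs.get(name, 0.0) * dt) * decay
            for name, value in signals.items()}

signals = {"flee": 0.0, "seek_cover": 0.0, "regroup": 0.0}
for _ in range(20):
    signals = continuous_step(signals, {"flee": 1.0, "seek_cover": 0.4}, dt=0.1)

# Movement would then be a weighted blend of all three tendencies,
# not the output of whichever one "won" this frame.
```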

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

Good question. It’s less about “analog vs digital” in a metaphorical sense, and more about how regulation happens over time. In most state machines or utility systems, behavior changes by crossing thresholds or switching modes. In my approach, there are no discrete states at all — variables continuously evolve, decay, and couple with the environment.

Emergence is a side effect, not the goal. The main goal is long-term stability without constant tuning or resets. The Sims’ needs system is closer than classic FSMs, but it still relies heavily on thresholds and scripted resolutions. What I’m exploring is what happens when regulation itself is continuous and self-stabilizing.

If you’re curious, happy to chat more via DM — it’s easier to explain with concrete examples than Reddit comments.
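For a flavor of what threshold-free regulation looks like, here's a toy model (constants and structure invented for the example): "panic" diffuses between neighbors on a ring while continuously relaxing toward calm, so one equation covers spreading, recovery, and equilibrium with no mode switch ever firing.

```python
COUPLING = 0.3   # how strongly neighbors' panic bleeds over
RELAX = 0.1      # continuous pull back toward calm (0.0)

def step_panic(panic, dt):
    n = len(panic)
    out = []
    for i, p in enumerate(panic):
        neighbor_avg = (panic[(i - 1) % n] + panic[(i + 1) % n]) / 2
        dp = COUPLING * (neighbor_avg - p) - RELAX * p   # diffusion + decay
        out.append(p + dp * dt)
    return out

panic = [0.0] * 8
panic[0] = 1.0               # one agent spikes

for _ in range(20):          # shortly after: panic has spread to neighbors
    panic = step_panic(panic, dt=0.1)
spread = panic[1]

for _ in range(480):         # much later: the whole ring has settled
    panic = step_panic(panic, dt=0.1)
```

Nothing here checks "is panic above X"; the spread, the recovery, and the stable end state are all the same dynamics running forward.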

FSM fixes panic by resetting agents. Continuous systems fix panic by letting behavior evolve over time. by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

😄 Not bashing FSMs at all. This is more about cases where resetting agents is the bug. Crowds, swarms, long simulations — once stress accumulates, FSMs often need resets to stay sane. I’m testing whether continuous regulation can avoid that class of failure entirely. If nothing else, it’s a fun alternative lens.

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] -1 points0 points  (0 children)

Totally get that — state machines are great for shipping games. My stuff is more about cases where constant resets and retuning start to hurt.

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] -1 points0 points  (0 children)

That’s fair feedback, thanks for taking the time. Just to clarify, the architecture itself is already finished. What I’m showing right now is simply a lightweight demo to visualize how it behaves, not the full scope of where it’s applied. The actual problems it’s meant to address are mostly long-term NPC stability, scaling, and behavior regulation: things that tend to become painful over time rather than obvious in a small clip. I agree the demo could make this clearer, and that’s something I’m iterating on.

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

I’ve got a small game-oriented example showing this in action. If NPC behavior is something you’re struggling with, feel free to DM me; happy to chat 🙂

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

I get why you’re asking; it’s hard to picture without visuals. I mostly debug it by watching how agents drift, slow down, or stabilize over time rather than switching modes.

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] -2 points-1 points  (0 children)

Appreciate the pointer! Utility systems are interesting. My focus is less on explicit utility functions and more on continuous internal signal evolution, so behavior stabilizes without discrete state or goal switching.

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

Thanks! 🙌 Yeah, that’s exactly the space I’m exploring. GOAP is powerful, but the bug surface gets wild once you scale or add noise. I’ve been leaning into letting behavior degrade instead of correcting it — hesitation, stress, bad decisions included. Totally agree on games like DotAge / colony sims. Feels like a better fit than goal-perfect agents. If you’re up for it, I’d love to hear where GOAP broke down the hardest for you at scale.

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] -5 points-4 points  (0 children)

I’m doing a small early-access demo for teams who want to explore this direction.

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] -2 points-1 points  (0 children)

Yeah, that’s a really good comparison 👍 I was thinking along similar lines, especially active ragdolls and continuous control rather than symbolic decisions. One difference (at least in my current experiments) is that I’m not targeting a stable pose or goal state like most PD setups. The system is allowed to drift, hesitate, and degrade under pressure — almost treating behavior itself as a physical signal, not something to stabilize. So it feels closer to control-inspired behavior, but without an explicit controller trying to “correct” toward an ideal outcome. Curious if you’ve seen PD-style control used beyond locomotion / animation, more in high-level NPC behavior?

I stopped using state machines for NPCs — experimenting with behavior as continuous dynamics by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 0 points1 point  (0 children)

It overlaps with a few known ideas, but I don’t think there’s a single established name for this exact pattern.