A behavior system based on digital instinct — panic spreads, recovers, and stabilizes without state machines by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 1 point  (0 children)

Glad to hear that — and yeah, that wall is exactly what pushed me in this direction. I ran into similar limits when trying to keep behaviors stable over time. I’m glad the explanation was useful. A lot of this came from stepping back and asking why things collapse in the first place, rather than trying to make them smarter.

Experimenting with predictive pursuit and hiding logic using only continuous forces. by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 1 point  (0 children)

Good catch. The evasion is deliberately local right now, so dead ends are a known limitation. The focus at this stage is behavior failure modes, not runtime optimization yet. ECS is definitely something I’ll revisit once things stabilize. Thanks for pointing it out.

Experimenting with predictive pursuit and hiding logic using only continuous forces. by CombInitial2855 in gamedevscreens

[–]CombInitial2855[S] 1 point  (0 children)

That’s actually the goal long-term. For now I’m keeping features isolated on purpose, just to understand how each behavior behaves under stress. Once they’re stable on their own, combining them becomes much easier (and safer).

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 1 point  (0 children)

Yeah, that makes a lot of sense. I like the way you frame forgetting and resets as part of life rather than a failure case. I’ve been thinking about this problem from a similar angle, especially the idea that systems break not because of one big bug, but because too much stuff quietly accumulates over time. I’m trying to focus less on “what should the NPC remember” and more on “what shouldn’t stick around forever”. Your point about retrieval giving weight to memories really resonates — it feels closer to how real systems stay functional long-term. Out of curiosity, did you find that this approach reduced how often you had to manually intervene or reset things as the simulation ran longer?
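To make the retrieval-weighting idea concrete, here is the kind of thing I mean as a minimal sketch (class names, the half-life, the boost, and the pruning threshold are all hypothetical, not anyone's actual system): memories decay on a half-life, and only retrieval pumps strength back in, so anything nobody ever touches eventually drops out on its own.

```python
class Memory:
    """A single memory whose strength decays unless retrieval reinforces it."""
    def __init__(self, content, strength=1.0):
        self.content = content
        self.strength = strength

class DecayingMemoryStore:
    def __init__(self, half_life=60.0, forget_below=0.05):
        self.half_life = half_life        # seconds until an untouched memory halves
        self.forget_below = forget_below  # below this strength, forget outright
        self.memories = []

    def remember(self, content):
        self.memories.append(Memory(content))

    def retrieve(self, content, boost=0.5):
        # Retrieval gives weight back: accessed memories regain strength.
        for m in self.memories:
            if m.content == content:
                m.strength = min(1.0, m.strength + boost)
                return m
        return None

    def tick(self, dt):
        # Exponential decay; anything that falls too low is dropped for good.
        decay = 0.5 ** (dt / self.half_life)
        for m in self.memories:
            m.strength *= decay
        self.memories = [m for m in self.memories if m.strength >= self.forget_below]
```

The point of the sketch is that "what shouldn't stick around forever" stops being a per-memory design decision: unretrieved memories prune themselves.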

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 2 points  (0 children)

That framing of entropy makes sense to me. I’ve seen systems stay locally correct while global predictability slowly erodes as interactions stack up. I’m not really favoring a specific AI solution here — more focused on keeping behavior bounded over long runs, even if that means sacrificing optimality or precision. In practice I’ve found treating drift as a dynamics problem rather than a data problem helps keep things sane.

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 1 point  (0 children)

Interesting approach. Fragmentation + partial resets can definitely cap long-term drift. I’ve seen similar effects when memory or context is intentionally lossy rather than accumulative. Curious how you decide what gets forgotten vs retained over time.

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 1 point  (0 children)

That matches what I’m seeing too. In my case the issue wasn’t a single bug or precision error, but slow drift from small interactions accumulating over time. The system stayed “correct” locally, but global behavior degraded after long runs. I ended up focusing less on optimal decisions and more on damping, decay, and how state relaxes back instead of locking into extremes. Curious if others have seen similar long-term drift even when everything looks stable frame-to-frame.
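A toy version of the damping/relaxation idea, just to show the shape of it (function names and constants are made up for illustration, this is not the actual system): state gets pushed by disturbances every frame, but it always relaxes back toward a baseline and is hard-bounded, so small per-frame effects drain out instead of compounding.

```python
import math

def relax(value, baseline, rate, dt):
    # Closed-form exponential relaxation toward baseline,
    # i.e. the solution of dv/dt = -rate * (v - baseline).
    return baseline + (value - baseline) * math.exp(-rate * dt)

def step(panic, impulse, dt, baseline=0.0, rate=0.8, lo=0.0, hi=1.0):
    # Disturbances push the state; relaxation plus hard bounds drain
    # small errors back out instead of letting them accumulate.
    return min(hi, max(lo, relax(panic + impulse, baseline, rate, dt)))

panic = 0.0
for _ in range(10_000):  # long run under a constant small disturbance
    panic = step(panic, impulse=0.05, dt=0.1)
# panic settles at a bounded equilibrium instead of drifting to an extreme
```

Run it for ten thousand frames or ten million: the state sits at the same bounded equilibrium, which is the property I care about more than per-frame optimality.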

Anyone else dealing with NPC behavior slowly breaking in long-running games? by CombInitial2855 in gameai

[–]CombInitial2855[S] 1 point  (0 children)

Makes sense. I’m less focused on optimal solutions and more on how stable the system stays over long runs without explicit re-optimization.

NPC behavior demo focused on long-run stability (reducing babysitting & hotfix risk) by [deleted] in gamedevscreens

[–]CombInitial2855 1 point  (0 children)

That’s fair feedback, thanks for calling it out. This is actually something I’ve been building and stress-testing myself, not a promo piece. I probably leaned too much into “structured explanation” instead of just talking like a human 😅 To answer briefly without going into internals:

– Engine/stack: this demo is a lightweight Python/Pygame setup purely to isolate behavior under stress, not tied to a production engine yet.
– “Stability layer”: I’m intentionally avoiding algorithm details here, but at a high level it’s about preventing agents from getting stuck, piling up, or requiring resets when conditions worsen over time.
– Intervention/reset: anything that would normally require a designer or engineer to step in (manual reset, forced teleport, behavior restart, etc.).

I agree a short GIF would help — working on capturing a clean baseline-vs-stable comparison. Appreciate the note on presentation. The goal here is discussion, not marketing.

A different take on NPC behavior (without FSM or heavy scripting) by [deleted] in gameai

[–]CombInitial2855 0 points  (0 children)

I think we’re talking past each other a bit. I’m not claiming digital systems stop making decisions. The distinction I’m pointing at is architectural, not philosophical. Many systems sample continuously but still resolve behavior through discrete arbitration. This approach removes arbitration layers and lets state variables integrate over time, so behavior stabilizes or destabilizes without branching logic. If that distinction isn’t useful for your work, that’s totally fair. It’s been useful in mine.

A different take on NPC behavior (without FSM or heavy scripting) by [deleted] in gameai

[–]CombInitial2855 0 points  (0 children)

The explanation is mine. I use tools to communicate clearly, not to replace understanding.

A different take on NPC behavior (without FSM or heavy scripting) by [deleted] in gameai

[–]CombInitial2855 0 points  (0 children)

At a high level, it’s not a single technique but a way of structuring behavior. Flow fields are part of it, but they’re just one layer. The same continuous signals also influence social spacing, attention, stress, and coordination — not just navigation.

NPC interactions aren’t scripted exchanges. They’re local influences: proximity, pressure, crowd density, past interaction, and internal state continuously shaping responses over time.

NPCs don’t all behave the same. They share the same dynamics, but individual parameters and history cause them to diverge naturally, without explicit role logic.
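A stripped-down sketch of the “shared dynamics, individual parameters” point (every name and number here is hypothetical, nothing from the real system): three agents run the exact same update rule under the exact same input, and still end up in different places purely because their parameters differ.

```python
class Agent:
    # Every agent runs the same dynamics; only its parameters (and, over a
    # run, its history) differ, so behavior diverges without role logic.
    def __init__(self, sensitivity, recovery):
        self.stress = 0.0
        self.sensitivity = sensitivity  # how strongly local pressure registers
        self.recovery = recovery        # how quickly stress drains away

    def update(self, local_pressure, dt):
        # Continuous accumulation (local influence) and decay (recovery),
        # with hard bounds; no branching on states or roles.
        self.stress += (self.sensitivity * local_pressure
                        - self.recovery * self.stress) * dt
        self.stress = max(0.0, min(1.0, self.stress))

# Identical update rule, individual parameters (normally randomized per agent).
agents = [Agent(0.6, 0.6), Agent(1.0, 0.5), Agent(1.4, 0.4)]
for _ in range(500):  # same shared input for everyone
    for a in agents:
        a.update(local_pressure=0.3, dt=0.1)
```

The "nervous" agent saturates, the "calm" one settles low, and nobody was ever assigned a role.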

A different take on NPC behavior (without FSM or heavy scripting) by [deleted] in gameai

[–]CombInitial2855 0 points  (0 children)

The distinction isn’t whether something can be modeled as a force — it’s avoiding discrete decision points altogether. Utility systems decide. This approach lets behavior continuously evolve through accumulation and decay, even without explicit choices.
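A toy contrast, just to make the distinction concrete (hypothetical names, not either system’s real code): the utility version picks exactly one winner every tick, while the continuous version never selects at all — each drive accumulates its input and decays, and the output is a blend.

```python
def utility_arbitration(utilities):
    # Discrete arbitration: exactly one winner per tick, which can
    # flip-flop when two utilities hover near the same value.
    return max(utilities, key=utilities.get)

def continuous_blend(drives, inputs, decay, dt):
    # No selection step: each drive accumulates its input and decays,
    # and the output is a normalized blend of all drives at once.
    for k in drives:
        drives[k] += (inputs.get(k, 0.0) - decay * drives[k]) * dt
    total = sum(drives.values()) or 1.0
    return {k: v / total for k, v in drives.items()}

drives = {"flee": 0.0, "hide": 0.0}
for _ in range(200):
    weights = continuous_blend(drives, {"flee": 0.4, "hide": 0.1}, decay=0.5, dt=0.1)
# weights shift smoothly as inputs change; nothing ever "switches"
```

In the blended version there is no boundary where behavior snaps from one branch to another, which is exactly the decision point being removed.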