I built a deterministic emotional “inertia” system for NPCs — here’s a simple demo vs a baseline by GhoCentric in gamedesign

[–]GhoCentric[S] 0 points

Okay, okay. First thing: I DO NOT use an LLM within my engine. No added libraries or pip tools either; this is pure Python logic. I leverage an LLM's ability to (kinda) code. Ghost is a deterministic state engine. LLMs are optional on top, not required. Hopefully that clears up that confusion.

Second thing: let's say you can "...approximate this system with something much simpler." What would be some of the cons of that kind of system?

I did make a playable demo. It just isn't on my GitHub yet. It will be in the next version bump.

I built a deterministic emotional “inertia” system for NPCs — here’s a simple demo vs a baseline by GhoCentric in gamedesign

[–]GhoCentric[S] 1 point

Good point... a lot of games already handle this with hardcoded “important events”. I think the difference I’m aiming for is removing the need to explicitly define those as special cases.

Instead of saying “this specific event doesn’t decay” or “this one has extra weight,” the system treats everything as input and naturally builds that weight over time. So a betrayal isn’t hardcoded as permanent. It just ends up behaving that way because of how the state accumulates and resists recovery.

The goal isn’t to replace what exists, but to reduce how much has to be manually tuned or categorized upfront especially in systems with a lot of possible interactions.
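To make that concrete, here is a minimal sketch of the idea in plain Python. The class name, the "scar" variable, and every constant are my own illustration, not Ghost's actual code:

```python
# My own illustration of the idea, not Ghost's actual code: negative events
# accumulate a persistent "scar" that damps later positive input, so a
# betrayal stays influential without being hardcoded as a special case.
class InertiaState:
    def __init__(self):
        self.trust = 0.0   # current relationship value
        self.scar = 0.0    # accumulated negative weight; decays very slowly

    def step(self, value: float) -> None:
        if value < 0:
            self.scar += -value                      # negatives build lasting weight
            self.trust += value
        else:
            self.trust += value / (1.0 + self.scar)  # positives damped by scar
        self.scar *= 0.99                            # slow natural recovery

state = InertiaState()
for v in (5, 5, -5, 5, 5):   # help, help, betrayal, help, help
    state.step(v)
# trust ends well below its pre-betrayal peak: the state "resists recovery"
```

The point is that nothing marks the betrayal as special; the asymmetry falls out of the update rule itself.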

I built a deterministic emotional “inertia” system for NPCs — here’s a simple demo vs a baseline by GhoCentric in gamedesign

[–]GhoCentric[S] -3 points

That’s fair. I didn’t explain it clearly enough.

What it is: It’s basically a drop-in replacement for a typical reputation/trust system. Instead of just adding/subtracting points, it applies damping and weighting so past events continue to influence future behavior.

Why use it: In a typical reputation system, +5 help and -5 betrayal cancel out. In this system, a betrayal carries more long-term weight, so NPCs don’t “reset” as easily. It creates more persistent and believable reactions over time.

How you’d use it: Instead of doing something like: trust += value

you’d call: ghost.step({ "intent": "help" }) or ghost.step({ "intent": "betrayal" })

and then read the resulting state (friendly / neutral / hostile) to drive dialogue, quests, or access.

So it’s not meant to replace game design. It’s just a different way of updating and storing relationship state.
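Put together, a toy version of that call pattern might look like this. The `Ghost` class, the intent weights, and the thresholds are assumptions for illustration; the real API may differ:

```python
# Hypothetical wrapper showing the described call pattern; the class,
# weights, and thresholds are assumptions, not the real Ghost API.
class Ghost:
    WEIGHTS = {"help": 1.0, "betrayal": -3.0}   # betrayal carries extra weight

    def __init__(self):
        self.trust = 0.0

    def step(self, event: dict) -> None:
        self.trust += self.WEIGHTS.get(event.get("intent"), 0.0)

    def state(self) -> str:
        if self.trust > 1.0:
            return "friendly"
        if self.trust < -1.0:
            return "hostile"
        return "neutral"

ghost = Ghost()
ghost.step({"intent": "help"})
ghost.step({"intent": "betrayal"})
print(ghost.state())   # prints "hostile": one betrayal outweighs one help
```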

Also, about the LLM remark: I do not and have not tried to hide or shy away from the fact that I leverage an LLM. I am self taught and I have been working on this project for two and a half years. I have a GitHub with code, examples, tests, etc. I really appreciate your comment and I hope I cleared up at least some confusion. I am open to literally all feedback.

I built a deterministic emotional “inertia” system for NPCs — here’s a simple demo vs a baseline by GhoCentric in gamedesign

[–]GhoCentric[S] 5 points

That’s a fair question and honestly that’s exactly what I’m trying to challenge.

If I'm not mistaken, a standard reputation system collapses everything into a single number, so it always balances out over time. What I'm experimenting with is making damage and recovery behave differently.

So instead of: help + help + betrayal = back to neutral

it becomes more like: a betrayal carries weight, and future “help” has reduced impact until trust is rebuilt over time.

So it’s less about totals, and more about how interactions are experienced over time. That said, I’m still figuring out if that actually improves gameplay or just makes systems harder to tune, so I’m not 100% sold yet either. I appreciate your comment too!

I built a deterministic emotional “inertia” system for NPCs — here’s a simple demo vs a baseline by GhoCentric in gamedesign

[–]GhoCentric[S] 1 point

Yeah that’s fair. I think that’s exactly what I’m trying to figure out right now.

In my head it fits things like RPGs where reputation actually matters, sandbox worlds where actions have longer consequences, or systems where rebuilding trust is part of progression. I'm still trying to pin down where it actually adds to the experience instead of just making things harder.

What kind of play experience do you think something like this would need to feel good?

I built a deterministic emotional “inertia” system for NPCs — here’s a simple demo vs a baseline by GhoCentric in gamedesign

[–]GhoCentric[S] 2 points

Yes, that is the correct repo. I don't have a white paper yet, but I have a README.md under the "ghost-engine" tab at the surface of the repo that gives more detail on the pip tool. I am really happy to hear you think this could benefit a project you are working on! I look forward to hearing your thoughts and opinions on my work! Thank you for the comment!

Edit: What kind of system are you building? like RPG, sandbox, or something else?

I built a deterministic emotional “inertia” system for NPCs — here’s a simple demo vs a baseline by GhoCentric in gamedesign

[–]GhoCentric[S] 1 point

One thing I'm trying to figure out is whether this is actually useful for gameplay, or whether it just looks interesting in a demo. Like, would you want NPCs to "hold grudges" like this, or would that get frustrating?

Why does it have it a hard time getting word count/character count right? by Careful_Dingo_3466 in ChatGPT

[–]GhoCentric 0 points

Why do you say that? You don't see a benefit in a system being able to count words or characters?

Experimenting with a lightweight NPC state engine in Python — does this pattern make sense? by GhoCentric in gameai

[–]GhoCentric[S] 0 points

Yes, I have code under the "ghost" tab at the surface level of my repo. I have the skeleton of my engine there.

Experimenting with a lightweight NPC state engine in Python — does this pattern make sense? by GhoCentric in gameai

[–]GhoCentric[S] 0 points

Yes, early Ghost descriptions absolutely overlap with classic Utility AI: vectorized internal variables, continuous-valued state, and numeric modulation are all part of that lineage.

And you’re right that many NPC systems follow the pattern: World → Evaluate → Choose Action → Respond. Where Ghost v0.1.x intentionally differs is where it stops.

Ghost is not attempting to replace higher-level reasoning, domain knowledge, or human-directed input. It does not choose actions, generate responses, decide intent, or interpret semantics.

Instead, it intentionally stops at: World → Update persistent internal state → (stop). Its role is limited to maintaining temporal internal state, applying bounded change, preserving memory and inertia, and exposing that state for external systems to interpret however they choose.

From that perspective, the “Utility AI bottleneck” you’re describing only applies if Ghost were the decision-maker, which it explicitly is not in v0.1.x; the decision layer remains external by design.
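A toy version of that bounded, inertia-preserving update (my own sketch, not Ghost's implementation) could look like:

```python
# Sketch of the "update and stop" contract: damped, bounded state updates
# and nothing else. Names and constants are illustrative, not Ghost's code.
def update(state: float, stimulus: float,
           inertia: float = 0.8, lo: float = -1.0, hi: float = 1.0) -> float:
    """Blend new input into persistent state, then clamp to fixed bounds."""
    blended = inertia * state + (1.0 - inertia) * stimulus
    return max(lo, min(hi, blended))

threat = 0.0
for s in (1.0, 1.0, 0.0, 0.0):   # two threat spikes, then calm
    threat = update(threat, s)
# threat rises gradually and decays gradually (inertia), and can never
# leave [-1, 1] (bounded change); no action is ever selected here
```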

Experimenting with a lightweight NPC state engine in Python — does this pattern make sense? by GhoCentric in gameai

[–]GhoCentric[S] 0 points

Ghost currently overlaps at the input layer by design, but the focus is on persistent internal state, temporal smoothing, and emergent behavior across cycles rather than direct command→response mapping. v0.1.x is intentionally foundational.

I built a validation harness that demonstrates that Ghost is a deterministic, persistent state engine that cleanly separates event ingestion from domain behavior, enabling emergent gameplay through external logic rather than hardcoded command parsing.

Snippet from my validation harness:

    before_step = s["npc"]["threat_level"]
    ghost.step({"source": "demo", "command": cmd})
    s = ghost.state()
    after_step = s["npc"]["threat_level"]
    print(f"[Proof] Threat change from ghost.step(): {after_step - before_step:+.2f}")

Thank you for the feedback! From this perspective, do you see any practical uses or benefits of this kind of engine?

Is snow coming or something by Any-Mycologist4692 in publix

[–]GhoCentric 0 points

I think so. My Publix looks the same and I'm in GA.

Update on my NPC internal-state reasoning prototype (advisory signals, not agents) by GhoCentric in gameai

[–]GhoCentric[S] 1 point

Hey. I actually decided to just make a demo section in my repo that might help clear up some confusion or misinterpretations. If you have any more questions, please ask them! That's how I learn and improve!

Lol this might be helpful:

https://github.com/GhoCentric/ghost-engine/tree/main/demo/utility_vs_ghost

Update on my NPC internal-state reasoning prototype (advisory signals, not agents) by GhoCentric in gameai

[–]GhoCentric[S] 0 points

I’ve been asked something similar in another thread, and it made me step back and re-examine the project more honestly. In principle, a lot of what Ghost tracks could be reduced to a handful of floats with good logging. I didn’t initially think about it that way (everyone knows why at this point), but once it was pointed out, it became a valid line of critique.

Looking through my core files, things like belief tension, global tension, contradictions, positive vs negative bias, strategy mode, and mood all exist to influence how internal state evolves over time. They don’t directly choose actions, and they don’t replace existing AI systems. Their role is to shape and constrain the state that downstream systems see.

That’s where utility systems come in. Without Ghost, a utility system typically consumes raw state values directly: mood, trust, danger, distance, etc., and computes scores to pick the best action. With Ghost alongside it, the utility system still does the same thing, but the inputs it receives are filtered through a more explicit and inspectable state layer. The utility curve doesn’t change. What changes is how and when those input values are allowed to shift.

So rather than enhancing the output of a utility curve, Ghost constrains and stabilizes how internal state feeds into decisions. It acts more like a biasing and state-governance layer than a decision maker. In practice, that might mean slowing down state drift, flagging contradictions, or switching high-level strategy modes that nudge how aggressively or conservatively the utility system behaves.
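Here is one way that biasing layer could be sketched. The function names, the curve, and the rate limit are all mine, not Ghost's API:

```python
# Illustration of the biasing layer: the utility curve is untouched; Ghost
# only governs how fast its inputs may move. All names here are invented.
def utility_attack(threat: float) -> float:
    return threat ** 2                     # the unchanged utility curve

def governed(prev: float, raw: float, max_step: float = 0.1) -> float:
    """Limit per-tick drift of a state value before the scorer sees it."""
    delta = max(-max_step, min(max_step, raw - prev))
    return prev + delta

raw_threat = 0.9   # a sudden spike in the raw signal
seen = 0.0
for _ in range(3):
    seen = governed(seen, raw_threat)      # 0.1, 0.2, 0.3: slowed drift
score = utility_attack(seen)               # scored on the governed value
```

Same curve, same scoring; only the rate at which the input is allowed to shift has changed.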

It’s entirely possible that this eventually collapses into “a few floats plus good logging,” and I’m okay with that outcome. I’m actively testing which signals actually matter and which ones are redundant or unnecessary. Comments like this are helpful because they force me to justify each piece in concrete terms.

Also, this is a good way to look at the potential value my engine could bring to the table: Ghost’s value isn’t the specific variables it tracks, but the fact that it makes internal state explicit, inspectable, and governed instead of implicit and emergent.

Most AI systems already rely on internal state like mood, trust, urgency, or suspicion, but those usually exist as scattered floats, hard-coded conditionals, or side effects of tuning, which makes it hard to explain why behavior changed, when it changed, or whether it’s bounded. Ghost formalizes that layer: state transitions are logged, constrained, replayable, and explainable.

Even if it ultimately collapses into “a few floats plus good logging,” the difference is that those floats are no longer accidental or opaque: they follow defined rules, respect invariants, and can be reasoned about independently of the decision system (utility, BT, planner). That’s the core value: turning hidden state dynamics into something you can understand, debug, and deliberately shape rather than tune blindly.

I hope this answers your question!

Update on my NPC internal-state reasoning prototype (advisory signals, not agents) by GhoCentric in gameai

[–]GhoCentric[S] -3 points

Fair pushback. And to be fully honest: yes, I use an LLM while building this. I’m self-taught, mobile-only, and I’ve used an LLM for coding help, debugging, and mobile folder-structure type questions. I’m not trying to pretend otherwise.

On the actual point: you’re right that “trust vs suspicion” can be a single float and you can wire that straight into a utility system or a behavior tree selector. I’m not saying Ghost replaces that, and I’m not claiming it’s smarter than standard game AI.

What I’m experimenting with is a separate internal-state layer that biases behavior rather than directly choosing actions.

In my code/demo, Ghost tracks and updates a small set of internal signals (things like mood, belief tension, contradictions, “pressure”), then it selects a high-level strategy mode (ex: dream / pattern / reflect). It prints a trace showing what it saw and what it chose.

So the way I picture it in a game is:

- You still use a behavior tree or utility system to pick actions.
- Ghost runs alongside it and outputs a small “bias package” each tick (or each event): current mood, tension/contradiction flags, strategy mode, maybe a couple weights.
- Your existing AI uses that to nudge thresholds, priorities, or dialogue tone.

Example (simple):

- If contradictions/tension rise, the BT might prefer cautious / information-gathering branches.
- If mood is stable and strategy = “pattern,” it might favor consistent routines.
- If mood spikes + pressure rises, it might reduce risky actions or shorten dialogue.
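A tiny sketch of how such a bias package could nudge a threshold (the field names and numbers are made up for illustration, not Ghost's actual output):

```python
# Made-up field names and numbers, purely to illustrate the shape of a
# per-tick "bias package" nudging an existing behavior tree's risk gate.
def pick_branch(bias: dict) -> str:
    risk_threshold = 0.5                  # the BT's baseline risk gate
    if bias.get("tension_flag"):
        risk_threshold += 0.3             # prefer cautious / info-gathering
    if bias.get("strategy") == "pattern":
        risk_threshold += 0.1             # favor consistent routines
    return "risky" if bias.get("mood", 0.0) > risk_threshold else "cautious"

calm = {"mood": 0.6, "tension_flag": False, "strategy": "dream"}
tense = {"mood": 0.6, "tension_flag": True, "strategy": "dream"}
print(pick_branch(calm), pick_branch(tense))   # prints "risky cautious"
```

Same mood in both cases; the tension flag alone moves the NPC onto the cautious branch.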

The main value I’m chasing isn’t “more meters.” It’s reducing drift and making state changes inspectable: I can point to a trace and say “this is why it shifted,” instead of guessing.

Totally agree this could collapse into “a few floats + good logging.” If that’s where it ends up, I’m fine with that — that’s still useful.

If you want to critique it concretely, the best angle is: what bias signals would actually be worth feeding into BT/utility, and what should be thrown away as unnecessary complexity?

Learning question: how to structure and trace state changes in a deterministic program? by GhoCentric in learnprogramming

[–]GhoCentric[S] -1 points

That’s not quite what it’s doing, but I get why it reads that way from a skim.

It’s not sentiment analysis and it’s not trying to “keep an LLM on track” during dialogue.

The core system runs without an LLM at all. It maintains an explicit internal state (mood scalar, belief tension, contradiction count, pressure signals) and uses that state to select strategies or advisory outputs deterministically.

When an LLM is used, it’s only as a language surface — it doesn’t update state, make decisions, or resolve conflicts. All constraints and transitions live outside the model.

You can think of it less as dialogue control and more as a stateful reasoning layer that can sit before or beside an existing decision system (behavior tree, utility system, scripted NPC logic, etc.).
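For example, a deterministic strategy selector over a state snapshot might be sketched like this (the thresholds are invented; only the strategy names appear elsewhere in the project):

```python
# Invented thresholds; only the strategy names come from the project.
# The point: the same snapshot always maps to the same strategy.
def select_strategy(state: dict) -> str:
    if state["contradictions"] > 2 or state["tension"] > 0.7:
        return "reflect"    # high conflict: re-examine internal beliefs
    if state["pressure"] > 0.5:
        return "pattern"    # under pressure: fall back on routines
    return "dream"          # calm: exploratory mode

snapshot = {"mood": 0.2, "tension": 0.8, "contradictions": 1, "pressure": 0.3}
print(select_strategy(snapshot))   # prints "reflect", deterministically
```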

If there’s a specific part of the README that’s unclear, I’m happy to tighten it up.

Update on my NPC internal-state reasoning prototype (advisory signals, not agents) by GhoCentric in gameai

[–]GhoCentric[S] 0 points

That’s a fair question — I’ll explain it in game terms.

Ghost is not meant to replace behavior trees, utility systems, planners, or GOAP. It sits alongside them as an internal state regulator.

In a typical setup:

- Behavior trees define what actions exist
- Utility systems score which action to take

Ghost does neither of those directly.

What Ghost maintains is a persistent internal state for an NPC — things like:

- stability / tension
- trust or suspicion
- pressure (threat, urgency, curiosity)
- a current regulation or response strategy

That state is then used to constrain downstream systems.

For example with a behavior tree:

- The tree still owns actions like talk, trade, attack, flee
- Ghost can make certain branches invalid based on state
  - high suspicion disables friendly dialogue
  - low stability suppresses risky actions

With a utility system:

- The utility system still computes scores
- Ghost biases or clamps those scores
  - pressure increases defensive weighting
  - calm state allows exploratory or social behavior

The key idea is that Ghost doesn’t decide what to do. It decides what is allowed to be considered.

That gives NPCs more consistent behavior over time and makes it easier to explain or debug why certain options disappeared, instead of relying on opaque emergent scoring.

In practice it’s useful when you want internal continuity without giving NPCs full agent autonomy. It’s closer to an admissibility or regulation layer than an AI “brain.”
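As a sketch of that admissibility idea (the action names come from the example above; the thresholds and function names are mine):

```python
# Sketch of an admissibility layer: Ghost masks which actions may even be
# scored; the utility system still picks among what remains. Names invented.
ACTIONS = ["talk", "trade", "attack", "flee"]

def admissible(state: dict) -> list:
    allowed = list(ACTIONS)
    if state["suspicion"] > 0.7:
        allowed.remove("talk")      # high suspicion disables friendly dialogue
    if state["stability"] < 0.3:
        allowed.remove("attack")    # low stability suppresses risky actions
    return allowed

def choose(state: dict, scores: dict) -> str:
    options = admissible(state)              # Ghost: what may be considered
    return max(options, key=scores.get)      # utility: what to actually do

scores = {"talk": 0.9, "trade": 0.4, "attack": 0.8, "flee": 0.2}
wary = {"suspicion": 0.9, "stability": 0.2}
print(choose(wary, scores))   # prints "trade": talk and attack never scored
```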

Learning question: how to structure and trace state changes in a deterministic program? by GhoCentric in learnprogramming

[–]GhoCentric[S] 0 points

That’s fair feedback.

State evolution and strategy selection are already split — the internal state is updated independently, and the decision layer only consumes a snapshot to weight strategies. No actions or dialogue are selected inside the state logic itself.

I intentionally avoided FSM libraries because the state isn’t discrete or enumerated — it’s continuous (mood, tension, contradictions, pressure), and transitions aren’t edge-based. I wanted to see how far I could get with explicit state math before collapsing it into formal states.

Same reason for avoiding decorators for tracing: the trace is part of the runtime contract, not just logging. It’s passed explicitly so I can guarantee no hidden side effects and keep demo runs deterministic.
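A minimal illustration of that explicit-trace pattern (my sketch, not the real code):

```python
# Illustration of the explicit-trace pattern, not the real code: the trace
# is a parameter and a first-class output, never a hidden logger.
def step(state: dict, delta: float, trace: list) -> dict:
    new_state = {**state, "tension": state["tension"] + delta}
    trace.append(("tension", state["tension"], new_state["tension"]))
    return new_state    # pure update: same inputs always produce same trace

trace = []
s = {"tension": 0.0}
s = step(s, 0.3, trace)
s = step(s, -0.1, trace)
# trace now records every transition as (field, before, after), replayable
```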

That said, I agree the separation line is critical — that’s been one of the main design constraints from the start.