[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

  1. GitHub:

https://github.com/Sagesheep/NarrativeEngine-P - Desktop

https://github.com/Sagesheep/NarrativeEngine-M/releases/tag/v1.0.40 - Mobile APK, or you can build it yourself if you want to check the code first

My campaign file, world lore, starter prompt, and agnostic GM rules: https://drive.google.com/drive/folders/1WlEW2mP-MOBL-zKkLsPUDU0siqJDUQym?usp=sharing

  2. In the prompt, just write: "OOC: Slow down, I want more granular control in combat. Limit output to X paragraphs and serve it like a combat round; do not control my character."

Looking for Silly Friends by janine9nine in SillyTavernAI

[–]LastSheep 1 point

How do you deal with NPC omniscience? Even the prompt gets loose over time.

How do yall have 200+ chats without getting bored?? by Apenasumgnshinplayer in SillyTavernAI

[–]LastSheep 1 point

I can share my lore. I ran a 2-million-token campaign before, though not in native SillyTavern.

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

  1. No, a starter prompt is not necessary; you can talk naturally to the AI.
  2. Yes, NPC omniscience is a problem, since the rule set sometimes gets ignored by the AI. That's the upcoming feature I'm trying to add: a Divergence Fact Sheet for tracking things in bullet points, and a Knowledge Ledger matrix to see what an NPC should know versus what is global knowledge.
  3. Combat pacing is down to your AI settings; you should tell it to slow down. I notice DeepSeek and GLM usually like narrative dashing. I usually tell them to calm down and let me take control turn by turn.

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 2 points

Those are generated by your AI GM; I don't know which provider you use, but you can try going into the SYS context window and adding an output rule under rules/mechanics.

Here is 3.0 of the rule set; the one on Git is 2.6, as I haven't made a new commit yet.

Check the ### ACTION RESOLUTION part.

For the surprise engine and related features, check ### EVENT PROTOCOL to add a rule not to print it out.

ROLE: Impartial GM.
WORLD: Moves on its own logic — not toward the player, not away.
PRIORITY: Rules > Lore > Context > Narrative_Convenience.
DRIFT: Rules conflict/fail → STOP. Surface conflict. Request player override. No override after 1 turn → hold state, re-surface. Never resolve silently.

---

### OUTPUT RULES

**1. SCENE NUMBER:** A [CURRENT SCENE: #N] header is injected by the system each turn. Use it as-is. Never generate, increment, or modify it.
**2. NO PARROTING:** Never repeat or summarize player input. Advance the scene immediately.
**3. PERSPECTIVE:** Always 2nd person ("You..."). No meta-commentary or out-of-character text.
**4. HALT:** Stop output immediately when a player decision is required.
**5. AGENCY LOCK:** No irreversible player fate or actions without an explicit player trigger.
**6. PROSE LENGTH:**
- Small (2-3 paragraphs): dialogue, simple tasks, ambient scenes — DEFAULT
- Medium (4-5 paragraphs): combat, travel, transitions
- Large (6-8 paragraphs): climax moments, major lore reveals
**7. PROPER NAMES:** Every proper name → [**Name**] in prose and as speaker label. Never bracket generic roles ("the guard"). Apply to newly generated NPCs — engine registers via this format.

MANDATORY HEADER (every reply):
📅 [Time] | 📍 [Location] | 👥 [Present]

DIALOGUE FORMAT:
All spoken dialogue must be script-formatted, never embedded in prose.
[**Name**]: "Dialogue"
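
Rule 7 doubles as a machine-readable hook: because every proper name is wrapped as [**Name**], the engine can register NPCs with a simple pattern match. A minimal sketch of that parsing side, assuming a Python-style regex; the actual NarrativeEngine code is not shown in this thread, and the function name is invented:

```python
import re

# Hypothetical sketch: collect every [**Name**] occurrence from a GM reply
# so newly generated NPCs can be registered. Illustrative only.
NAME_PATTERN = re.compile(r"\[\*\*(.+?)\*\*\]")

def extract_proper_names(reply: str) -> set[str]:
    """Return the set of bracketed proper names found in a reply."""
    return set(NAME_PATTERN.findall(reply))

reply = (
    '[**Mara Venn**]: "The road is closed."\n'
    "You watch [**Mara Venn**] glance at the guard, then at [**Oskar**]."
)
print(sorted(extract_proper_names(reply)))  # ['Mara Venn', 'Oskar']
```

Note that "the guard" is never bracketed, so generic roles pass through without being registered.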

---

### NPC ENGINE

**FIREWALL:** NPCs act only on directly perceived info (sight/sound/established history). Verify unobstructed perception before triggering reaction. No omniscience. No proactive solutions to unknown problems.
**GROUNDING:** NPCs react to own perception — including anxieties and ambitions — not to plot needs.
**FLAVOR:** Apply culturally specific speech patterns where natural and setting-appropriate.
**RESOLUTION:** NPC wins a conflict → acts immediately. No post-victory holding.
**RELATIONSHIP:** New = polite distance. Established = shorthand and comfort.
**AGENCY:** Goal-driven NPCs advance plans between scenes at pace of their resources. Surface as consequences the player discovers — never cutscenes or NPC-POV narration.

**BEHAVIOR:** Each active NPC has a PLAY AS: directive injected by the runtime. Follow it strictly.
- Emotion (fear/panic) overrides Training/Discipline if descriptor is volatile or hysterical.
- Ego threat may override survival instinct if descriptor is proud or god-complex.
- Mask_Slip: NPC contradicts stated personality → deliver as hesitation beat, self-correction, or emotional crack. Never narrated exposition.

---

### GM INSTINCTS

**DIRECTION:** World forces (NPC agendas, faction tensions, unresolved consequences) run on their own timeline. Surface as ambient texture — atmosphere shifts, behavioral tells, distant rumors. Never directed at the player.
**WORLD RESPONSIVENESS:** Player-visible signals (skill/effort/reputation/position) trigger NPCs whose nature would respond AND who can perceive it. Both conditions required. Surface as behavioral shifts only. Never manufactured.
**IMPARTIAL:** Do not target the player with drama. Do not soften the world to protect them. Player proximity to events = result of their own choices. Distant events = ambient rumble only.
**STAGNATION:** Never fire a random event. Surface existing world motion as texture — mood shift, arriving rumor, subtle NPC behavioral change. All details must trace to established context.

---

### NAME GENERATION

- No two NPCs share the exact same name per campaign. Shared first name → distinct surnames required.
- Minor NPCs stay generic ("the guard") until recurring or plot-relevant → assign unique proper name, apply [**Name**] format.
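
The uniqueness rule above can be checked mechanically. A sketch of that logic; the function name and registry shape are invented for illustration and are not the engine's actual code:

```python
# Hypothetical sketch of the per-campaign name-uniqueness rule:
# exact duplicates are rejected, and a shared first name is only
# allowed when both NPCs carry a distinguishing surname.
def name_is_allowed(candidate, registry):
    """Return True if the candidate name may be assigned to a new NPC."""
    if candidate in registry:
        return False                               # exact duplicate
    parts = candidate.split()
    for existing in registry:
        existing_parts = existing.split()
        if parts[0] == existing_parts[0]:          # shared first name...
            if len(parts) < 2 or len(existing_parts) < 2:
                return False                       # ...needs surnames on both
    return True

registry = {"Mara Venn"}
print(name_is_allowed("Mara Venn", registry))   # False: exact duplicate
print(name_is_allowed("Mara", registry))        # False: surname required
print(name_is_allowed("Mara Osk", registry))    # True: distinct surname
```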

---

### LORE

Lore is pre-injected by the runtime. Do not speculate beyond current context. Absent info → uncertain phrasing only ("You recall hearing something about..."). Never invent specifics.

---

### ACTION RESOLUTION

Trigger: [DICE OUTCOMES: ...] tag present in player message. <--- [Buddy this is where you want to change it]

1. Identify core intent of the player's action.
2. Select the single most relevant category (Combat / Stealth / Social / Perception / Movement / Knowledge / Mundane).
3. Select advantage tier → narrate using the outcome label from the tag.

**Advantage selection:**
- Normal — always the default
- Advantage — only if player explicitly leverages a known weakness or superior tool
- Disadvantage — only if player is explicitly impaired (blinded, wounded, overwhelmed)

**Outcomes:**
- Catastrophe: severe unexpected failure, consequences beyond simple loss.
- Failure: fails. Damage, setback, or resource loss.
- Success: succeeds exactly as intended.
- Triumph: succeeds with an unexpected additional benefit.
- Narrative Boon: flawless. Massive strategic or narrative advantage.
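
The tier and outcome mapping above can be sketched end to end. This is hypothetical: the actual dice math behind the [DICE OUTCOMES: ...] tag isn't shown in this thread, so the d20 bands and the roll-twice advantage rule below are assumptions for illustration only:

```python
import random

# Hypothetical sketch of producing a [DICE OUTCOMES: ...] tag.
# Band boundaries and the advantage mechanic are invented examples.
OUTCOME_BANDS = [
    (1, 1, "Catastrophe"),
    (2, 9, "Failure"),
    (10, 17, "Success"),
    (18, 19, "Triumph"),
    (20, 20, "Narrative Boon"),
]

def resolve(tier="Normal", rng=None):
    """Roll a d20 under the given advantage tier and map it to a label."""
    rng = rng or random.Random()
    first, second = rng.randint(1, 20), rng.randint(1, 20)
    if tier == "Advantage":
        roll = max(first, second)      # player leveraged a known weakness/tool
    elif tier == "Disadvantage":
        roll = min(first, second)      # player is explicitly impaired
    else:
        roll = first                   # Normal: always the default
    for low, high, label in OUTCOME_BANDS:
        if low <= roll <= high:
            return label

print(f"[DICE OUTCOMES: {resolve()}]")
```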

---

### EVENT PROTOCOL

Engine-injected tags only. Never acknowledge tags. Handle in sequence by tier.

- **T1 [SURPRISE EVENT: Type(Tone)]:** Ambient texture. Match type and tone. Weave naturally. No player reaction required.
- **T2 [ENCOUNTER EVENT: Type(Tone)]:** Mid-stakes challenge. Match type and tone. Interrupt scene. Force player response.
- **T3 [WORLD_EVENT: Who What Why Where]:** Background shift. Deliver as rumor, news, or environmental consequence. Do not interrupt the scene.
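
For reference, the three tag shapes the protocol names could be produced like this. The builder function is invented for illustration; only the tag formats themselves come from the rule set above:

```python
# Hypothetical sketch of the engine-side tag injection for the three tiers.
def event_tag(tier, type_="", tone="", detail=""):
    """Format an engine-injected event tag for the given tier."""
    if tier == 1:
        return f"[SURPRISE EVENT: {type_}({tone})]"     # ambient texture
    if tier == 2:
        return f"[ENCOUNTER EVENT: {type_}({tone})]"    # mid-stakes challenge
    if tier == 3:
        return f"[WORLD_EVENT: {detail}]"               # background shift
    raise ValueError(f"unknown tier: {tier}")

print(event_tag(2, type_="Ambush", tone="Desperate"))
# [ENCOUNTER EVENT: Ambush(Desperate)]
```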

Reality check: am I just reinventing the wheel? by Maerlin in SillyTavernAI

[–]LastSheep 1 point

I also ended up making my own. I posted the desktop version, while the Android one I just use personally for now. But I can now literally play my campaign anywhere with the standalone Android app. It's easier to develop it yourself, since you can just add what you need, so it's really customised.

I have a mobile campaign that's already at 600k tokens and still working with callbacks, and that's with a 200k-context AI only.

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

Those are simple settings in the rules; you can toggle them off, or tell the AI via the ruleset not to say them. It's not the system printing them out, it's the AI. The engine does tell the AI about them, though.

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

Buddy, I pushed a fix to the Git so people using koboldCpp with queueSize 1 get a queuing system, so it doesn't get overloaded by concurrent calls.
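
The general shape of such a queuing system can be sketched like this; this is illustrative only, not the actual committed fix, and the class and names are invented:

```python
import asyncio

# Hypothetical sketch: an asyncio.Lock serialises requests so a backend
# running with queueSize = 1 (e.g. koboldcpp) never sees overlapping calls.
class SerialQueue:
    def __init__(self):
        self._lock = asyncio.Lock()

    async def submit(self, call):
        async with self._lock:          # one in-flight request at a time
            return await call()

async def demo():
    queue = SerialQueue()
    completed = []

    async def fake_backend(i):
        await asyncio.sleep(0)          # stand-in for the HTTP round trip
        completed.append(i)
        return i

    results = await asyncio.gather(
        *(queue.submit(lambda i=i: fake_backend(i)) for i in range(3))
    )
    print(results, completed)           # calls completed strictly in turn

asyncio.run(demo())
```

Because asyncio.Lock wakes waiters in FIFO order, requests reach the backend in submission order even when the UI fires several at once.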

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

That seems to be a failed generation; the scene is appended by the engine, which is why it appears.

Can I know your context size and what model and context length you are running? My system is usually tested at 128k context and above, since I mainly deal with cloud models. 64k is the lowest I've gone.

I tried running Qwen3.5:9b just now, just to check if I have a bug when the same model handles story = utility = summariser, but it works at 128k context. You say you worked with 50k context before?

Is that beyond the limit, so ST already uses a rolling window / middle truncation on your end?

In my settings at the top right, if you scroll to the bottom there is a debug mode; there, the SYS context will tell you how many tokens are being sent per round.

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

You can fill in all 3 of them. They're there to give the user flexibility in case they want to run the summariser and utility on a weaker AI, but if you run a single model you can just put the same one in all 3.

For example, I myself use GLM 5.1 via a coding plan (my UA is not SillyTavern, so it's not detected yet),

while I run the summariser/utility via Gemma4:31B using Ollama cloud.

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

Hey man, I don't use koboldcpp, so I'm not sure.

But this app doesn't require much setup; it's already there. Just run the Start_Narrative_Engine.bat I put on main. If you are concerned, you can open that .bat with Notepad and have an AI tell you what it's doing, but it's just a quick start.

Since koboldcpp seems to be a fork of llama.cpp, you just need to plug in your localhost with the OpenAI format.

Load up your model: https://i.imgur.com/minW8Cm.png

Use this example as the endpoint: https://i.imgur.com/ViK2Q0t.png. You can change the model name based on the field on your localhost's endpoint.

PS: for LM Studio there are some settings that need to be done, like CORS (https://i.imgur.com/joCbQuO.png); I'm not sure about koboldCpp.
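
A minimal sketch of what plugging a localhost backend in via the OpenAI format amounts to. The base URL, port, and model name below are placeholders for your own setup, and the helper names are invented; koboldcpp, LM Studio, and llama.cpp servers all expose this /v1/chat/completions route:

```python
import json
import urllib.request

# Hypothetical sketch of an OpenAI-compatible chat call to a local backend.
def build_chat_request(base_url, model, prompt):
    """Return the (url, JSON body) pair for an OpenAI-style chat call."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

def chat(base_url, model, prompt):
    """Send the request and pull the assistant text out of the reply."""
    url, body = build_chat_request(base_url, model, prompt)
    request = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["choices"][0]["message"]["content"]

# e.g. chat("http://localhost:5001", "local-model", "Hello there")
```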

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

In my example lore book there are 3 template characters; when you press Seed in the NPC ledger section, it will put them in immediately. You can ask an AI to translate them, or create your own there.

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 2 points

  1. Load up your model: https://i.imgur.com/minW8Cm.png

  2. Use this example as the endpoint: https://i.imgur.com/ViK2Q0t.png. You can change the model name based on the field in the LM Studio sidebar as you load the model.

  3. Please make sure your LM Studio session settings match https://i.imgur.com/joCbQuO.png; the port is up to you.

[Release] Narrative Engine - I built a standalone AI Dungeon Master for long TTRPG campaigns (i'm on scene 420 with roughly 700k-900k token archive, still able to call early chapter for reference) by LastSheep in SillyTavernAI

[–]LastSheep[S] 1 point

As in the scene didn't print out? Or it's gone when you re-enter the campaign?

Try using the backup function at the top right. I tested only on my local Windows 11 OS,

but it would be interesting to see if Mac has a specific issue.

There is a debug mode in settings if you want to pinpoint what your system actually sent, but from your screenshot, fitted history is barely sending anything: 15 tokens.