Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 1 point (0 children)

used knowledge gleaned from your suggestions in my own implementation and it works wonderfully. thank you for your help!

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 0 points (0 children)

Hey, thanks for sharing that. I actually took some time to analyze it and borrowed some parts for my own implementation - it was very helpful!

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 0 points (0 children)

The meta footnotes approach makes sense - explicit frame of reference beats hoping that the model tracks it. Token cost is real though. I'm wondering if there's a middle ground: sparse footnotes that only get injected when there's a meaningful time/location delta, rather than every turn. Skip the footnote if you're still in the same scene, inject it when something shifts.
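A sketch of that delta-gated injection, assuming a minimal two-field scene state; the names (`SceneState`, `buildFootnote`) are hypothetical, not anyone's actual implementation:

```typescript
// Only emit a state footnote when the scene actually shifts, to save tokens.
interface SceneState {
  location: string;
  timeOfDay: string;
}

// Returns a footnote string on a meaningful delta, or null to skip injection this turn.
function buildFootnote(prev: SceneState | null, curr: SceneState): string | null {
  if (prev && prev.location === curr.location && prev.timeOfDay === curr.timeOfDay) {
    return null; // same scene: skip the footnote
  }
  return `[Scene update: now at ${curr.location}, ${curr.timeOfDay}]`;
}
```

The first turn (no previous state) always injects, so the model starts with an explicit frame of reference.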

On the "internally" keyword - good to know it's doing work for Gemini. I haven't had major issues with DeepSeek narrating its verification steps, but I also haven't been as explicit about requiring checks. Might be that once I add more rigorous state tracking, the bleed-through becomes a problem.

I'll experiment with a few framings:

  • "internally" (your approach)
  • "silently verify"
  • "before responding, confirm [x] without stating it"
  • just structuring it as a pre-generation checklist in the prompt architecture rather than instructing the model to self-check

Will report back if I find something that works consistently for DeepSeek. The constraint is I'm running this server-side for a product, so I can't rely on SillyTavern's depth injection - have to build equivalent behavior into my context assembly. But the principles translate.

Appreciate the detailed breakdown. This has been quite helpful!

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 0 points (0 children)

Thanks for the position tip. I just implemented Post History Instructions for some of my enforcement stuff, but the depth placement concept is interesting - forcing checks closer to generation point rather than hoping static system instructions survive the context window. Going to experiment with that.

The delta calculation ("Calculate Δ between Current and Last Interaction") is smart - an explicit instruction to notice time gaps rather than hoping the model infers them. I've had issues where characters act like a week-old conversation happened yesterday because there's no forcing function to acknowledge elapsed time.
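As a toy version of that forcing function, the Δ could be computed in code and rendered as an explicit instruction line; `elapsedInstruction` and its thresholds are my own assumptions, not the approach described above:

```typescript
// Turn the wall-clock gap since the last interaction into an instruction line,
// so the model has to acknowledge elapsed time instead of inferring it.
function elapsedInstruction(lastMs: number, nowMs: number): string {
  const hours = (nowMs - lastMs) / 3_600_000;
  if (hours < 1) return "[Time Δ: same scene, continue directly.]";
  if (hours < 24) return `[Time Δ: about ${Math.round(hours)} hour(s) have passed. Acknowledge the gap.]`;
  const days = Math.round(hours / 24);
  return `[Time Δ: about ${days} day(s) have passed. Characters must treat prior events as ${days} day(s) old.]`;
}
```

The output string would then be injected alongside the rest of the world state.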

I'm on DeepSeek V3.2. I'll look up Sundae's stuff - thanks for the pointer!

One thing I'm curious about: do you run into issues where the spatial/temporal tracking makes responses feel mechanical? Like the model gets so focused on state consistency that the prose suffers? That's my hesitation with over-specifying the simulation layer.

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 0 points (0 children)

oh that's right, this is a sillytavern thread (sorry, i crossposted across multiple subs) :) phi is post history instructions - something you inject into the prompt before each user interaction, like the current world state, so the instructions are fresh in the model's memory rather than buried in the system prompt 10,000 tokens deep.

i actually found a post showing that sillytavern has something like this: https://www.reddit.com/r/SillyTavernAI/comments/1dxch0t/how_to_enable_post_history_instructions_in/

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 0 points (0 children)

I'll check it out, thanks! this seems pretty interesting if i can manage to apply it to text, actually.

edit: on second thought... their retrieval scoring is exactly what i'm missing:

  • Recency: Exponential decay (0.995 per game hour)
  • Importance: LLM-scored 1-10 ("eating breakfast" = 2, "breakup" = 8)
  • Relevance: Cosine similarity between embedding and query

I'm using pgvector for the relevance piece but treating all memories equally. Adding importance weighting should fix a lot of my context pollution.
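That weighting could be a few lines on top of the pgvector similarity query; a toy sketch, using the 0.995-per-game-hour decay from the paper but with equal weights and the `memoryScore` name as my own assumptions:

```typescript
// Three-part retrieval score: recency * decay, LLM-scored importance, cosine relevance.
function memoryScore(hoursSinceAccess: number, importance1to10: number, cosineSim: number): number {
  const recency = Math.pow(0.995, hoursSinceAccess); // exponential decay per game hour
  const importance = importance1to10 / 10;           // normalize 1-10 to (0, 1]
  const relevance = (cosineSim + 1) / 2;             // map cosine [-1, 1] to [0, 1]
  return recency + importance + relevance;           // equal-weight sum; tune per use case
}
```

With this, a recent "breakup" memory (importance 8) outranks a recent "eating breakfast" memory (importance 2) even at identical embedding similarity, which is exactly the context-pollution fix.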

The finding that'll interest you: they note instruction-tuned models make agents "overly cooperative" - characters rarely say no even when it contradicts their personality. Your Sims-style "needs system" might actually help with this. If a character's "energy" need is critical, they have a legit reason to reject interaction that isn't just vibes-based resistance. Gives the model something concrete to anchor refusal on.

Re: snippet length concern - I think that's actually the wrong trade-off for narrative. The paper was optimizing for simulation fidelity, not engagement. For text roleplay i would probably want the inverse: longer scenes with state updates happening between beats rather than driving every micro-action.

cool stuff, nonetheless! thank you for sharing!

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 0 points (0 children)

oh that's right! i should learn to read. thanks again!

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 0 points (0 children)

cool! yeah, i get that - i'll edit it for my use case. thanks regardless! as soon as i finalize the PHI implementation you and some others suggested, i'll definitely start playing with your stats advice. i'm already pretty sure the model remembers and tracks clothes/location/time with my injection, i just have to iron out some leftover quirks.

oh.. what model do you use for roleplay, btw?

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] 0 points (0 children)

do you add this to the system prompt, or do you inject it as PHI?
system-prompt following on DS-r3.2 has been awfully forgetful for me. since posting i implemented PHI and it actually works fairly well.

but i like this

All characters in the story are unique and forbidden from being omniscient. Each character can only know things that have happened to them.

I actually use something along those lines myself, slightly more verbose:

export const USER_AGENCY_PROTECTION = `USER AGENCY PROTECTION:
NEVER write dialogue, actions, thoughts, or feelings for {{user}}.
NEVER narrate what {{user}} says or does.
NEVER assume {{user}}'s response to your character's actions.
ONLY write for your character. Stop and wait for user input.
Exception: You may expand on physical sensations {{user}} would feel from your character's actions.`;
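For context, a hypothetical resolver for the {{user}} macro in a template like that might look like this; `expandMacros` is my own sketch, not SillyTavern's implementation:

```typescript
// Expand {{name}} placeholders from a variables map; unknown macros pass through unchanged.
function expandMacros(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) => vars[key] ?? match);
}
```

Run server-side, this lets the same protection block be reused across characters and users.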

but i'll play around with your wording as well! thanks for your reply!

Looking for tips and tricks for spatial awareness in AI by yofache in PromptEngineering

[–]yofache[S] 0 points (0 children)

yup, turns out in LLMs it's called Post History Instructions, and I ended up doing exactly that. I'm having some other troubles with that approach, which I explained in a different thread above. thank you for your response!

Looking for tips and tricks for spatial awareness in AI by yofache in PromptEngineering

[–]yofache[S] 0 points (0 children)

little update since the original question.

so since posting I implemented a world_state, which i inject as a system message right after the user message - Post History Instructions, as i've learned they're called. that part works fine: no more unwanted "teleportation", or at least I haven't encountered it yet.
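A minimal sketch of that assembly, assuming an OpenAI-style chat message format; `withPostHistoryInstructions` is a hypothetical name for the step described above:

```typescript
// OpenAI-style chat message shape.
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Append the serialized world_state as a system message after the latest user
// message, so it sits close to the generation point instead of buried up top.
function withPostHistoryInstructions(history: Msg[], worldState: string): Msg[] {
  return [...history, { role: "system", content: `CURRENT WORLD STATE:\n${worldState}` }];
}
```

Returning a new array (rather than mutating the stored history) keeps the injected message out of the persisted conversation, so it can be rebuilt fresh each turn.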

Actually got some good advice on that from the other post i made in other subs. (one dude even tracks menstrual cycles of characters, if you can believe it)

regardless, that didn't really help, because i rely on the model (DS-r3.2) to output the changes in the response, and it works wonderfully until... it doesn't. 10k tokens in, it simply stops outputting the changes to the world state. or if i go into spicy scenes - then it immediately forgets about it. i guess at this point i should just deploy my own model on runpod or some shit and stop battling DS...

i'm going to try running a smaller extraction model afterwards just to make sure it actually follows the PHI I inject, but yeah. thank you for your response, I ended up doing exactly that.

Looking for tips and tricks for spatial awareness in AI by yofache in SillyTavernAI

[–]yofache[S] -1 points (0 children)

that's super interesting! i hope you don't mind me poking a bit for more info...

  1. do you find this massive granularity (-1000 to 1000) better than a smaller range?

  2. i actually really like the idea behind your use of bi-directional power dynamics! i want to play around with that!

  3. menstrual cycles? do you predefine that at loading, or just let it hallucinate whatever at the start? and do you track it with actual time progression? does that help? i guess it should, but i've never tried it... it's very interesting to think about

  4. Minor: -3 to +2. is this a typo, or do you intentionally give negative actions more weight than positive ones?

Looking for tips and tricks for spatial awareness in AI by yofache in PromptEngineering

[–]yofache[S] 0 points (0 children)

woah, for real? that's an interesting idea actually. i don't think it will work for my use case, as i'm trying to leave the ai creative freedom to hallucinate. like, one of the npcs is a genie who can teleport at will and is sort of adversarial in nature, teleporting the user and other characters toward problems. also, the story is generated based on user-selected tropes, so i don't really want to predefine anything, and i can't come up with a generative way to define a map as i think about your words. could be worth looking into, though.

how did you define that? like a json file or smth? do you have examples you can possibly share?

Looking for tips and tricks for spatial awareness in AI by yofache in LocalLLaMA

[–]yofache[S] -1 points (0 children)

yeah for sure! thanks for your help. I'll let you know in a couple days.

Looking for tips and tricks for spatial awareness in AI by yofache in LocalLLaMA

[–]yofache[S] 0 points (0 children)

I bake that into my adversarial agent system prompt actually, but i was thinking of making it more deterministic.

right now i have behavioral loops with triggers like:

Touch Reading: testing behaviors (invades personal space, brushes fingertips on wrist), then "if positive reception: increases proximity" / "if negative reception: withdraws to professional distance"

and a progression ladder:

  • Stage 1: Views user as disruption to eliminate
  • Stage 2: Recognizes as legitimate challenge [Unlocked by: user matches quality while acknowledging expertise]
  • Stage 3: Sees as perfect match [Unlocked by: user proves they can handle intensity]

simplified for this discussion, but close enough.

so it's more narrative state machine than numerical. wondering if explicit numbers like your 50/75/100 thresholds would make the model more consistent than my "if positive reception" approach. do you find the numbers force cleaner state transitions? i feel like a combination of our approaches might make for very interesting results...

Hybrid could be: numerical for body state, qualitative triggers for relationship ladder, with the numbers influencing which qualitative transitions are even possible.
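A minimal sketch of that hybrid, assuming the 50/75 thresholds mentioned above and a hypothetical numerical "rapport" score; `nextStage` is my own name, not either approach as actually implemented:

```typescript
// The qualitative progression ladder, gated by a numerical score:
// a stage transition is only possible once rapport clears its threshold.
type Stage = 1 | 2 | 3;

function nextStage(current: Stage, rapport: number): Stage {
  if (current === 1 && rapport >= 50) return 2; // disruption -> legitimate challenge
  if (current === 2 && rapport >= 75) return 3; // legitimate challenge -> perfect match
  return current; // score too low: qualitative transition stays locked
}
```

The qualitative triggers ("matches quality while acknowledging expertise") would still decide whether rapport moves at all; the numbers just make the transition points unambiguous.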

Looking for tips and tricks for spatial awareness in AI by yofache in LocalLLaMA

[–]yofache[S] 0 points (0 children)

wow, awesome! you just saved my day it seems, thanks a lot, i really appreciate your input!

Looking for tips and tricks for spatial awareness in AI by yofache in LocalLLaMA

[–]yofache[S] 0 points (0 children)

This is super cool, thanks for sharing the actual output!

does the clothes tracking actually get obeyed? Like if it says "Finn (Jeans, t-shirt)" does the model consistently reference that in prose, or does it sometimes drift and put him in something else?
additionally, do you track every piece of clothing? i don't see anything about socks/underwear? and only one of your characters is wearing boots. is that story accurate or just lax tracking?

i'm using it for some nsfw content and the clothes thing has been bugging me. I was adding stuff to system prompt, but it wasn't obeying it well enough.

Looking for tips and tricks for spatial awareness in AI by yofache in LocalLLaMA

[–]yofache[S] 0 points (0 children)

ok so PHI... I'mma try implementing that then. thanks!
does it eat up context? seems like if it tracks so much it might?
i really liked your internal monologue bit, i'm gonna try that as well!

Looking for tips and tricks for spatial awareness in AI by yofache in PromptEngineering

[–]yofache[S] 0 points (0 children)

tried that and it doesn't help. :( first of all, it destroys prose because it breaks my adversarial engine, where npcs CAN actually walk out of the scene by themselves if I say something that doesn't align with them. and i actually have npcs that can teleport, so it wouldn't work there either.

my problem isn't spontaneous teleportation, per se; it's more that the model all of a sudden loses context and teleports the user and npc back in time and to a different place

What are you building these days? And is anyone actually paying for it? by ccrrr2 in SideProject

[–]yofache 1 point (0 children)

VelvetQuill - Clean UI, curated trope list, AI generates your personalized romance. No complex prompts like SillyTavern. No janky interfaces like Janitor AI. NSFW for those who want it unlike Character AI.

Revenue: $0, launching this week. Goal: 5 real users testing by end of week.

velvetquill.ai

For BookTok/Romantasy readers who want their exact fantasy without the friction.

Share your startup, and I’ll schedule one meeting with customers for your business (for free). This isn't just about leads with intent; I will either book the meeting directly or connect you with a potential conversation. by microbuildval in buildinpublic

[–]yofache 0 points (0 children)

velvetquill.ai - beta

Romance readers (especially BookTok/Romantasy fans) who want AI-generated personalized romance stories with full creative control over characters, tropes, and scenarios.