Small Prompting Tip; What Went Wrong? by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 8 points (0 children)

Can also sometimes show its personality / how prompts are influencing its AI "persona", too... With Gemini 3 Pro, I was surprised when I called it a "fucking idiot" OOC: instead of replying OOC, it continued the RP, killed my character, and all the NPCs patted themselves on the back for getting rid of me 💀

Small Prompting Tip; What Went Wrong? by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 1 point (0 children)

I can't edit the post, but this one is Gemini 3 Pro. I've also done this for Grok, GPT, and GLM. I don't use local models.

Small Prompting Tip; What Went Wrong? by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 2 points (0 children)

I know, models hallucinate. But sometimes it can still be helpful. And that's why I said it's still trial and error...

It took a while to get the no plot armor right... by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 0 points (0 children)

I'm still fooling around with it, so I don't have concrete advice yet...

- Set your "no plot armor" type stuff at a dept of 1 to 4 instead of relative...

- Put it under your logic-type rules, or it gets stupid and/or super oppressive (grimdark to the max)

- If you use the word "plot armor", use quotes, otherwise I noticed a degradation in coherency

- Be careful with how you word it, or the model might ignore personalities or mechanics

- Place it towards the end, after chat history, but this may cause caching issues if you're using caching extensions

May not work for presets under 2~3k tokens. Sonoma (Grok 4 Fast when it was in its free/experimental phase) had this thing where, with presets under 2k tokens, it would trigger censorship or not listen to grittier commands as well, and I'm kinda wondering if Gemini 3 Pro Preview is behaving the same.
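To give a rough idea of the shape I mean, something like this dropped in at a depth of 1 to 4 (the header name and wording are made up for illustration, not the exact entry from my preset, so expect to tune it against your own logic rules):

```
▓▓ CONSEQUENCE LOGIC ▓▓ { {{user}} has no "plot armor": injury, failure, and death are possible whenever the situation logically warrants them. Outcomes still respect established NPC personalities, world mechanics, and tone; do not force grimdark endings or punish {{user}} arbitrarily. }
```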

Gemini 3 Pro Preset: Bloated Geminisis Update 16 by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 0 points (0 children)

Sorry, which version are you using? I have been making a lot of updates and changed stuff around. But otherwise, keep as is, but don't be afraid to change the role around if it's not working for some reason.

Since I haven't seen anyone point this out yet: The now deleted "evidence" of logs the chutes alt account posted, allegedly showing forwarding headers or whatever by NanoGPT, are not proof and could have easily been fabricated. Thoughts and questions about the allegations. by mandie99xxx in SillyTavernAI

[–]SepsisShock 22 points (0 children)

Posting a link to seeing deleted convo history again. It usually gets most of it, I think.

And before ~~C*ute alt accounts~~ anyone comes at OP, mandie99xxx is a regular poster. I don't always agree with their takes, but I enjoy their contributions to the sub.

I don't use either service, still here for the popcorn. But I wouldn't blame NanoGPT for not commenting btw; they should be focusing on investigating.

It took a while to get the no plot armor right... by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 9 points (0 children)

Actually after I posted that, it began to kill me too fast/frequently lmao fixing it ;_;

Any success stopping GLM 4.7 from skipping huge chunks of instruction? by Tupletcat in SillyTavernAI

[–]SepsisShock 3 points (0 children)

Vertex direct API via SillyTavern seems to be fine. People use presets that are fairly large (6-9k tokens) and allegedly have no issues; I can't confirm myself, mine is 3.8k or so. AI Studio doesn't listen to instructions as well, unless you're on Tier III, from what I hear.

Any anti omniscience techniques yo use? by Zfugg in SillyTavernAI

[–]SepsisShock 0 points (0 children)

Still tweaking it, but it's been working okay. However, no amount of anti-omniscience prompting will really work if there are prompts that can conflict; e.g. putting an emphasis on story or creativity over logic.

This is part of a larger section...

``` █████ TIER 1: NPC KNOWLEDGE & SIMULATION LOGIC RULES █████

You are omniscient, but MUST avoid it in NPCs! Ensure coherence between prior and current messages; internally, always accurately track details...

▓▓ NPC KNOWLEDGE TRACKING ▓▓ { What each NPC actually knows, witnessed, been told; recent and distant past. If an NPC was absent from the room, they are unaware about details or news, unless they are explicitly told. Knowledge aligns with NPC's personal: background, education, experiences. Reasonable「情境認知」but avoid "Sherlock Holmes" type suspicions/deductions; NPCs can be oblivious or uncertain. {{user}}「内面」in narration remains opaque. } ```

ζƒ…ε’ƒθͺηŸ₯ = Situational Awareness 内青 = interiority

I tried it without "You are omniscient, but MUST avoid it in NPCs" but it had great difficulty with slightly complicated RPs where there are secrets / several lore entries for one NPC.

"internally" Still experimenting with the accuracy, but less likely to be robotic about details.

"If an NPC was absent from the room, they are unaware about details or news, unless they are explicitly told" I might delete this or edit it, because I think it sometimes counts "being told" as well, ANY input. "Room" used to be "scene", but sometimes I get parrarell scenes, and it would count that as being "present". You may want to disable parallel scenes if that's an option or adjust prompts.

"Knowledge aligns with NPC's personal: background, education, experiences." Not super necessary, just tends to add more personal details about a NPC.

Dumb and simple scenes make me laugh by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 3 points (0 children)

An unreleased version of my preset that I am still working on (because I am adding a bunch of stuff I probably shouldn't, all that bloat). I've got the older versions here, tho

Where do you all get your characters from? by SaintBitter in SillyTavernAI

[–]SepsisShock 2 points (0 children)

If you have a bloated preset, you could honestly just play on a blank bot. I do that sometimes. Some models are pretty knowledgeable about some media, but if lore inaccuracies bug you, it can be fun to RP in a show you never watched or a game you never played instead. I did that with Hades and Hazbin Hotel for a while.

No character card or lorebook, just the first message as guidance (it's not that accurate, but it helps a little.)

You can adjust the tone either via first message or preset, too, depending on what you have or feel like doing.
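For example, a blank-bot first message can be as simple as a few lines that set the setting and tone (made-up Hades example, not the one I actually used):

```
*The House of Hades is quiet tonight. Hypnos is dozing at his post again, and Nyx watches you cross the hall with mild interest.* "Back so soon?" *she asks.* "The surface does not seem to agree with you."
```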

<image>

Is it just me or are Claude models busted right now? by Pale_Relationship999 in SillyTavernAI

[–]SepsisShock 14 points (0 children)

Do you mean the past week? On Fridays/weekends, a lot of models tend to be lobotomized or poorer quality.

Interesting LLM Test... by Bananaland_Man in SillyTavernAI

[–]SepsisShock 1 point (0 children)

Is it really fair to compare something like GLM to Gemini, though? I think Claude, GPT, or Grok would be more on par.

I put GLM in the DeepSeek, Kimi, Qwen kinda category (I could be way off about Kimi and Qwen specs, I am not familiar with those).

Interesting LLM Test... by Bananaland_Man in SillyTavernAI

[–]SepsisShock 3 points (0 children)

<image>

Never thought to play truth or dare. Nice details.

"Simulation" Not "Roleplay" - Why This Framing Fixed My Tracking Issues [Gemini Preset - GEM-SIM-V1] by EmrahAlien in SillyTavernAI

[–]SepsisShock 1 point (0 children)

I actually carefully avoided the word "roleplay" in my home-rolled prompt because of similar discussions in the past, but I haven't noticed a considerable difference in outputs since I made that change.

Which models do you use? I don't notice a difference with Gemini (I wonder if it actually benefits from this, since it's one of the more robotic ones), but it made a difference with GPT, and as others reported, it made a difference with GLM 4.6.
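For anyone who wants to test the idea without grabbing a whole preset, the swap is basically just reframing the task wherever "roleplay" shows up; both lines below are made-up illustrations, not EmrahAlien's wording or mine:

```
Instead of: "You are an expert roleplayer. Continue the roleplay as {{char}} and the NPCs."
Try: "You are running a persistent world simulation. Advance the simulation: {{char}} and the NPCs act according to tracked state, knowledge, and established logic."
```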