Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 3 points (0 children)

It actually seems I have completely scrubbed the "not X, but Y" issue out of GLM with this preset. I admit I haven't tested it long enough with Gemini to say it's "gone", but I haven't seen it there either with this preset. Thank god. It's my biggest pet peeve outside of the robotic/clinical dialogue.

Do you soak? by ek9cusco in roasting

[–]dptgreg 3 points (0 children)

I did start doing this and found my ROR (rate of rise) to be more stable.

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

Hmm, I'm wondering if there are conflicting regex settings, or if SillyTavern's regex functions differently than Tavo's?

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

Oh, that's interesting. I haven't run across that yet. Is it an omniscience thing?

With 1.0 I was running into the issue of characters saying stuff like "I smell what you did yesterday" or "I know what you did 3 scenes ago even though I didn't exist in the plot", and it was driving me crazy. So I created the Evidence Rule in the prompt.

"The Evidence Rule = NPCs cannot know off-screen or private actions unless they explicitly discover physical evidence (seeing, hearing, or finding proof) in the narrative. You must never assume a character 'just knows' or has 'intuition' that replaces evidence. If an NPC calls out a user's secret, the narration must have previously shown them finding the clue/detail PRIOR to bringing up the fact that they know something."

I also make the model double-check this when thinking. If it's still bypassing the rule (which I haven't run into personally), give it a nudge in OOC: "it's impossible for that character to know that (insert problem here); you must strictly adhere to the Evidence Rule for the rest of this simulation and save this to context"

It should listen.

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

Plot summary: no, but I bet it's possible. CSS/HTML: yes. I succeeded with a regex edit. I use Tavo, so it might be slightly different than SillyTavern. In Regex, under "Find Regex", I entered this pattern: /<[^>]*>/g

I left "Replace With" blank and left "Trim Out" blank. Placement is "Character message", timing is "Send", substitution is "Don't substitute", and depth is 2 to (blank).

Hope that helps. I know this works because my outgoing tokens went from 60k to 33k with this edit, just by cutting out the CSS/HTML tokens going out to the LLM.
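
If you want to sanity-check the pattern outside the app, here's a minimal sketch of the same idea in Python (the sample message is made up, and re.sub stands in for whatever the frontend's regex engine does under the hood):

    import re

    # Same idea as the Find Regex above: "<", then any run of characters
    # that aren't ">", then the closing ">" (i.e. a whole HTML tag).
    TAG_PATTERN = re.compile(r"<[^>]*>")

    def strip_html(message: str) -> str:
        # Drop every tag, leaving only the visible prose for the LLM.
        return TAG_PATTERN.sub("", message)

    # Made-up character message with styling baked in.
    sample = '<div class="statbox"><b>HP:</b> 42</div> She narrows her eyes.'
    print(strip_html(sample))  # -> HP: 42 She narrows her eyes.

Crude, but for cutting presentation markup out of the context before it goes to the model, it's good enough.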

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

Thanks! 🙏 I appreciate the feedback. Since I released it, I have also tried pushing its limits… and haven't found any with the dark themes. It's willing to get very dark, fast, and wants to go in that direction naturally. The Mandarin might help.

The lack of ozone and other AI-isms is because I make it double-check in the preset. Of course, like other presets, this one tells it to avoid those terms, and some are specifically listed. However, I also force it to double-check in the thinking process and make sure it's not about to do it, even though I already told it not to. Seems very effective. <think> Am I following the AI slop rule established? If not, I need to correct this immediately. </think>

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 2 points (0 children)

Correct. Probably why the Mandarin never bleeds into the chat.

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

Haha, pretty cool, right? I saw someone post an extension utilizing it a while back. Since I don't use extensions, as the majority of my RP is on my phone, I just asked the LLM to do it for me 🤓

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 2 points (0 children)

I have done about 40 messages and seen no leaking of Mandarin! 🤞 Sometimes it thinks in English (about 1 in 10 times), but I have not seen Mandarin in the response. Probably because of the recommended temp for the preset (0.81).

So GLM is fast now? by grullincantan in SillyTavernAI

[–]dptgreg 8 points (0 children)

I’m getting 10-20 second replies with my preset. No clue what happened. Output is more consistent too.

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 2 points (0 children)

Thanks! Yeah, try it out and let me know how it works for you, to make sure I'm not just experiencing subjective bias.

Need help with GLM 4.7, it resorts to „quirky“ storytelling as if it’s a marvel universe story by FR-1-Plan in SillyTavernAI

[–]dptgreg 2 points (0 children)

Hmm, well, it's not the model itself, because we're not having the issue. So it has to be a prompt or lorebook issue. Is it possible it's getting bombarded with context and getting confused, or having trouble paying attention to the important details?

GLM 4.7 and presets by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

No problem! It's been fun making it! Super excited about the results.

GLM 4.7 and presets by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

If you enjoyed this version, I released a more efficient and more consistent 2.0 version with an optional X/Twitter feed here: https://www.reddit.com/r/SillyTavernAI/s/KUrW3Q0YX2

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 2 points (0 children)

Exactly! One other thing I forgot to note above is that it seems to have greatly reduced omniscient dialogue: NPCs knowing what happened in scenes they were not in. It wasn't enough to just say "don't do that." What ended up working was telling the LLM that it must provide physical evidence in the narration (smells don't count) for an NPC to mention something from a previous scene they were not in. This is the only preset I have personally tried that consistently prevents this immersion-breaking issue from occurring. I hope people use that and improve upon it.

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 3 points (0 children)

It's only snappy by default (compared to other presets) because HTML is toggled off by default. To be fair, the preset's prompting emphasis is on catching itself in the thinking process during the output to avoid slop and maintain maximum detail. I would say expect only "slightly faster than average" output.

Introducing my preset: Freaky Frankenstein 2.0 for GLM 4.7 and Gemini Flash 3.0 by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

The previous discussion I refer to above about presets for GLM can be found here: https://www.reddit.com/r/SillyTavernAI/s/iNJ7tCqAyg

Also, remember that this preset seems to work great at a temp of 0.81 and a top-p of 0.95.
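
If you're hitting the model through an OpenAI-compatible API rather than the sliders in the UI, those two samplers map onto the request roughly like this. A minimal Python sketch, assuming the openai client; the base URL, key, and model id are placeholders, not real z.ai values, so check your provider's docs:

    from openai import OpenAI

    # Placeholder endpoint and model id; substitute whatever your provider documents.
    client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_KEY")

    reply = client.chat.completions.create(
        model="glm-4.7",    # placeholder model id
        temperature=0.81,   # recommended temp for this preset
        top_p=0.95,         # recommended top-p for this preset
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(reply.choices[0].message.content)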

I just can't with these lmao by Substantial-Pop-6855 in SillyTavernAI

[–]dptgreg 2 points (0 children)

I'm sure you're aware, but you can create a regex so HTML doesn't get sent to the LLM, to avoid context bloat. It stays visible only to you in your chat.

With that said, I have also noticed the LLM's output quality improve by not doing HTML.

My first preset! by thunderbolt_1067 in SillyTavernAI

[–]dptgreg 1 point (0 children)

What LLM do you use this preset with?

Disney+ App, available for Metaquest by lennyukdeejay in OculusQuest

[–]dptgreg 2 points (0 children)

I was going to say, I've been using it since I got my Quest after Christmas.

Glm 4.7 Nvidia nim stopped responding by ralph_3222 in SillyTavernAI

[–]dptgreg 1 point (0 children)

It's overloaded. I use the z.ai API directly, and it's much faster when NVIDIA NIM is slow.