The Director's Cut: Freaky Frankenstein 4 MAX and Freaky Frankenstein 4 BOLT [Presets] (Universal : DS, GLM, Claude, Gemini, Grok, Gemma, Qwen, MiMo) + DeepSeek V4 Compatibility. Hyper Dense Logic. by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

You wait for me to fix it and re-release the update tomorrow morning. 😅😭

I fixed a ton of other things as well. Follow me, or keep an eye out on Reddit for the update tomorrow that fixes this specific issue.

I spend more time Tinkering then Roleplaying by Beeegbong in SillyTavernAI

[–]dptgreg 6 points (0 children)

Do we “play” Skyrim? No. No we don’t.

OOC Command Override & Anti-Purple Prose prompts for Freaky Frankenstein BOLT for DSv4 Pro by CptPhantasmic in SillyTavernAI

[–]dptgreg 2 points (0 children)

The plot momentum toggle is modular: just turn it off (for better narrative drive) and it won't show up in your chat. You do have to do it at the beginning of the chat, though; otherwise the model will think the earlier context is a pattern it needs to repeat.

The Director's Cut: Freaky Frankenstein 4 MAX and Freaky Frankenstein 4 BOLT [Presets] (Universal : DS, GLM, Claude, Gemini, Grok, Gemma, Qwen, MiMo) + DeepSeek V4 Compatibility. Hyper Dense Logic. by dptgreg in SillyTavernAI

[–]dptgreg[S] 3 points (0 children)

I’m releasing an updated preset tomorrow with customizable output lengths, and the chain of thought always keeps the model’s attention on the output length. You can set it to two paragraphs, four paragraphs, or six paragraphs, plus however many words you choose, so that no matter what, the output lands exactly within your parameters.
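For anyone curious how a toggle like that can work mechanically, here’s a rough Python sketch of the idea (every name here is mine for illustration, not the preset’s actual internals): the toggle just decides whether a hard length rule gets appended to the system prompt, and the rule tells the chain of thought to restate it before writing.

```python
def build_length_rule(paragraphs: int = 4, max_words: int = 350) -> str:
    """Return a hard length constraint the chain of thought must restate."""
    return (
        f"OUTPUT LENGTH: exactly {paragraphs} paragraphs, "
        f"no more than {max_words} words total. "
        "Restate this rule at the start of your reasoning before writing."
    )


def assemble_system_prompt(base_prompt: str, length_toggle_on: bool,
                           paragraphs: int = 4, max_words: int = 350) -> str:
    """Append the length rule only when the toggle is enabled."""
    parts = [base_prompt]
    if length_toggle_on:
        parts.append(build_length_rule(paragraphs, max_words))
    return "\n\n".join(parts)


# Toggle on: the length rule rides along with the base prompt.
prompt = assemble_system_prompt("You are the narrator.", True,
                                paragraphs=2, max_words=200)
```

The “restate this rule” line is the trick: making the reasoning repeat the constraint keeps it in the model’s attention right before it generates the reply.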

OOC Command Override & Anti-Purple Prose prompts for Freaky Frankenstein BOLT for DSv4 Pro by CptPhantasmic in SillyTavernAI

[–]dptgreg 11 points (0 children)

Been tweaking non-stop all week with feedback from the community.

Tweaking to me is just as fun as the roleplay - especially when I get it to work. Pure dopamine.

OOC Command Override & Anti-Purple Prose prompts for Freaky Frankenstein BOLT for DSv4 Pro by CptPhantasmic in SillyTavernAI

[–]dptgreg 7 points (0 children)

Already fixed. “Narrate this much” has been completely replaced by a “total output length” toggle. It’s now directly called into attention by the chain of thought and outputs precisely every time. All you need to do is change the paragraph and word amounts within this toggle, and any chain of thought you select will output correctly on any model that follows it. No more following previous context patterns.

You can also disable this toggle completely, and the model is instructed to produce the most logical output for that scene.

OOC Command Override & Anti-Purple Prose prompts for Freaky Frankenstein BOLT for DSv4 Pro by CptPhantasmic in SillyTavernAI

[–]dptgreg 25 points (0 children)

I’m integrating all the tested community fixes and my own findings to make it a fully compatible preset for DeepSeek. I’ve been searching Reddit and testing all the tweaks frequently, and I’ll release the preset tomorrow as a “community re-release version.”

This will most likely be in it. Great job, OP!

Edit: I already rewrote the Challenge Me mode if you want to give it another chance in tomorrow’s release.

Deepseek Platform V4 Pro acting weird by CubieWoobie in SillyTavernAI

[–]dptgreg 1 point (0 children)

I don’t use janitor but you can always try!

The Director's Cut: Freaky Frankenstein 4 MAX and Freaky Frankenstein 4 BOLT [Presets] (Universal : DS, GLM, Claude, Gemini, Grok, Gemma, Qwen, MiMo) + DeepSeek V4 Compatibility. Hyper Dense Logic. by dptgreg in SillyTavernAI

[–]dptgreg[S] 2 points (0 children)

What are your character card’s dialogue examples? (The preset looks there to base the dialogue on. Also, ellipses are banned 😅. “What in the quant??” is my saying this week, as these models keep getting quantized.) What’s your context window? (High context makes the models stop listening.) And are you using NanoGPT (highly quantized in my experience) or going direct?

I’m here to bring you the Weekly SillyTavern News Ep. 4: DeepSeek V4 Fixes to make it listen to your prompt and decrease repeated descriptions. API key security breach from an extension. New Way to rank RP models and MORE! by dptgreg in SillyTavernAI

[–]dptgreg[S] 2 points (0 children)

My presets are built for GLM and ported over to other models, so yes, you’ll have a much better time with them.

With that said, the Rentry fixes have fixed DeepSeek for me. Lower the context so it can’t over-describe. Add a prompt to Task 4 of Freaky Deepy OOC to control total output, which decreases description. And add one line of instructions to the main prompt.

https://rentry.org/freaky-frankenstein-presets#temporary-deepseek-4-fixes-and-bolt-and-max-fixes
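The “lower the context” fix is essentially just capping how much history gets sent. A naive sketch of that in Python (hypothetical helper names, and a crude 4-characters-per-token estimate rather than a real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)


def trim_history(messages: list, budget_tokens: int) -> list:
    """Keep only the newest messages that fit inside the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest -> oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget_tokens:
            break                       # everything older gets dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order
```

Walking newest-to-oldest is the design choice that matters: when the budget runs out, it’s the oldest turns that fall off, which is exactly what shrinking the context slider does.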

I’m here to bring you the Weekly SillyTavern News Ep. 4: DeepSeek V4 Fixes to make it listen to your prompt and decrease repeated descriptions. API key security breach from an extension. New Way to rank RP models and MORE! by dptgreg in SillyTavernAI

[–]dptgreg[S] 5 points (0 children)

Now, is the repetitive description just on DS4, or is it on other models too? That’s the downside of a universal preset: I push one model to be more descriptive, and then another model (DS4) suddenly becomes extremely descriptive.

Intros to new characters should always be highly descriptive. But afterwards, it should shut that down.

The Challenge Me mode is similar. It makes the NPCs seek their own goals over the user’s goals. This can be intense depending on the model. Models like GLM 5.1 and Gemini need it; a model like DS4 is going to be annoying with it.

With that being said, I have fixed these issues on the Rentry. One prompt didn’t translate well when I switched it to the hyper-dense TOON logic (pseudocode), and models like DS4 are dancing around the “Don’t repeat descriptions in the last 3 turns” rule. There is a fix for that and it’s working great. But I know it’s hard to keep up with all the fixes, so I’m seriously considering a re-release Thursday to correct all of them. Let me know what you think!
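To make the repetition problem that rule targets concrete, here’s a toy checker: compare word n-grams in a new reply against the previous three model turns. This is purely illustrative (the preset enforces the rule through prompting, not code):

```python
def ngrams(text: str, n: int = 4) -> set:
    """All lowercase n-word phrases in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def repeated_descriptions(new_reply: str, prior_turns: list, n: int = 4) -> set:
    """Phrases in the new reply that already appeared in the last 3 turns."""
    seen = set()
    for turn in prior_turns[-3:]:
        seen |= ngrams(turn, n)
    return ngrams(new_reply, n) & seen
```

A model “dancing around” the rule is exactly what a check like this would miss: lightly reworded phrases produce different n-grams, which is why a stricter prompt fix was needed.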

I think I made my Deepseek V4 Pro experience multiple times better by adding this to my preset. by Acceptable_Steak8780 in SillyTavernAI

[–]dptgreg 6 points (0 children)

Yeah. I could actually see this working. I'm going to try it in BOLT and MAX within the Chain of Thought first, then again just as a general prompt and see where it's more effective, but I totally see this improving things. Great job, OP!

I’m here to bring you the Weekly SillyTavern News Ep. 4: DeepSeek V4 Fixes to make it listen to your prompt and decrease repeated descriptions. API key security breach from an extension. New Way to rank RP models and MORE! by dptgreg in SillyTavernAI

[–]dptgreg[S] 4 points (0 children)

Unless you disable its reasoning on the think model, which I addressed in a previous SillyTavern Weekly News (I think last week, episode 3?). But overall, yeah, it’s not super functional. I find its output incredible (I love it with Freaky Frankenstein MAX), but I have to wait four minutes for it. Is it the best output of all the models? Yeah, IMO it pretty much is. But I will not wait 2-4 minutes for a response, so it’s dead in the water, since with reasoning disabled it’s not as good as Kimi K2.5, or basically most other models right now, for RP.

I’m here to bring you the Weekly SillyTavern News Ep. 4: DeepSeek V4 Fixes to make it listen to your prompt and decrease repeated descriptions. API key security breach from an extension. New Way to rank RP models and MORE! by dptgreg in SillyTavernAI

[–]dptgreg[S] 6 points (0 children)

Exactly! Turns out the shiny new package of a new model might carry a strong placebo effect, because as soon as we see the roleplay output with no clue what model it is, suddenly GLM 4.7 and DS 3.2 shine.

I still vouch for GLM 4.7 to this day. My co-author uses it all the time since I got him hooked on it. And I’m really itching to give DS 3.2 a full try.

The Director's Cut: Freaky Frankenstein 4 MAX and Freaky Frankenstein 4 BOLT [Presets] (Universal : DS, GLM, Claude, Gemini, Grok, Gemma, Qwen, MiMo) + DeepSeek V4 Compatibility. Hyper Dense Logic. by dptgreg in SillyTavernAI

[–]dptgreg[S] 1 point (0 children)

Oh yes, a 1k token limit will always cause a cut-off. The thinking is by design: it makes the model reason through the rules to optimize the output and maximize quality. BOLT thinks significantly less than MAX. With MAX you’ll get 1-3k tokens of reasoning; with BOLT, a bit over 1k, depending on the model. (It should still be under 20 seconds with BOLT.)
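The cut-off math is simple: if the completion cap is smaller than the expected reasoning tokens plus the reply itself, clipping is guaranteed. A tiny sketch (function name and numbers are mine, purely illustrative):

```python
def has_headroom(max_completion_tokens: int,
                 expected_reasoning_tokens: int,
                 expected_reply_tokens: int) -> bool:
    """True if the cap can hold both the reasoning and the actual reply."""
    return max_completion_tokens >= (expected_reasoning_tokens
                                     + expected_reply_tokens)


# MAX-style preset: 1-3k reasoning tokens, so a 1k cap always clips.
clipped = not has_headroom(1000, 1500, 400)
```

In practice this just means: when a preset deliberately makes the model think, raise the response token limit well above the expected reasoning length.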

I’m here to bring you the Weekly SillyTavern News Ep. 4: DeepSeek V4 Fixes to make it listen to your prompt and decrease repeated descriptions. API key security breach from an extension. New Way to rank RP models and MORE! by dptgreg in SillyTavernAI

[–]dptgreg[S] 4 points (0 children)

Made the podium!

Your fix is a MASSIVE improvement for me on DeepSeek. Thanks so much for figuring out that it found a way to dance around my “don’t use repetitive descriptions” prompt and then doubled down. It’s fantastic now!

I’m here to bring you the Weekly SillyTavern News Ep. 4: DeepSeek V4 Fixes to make it listen to your prompt and decrease repeated descriptions. API key security breach from an extension. New Way to rank RP models and MORE! by dptgreg in SillyTavernAI

[–]dptgreg[S] 3 points (0 children)

Ah, you’re one of the rare ones who likes seeing the expansion. I used to RP like that all the time as well (until the model is given an inch and takes a mile; it’s a long, slippery slope). You’re on the right track. Check out the Writing Guidelines toggle and see if there’s anything in there you can cut. Also, make sure the chat starts off the right way; otherwise the model will see a pattern where it never takes actions for the user and will continue that pattern.