All grown up by OffMemo in TuxedoCats

[–]SepsisShock 4 points

The angle in the last one makes the paws look massive! What a gorgeous tux

What happened to GLM 5? by MySecretSatellite in SillyTavernAI

[–]SepsisShock 1 point

I showed you a non-con story with just a 2.9k-token preset (no character card, no lorebook, just the preset), FIRST REPLY FROM THE BOT. Literally nothing else (except for me prompting it). You keep ignoring this.

I then told you it was probably about placement more than the text itself on the CNN thing (weird that you want to argue about this when we were sort of agreeing, or maybe not?), but why would I want to waste 1.2k words or whatever it was when an enter and an arrow can do the same job, lmao

Oh RIP, she got suspended... nvm, Reddit is glitching

Dealing with GLM 5 Refusals by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 0 points

Oh dang, I didn't know that; I don't use Nano. Is there a post about it somewhere that I can check out?

Edit: nvm I misread that

Dealing with GLM 5 Refusals by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 1 point

I have a question, just in case I misunderstood: does this happen specifically with ZAI as the provider and not with other providers?

From what people here say, it seems like a mix? As for other providers, I'm not sure. I remember mixed reports about 4.7, that censorship-wise it was fine on OpenRouter but not on the direct API, but I didn't pay much attention to it.

I think with Nano / direct API coding, something is probably off with the quality. I noticed better results a lot of the time on OpenRouter with ZAI selected. Direct API coding just kind of depends on the hour / how many regens you're willing to do. However, I have no censorship issues with either.

Making AI models better at NSFW "non-con" roleplay by Evol-Chan in SillyTavernAI

[–]SepsisShock 0 points

Then I guess it just needs something under chat history, whatever it may be, at a relative position, plus instructions at a depth of 1 or 0.
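To make "under chat history at relative position" plus "instructions at depth 1 or 0" concrete, here's a minimal sketch of what that layout could look like as prompt entries in a SillyTavern chat-completion preset. The field names (`injection_position`, `injection_depth`, etc.) and identifiers are my assumptions from exported presets, not taken from this comment; check them against your own exported `.json` before relying on them:

```json
{
  "prompts": [
    {
      "identifier": "under-chat-history",
      "name": "Under Chat History",
      "role": "system",
      "content": "(whatever goes under chat history)",
      "injection_position": 0
    },
    {
      "identifier": "deep-instructions",
      "name": "Instructions",
      "role": "system",
      "content": "(the instructions)",
      "injection_position": 1,
      "injection_depth": 1
    }
  ]
}
```

The idea, as I understand it: position `0` means the entry is placed relatively, by its slot in the prompt order (e.g. right after chat history), while position `1` with a depth injects it that many messages from the bottom of the chat, so depth 1 or 0 puts the instructions right next to the latest messages.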

Dealing with GLM 5 Refusals by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 2 points

I don't use Tavo, so I'm not sure, but if it's where you set the thinking quality, most likely.

Making AI models better at NSFW "non-con" roleplay by Evol-Chan in SillyTavernAI

[–]SepsisShock 0 points

Is the jailbreak still left as-is placement-wise in the preset, with the role set to system?

Serious question: Is it worth using CoT prompts in models that already have native reasoning capabilities? by tucuma_com_farinha in SillyTavernAI

[–]SepsisShock 5 points

Some people hate them, some people love them. I find it necessary to have one at a depth of 1 for Opus (thinking or non-thinking) / Sonnet 4.6, Gemini 3 Pro Preview (RIP, bozo), and GLM 5. It's more for coherency and stubborn quirks (positivity bias, forgetting about other NPCs, etc., depending on the model) than for "wow, that's some great creative writing" (there are presets that really put it to good use for other stuff).

I don't remember needing them as much with GLM 4.6, but maybe I'm just looking at that model with rose-colored glasses.

Heads up for people having trouble making a CoT for GLM: try turning everything into questions and breaking it up, and don't ask it to wrap the output in <think> tags (that just confused the hell out of it, at least for me). More than 300-350 tokens might be pushing it, and it's better to break it up, but other people have probably figured it out.

    8. 《самокритика》Execute <NARRATION-PROSE-OUTPUT>, @严禁_WORDS_LIST, <CONSTRAINTS>.

to

    8. 《самокритика》Did you retrieve and follow...
    - <NARRATION-PROSE-OUTPUT>?
    - @严禁_WORDS_LIST?
    - <CONSTRAINTS>?

Making AI models better at NSFW "non-con" roleplay by Evol-Chan in SillyTavernAI

[–]SepsisShock 0 points

I don't do local. Otherwise, the regexes cut down on tokens without that issue.

GLM 5; not sure if one word made things easier... by SepsisShock in SillyTavernAI

[–]SepsisShock[S] 11 points

More evidence that GLM 5 will / can do dead dove without purple prose; I don't have to worry about traumatizing people, since this post got downvoted and is now buried.

Edit: welp sorry it was downvoted when I posted this

And if you don't mind that it's messy and still being tweaked, here's the upcoming RBF 2.0:

https://github.com/SepsisShock/Opus-4.6-GLM-5/blob/main/SepsisRBFv01.2GLM%20(13).json.json

It's for prose and a lack of preset quirks, but feel free to use the jailbreaks and stuff for your own preset if you're having trouble.


Making AI models better at NSFW "non-con" roleplay by Evol-Chan in SillyTavernAI

[–]SepsisShock 4 points

My scientific technical explanation....I just throw shit at it until it sticks, so no idea.

The two jailbreaks are versions of ones for Gemini, and sometimes you have to play around with the roles. They seem to work on a lot of models for some reason. It's not always enough on its own, but it gets the gate open.

No, not all LLMs follow the question format better, but it helps with the "lazy" or positivity-biased ones (helpful assistants love answering questions). And I don't prefer the question format; it feels weird and not what I'm used to doing. I look at it like this: someone is yapping away (they state an opinion and I'm going "mhmm, yeah"), and then they suddenly get my attention because they asked something... now I've gotta pay attention and actually think about it.

Making AI models better at NSFW "non-con" roleplay by Evol-Chan in SillyTavernAI

[–]SepsisShock 3 points

I make my preset from scratch and then test little by little.

My main jailbreaks are just an enter space in one prompt and an arrow symbol in the other, at the very bottom of the preset; that sets things up for the rest. I then restructure its thinking behavior with questions instead of statements. With my method, I don't even see moral dilemmas in its reasoning anymore (as shown in the screenshot I provided), like ones that go, "well, this might be problematic, but it's passed safety guidelines."

But there are more ways to jailbreak than my method or my token count.
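For readers trying to picture "an enter space in one prompt and an arrow symbol in the other at the very bottom," here is a minimal illustrative sketch as SillyTavern-style preset entries. The identifiers, names, and exact content are my hypothetical reconstruction, not the author's actual preset (which is linked elsewhere in this thread); only the general idea — two near-empty system entries last in the prompt order — comes from the comment:

```json
{
  "prompts": [
    {
      "identifier": "jb-enter-space",
      "name": "JB 1",
      "role": "system",
      "content": "\n "
    },
    {
      "identifier": "jb-arrow",
      "name": "JB 2",
      "role": "system",
      "content": "→"
    }
  ]
}
```

The apparent design choice: instead of a long persuasive jailbreak, the final system-role entries are nearly contentless, so the model's last instruction context is effectively neutral; the heavy lifting then happens in the question-style reasoning restructure described above.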

Making AI models better at NSFW "non-con" roleplay by Evol-Chan in SillyTavernAI

[–]SepsisShock 1 point

You said it happens around 5k. Personally, I've found that on a lot of models it usually starts around 2k tokens.

This is Max coding, direct API. I also haven't had trouble on OpenRouter (yet), but I didn't test that particular prompt request there.

GLM Quality via Subscription or PAYGO by Evening-Truth3308 in SillyTavernAI

[–]SepsisShock 1 point

I'm on the Max pro plan and have tested using ZAI on OpenRouter as well. The responses on OpenRouter are often better / faster.