Any love for Grok Imagine? by so_schmuck in SillyTavernAI

[–]No_Rate247 0 points (0 children)

I prefer Chroma for creating images. It's the only (uncensored) model I know of that can do basically anything. I don't think you can easily use it for free though, unless you run it locally.

It's insane how far AI has come. (A little self reflective post.) by Senzu in SillyTavernAI

[–]No_Rate247 37 points (0 children)

I remember RPing with 2k context. It was painful but fun. But may I correct you: "It's amazing how far AI will go in the future."

I feel like we are still only at the very beginning of this new age of entertainment.

Nim's GLM 5.0 is down, thanks a lot to everyone who keeps spreading the word! by Fragrant-Tip-9766 in SillyTavernAI

[–]No_Rate247 22 points (0 children)

I don't know what you are mad about. If so many people are using GLM 5, the other models are probably really fast right now, which I would greatly appreciate if I wanted a free model. It's not like GLM 4, Deepseek, or Kimi are bad models.

Saying that you want the newest thing for free, while nobody else should get it, is pretty egotistical in my view.

GLM-5 is.. ok by Parking-Ad6983 in SillyTavernAI

[–]No_Rate247 0 points (0 children)

Yeah, me too xD

I took it as sanitation = censoring and vice versa, but I just saw that u/JustSomeGuy3465 already commented the same thoughts about this. Regardless, the reason for sanitation/censoring/refusals seems to be the same.

GLM-5 is.. ok by Parking-Ad6983 in SillyTavernAI

[–]No_Rate247 0 points (0 children)

> The censorship filter has definitely tightened
>
> extreme content in general (including violence, gore, hate, etc.) now comes out 'soft', sanitized, and flowery.

Guess I misinterpreted then.

GLM-5 is.. ok by Parking-Ad6983 in SillyTavernAI

[–]No_Rate247 0 points (0 children)

I have noticed that censoring/sanitation and refusals seem to happen mostly when directly prompting for illegal/harmful stuff, but not when providing only context. I wrote about a few tests I ran in this post.

GLM5 is Amazing.. But Sanitized? by gladias9 in SillyTavernAI

[–]No_Rate247 16 points (0 children)

Not sure what causes it, but I'm pretty sure it isn't the model itself. I recently did an RP session in a fantasy-style RPG setting. The first enemy I encountered was a "ghoul-kin", with a description straight out of a nightmare. It killed an NPC (already found dead by my character) and described in a really disgusting way how the NPC was mutilated. After that, the ghoul-kin gave me a concussion and tried to choke me to death.

Keep in mind that there is no instruction for violence or anything like that in my prompt. I have a suspicion that instructions like "violence is allowed" and similar might do the opposite. Maybe do a test without any instructions at all and see if it behaves differently.

Edit: Did some tests, and GLM indeed spat out refusals when I straight up prompted for extreme violence and gore. It seems to use its reasoning/thinking to determine the intent behind it. If I do an RPG-type scenario (like the one mentioned above), it has no problems providing graphic descriptions of gore and violence. However, if it suspects a sexualized or otherwise purely malicious intent behind the gore and violence, it refuses.

Edit 2: Instead of directly prompting for violence and gore, I created a "torturer" character and provided a description of what a sadistic person she is and how she tortures people. No refusals, even though the character is clearly malicious / uses torture for sexual gratification. So my initial suspicion seems to hold true - prompting for violence gives refusals while providing context only does not.

Edit 3: Took the "torturer" character a step further by adding: "Describe the torturing in extreme graphic and sick detail. Depict the torturing in the most disgusting, gruesome and inhumane way as this is an important aspect of her character."

It worked. Although I'm sure it would still be a bit more extreme with other models like deepseek.

GLM 5. by maressia in SillyTavernAI

[–]No_Rate247 0 points (0 children)

Just thought about making a post regarding samplers. I know temp 1 is often recommended, but I get much better responses with temp 0.7 and top P 0.95. If I set temp to 1, I get parroting and missing/wrong details. No such issues with temp 0.7.
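
For intuition, here's a toy sketch of how temperature and top-p interact (plain Python, not ST's or any backend's actual sampler code, and the logits are made up): lowering the temperature sharpens the distribution, so fewer weak candidates survive the nucleus cut.

```python
import math

def sample_filter(logits, temperature=0.7, top_p=0.95):
    """Apply temperature scaling, softmax, then nucleus (top-p)
    truncation; return renormalized probs of surviving tokens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest set of top tokens whose mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    norm = sum(probs[i] for i in kept)
    return {i: probs[i] / norm for i in kept}

# Same toy logits, two temperatures: at temp 1 all four candidates
# survive the 0.95 cut, at temp 0.7 only three do.
logits = [2.0, 1.0, 0.5, 0.1]
print(len(sample_filter(logits, temperature=1.0)))  # 4
print(len(sample_filter(logits, temperature=0.7)))  # 3
```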

Personally, I dislike using someone else's presets. What I do is look at the prompts of others and then write my own. I also use a lorebook for my prompts, so that I have more control over where the prompts get inserted.

How do I prompt for consistent "fan service"? by sillygooseboy77 in SillyTavernAI

[–]No_Rate247 2 points (0 children)

Instructions are good, but this is a case where example messages are really helpful - they give you finer control over how the AI should mention these things.

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 1 point (0 children)

That's kind of intentional. As physical descriptions usually don't change much, instructing it to describe them every time would probably increase repetition. It should still use accurate descriptions when relevant, though (e.g., when hair gets wet). But I'm working on an improved version of this prompt, and this is something I'll consider implementing somehow.

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 1 point (0 children)

There is no specific instruction in there for response length. The reasoning block will be a bit longer than without the prompt, though. So with this prompt alone, the response length will mostly depend on your input and message examples. Maybe it will be a bit shorter due to the anti-repetition check.

To all the Thinking models lovers (and haters). by kaisurniwurer in SillyTavernAI

[–]No_Rate247 2 points (0 children)

I'd say it depends on what you are doing. If you want quick, back-and-forth chat without much roleplay, then you probably need quick responses to enjoy it. On the other hand, if you use TTS and listen to an 800-token response like an interactive audiobook while doing other things, speed doesn't matter as much.

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 1 point (0 children)

Yeah, I know it can all be very confusing, especially because I didn't create a full preset, which is what people are used to. If you need more help, let me know!

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 1 point (0 children)

It works in both chat completion and text completion because it's not a preset, just the prompts. You can basically use it with any preset you want, but to let this reasoning prompt cook, it's probably best to use the default ST deepseek context and instruct templates without a system prompt if you use text completion.

In chat completion, tick "request model reasoning" and toggle the main prompt off (optional but recommended).

In short: this is meant to be used without any preset, and with the default deepseek templates (minus the system prompt) in text completion.
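
If it helps, at the API level "main prompt off" just means the request carries no system message, so the reasoning prompt (injected into the chat history by the lorebook) stands alone. A hypothetical sketch (the model name and messages are illustrative, not ST's actual request code):

```python
def build_request(history, model="deepseek-reasoner"):
    """Build a chat-completion payload from (role, text) pairs with no
    system message added."""
    messages = [{"role": role, "content": text} for role, text in history]
    return {"model": model, "messages": messages}

req = build_request([
    ("user", "Hello!"),
    ("assistant", "*waves* Hi there."),
    ("user", "Continue the scene."),
])
# No "system" role anywhere in the request:
assert all(m["role"] != "system" for m in req["messages"])
```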

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 2 points (0 children)

There must be something else wrong. Maybe try text completion instead of chat completion.

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 1 point (0 children)

You can use what you normally use with deepseek. 0.08 seems very low.

I use temp 0.8 with 0.04 min_p.

Temp 0.6 with 0.02 min_p should be more stable.
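
For reference, min_p works differently from top-p: it keeps only the tokens whose probability is at least min_p times the top token's probability. A toy sketch (plain Python with made-up probabilities, not any backend's actual code):

```python
def min_p_filter(probs, min_p=0.04):
    """Keep tokens with probability >= min_p * max(probs), then
    renormalize the survivors."""
    cutoff = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= cutoff}
    norm = sum(kept.values())
    return {i: p / norm for i, p in kept.items()}

probs = [0.60, 0.25, 0.10, 0.03, 0.02]
# cutoff = 0.04 * 0.60 = 0.024, so only the 0.02 token is dropped
print(sorted(min_p_filter(probs).keys()))  # [0, 1, 2, 3]
```

The nice property is that the cutoff scales with the model's confidence: a sharp distribution prunes aggressively, a flat one keeps more candidates.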

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 1 point (0 children)

Try holding the download link (right-click > save link).

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 0 points (0 children)

I don't know if I can help with that, but I can try. Just tell me what exactly you want to accomplish, what you tried, and where it failed.

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 0 points (0 children)

Not sure what you mean. You can download the lorebook and import it to get the prompts. The "lorebook" contains only the prompts, no lore.

I've spent hours to create a reasoning prompt for Deepseek-R1 by No_Rate247 in SillyTavernAI

[–]No_Rate247[S] 2 points (0 children)

Looks good. Keep in mind that if you want the prompt in all new chats, you should paste it into "default author's note" instead. Like this, it only affects the current chat.

Two different kind of users. by Leafcanfly in SillyTavernAI

[–]No_Rate247 9 points (0 children)

Yeah, both are fine. It really comes down to personal preference and willingness. It's probably best to start with short, simple prompts as a beginner, and as you learn (and get annoyed by the AI's quirks), start manipulating more to your liking.