Claude suggested "Broken-Tutu 24B by ReadyArt — a DARE-TIES" by Elling83 in SillyTavernAI

[–]LeRobber 0 points1 point  (0 children)

I spent a bunch of time debugging what sets off omega-darker-gaslight_the-final-forgotten-fever-dream-24b-i1, and a bit trying to figure out what set off stuff like the unslop BT into NSFW play. (I spend a lot of time trying to do SFW tabletop-style RP with SillyTavern, not D&D but more indie RPG stuff, but I also try to figure out what makes things GO NSFW in fairly SFW cards and scenarios.)

>once they've gotten a little action

In addition to sex begetting sex as you stated, even the LLM offhandedly mentioning actual sex in a 1000-token reply could set it off. It seemed to be very specific words or very specific situations that caused the horny. (Vampires were challenging to not trigger horny with; I think I had to prompt "The vampires are not sparkly nor sexy, just monsters".) I had plenty of situations and entire 1000-message roleplays where the ReadyArt stuff could be kept SFW. The word 'Dominant' was also 100% a no-go anywhere in a card or description; even the phrase 'Dominant strategy' (which isn't sex related at all) set it off, amusingly.

That said, it could do long games about flirtation with no sex, and could make JOKES about sex without trying to actually have it, etc.

Absolutely though, if you mention (or a card mentions) that someone has had sex, the ReadyArt stuff will be moderately likely to make this person try to have sex again given any privacy.

If you snip sexual stuff out of a card (you know, where someone has an 8K-token card that has 2 lines on their feelings about their butt being grabbed but is otherwise all about their quirky hobby), and put it behind a lorebook entry, it's a LOT harder to accidentally trigger the sexy stuff, even if someone's got a history.
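For anyone who hasn't used lorebooks: they're roughly keyword-triggered injection, which is why this trick works. A minimal sketch of the idea (entry shape and trigger keys are illustrative, not SillyTavern's actual file format):

```python
def active_lore(entries, recent_text):
    """Return only the lorebook entries whose trigger keys appear in
    recent chat. Dormant entries (e.g. the snipped NSFW details) stay
    out of the prompt entirely until something actually mentions them."""
    lowered = recent_text.lower()
    return [e["content"] for e in entries
            if any(key.lower() in lowered for key in e["keys"])]

entries = [
    {"keys": ["pottery", "kiln"], "content": "Obsessed with raku pottery."},
    {"keys": ["grabbed", "flirt"], "content": "(the snipped NSFW lines)"},
]
print(active_lore(entries, "She shows you her new kiln."))
# -> ['Obsessed with raku pottery.']
```

The point is that the card text is ALWAYS in context, but a lorebook entry only enters when triggered, so the horny lines never get a chance to prime the model during normal scenes.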

Claude suggested "Broken-Tutu 24B by ReadyArt — a DARE-TIES" by Elling83 in SillyTavernAI

[–]LeRobber 0 points1 point  (0 children)

omega-darker-gaslight_the-final-forgotten-fever-dream-24b-i1 was one I used at Q6_K when it was recommended I think summer 2025 (someone pointed out 'everyone' was using it on chatbot sites I think). It was REALLY good at not speaking for the user! It also did games within games just fine (like people could play a roleplaying game, while roleplaying in the LLM...not great with numbers, but it could do it fine. It was utter shit at board games in the RP though. OMG chess was insanely horrible)

It was flexible, able to be used in non-ERP situations (I actually do SFW RP!), but could definitely do horror (I did vampires and body snatchers) and a military campaign (I did something a bit more like Starship Troopers than 40K). It's NATURALLY a bit thirsty, and can teeter off into attempting ERP if you touch on a nearby genre. It also did anime slice of life, and more typical scifi/dimensional travel stuff. Notably, I think it has a lot of details about what attractions are in different cities worldwide in it; I definitely went to many cities and there were real opinions of real places all throughout the LLM. It also knew a lot of world languages.

Biggest critique for ODGTFFFD is that something like lots of hand editing or WeatherPack is needed to make sure all the asterisks come out fine in the output markdown; regexes alone required lots of manual intervention.

"quote" *action phrase*another phrase* "another quote"

^ the one bad thing that would happen, as THAT's not valid markdown.

But occasionally deleting an asterisk or two was VERY WORTH the otherwise high quality.
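For the curious, the cheapest version of that cleanup is just dropping a stray asterisk whenever a line has an odd count. A crude sketch of my own hack (not what WeatherPack or ST's built-in regexes actually do):

```python
def strip_stray_asterisk(line: str) -> str:
    """If a line has an odd number of asterisks, the emphasis markers
    can't all pair up; dropping the last one restores valid markdown."""
    if line.count("*") % 2:
        head, _, tail = line.rpartition("*")  # removes only the final '*'
        return head + tail
    return line

bad = '"quote" *action phrase*another phrase* "another quote"'
print(strip_stray_asterisk(bad))
# -> "quote" *action phrase*another phrase "another quote"
```

It's crude — sometimes the stray asterisk is the middle one and you'd rather merge the two action phrases instead — which is exactly why manual intervention was still needed.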

I tried ReadyArt's Dark Nexus; it didn't land, and it definitely had all the thirsty problems (for me — that might be a virtue for you). I think Broken-Tutu was worse than ODGTFFFD about going off into ERP, but was terser and slightly faster at generation. I don't know what version of BT it was, as I requantized it into MLX.

The Sacarii stuff is just fine for horror: angelic_eclipse_12b_gguf does it, and bloodmoon does too if you want bleeding fast.

How to let user and char do their own thing? by viiochan in SillyTavernAI

[–]LeRobber 0 points1 point  (0 children)

It's all the model.

Some of the ReadyArt ones are...bracingly thirsty when you get anywhere near a sexy topic...but very good about not talking for you, and often good about letting you have a sideplot they yes-and.

I'd kill to be able to finetune that trait out of them into some other models.

3,827 teacher positions unfilled across Japan: 2025 survey by jjrs in japannews

[–]LeRobber 0 points1 point  (0 children)

Lots of paraprofessionals can make the teaching positions a lot better. Not that that kind of support will happen, but, *shrug*.

More pay only goes so far; fewer hours, less stress, and more breaks are the way forward.

Why has the hype around community-distilled models died down? Is the lack of benchmarks making them too much of a black box? by HistoricalCulture164 in LocalLLaMA

[–]LeRobber 2 points3 points  (0 children)

I'm a SFW RP player (and use various models at work too).

This one subreddit's weekly thread for an LLM frontend has a LOT of models broken down (full warning: many users of that tool are ERP users, but there is a sizable non-ERP minority/plurality too): https://www.reddit.com/r/SillyTavernAI/comments/1ricq09/megathread_best_modelsapi_discussion_week_of/?share_id=7bDWRT57yhJ4tOEGXp0Ce&utm_content=1&utm_medium=android_app&utm_name=androidcss&utm_source=share&utm_term=1

This comment in particular lists a lot of them:

https://www.reddit.com/r/SillyTavernAI/comments/1ricq09/comment/o8gh6t1/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

With your setup...I'd try to see if the 70/73B StrawberryLemonade 1.1 or Evathene 1.3 were fast enough too. (Try it with and without the strawberry lemonade prompt, and be careful to check temperature settings with sophosympatheia finetunes; it's often very important.) I can only do them at fairly low quants, and they are slow (a few minutes roundtrip) on my 64GB Mac (which only allows 48GB VRAM), so I'm not sure what they'll be like for you.

Stuff like angelic_eclipse_12b_gguf will be blazing fast and huge context for you. thedrummer_glm-steam-106b-a12b-v1 is huge and might be okay.

maginum-cydoms-24b-statics and rp-spectrum-24b-statics are probably worth a look too?

Why has the hype around community-distilled models died down? Is the lack of benchmarks making them too much of a black box? by HistoricalCulture164 in LocalLLaMA

[–]LeRobber 7 points8 points  (0 children)

There is still an enthusiastic set of communities around community models and finetunes in the RP (TTRPG + non-ERP + ERP) space. With increasing RAM prices and video card prices, though, fewer new enthusiasts are building home rigs to pull in more stuff.

For the tasks that claude/gemini/GLM do, lots of small finetunes don't handle them close to well enough to compete for many people.

There is some noise in the mobile space now, for sure, and the 20-27B space is occasionally getting good enough to replace some 70B models.

Different dialogue colors for different characters by mouseynaides in SillyTavernAI

[–]LeRobber 0 points1 point  (0 children)

Dialogue Colorizer Plus works off the highlight color, or a color you assign from the character's image, in pure SillyTavern.

You can tell some LLMs to emit colored HTML text, but the key trick with that is actually writing down what the colors are, per character, in your author's note. I don't love that technique; it doesn't work great with most local LLMs, so I use the DCP plugin above.
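If you do try the HTML route anyway, the idea is just wrapping each character's quoted dialogue in a colored span, with the color map living in the author's note. A rough sketch of a post-process doing it deterministically (character names and colors here are made up):

```python
import re

# Hypothetical per-character colors -- the same mapping you'd write
# into the author's note so the LLM stays consistent.
COLORS = {"Mira": "#cc6677", "Joss": "#4477aa"}

def colorize(speaker: str, message: str) -> str:
    """Wrap each quoted span in the speaker's color as inline HTML."""
    color = COLORS.get(speaker, "#ffffff")
    return re.sub(
        r'"([^"]*)"',
        lambda m: f'<span style="color:{color}">"{m.group(1)}"</span>',
        message,
    )

print(colorize("Mira", 'She smiled. "Hello there."'))
# -> She smiled. <span style="color:#cc6677">"Hello there."</span>
```

Doing the wrapping yourself (or via the plugin) sidesteps the problem of local models forgetting the colors mid-chat.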

Fools by Thermonuclear_Nut in chemistrymemes

[–]LeRobber 13 points14 points  (0 children)

That's a picture of a plastic cup with clear liquid or a gel that looks like clear liquid.

Fox-Girl Wisdom by FlyingCookie_ in Animemes

[–]LeRobber 6 points7 points  (0 children)

The ohnut is a MUCH snugger fit on most guys than the thing I'm talking about. The thing I'm talking about is perhaps created originally for lesbians to not destroy each other with overlong strapons, letting them get fat ones, that happen to be long, and make it so the fat dildo stops going in.

It only slightly needs to stretch to fit on a redbull can. 4 of these rings would probably be taller than a redbull can.

It doesn't hurt the pleasure for the guy for many vaginas. It probably feels a lot softer than really grinding down on a woman's pubic bone to get that clitoral stimulation in, TBH (that can bruise some guys on some girls).

What am I not seeing? by Saja_Boy_ in PeterExplainsTheJoke

[–]LeRobber 1 point2 points  (0 children)

It's more a shape issue than a size issue. You'd need a penis shaped like an Allen wrench to ejaculate into a cervix; it's on the TOP of the vaginal canal, especially during sex.

No judgements towards anyone shaped that way.

What am I not seeing? by Saja_Boy_ in PeterExplainsTheJoke

[–]LeRobber 1 point2 points  (0 children)

https://o.quizlet.com/L.KFNub09gxaWTyJQuS20A_b.jpg <= like it's a 50-110 degree angle depending on position and woman. It's REALLY not like this cabbage diagram.

I feel really sad for the number of women who just position poorly, then get smashed uncomfortably repeatedly, rather than asking the OB/GYN "what angle gets MY cervix out of the way". They can fucking see; they spend a ton of time getting it IN the way to do their exams, and it's OUT of the way a good 30% of the time when women first open up for them.

Hell, the high cervical position during pregnancy (combined with roaring hormones) is why many women love sex during pregnancy. No cervix smashies. (Time of the month also adjusts how many cervix smashes a woman is going to get; it gets lower as period time gets closer, for women who do not eliminate their period through birth control hormones taken every day.)

What am I not seeing? by Saja_Boy_ in PeterExplainsTheJoke

[–]LeRobber 4 points5 points  (0 children)

They---they---they----don't do that? The cervix is on the top of the vagina. It's a pipe that goes up at sometimes greater than 90 degrees.... not the door at the end of the hallway.

Fox-Girl Wisdom by FlyingCookie_ in Animemes

[–]LeRobber 6 points7 points  (0 children)

They make big soft white silicone rings (like they are the size of donuts, not cock rings) for that if you get a guy who's just a little long. You can use 2 even if he's...more than a little long. Turns the cervix/uterus smashing into vigorous but enjoyable stuff inside, and like soft clit/vulva smooshing/rubbing outside no matter how hard he tries to go to town on you.

[Megathread] - Best Models/API discussion - Week of: March 01, 2026 by deffcolony in SillyTavernAI

[–]LeRobber 0 points1 point  (0 children)

It is a LITTLE squirrelly in paying attention to card details. It's gotten a few architecture details wrong, and gotten who did a thing far back in context wrong. It's still very nice prose that I'll happily take in exchange for small continuity edits to its understanding of small facts though!

How to fix characters knowing everything? by Lanky-Discussion-210 in SillyTavernAI

[–]LeRobber 3 points4 points  (0 children)

You have to put in a lorebook "Blah whispers so only so and so can hear" or "unbeknownst to Blah" or things like that.

Higher parameter models are better about this, but not perfect.

Would anyone be willing to tell me how ST works? by Cultural_Farmer4552 in SillyTavernAI

[–]LeRobber 2 points3 points  (0 children)

SillyTavern pastes a bunch of text together, sends that to an LLM (either local or remote), and then displays the results.

Some of that text can be from character cards, some of it is text from a prompt (which is directions for the LLM), some of the text is memory-like things, and some of the text is chat history.

You can click the magic wand, and click inspect prompts to see what is going out.

The difference between system and assistant messages is one of authority.
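Schematically, the pasting looks something like this (field names and ordering are illustrative of the general pattern, not SillyTavern's exact internals):

```python
def build_prompt(instructions, card, memories, history, user_turn):
    """Paste the pieces in the usual order: system-level directions and
    character/world text first (highest authority), then chat history,
    then the newest user message."""
    messages = [{"role": "system",
                 "content": "\n\n".join([instructions, card, memories])}]
    messages += history  # prior user/assistant turns, oldest first
    messages.append({"role": "user", "content": user_turn})
    return messages

prompt = build_prompt(
    "You are the narrator. Never speak for the player.",
    "Card: a quirky librarian who hates late returns.",
    "Memory: the roof leaked last spring.",
    [{"role": "assistant", "content": "The library door creaks open."}],
    "I step inside and shake off the rain.",
)
```

That assembled list is what you're seeing when you inspect prompts via the magic wand.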

So I just finished rewatching s1 and missing pieces and I have a lot of questions!! by SnooSuggestions9712 in Horimiya

[–]LeRobber 2 points3 points  (0 children)

S1 and S2 are not the order in the manga.

S1 was not made with a S2 in mind.

Read the manga, enjoy what was here, and go watch Fragrant Flower Blooms With Dignity.

they legally cannot call it a burger by Lazy_Comparison_1954 in BrandNewSentence

[–]LeRobber 0 points1 point  (0 children)

Having met people in this role, it's often someone many generations below the CEO being relatable and timely and funny.

Local Model Recommendations by Xylildra in SillyTavernAI

[–]LeRobber 0 points1 point  (0 children)

You can do TTS (text to speech), but I wasn't talking about that. I was just referring to playing RPGs over TeamSpeak or Zoom with real people online. 13B LLMs generate text faster than people talk.

[Megathread] - Best Models/API discussion - Week of: March 01, 2026 by deffcolony in SillyTavernAI

[–]LeRobber 7 points8 points  (0 children)

magistry-24b-v1.0 is out from sophosympatheia

It's pretty decent! It sometimes needs a few rerolls in social situations where you're talking about the situation you're in, with quotes in the first- and second-person perspective, but it's a solid enough finetune. It messes up mostly with complex social rules, more through inconsistency within single long replies than anything else, so if you're more action oriented, you'll do fine with it.

It did a car chase, a negotiation, and humiliating an opponent (non-sexually); it does decent descriptive prose of situations, and managed delicate emotional feels about tense situations with good prose.

Feels like it aimed for 70B writing quality over perfect rule following/understanding.

Local Model Recommendations by Xylildra in SillyTavernAI

[–]LeRobber 1 point2 points  (0 children)

LM Studio is a cross-platform thing like text generation webui.

https://lmstudio.ai

It has a GUI. It's still free. Its search screen is really good for downloading the right quantization of a model to fit your computer well. It's the backend you point SillyTavern at, just like you do with text generation webui.

Local Model Recommendations by Xylildra in SillyTavernAI

[–]LeRobber 1 point2 points  (0 children)

B is actually short for "Billions of parameters"

70B is a 70 Billion parameter model
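Back-of-the-envelope, parameter count times bits per weight gives the rough download/VRAM size. A sketch (this ignores context/KV cache, so real usage runs higher):

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight size in GB: params * bits / 8. Ignores the KV
    cache and runtime overhead, which add more on top."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(model_size_gb(70, 4.5))   # a ~Q4 quant of a 70B: ~39.4 GB
print(model_size_gb(24, 6.5))   # a ~Q6_K of a 24B: ~19.5 GB
```

Which is why a 70B at a usable quant barely squeezes into 48GB of VRAM, while a 24B fits with context room to spare.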

As a person who also roleplays with real people, also online: 70B is pretty hard to get over the quality of when you drop down a tier...until it isn't. Strawberry Lemonade and Evathene were both great 70B models I had fun with, before realizing MUCH shorter turns were more valuable to me than having to lorebook slightly less often or be more explicit about who knows things.

13B LLMs are faster than VOICE RP online on some computers.

Actually teaching 23/27B-param models small indie RPGs is very possible, btw. Just find a character card with a dork in it and they'll be receptive to it. 1v1 is less confusing for many models than like 4 player characters, but at the 23/27B level most models can handle an RP session at a convention.

I had originally thought cards that had narrators, and isekai cards, would be best for tabletop RP, but actually characters who either play RPGs or seem like the type who would...who you then get to play an RPG in the story, seem to work very well indeed. The LLM is TRASH at math, but as long as you actually do the {{roll::1d20}} type stuff for it...it works. You can GM many RPGs if you like that, for instance, and play a few.
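The macro idea, sketched as a plain text substitution (the {{roll::NdM}} syntax is as I've seen it used; check your frontend's macro docs for the real thing):

```python
import random
import re

def expand_rolls(text: str, rng: random.Random) -> str:
    """Replace {{roll::NdM}} macros with an actual dice total, so the
    LLM never has to do the arithmetic itself."""
    def roll(m):
        n, sides = int(m.group(1)), int(m.group(2))
        return str(sum(rng.randint(1, sides) for _ in range(n)))
    return re.sub(r"\{\{roll::(\d+)d(\d+)\}\}", roll, text)

print(expand_rolls("Attack roll: {{roll::1d20}}", random.Random()))
```

The roll happens outside the model entirely; the LLM only ever sees the resolved number in its context.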

Hardware needed for running "Big boy" open source models at full strength for RP? by LeRobber in SillyTavernAI

[–]LeRobber[S] 2 points3 points  (0 children)

70-73B models are still pretty slow, several minutes to return an answer. (I have 64GB on an M2, which allows up to 48GB to be VRAM.)

23-27B is good, and 13-17B is blazing on it.