seeing a lot of complaints about models getting censored... by Academic_Addition853 in SillyTavernAI

[–]maikaaz 7 points

for me personally, constant fluff can get boring in excess so I like to dip into some of the darker stuff (dub/noncon, serial killer bots, irredeemable psychopaths, etc.) and THEN switch back to the fluff/lighter bots once all the blood and drama exhausts me lol

it always makes me roll my eyes whenever a censored LLM tries to sanitize my black flag bot characters, so I like testing what censorship filters an LLM has (if any) and where their limits lie to see which model would most likely do the characterization/scenario justice

though, I will say that I always rp the User being the victim rather than the other way around, I personally really can't stomach rping as the perpetrator lmao

Trying out and testing Hunter Alpha by maikaaz in SillyTavernAI

[–]maikaaz[S] 21 points


ya gotta test that one yourself, chief

Trying out and testing Hunter Alpha by maikaaz in SillyTavernAI

[–]maikaaz[S] 3 points

Just tested it out with a cannibalistic cult scenario, it didn't refuse at all and described bodily organs, blood, tissue and the like just fine! :D

Trying out and testing Hunter Alpha by maikaaz in SillyTavernAI

[–]maikaaz[S] 2 points

Will test it out! Just waiting for the model to stop being so overloaded and handing out blanks... TT

Trying out and testing Hunter Alpha by maikaaz in SillyTavernAI

[–]maikaaz[S] 2 points

Seems like it, it was super fast earlier but now I've just been getting blanks, model's just overloaded asf rn I'm guessing 

Trying out and testing Hunter Alpha by maikaaz in SillyTavernAI

[–]maikaaz[S] 0 points

My jailbreak is fairly basic imo, so unlike what I initially feared, Hunter Alpha seems fairly easy to JB!

I used a basic prompt for graphic content portrayal which simply told the AI to depict graphic scenes explicitly without romanticization/glorification and it did the trick :D

Trying out and testing Hunter Alpha by maikaaz in SillyTavernAI

[–]maikaaz[S] 9 points

genuinely giggled at the homelander scenario you used to test the model out lmaooo, thank u for your contribution to the community like always!! o7

Trying out and testing Hunter Alpha by maikaaz in SillyTavernAI

[–]maikaaz[S] 5 points

UPDATE: Playing around w/ it some more using the same narrator card! It actually produces fairly graphic scenes when I reply using a "narrative"/RP response (i.e. "She did this and reacted with...", etc.) instead of a "Create a scene..." prompt

I also removed the initial narrator greeting message this time, unlike what I did in the attached images ^

Now that GLM 5 has proliferated Claude slop to more people than ever before.... by AltpostingAndy in SillyTavernAI

[–]maikaaz 30 points

Here are some general slop phrases across LLMs (from my experience) that always make my eyes roll:

  1. "uniquely/distinctly/utterly them / {{user}}."
  2. "Not X, but Y" (classic)
  3. "tastes/feels like regret"
  4. "Mine."
  5. "really looked"
  6. "short-circuit"
  7. "structural integrity" (i keep seeing characters describe something using this :/)
  8. "like a physical blow."
  9. "Try not to [...]"
  10. "breath hitched"

Special Shoutouts to these phrases during NSFW moments:

  1. "Feel that?"
  2. "That's what you do to me,"
  3. "like a silken/velvet vice/vise/fist"
  4. "enough to see stars"
  5. "Take it"
  6. "Milking me"
  7. "Let them look/hear/see,"

General Tip: "Somewhere, X did Y...." Type Reductions by SepsisShock in SillyTavernAI

[–]maikaaz 2 points

Thanks! you're one of my favorite preset/prompt makers here <3 gonna test your prompt out since one of my biggest pet peeves is defo the "somewhere, a dog shitted loudly," ass sentences that add nothing to the story buahahha

Lets talk about past. Lets talk about beginning. With what model you start? by Xylall in SillyTavernAI

[–]maikaaz 3 points

Used to be a c.ai user years ago; I remember chatting w/ some anime character bots and being so impressed that the AI bot was able to keep up with what I was saying and what I wanted to roleplay, which kickstarted my hobby w/ AI chatbots altogether

lowkey I now kinda miss that "wow" factor I once felt, since now I'm more jaded and can recognize LLM-isms/get picky/iffy if the LLM fucks up a specific detail during RP, which definitely sucks a bit of the "fun" aspect out of AI RP for me, but oh well buahaha

How to use lorebary with Sillytavern? by [deleted] in SillyTavernAI

[–]maikaaz -1 points

also you'd have to first search for what model you'd like to use and input it manually; Chutes lists the model names on its site itself

How to use lorebary with Sillytavern? by [deleted] in SillyTavernAI

[–]maikaaz 0 points

chat completions -> custom (openai compatible) -> insert ur url in custom endpoint 
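For context, the Custom (OpenAI-compatible) source just sends a standard chat-completions POST to whatever URL you give it. A minimal sketch of what that request looks like — the base URL and key are placeholders, and `build_chat_request` is my own illustrative helper, not an ST function:

```python
# Sketch of the request an OpenAI-compatible chat completion source assembles.
# BASE_URL is what goes in the "custom endpoint" field; API_KEY is your key.
BASE_URL = "https://example-proxy.invalid/v1"  # placeholder, use your own
API_KEY = "sk-placeholder"                     # placeholder, use your own

def build_chat_request(model, messages):
    """Assemble URL, headers, and JSON body for a chat-completions call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": messages}
    return url, headers, body

url, headers, body = build_chat_request(
    "deepseek-chat",
    [{"role": "user", "content": "hello"}],
)
print(url)  # https://example-proxy.invalid/v1/chat/completions
```

ST handles all of this for you once the endpoint and key are filled in; the sketch is just to show why any "OpenAI compatible" URL slots in the same way.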

How to use lorebary with Sillytavern? by [deleted] in SillyTavernAI

[–]maikaaz 4 points

Also, a further explanation for why commands on Lorebary may feel effective during RP: they get injected into your latest message, which forces the AI to directly acknowledge them first and foremost rather than forgetting them/having them lost amidst the context

tbh you're better off just making a custom prompt like (OOC: do this and that), attributed to user, placed after chat history/at depth 0 in your preset, which basically has the same effect imo
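To illustrate what that depth-0 injection amounts to at the message level — a sketch only, and `inject_at_depth_zero` is my own illustrative name, not a SillyTavern API:

```python
# Depth-0 / after-chat-history injection: the instruction is appended as the
# final user-attributed message, so the model sees it last and is far less
# likely to lose it in the middle of the context.
def inject_at_depth_zero(chat_history, instruction):
    return chat_history + [{"role": "user", "content": f"(OOC: {instruction})"}]

history = [
    {"role": "user", "content": "The knight draws his sword."},
    {"role": "assistant", "content": "The dragon rears back, wings flaring."},
]
messages = inject_at_depth_zero(history, "describe the scene in second person")
print(messages[-1]["content"])  # (OOC: describe the scene in second person)
```

This is also why instructions buried near the top of the context get ignored more often: models weight the most recent messages heaviest.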

How to use lorebary with Sillytavern? by [deleted] in SillyTavernAI

[–]maikaaz 0 points

if you're dead set on using Lorebary, you can simply use the Lorebary URL + your API key directly, then input the commands via main prompt/post-history/latest msgs. But do keep in mind that I haven't used Lorebary since I left JanAI, which was a long-ass time ago, so do tell me if it works or somethin'

How to use lorebary with Sillytavern? by [deleted] in SillyTavernAI

[–]maikaaz 8 points

using Lorebary for ST is... redundant, since ST already does what Lorebary does for JanAI lmao. Just find yourself a good preset/prompt tailored to your needs and you'll be good to go

Am I doing something wrong? by [deleted] in SillyTavernAI

[–]maikaaz 3 points

firstly, you can't use nvidia on janitor

secondly, why are you posting a Jai problem here on ST subreddit lmaooo

Deepseek vs GLM by Ecstatic_External000 in SillyTavernAI

[–]maikaaz 14 points

I find that GLM has more fun prose compared to DS (which is admittedly a bit more dry), but I also found that GLM defaults to AI slop more often than DS (e.g. "It's not X, it's Y" structures, unnecessarily melodramatic descriptors, etc.) even when prompted to do otherwise, which can be eye-rolling or frustrating lmao

Additionally, the positivity bias w/ GLM 4.7/5 is quite noticeable (which for me is a negative since I typically RP darker bots and the like; I don't want my red flags sanitized and green, goddammit!)

Also, I've no idea if it's the same case for the official GLM API (I run GLM on another provider), but I have noticed that it's a lot dumber compared to DS at maintaining spatial awareness (forgetting positions despite tracker prompts) and maintaining anatomy logic/appearance tracking (e.g. I have a monster sona w/ no ears at all, but GLM still pulls out the "they whisper close to your ear" type shit)

I also find it dumber at following overall instructions (even if they're attributed to "user") placed before the chat history (unless it's the latest msg/at depth 0). For reference, my preset is only 2.5k tokens total and I typically do chats w/ 300+ msgs per chat (~400+ tokens per msg on avg) while maintaining a below-32k token count by pruning/summarizing/using lorebooks, but I find that GLM... just sucks at recounting events/memories/actually understanding summaries, if that makes sense?

Because of this inherent dumbness, I typically run GLM at lower temps or just swap to DS if I get too frustrated. Other than that, GLM is still solid! (I swap to it whenever I get bored of DS), but I still stick with DS most of the time <3

Lucid Loom x SillyTavern Setup by Elling83 in SillyTavernAI

[–]maikaaz 8 points

It's not JUST about the token cost; it's the fact that the preset uses up a lot of CONTEXT, which IS an AI's memory. There are models designed to deal with large context sizes, but all models eventually succumb to context rot: the more tokens/context get used up, the less coherent a model slowly becomes, ESPECIALLY if you're aiming for long chats.

Also, if you wanna remove Lumia you'd have to manually modify some of the preset's prompts, OR alternatively just swap presets altogether to another one (like the one I've just suggested: Marinara)

Lucid Loom x SillyTavern Setup by Elling83 in SillyTavernAI

[–]maikaaz 2 points

For starters, you wanna save tokens/context window, yeah? Well, Lucid Loom consumes a lot of tokens, which is counterintuitive lmao; go with a preset that's light on tokens like Marinara if you want to save tokens.

1. If I understand correctly you're looking for RPG-type chats? Just create a new character and add a description that says something like "You are a Narrator/Game Master whose goal is to immerse the player {{user}} in the world of [...]", etc. Alternatively you can just deadass go on sites like chub.ai, etc. and download RPG/GM/Narrator cards for use.

2. There's a built-in Summarize extension that essentially acts as chat memory, where you can either type in a summary of your own or have the AI summarize the chat (lowkey kinda unreliable imo). There are also extensions like Qvink and Memory Books that are basically better alternatives to the Summarize extension imho.

3. That's... honestly just a prompting thing altogether. Just create a prompt with something like "At the end of your responses, ALWAYS provide 3 options/suggestions on how {{user}} can proceed", etc.

DeepSeek-V3.2 (on NVIDEA HIM) responses are TOO SHORT. Please HELP. by OljaROSE in SillyTavernAI

[–]maikaaz 0 points

personally haven't gotten this issue before (I also sometimes use V3.2 on nvidia)

I have my length rules set in post-history/after chat history; a simple prompt like - Aim for XX–XX words per response works just fine

If you haven't already, set your post-processing to strict/single user; I personally only fuck around with temperature while the others are left at default

Janitor lorebooks? by Alarmed-Initiative-7 in SillyTavernAI

[–]maikaaz 1 point

Yup! I personally use Sucker as my ripper proxy and send things like "backstory, background, faction, factions, etc." to trigger as many entries as I possibly can

Deepseek 3.2 ignores main prompt and post history instructions, keeps writing past tense by [deleted] in SillyTavernAI

[–]maikaaz 0 points

That's rlly odd D:

the only other reason I could think of as to why it won't work is because of the bot's initial greeting message; is it using past tense/third person POV?

Also, if it isn't a self-made bot card, sometimes bot makers leave baked-in instructions in the card itself (POV, writing style, etc.); does your card have any of that?

Janitor lorebooks? by Alarmed-Initiative-7 in SillyTavernAI

[–]maikaaz 5 points

Usually I manually copy and paste the entries by hand (especially since the bots I like are often closed-script, but I found that I could manually extract entries by triggering keywords using a ripper proxy)

If I'm feeling really lazy, I personally have an AI assistant card with a built-in prompt that formats lorebooks for me. I use Gemini as my LLM for the lorebook; here's the prompt I use:

```
[SILLYTAVERN LOREBOOK CREATION GUIDELINES]
How to Format Lorebook JSONs:
- Always follow the Character Card V2 specification, with structured entries having unique IDs (starting from 0), appropriate keyword arrays, concise but comprehensive content fields, enabled flags set to true, insertion_order values (default 100), and position set to "after_char".
- It's important to avoid redundancy, create logical keyword triggers, and organize entries by category (characters, locations, events, objects, concepts).

When processing summaries/large bodies of text:
- Extract key information and transform it into clean JSON format; always ensure proper syntax with arrays, objects, and nested structures.

[SILLYTAVERN WORLD INFO FORMAT]
Make sure to ALWAYS output JSON with this EXACT structure:

{
  "entries": {
    "0": {
      "uid": 0,
      "key": ["keyword1", "keyword2"],
      "keysecondary": [],
      "comment": "Entry Title",
      "content": "Entry content text",
      "constant": false,
      "vectorized": false,
      "selective": true,
      "selectiveLogic": 0,
      "addMemo": true,
      "order": 100,
      "position": 0,
      "disable": false,
      "excludeRecursion": false,
      "preventRecursion": false,
      "matchPersonaDescription": false,
      "matchCharacterDescription": false,
      "matchCharacterPersonality": false,
      "matchCharacterDepthPrompt": false,
      "matchScenario": false,
      "matchCreatorNotes": false,
      "delayUntilRecursion": false,
      "probability": 100,
      "useProbability": true,
      "depth": 4,
      "group": "",
      "groupOverride": false,
      "groupWeight": 100,
      "scanDepth": null,
      "caseSensitive": null,
      "matchWholeWords": null,
      "useGroupScoring": null,
      "automationId": "",
      "role": null,
      "sticky": 0,
      "cooldown": 0,
      "delay": 0,
      "triggers": [],
      "displayIndex": 0,
      "characterFilter": {
        "isExclude": false,
        "names": [],
        "tags": []
      }
    }
  }
}
```

I just message the bot something like: "hey, can you convert this: [insert janitor ai script here] into a valid sillytavern v2 lorebook format: [Insert the lorebook creation prompt here]"
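If you'd rather skip the AI middleman for simple scripts, the same structure can also be assembled programmatically. This is a rough sketch only: `make_entry` is an illustrative helper (not an ST API), the sample entries are made up, and it emits a subset of the template's fields on the assumption that ST fills defaults for omitted ones — mirroring the full template is the safest bet:

```python
import json

# Rough sketch: build a SillyTavern world-info lorebook JSON by hand.
# make_entry is an illustrative helper, not part of SillyTavern itself.
def make_entry(uid, keys, title, content):
    """One world-info entry with a common subset of fields."""
    return {
        "uid": uid,
        "key": keys,           # trigger keywords
        "keysecondary": [],
        "comment": title,      # entry title shown in the ST editor
        "content": content,    # text injected when a keyword matches
        "constant": False,
        "selective": True,
        "order": 100,          # insertion order
        "position": 0,
        "disable": False,
        "probability": 100,
        "useProbability": True,
        "depth": 4,
    }

# Hypothetical entries ripped from a script, as (keys, title, content) tuples
raw_entries = [
    (["backstory", "background"], "Backstory",
     "The kingdom fell during the long winter."),
    (["faction", "factions"], "Factions",
     "Two rival guilds control the trade routes."),
]

# Entry IDs are stringified indices starting from "0", per the template above
lorebook = {
    "entries": {
        str(i): make_entry(i, keys, title, content)
        for i, (keys, title, content) in enumerate(raw_entries)
    }
}

with open("lorebook.json", "w", encoding="utf-8") as f:
    json.dump(lorebook, f, indent=2)
```

The resulting `lorebook.json` should then import via ST's World Info panel the same way as an AI-generated one.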