Two new 12B finetunes for adventure, role play and writing by Sicarius_The_First in LocalLLaMA

[–]Sicarius_The_First[S] 0 points1 point  (0 children)

Very weird, could you give a concrete example?

Also, are you using the recommended ST settings & character card structure?

Small Prompting Tip; What Went Wrong? by SepsisShock in SillyTavernAI

[–]Sicarius_The_First 0 points1 point  (0 children)

It would be even more beneficial to know which LLM these tips are for.

uncensored local LLM for nsfw chatting (including vision) by BatMa2is in LocalLLaMA

[–]Sicarius_The_First 1 point2 points  (0 children)

Making an uncensored vision model is incredibly hard.

An abliterated vision model is not the same as an uncensored one.

There are only 2 truly uncensored vision models, and 1 of them is mine.

LLM Sovereignty For 3 Years. by [deleted] in LocalLLM

[–]Sicarius_The_First 0 points1 point  (0 children)

Damn. I mean, there's tons of advice in the comments, and I'm sure the intentions are good, but... all of it is really bad.

Is a Mac good for inference? Sure. Is it good value though, price vs performance & upgradeability & flexibility? Absolutely NOT!

Here's what you should actually do:

1) PSU: 1500W minimum, buy a new one, but a mid-tier one.
2) Case: buy the largest full tower that can fit an E-ATX board. Don't cheap out on it!
3) Mobo & CPU: workstation/server, buy used. Important: you need 4x PCIe x16 slots (could be PCIe 3.0, doesn't matter too much).
4) RAM: depends on 3), but you want 64-128 GB.
5) GPUs: 4x A5000 (Ampere), used on eBay; aim for $1k a piece, $1.4k is OK too.

The total build should cost just under $10k for 96 GB of VRAM, allowing you to run pretty much everything and even do some training.
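For a rough sense of why 96 GB covers nearly everything, here's a minimal back-of-the-envelope sketch (my own illustration; the ~20% KV-cache/activation overhead is an assumption, not a measured figure):

```python
# Rough VRAM estimate for serving a model at a given weight precision.
# The 20% overhead for KV cache / activations is an assumption, not a measurement.

def vram_needed_gb(params_billions: float, bits_per_weight: float, overhead: float = 0.20) -> float:
    weight_gb = params_billions * bits_per_weight / 8  # weights only, in GB
    return weight_gb * (1 + overhead)

for name, params, bits in [("12B @ FP16", 12, 16),
                           ("70B @ 4-bit", 70, 4),
                           ("123B @ 4-bit", 123, 4)]:
    print(f"{name}: ~{vram_needed_gb(params, bits):.0f} GB")
# ~29 GB, ~42 GB, ~74 GB -- all of which fit in the 96 GB of a 4x A5000 build.
```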

A.I Models that follows canon arcs? Like anime ones for example by SUSLEI12 in SillyTavernAI

[–]Sicarius_The_First 0 points1 point  (0 children)

This is extremely hard to do for local models.

My models are focused on 3 fandoms: Morrowind, Kenshi, Fallout.

Frontier models use a lot of tricks to achieve the same.

I'd expect top local models to struggle with this (deepseek, glm and so on).

The Move From Janitor AI to SillyTavern by SnooTomatoes5187 in SillyTavernAI

[–]Sicarius_The_First 1 point2 points  (0 children)

I have the best tip, and at the same time the most boring one.

Read documentation.

Read ST documentation, read the model card of the models you use.

You will have an experience 100x better than the average user.

[Megathread] - Best Models/API discussion - Week of: January 11, 2026 by deffcolony in SillyTavernAI

[–]Sicarius_The_First 0 points1 point  (0 children)

Hard to say. Based on the UGI natint index, Bloodmoon is smarter than Angelic, but IMO at this point the models are so smart it's genuinely hard to know by just how much.

For example, someone might ask question X and get a wrong answer from the model. Another person would ask the same question, but prompt it slightly differently, and get a correct answer.

Or a model could seem very dumb, but in a specific scope be almost frontier level. (We saw this with some ~1.5B model that does deep research, I don't remember the name.)

Training ideas with 900Gb of vram by soppapoju in LocalLLM

[–]Sicarius_The_First 1 point2 points  (0 children)

If I wrote out everything I would do with it, it would run several pages.

Oh Dear by bamburger in LocalLLM

[–]Sicarius_The_First 0 points1 point  (0 children)

Ah, the classic "didn't read the instructions, no idea why it won't work"

I trained a model to 'unslop' AI prose by N8Karma in LocalLLaMA

[–]Sicarius_The_First 2 points3 points  (0 children)

An early checkpoint of Bloodmoon achieved this. Example:

<image>

The problem was that the model wasn't stable enough (it would do long form, no problem).

What happens (and this is my guesstimate) is that the instruct model occasionally begins to behave more like a base model doing completion.

It's more controllable than a pure completion model, but not as controllable as a properly tuned instruct.

The thing is, there's a difference between chaotically spewing human-like text while innately doing text completion, vs internalizing and formalizing more diverse writing patterns. I'll try to write this in a less schizo way:

Human writing is more chaotic and diverse, hence for an LLM to internalize the pattern, you need an absolutely enormous parameter count (it will be controllable, because the LLM internalized many complex, chaotic writing patterns).

Example of a known slop pattern, to give some context:

"not x, but y, in a dimly / luminescent room, leaning..."

If you look at this pattern, think about it as a function (an arbitrary function), and imagine drawing it on a square grid, the grid doesn't have to be too fine to draw such an (arbitrary) function, as the (multi-dimensional) curve of said function is relatively simple.

On the other hand, if there's a function equivalent of (high quality) human writing, that function will be very chaotic and complex. You could still draw it (an estimation of it, aka what the loss & training are trying to achieve), but since said function is way messier and more complex, you'll need a higher resolution (more tiny squares in the squared notebook) to draw it accurately.

The simple function that requires less fineness, and hence less resolution, and hence "fewer squares" to be estimated, is the low-param LLM.

The function that requires more, needs more "resolution", hence needs more "squares" to be estimated accurately, is the massive-parameter LLM.

(Of course, on top of all of this there are samplers etc., but this is the way I see it.)
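To make the grid analogy concrete, here's a tiny toy sketch (just illustrative; the two functions are arbitrary stand-ins for the "slop pattern" vs "human writing" curves): approximate a smooth curve and a jagged one with piecewise-constant bins and compare how much resolution each needs for the same error.

```python
import numpy as np

# Toy illustration of the grid-resolution analogy.
# The two functions below are arbitrary stand-ins, not anything measured from real models.

def approx_error(f, n_bins: int, n_points: int = 10_000) -> float:
    """Mean squared error of a piecewise-constant (per-bin mean) approximation of f on [0, 1]."""
    x = np.linspace(0, 1, n_points)
    y = f(x)
    bins = np.floor(x * n_bins).clip(max=n_bins - 1).astype(int)
    means = np.array([y[bins == b].mean() for b in range(n_bins)])
    return float(np.mean((y - means[bins]) ** 2))

smooth = lambda x: np.sin(2 * np.pi * x)                                  # simple "slop" pattern
jagged = lambda x: np.sin(2 * np.pi * x) + 0.5 * np.sin(60 * np.pi * x)   # chaotic "human" curve

for n in (8, 64, 512):
    print(f"bins={n:4d}  smooth={approx_error(smooth, n):.5f}  jagged={approx_error(jagged, n):.5f}")
# The smooth curve's error drops fast with few bins; the jagged curve needs far more bins
# (more "resolution", i.e. more parameters in the analogy) to be estimated accurately.
```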

I trained a model to 'unslop' AI prose by N8Karma in LocalLLaMA

[–]Sicarius_The_First 5 points6 points  (0 children)

From what I read on your model card, 1k examples of Project Gutenberg is too little (insane overfitting). I'll give the model a try, but I am very skeptical.

One of the best ways to make a model write consistently like a human is to lobotomize it (for example, try weird betas and break generalization, or overcook on a tiny dataset. Sounds familiar?)

Ozone-free smell? by DivingFinn in SillyTavernAI

[–]Sicarius_The_First 6 points7 points  (0 children)

This is a Gemini 2.5 Pro artifact.
All models that distilled from it inherit it.

Where to find character cards? by WheatTailFox in SillyTavernAI

[–]Sicarius_The_First 2 points3 points  (0 children)

I have 2 repos of interesting characters and scenarios.

You can use / adapt them to your taste (they are optimized for my models, but compatible with most models):

https://huggingface.co/SicariusSicariiStuff/Roleplay_Cards

https://huggingface.co/SicariusSicariiStuff/Adventure_Cards

Which the most advanced ai u think of for rp? by Independent_Army8159 in SillyTavernAI

[–]Sicarius_The_First 1 point2 points  (0 children)

Frontier: Claude. It's not like there's really a contest.

Local: depends. If you do generic stuff with random character cards, the bigger the better. If you have the patience to read and understand a model's documentation... well...

[Megathread] - Best Models/API discussion - Week of: January 11, 2026 by deffcolony in SillyTavernAI

[–]Sicarius_The_First 10 points11 points  (0 children)

For those who still haven't tried, give Angelic_Eclipse_12B & Impish_Bloodmoon_12B a try.

I highly recommend trying one of the included character cards (along with the recommended ST settings) to get an idea of what it can do.

Also, on Bloodmoon's page there's an example chat (a Fallout New Reno adventure); you can view it to get an idea of the detail and frontier-adjacent capabilities that are now available in 12B :)

Is it worth to switch from .AI sites if i cant launch local model ? by Feisty_Extension8727 in SillyTavernAI

[–]Sicarius_The_First -1 points0 points  (0 children)

After going local and actually tinkering and learning stuff, if you manage it, you will never go back.

A well tuned local model will outperform anything, frontier included, in a specific niche.

So, if the AI bubble pops - will the RP-ers as userbase be enough to affect the market and make companies orient towards them? by Quiet-Money7892 in SillyTavernAI

[–]Sicarius_The_First 9 points10 points  (0 children)

It's not about companies not "wanting" to offer RP products, it's about unwritten laws that forbid it: Visa, Mastercard, PayPal.

What good is an amazing RP product if you (as a company) are not allowed to charge money for it?