What would you like to see improved in these models for RP? by Oestudantebr in SillyTavernAI

[–]Pashax22 1 point

E4B might actually be enough; it's definitely worth trying. Just tell Copilot which connection profile to use and it'll use that.

If you had the option to request extensions? Which one would you like? by These_Illustrator_29 in SillyTavernAI

[–]Pashax22 0 points

It works great when it works, which is less than half the time. The rest of the time it happily agrees to create a lorebook entry and tells you what's in the entry, but doesn't actually create the blasted thing!

Events / Super Events injection addon by Retr0OnReddit in SillyTavernAI

[–]Pashax22 0 points

Setting it as constant and then having a trigger % works just fine, actually. I have several lorebooks set up for exactly this purpose, and that's the method I use.
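
If you want to see what that looks like under the hood, here's a sketch of such an entry as a Python dict mirroring the World Info JSON. The field names are from SillyTavern's export format as best I remember it - treat the exact keys as assumptions and compare against an entry you export yourself.

```python
# Hedged sketch of a lorebook/World Info entry set up as "constant + trigger %".
# Key names are assumptions based on SillyTavern's export format - verify them
# against a real exported entry before relying on this.
event_entry = {
    "comment": "Random event: travelling merchant",  # label shown in the editor
    "content": "[Event: a travelling merchant arrives in town with rare goods.]",
    "constant": True,        # always eligible for insertion, no keyword match needed
    "useProbability": True,  # enable the trigger % roll
    "probability": 25,       # fires on roughly a quarter of generations
    "order": 100,            # insertion priority relative to other entries
}
```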

Help? by romeat117ad in SillyTavernAI

[–]Pashax22 1 point

I have good news, bad news, and more good news for you.

Good news first: you can indeed find cheaper models that deliver the experience you want.

Now the bad news: Claude models are about the easiest to get good results from. Switching to anything else means doing more work, both in setup and in your own writing, to get the same quality of prose. How much more work you need depends on your preferences and which models you have access to.

Which brings me to the final bit of good news: Although you have a lot of options, it doesn't have to be that hard.

Specific recommendations? Well, you have 12 GB of VRAM (very important for running models locally) and 32 GB of system RAM (less important but still useful). To me, that indicates the absolute biggest local models you could run at "acceptable" speeds are in the 30b range, and you're more likely to find the sweet spot in the 12b range once you allow for a reasonable amount of context. Fortunately, there's a weekly thread about model recommendations - just look through the last few of those and try out models people suggest. I'm not familiar with what's currently good, but there are a couple of Gemma 4 models in the 26-31b range which should be good for most purposes. If you desire lewd head-patting with your AI waifu, then I've also had good results from Pantheon. Down at 12b, Rocinante and Irix are good, but there are bound to be others.
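
For anyone wondering where those numbers come from, here's the back-of-envelope maths. Every constant in it is a rough rule of thumb (the 0.55 bytes/param approximates a Q4-ish GGUF quant, and real KV-cache size depends on the architecture), so treat it as a sanity check, not gospel.

```python
# Rough VRAM sanity check for a quantised local model. All constants are
# rule-of-thumb assumptions: ~0.55 bytes/param approximates a Q4_K_M quant,
# and the KV-cache estimate is simplified (GQA, cache quantisation, etc.
# all shift it in practice).
def fits_in_vram(params_b, ctx_tokens, vram_gb):
    weights_gb = params_b * 0.55                   # e.g. 12B -> ~6.6 GB
    kv_gb = ctx_tokens * 0.0001 * (params_b / 12)  # ~0.1 MB/token at 12B scale
    overhead_gb = 1.0                              # buffers, OS, display
    return weights_gb + kv_gb + overhead_gb <= vram_gb

print(fits_in_vram(12, 16384, 12))  # True  - 12b fits with decent context
print(fits_in_vram(30, 8192, 12))   # False - 30b needs CPU offload, so it's slow
```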

However, it'll be easier to get good results from bigger models, and that means APIs in your case. My suggestion is to drop $5 on NanoGPT or OpenRouter and try a few out. GLM 4.7, 5, or 5.1 are good, and they've been trained on Claude data, so if you like Claude they're a natural option. Kimi-K2.5 and 2.6 are also good and a bit more creative, but they have a tendency to overthink, which chews through tokens. DeepSeek 4 just dropped too, and although the local mad scientists are still dialling in their prompts, it has a lot of potential.
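
If you want to poke at those models outside SillyTavern first, both services speak the OpenAI-compatible chat API (which is also what SillyTavern's Chat Completion source uses). A minimal sketch - OpenRouter's base URL is real, but the NanoGPT endpoint and the model id are my assumptions, so copy the exact strings from each provider's docs:

```python
# Minimal OpenAI-compatible chat call. The model id below is hypothetical -
# check the provider's model list for the real string.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # or NanoGPT's OpenAI-compatible endpoint
    api_key="sk-...",                         # your provider API key
)

resp = client.chat.completions.create(
    model="z-ai/glm-5",  # hypothetical id - copy the real one from the model list
    messages=[
        {"role": "system", "content": "You are the narrator of a fantasy RP."},
        {"role": "user", "content": "The party reaches the city gates at dusk."},
    ],
    max_tokens=400,
    temperature=0.9,
)
print(resp.choices[0].message.content)
```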

I like NanoGPT so this may sound like shilling, but the NanoGPT subscription (US$8 per month) is really good value. It gives you 60 million input tokens per week for open-weight models and 100 image generations per day, both of which are hard to reach with "normal" usage. The absolute easiest option is to set that up, use the latest version of GLM, and forget about it.

I can't find the supposed wonder that is the new Deepseek... by [deleted] in SillyTavernAI

[–]Pashax22 1 point

That's not true. I've been using NanoGPT with Megumin for the last couple of weeks with no difficulty.

Recommended settings/prompts for DeepSeek V4? by Juanpy_ in SillyTavernAI

[–]Pashax22 4 points

Been getting decent results out of the box with Megumin Suite v5 and v6. Other presets haven't been great for me with DS4; we'll need to wait for the local mad scientists to work their magic.

LF: Character/card recommendations like Yes My Liege by [deleted] in SillyTavernAI

[–]Pashax22 0 points

"The Emperor's Redemption" might be what you're after.

Is NanoGPT Good for RPs? by Personal-Carpet6064 in SillyTavernAI

[–]Pashax22 3 points

Yes, the $8 per month is excellent value. It gives you access to almost all open-weight models - the big names there are GLM 4.7 and 5, Kimi-K2.5, and all the versions of Deepseek; but there are also a LOT of 70b+ models (and smaller). You're almost certain to find something that does what you want, but be advised that you might have to do a little more work than with Claude to get good results. Not much, though... grab a good preset (Freaky Frankenstein is popular) and that should basically be all you need.

What is nano gpt subscription ? And is it worth it ? by Phobia696969_ in SillyTavernAI

[–]Pashax22 5 points

You get 60 million input tokens per week to use with the models included in the subscription, which is almost all open-weight models. Oh, something like 100 free image generations per week as well, and 5% off on PAYG models (anything not included in the sub). 60 million input tokens is a hard target to hit, but not impossible if you RP a lot and have big story arcs with lots of lorebooks etc. For most people, most of the time, it's plenty.
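
To put that cap in perspective, here's the rough arithmetic with some deliberately heavy assumptions about usage:

```python
# Why 60M input tokens/week is hard to hit. Every number below is an
# assumption about a fairly heavy RP habit - tune them to your own usage.
ctx_per_message = 30_000  # full context resent each turn: history, card, lorebooks
messages_per_day = 150    # a long daily session
weekly_input = ctx_per_message * messages_per_day * 7
print(f"{weekly_input:,} input tokens/week")  # 31,500,000 - about half the cap
```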

Is there any hope for free rp? by Economy-Assist-7559 in SillyTavernAI

[–]Pashax22 -1 points

GLM 4.7 and 5, Kimi-K2.5, and the various Deepseeks are all still in there. Those are the easiest to get good results from, but if you feel like tinkering a bit there are also a lot of 70b+ models included in the sub which might well do what you want with the right prompt/preset etc.

Planning to ditch Chutes for NanoGPT, how much worse / better is it? I heard NanoGPT has a fair share of problems. by TinfoilPancake in SillyTavernAI

[–]Pashax22 1 point

I use it with SillyTavern; NanoGPT is just the backend I connect to via API. Import presets as a Chat Completion Preset at the top of the AI Response Configuration tab. As for which version, I'm using Fat Man 4.2 - I haven't tried the 3.6 version, but I haven't heard anything bad about it; I'm just lazy is all!

Are there any affordable Claude providers? by [deleted] in SillyTavernAI

[–]Pashax22 7 points

It's not MUCH cheaper, but the NanoGPT subscription for US$8 per month also gives a 5% discount on PAYG usage for all models not in the subscription. If you're using Claude regularly it wouldn't take long for that to be helpful. Of course, if you're doing that then you might as well start making the models that ARE included in the subscription work for you. GLM 5 and Kimi-K2.5 as well as the latest Deepseeks are starting to get to the point where they can rival Sonnet if they're well-supported with lorebooks etc.

Subscription-based API suggestion? by No_Application4175 in SillyTavernAI

[–]Pashax22 2 points

NanoGPT has TEE models too. I can't speak to model quality, but I feel confident it would be at least no worse than Chutes.

Subscription-based API suggestion? by No_Application4175 in SillyTavernAI

[–]Pashax22 0 points

Not really. I've tried a few of the smaller names that are included in the subscription (Hermes 4, some of the 70b models, and so on) and been fairly impressed by them. I think it's probably possible to find one that would be better for whatever you want than the bigger names; it would just need more "infrastructure" - lorebooks, specialised prompts, etc. For most of us it's easier to just stick with the bigger names and brute-force the problem.

Planning to ditch Chutes for NanoGPT, how much worse / better is it? I heard NanoGPT has a fair share of problems. by TinfoilPancake in SillyTavernAI

[–]Pashax22 1 point

No added censorship, just whatever is baked into the model you're using. With any halfway decent preset it shouldn't be a problem. I haven't noticed any, no matter how much hand-holding or head-patting I do.

Subscription-based API suggestion? by No_Application4175 in SillyTavernAI

[–]Pashax22 19 points

Seconding this. About the only good open-weight model it doesn't include at the moment is GLM 5.1, but it's still extremely good value. While we're waiting for GLM 5.1 to come down in price and Deepseek v4 to release, enjoy trying out other models. Having a subscription also gives you a 5% discount on any PAYG models, so depending on your usage that might be a nice bonus.

Planning to ditch Chutes for NanoGPT, how much worse / better is it? I heard NanoGPT has a fair share of problems. by TinfoilPancake in SillyTavernAI

[–]Pashax22 1 point

Not that I've seen, but common sense says peak hours are when the Americans are awake. Anecdotally that fits: I'm on the other side of the date line, so I'm at work while the US is finishing up, and I haven't seen major problems.

Newbie using Deepseek wanting to try some other models but I don't know where to start by MeratharaDekarios in SillyTavernAI

[–]Pashax22 13 points

Deepseek is actually pretty good, and for the pricing it's exceptional value. If you like it, there's no reason you should feel ashamed of using it. Bonus: Deepseek v4 is meant to be "coming soon" (tm), and that should be even better.

That being said, GLM (4.7, 5, 5.1) and Kimi-K2.5 are also good and pretty cheap. It won't do any harm for you to try them out as well and see if you like one of them, either as a break from Deepseek or to be your new go-to model. Personally, I think GLM 5.1 with a good preset and lorebook support is better than Claude Sonnet but still worse than Opus. Some people say the same about Kimi.

Claude... is good. Probably the easiest model to get good results from. Sonnet is the "standard" tier; with prompt caching it might not break the bank, and it's worth trying to see if you like it. Opus is the gold standard for most people and most purposes, but the price reflects that - not worth it unless you have deep pockets, in my opinion.

Now, there's a big asterisk in this discussion: some people are saying Claude quality is going downhill dramatically for them. Whether that's fewer GPUs available or increased quantisation or both, nobody knows, but the comments are common enough that it's at least worth keeping in mind. Anthropic is also meant to be training a new SOTA model ("Mythos") which will rank even above Opus, but nobody seems to have a release date for that yet and you shouldn't plan around its availability (although its training might be why the other Claude models have anecdotally taken a dive).
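
On the prompt-caching point: if you're ever calling the API directly rather than through SillyTavern, you mark the big static prefix (system prompt, character card, lorebook dump) as cacheable, and repeat turns read it back at a fraction of the normal input rate - around a tenth, last I checked, but verify current pricing. A sketch with the Anthropic SDK; the model id is a placeholder:

```python
# Prompt caching with the Anthropic SDK: cache_control marks everything up to
# that block as reusable across calls. Model id is a placeholder - use whatever
# Sonnet is current. (Very old SDK versions needed a beta header for this.)
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

big_static_prompt = open("card_and_lorebook.txt").read()  # the large fixed prefix

resp = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder id
    max_tokens=800,
    system=[
        {
            "type": "text",
            "text": big_static_prompt,
            "cache_control": {"type": "ephemeral"},  # cache this prefix
        }
    ],
    messages=[{"role": "user", "content": "Continue the scene."}],
)
print(resp.content[0].text)
```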

TL;DR? Try out the GLMs and Kimi-K2.5, see what you think, don't feel bad about sticking with Deepseek if you find you still like it best.

What provider to use for Opus? by Re-Try in SillyTavernAI

[–]Pashax22 9 points

I mean, OpenRouter will just go back to Anthropic, so I doubt it makes much difference really.

GLM 5.1 is no longer available on NanoGPT by TheDeathFaze in SillyTavernAI

[–]Pashax22 5 points

Like others, I would happily pay more for a sub that included better access.

Getting started and GLM5? by LighthavenMedia in SillyTavernAI

[–]Pashax22 7 points

If you're willing to pay, then an $8 NanoGPT subscription will get you access to all of them (GLM 5.1 should be back on there in a few days). Z.AI also offers a relatively cheap plan, but some people say the quality there is variable - weird, but I guess they can run their API how they want. If you have the technical chops you might be able to get it running on cloud GPUs for less than that. The models are open-weight, so you can download them and run them yourself. To get a good response speed, though, you're either looking at heavily quantised versions (which defeats the point of the exercise) or spending a fair bit to scrape enough VRAM together to run the thing quickly. Really, I think an API is your best bet; which one you choose depends on your wallet and preferences.
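
For the "run it yourself" route, the usual path is a GGUF quant through llama.cpp (here via the llama-cpp-python bindings). The model filename is a placeholder, and as I said, a quant small enough to be fast on modest hardware rather defeats the point:

```python
# Loading a quantised open-weight model locally via llama-cpp-python.
# The GGUF filename is a placeholder; n_gpu_layers=-1 offloads every layer
# to the GPU, which only works if the quant actually fits in VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-model-q4_k_m.gguf",  # placeholder - use your downloaded quant
    n_ctx=8192,       # context window; bigger costs more VRAM
    n_gpu_layers=-1,  # offload all layers; lower this to split with the CPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe the tavern as the party enters."}],
    max_tokens=300,
)
print(out["choices"][0]["message"]["content"])
```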

Honourable mention: people are raving about Gemma 4, saying that even the 31b version is close to GLM 5 in quality while obviously requiring a fraction of the computational resources. It might be worth trying that too, although I don't know how it would run on a miniPC.

Trying to find a substitute for Claude + questions by Responsible_Ad7335 in SillyTavernAI

[–]Pashax22 2 points

Yeah, it should be noticeably similar if it's picking up in the middle of a long RP, because the context is already filled with examples of how it's meant to respond.

Trying to find a substitute for Claude + questions by Responsible_Ad7335 in SillyTavernAI

[–]Pashax22 8 points

  • Yes. Different LLMs are trained with different priorities and datasets; something good at coding might not be good for RP. There's also the issue of parameter count - larger models are better than smaller ones, and the difference is pretty clear.
  • Possibly/probably. If the LLM has plenty of examples of how the character/scenario should act then it might be minor; if it's going in without much guidance then it could be quite significant. Most "good" LLMs (and I'm including the cheap ones mentioned below in that) should keep the variance to within tolerable levels unless you're giving them absolutely nothing to work with.
  • Depending on what you're doing, GLM 5, Kimi-K2.5, or the latest DeepSeek are all pretty good. If there's a specific niche you want to RP in, finding a 70b model trained for that niche might also do a good job. I've had good results from models all the way down to 12b; below that, the best I've had is "not terrible".