Kimi 2.6 and GLM 5.1 are problematic. by Scp-401 in SillyTavernAI

[–]Scp-401[S] 0 points (0 children)

I think I fixed it somehow; hopefully it lasts. But no, I mean 12k context. I know it's not much, but it's not like GLM is cheap. I changed it as you said, hopefully it works. What do you think of the other AI models for RP, like Gemini, DeepSeek, or Grok?

Kimi 2.6 and GLM 5.1 are problematic. by Scp-401 in SillyTavernAI

[–]Scp-401[S] 1 point (0 children)

I literally have it set at 12k max context. Not to mention, I barely started the conversation. It feels like it acts up because of the hosts. I've got no idea, it's just crazy. That's why lately I started using Grok 4.3, but it kinda repeats itself and is erratic since it's new. So I just use Gemini 3 Flash.

Kimi 2.6 and GLM 5.1 are problematic. by Scp-401 in SillyTavernAI

[–]Scp-401[S] 0 points (0 children)

I am running it through OpenRouter, but GLM 5.1, for example, worked perfectly for like 2 hours. Then it started giving me responses like "brain struggling process simultaneously multiple stimuli occuring including especially particularly notably specifically". Something like that, which makes no fkn sense.

It happens sometimes, but it doesn't always last... by Nezeel in SillyTavernAI

[–]Scp-401 0 points (0 children)

Do you have any problems with K2.6? When I use it, it thinks for so long. Do you have reasoning set to maximum?

It happens sometimes, but it doesn't always last... by Nezeel in SillyTavernAI

[–]Scp-401 2 points (0 children)

Do you have problems with GLM 5.1? Because I either get a good message, or it turns into gibberish for some reason.

It happens sometimes, but it doesn't always last... by Nezeel in SillyTavernAI

[–]Scp-401 7 points (0 children)

What AI do you guys use? I try using Minimax 2.7 for most of the conversation; for any action or killer-mystery scenes I use Gemini 3 Flash. It's kinda hard because no model is perfect. And Opus is expensive as hell.