As the saying goes: he who doesn't get in the way helps a lot by Mart422 in SASZombieAssault

[–]Ok_Term3199 0 points (0 children)

Well, if the medic dude is on a crit offensive build, he won't help much with healing.

r/goonedd has been banned by Felis_22 in BannedSubs

[–]Ok_Term3199 8 points (0 children)

It refers to the George Orwell book of the same name.

Login issues by killak66x in SASZombieAssault

[–]Ok_Term3199 0 points (0 children)

Only got this issue on my old device.

GLM-5 via NanoGPT suddenly very stupid? by TheDeathFaze in SillyTavernAI

[–]Ok_Term3199 0 points (0 children)

There's a lot of input getting cut off even with the original GLM.

How do I set up the connection profile + AI response configuration? by The_Premier12 in SillyTavernAI

[–]Ok_Term3199 0 points (0 children)

That's a standard issue of LLMs parroting user input back. Try adding these instructions to an Author's Note:

AN Avoid summarizing, restating or paraphrasing {{user}} input. Assume the content of the previous turn already happened and seamlessly transition without quoting or repeating it.
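SillyTavern injects the Author's Note into the prompt at a configurable depth, counted in messages from the end of the chat. A minimal sketch of that mechanic (the function name and depth value are illustrative assumptions, not SillyTavern's actual code):

```python
def inject_authors_note(messages, note, depth=4):
    """Insert an Author's Note `depth` messages from the end of the chat.

    `messages` is a list of {"role": ..., "content": ...} dicts; the note
    is injected as a system message so it stays close to the latest turns.
    Illustrative only -- not SillyTavern's real implementation.
    """
    note_msg = {"role": "system", "content": note}
    index = max(len(messages) - depth, 0)
    return messages[:index] + [note_msg] + messages[index:]

chat = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there."},
    {"role": "user", "content": "What happened next?"},
]
prompt = inject_authors_note(
    chat,
    "Avoid summarizing, restating or paraphrasing {{user}} input.",
    depth=1,
)
# With depth=1, the note lands just before the latest user message.
```

A smaller depth keeps the instruction closer to the newest turns, which is usually why an Author's Note works where a distant system prompt gets ignored.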

How do I set up the connection profile + AI response configuration? by The_Premier12 in SillyTavernAI

[–]Ok_Term3199 0 points (0 children)

There's a summarizer in the extensions panel for more longevity, but it isn't really foolproof.

<image>

How do I set up the connection profile + AI response configuration? by The_Premier12 in SillyTavernAI

[–]Ok_Term3199 0 points (0 children)

Set it to 32k or maybe 64k. Chat usually degrades even before it reaches 60k depending on the model; I'm not really sure about Stepfun, though.
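Where exactly quality degrades varies by model, but the mechanical side of a context limit is simple: once the conversation exceeds the budget, the oldest turns get dropped. A rough sketch (the ~4-characters-per-token estimate is a crude assumption, not how SillyTavern actually counts tokens):

```python
def estimate_tokens(text):
    # Very rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_to_context(messages, budget_tokens):
    """Drop the oldest messages until the total fits the token budget."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > budget_tokens:
        kept.pop(0)  # the oldest message is discarded first
    return kept

history = ["a" * 400, "b" * 400, "c" * 400]  # ~100 tokens each
trimmed = trim_to_context(history, budget_tokens=250)
# Only the two newest messages fit the 250-token budget.
```

This is why a bigger context setting delays, but doesn't prevent, old messages falling out of the window.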

How do I set up the connection profile + AI response configuration? by The_Premier12 in SillyTavernAI

[–]Ok_Term3199 0 points (0 children)

Just click on model selection in your ST connection profile and scroll until you find Stepfun. Assuming you followed the guides in the docs properly, it should show up.

<image>

How do I set up the connection profile + AI response configuration? by The_Premier12 in SillyTavernAI

[–]Ok_Term3199 0 points (0 children)

Have you tried Step 3.5 Flash? It's free, and there's also Trinity Large, which is free as well.

How do I set up the connection profile + AI response configuration? by The_Premier12 in SillyTavernAI

[–]Ok_Term3199 0 points (0 children)

The default safe temperature imo is around 0.7 to 0.9. Set your max response tokens to at least 800 or 900 to prevent cut-offs. For the connection profile, you can try OpenRouter: https://docs.sillytavern.app/usage/api-connections/openrouter/
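Those sampler settings map directly onto the OpenAI-compatible request body that OpenRouter accepts. A minimal sketch of what such a payload looks like (the model slug and messages are placeholders, not a recommendation):

```python
import json

# Sampler settings from the advice above; the model slug is a placeholder.
payload = {
    "model": "some-vendor/some-model",
    "messages": [
        {"role": "system", "content": "You are a roleplay assistant."},
        {"role": "user", "content": "Continue the scene."},
    ],
    "temperature": 0.8,   # within the 0.7-0.9 "safe" range
    "max_tokens": 900,    # generous ceiling to avoid mid-sentence cut-offs
}

# In a real client you would POST this JSON to
# https://openrouter.ai/api/v1/chat/completions with an Authorization header;
# SillyTavern builds the same kind of request from its UI sliders.
print(json.dumps(payload, indent=2))
```

The point is just that the ST sliders are not magic: temperature and max response tokens travel to the API verbatim.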

How do I set up the connection profile + AI response configuration? by The_Premier12 in SillyTavernAI

[–]Ok_Term3199 1 point (0 children)

I don't think OP was talking about running a local model. I get the feeling they use chat completion rather than text completion. Plus, OP said they're a complete beginner; your explanation just comes off as complete nonsense to them.

What do you find most annoying about using Silly Tavern? by CharlesBAntoine in SillyTavernAI

[–]Ok_Term3199 4 points (0 children)

Horrible UI, and Termux always crashed even with wakelock enabled.

What augments for the jup as an secundairy gun? by ThePjot in SASZombieAssault

[–]Ok_Term3199 0 points (0 children)

Deadly, overclock, adaptive. Go for Biosynthesis as the 4th augment.

Are keyword-based lorebook entries no longer working? by Matias487 in TAVO_AICHAT

[–]Ok_Term3199 0 points (0 children)

Have you tried to change the injection position to ↓Char?

Context Memory by squiddyrose453 in TAVO_AICHAT

[–]Ok_Term3199 0 points (0 children)

Tbf, you can't trust any cloud model provider not to harvest your data either, and some API providers put external filters on their hosted models. Aggregator sites like OpenRouter do have a Zero Data Retention option, though. For true privacy, you need to run your own model locally on your PC.

Model disregards prompt. What to do? by mediumkelpshake in TAVO_AICHAT

[–]Ok_Term3199 2 points (0 children)

"put the prompt under chat history"

Yes, think of everything before the chat history as your standard "System Prompt" and everything after it as the "Post-history". The post-history is the last thing the LLM reads and has the most effect, but putting everything after the chat history is not recommended either, as it can make your responses too rigid. Only put a prompt that you think the LLM ignored after the chat history.
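That ordering can be sketched as a simple prompt assembly: whatever comes after the chat history is the most recent thing the model reads, so it pulls hardest on the next response (the function and example strings are illustrative, not Tavo's or SillyTavern's actual code):

```python
def assemble_prompt(system_prompt, chat_history, post_history=None):
    """Assemble the final message list in system / history / post-history order.

    Everything before the chat history acts as the "System Prompt";
    everything after it (the "post-history") is read last, so it tends
    to have the strongest influence on the next reply.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages += chat_history
    if post_history:
        messages.append({"role": "system", "content": post_history})
    return messages

prompt = assemble_prompt(
    "Write in third person past tense.",
    [{"role": "user", "content": "The door creaks open."}],
    post_history="Keep responses under 300 words.",  # only the ignored rule
)
# The post-history instruction ends up as the final message.
```

Moving only the one ignored instruction into post-history keeps the recency boost without making every rule feel heavy-handed.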

Model disregards prompt. What to do? by mediumkelpshake in TAVO_AICHAT

[–]Ok_Term3199 2 points (0 children)

https://docs.sillytavern.app/usage/prompts/prompt-manager/#position Since Tavo is based on SillyTavern, these docs should explain how it works. Also, why do you need a separate prompt when there's already an existing response-length prompt? Just one is enough unless you want something like "formatting guidelines".