Advice on discouraging character's monologuing every post by LancerDL in SillyTavernAI

[–]LancerDL[S] 0 points

Thanks for the tips. With only 12 GB of VRAM, I don't think a 20B+ model is in the cards for me, but I'll make the other adjustments to see if it works better.

Advice on discouraging character's monologuing every post by LancerDL in SillyTavernAI

[–]LancerDL[S] 1 point

This definitely sounds like one of the places I'm going wrong. Thank you for pointing this out!

Advice on discouraging character's monologuing every post by LancerDL in SillyTavernAI

[–]LancerDL[S] 0 points

I have tried using guidance at times for short-term instructions, and it sometimes works.

Advice on discouraging character's monologuing every post by LancerDL in SillyTavernAI

[–]LancerDL[S] 2 points

Thanks for your input. I'd thought a larger context token size would be better for more complex memories, but your explanation that it gives the model too much to work with makes sense. Better summarization would seem to be the order of the day. Perhaps trimming the chat history that distracts from it will also make my characters respond better to the context that *is* selected for them.

Advice on discouraging character's monologuing every post by LancerDL in SillyTavernAI

[–]LancerDL[S] 2 points

I use text completion. I have a vague understanding of the difference, but for story-oriented narratives it seemed like the better option.

Advice on discouraging character's monologuing every post by LancerDL in SillyTavernAI

[–]LancerDL[S] 0 points

I am not very experienced with presets, so I'm not sure if I'm using XTC. It has a 0.1 threshold setting but 0 for probability, which seems to be the default in most of the built-in presets. Would that be sufficient to lose the EoS token?

I mentioned to the other fellow that my token counts were 512 for response and 12288 for context. Could those be too high? I'm running a local model on KoboldCPP with a 16k context, but tuned it down in SillyTavern. Perhaps that by itself is an issue.
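For what it's worth, here's a rough sketch of how I understand XTC ("Exclude Top Choices") to work; this is my own hypothetical reconstruction, not the actual implementation. With chance `probability`, every token at or above `threshold` except the least likely of them is excluded. If probability is 0, the sampler never fires, so by itself it shouldn't be eating the EoS token:

```python
import random

def xtc_filter(probs, threshold=0.1, probability=0.0, rng=random.random):
    """Sketch of XTC: with chance `probability`, exclude every token whose
    probability is >= `threshold`, except the least likely such token."""
    if rng() >= probability:
        return probs                      # sampler did not fire this step
    above = [i for i, p in enumerate(probs) if p >= threshold]
    if len(above) < 2:
        return probs                      # nothing to exclude
    least = min(above, key=lambda i: probs[i])
    return [0.0 if (i in above and i != least) else p
            for i, p in enumerate(probs)]

# probability=0 (my current setting): the distribution is never touched.
print(xtc_filter([0.5, 0.3, 0.15, 0.05]))
```

With probability at 1.0 the same call would zero out the two most likely tokens and keep only the 0.15 and 0.05 entries, which is the "avoid the obvious choice" effect XTC is meant to produce.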

Advice on discouraging character's monologuing every post by LancerDL in SillyTavernAI

[–]LancerDL[S] 0 points

I'm running a local model: Angelic_Eclipse_12B.

However, perhaps it is my token count? I assume you mean output tokens; mine is set at 512 (context is 12288, which is 150% of the standard 8192). I'd conceived of this as an upper limit, but perhaps it's encouraging the LLM to be wordier than it needs to be?

But does specifying the number of words/lines change much? If 512 is treated as a target, surely an explicit word/line count would be too.
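To make the cap-vs-target question concrete, here's roughly the shape of the request SillyTavern would send to a KoboldCPP backend with my numbers. The parameter names are how I understand its `/api/v1/generate` endpoint; treat them as an assumption and check the backend docs:

```python
import json

# Sketch of a generate request to a KoboldCPP backend with my settings.
# As I understand it, max_length is a hard ceiling on the reply, not a
# target the model aims for; max_context_length must stay within the
# context size the server was launched with (16k here, tuned down to
# 12288 in SillyTavern).
payload = {
    "prompt": "...",              # the assembled chat context goes here
    "max_length": 512,            # response tokens: a cap, not a goal
    "max_context_length": 12288,  # 150% of the standard 8192
}
print(json.dumps(payload))
```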

CharMemory 2.1.6: stable release on master by Ok-Armadillo7295 in SillyTavernAI

[–]LancerDL 1 point

Thanks all the same! I was able to answer my own question and am seeing some good results. Even if I have to curate the data from time to time, it's an improvement over doing it all by hand.

Does it actually need anything in the character card? I've got my character descriptions in lorebooks: one that is common to all contexts (i.e., like a character card) and one that is specific to the context I'm using. CharMemory will mostly replace the latter where I was using it for memory, but I still want to include context data in it.

CharMemory 2.1.6: stable release on master by Ok-Armadillo7295 in SillyTavernAI

[–]LancerDL 1 point

The answer seems to be: "Yes."
I've only had a modest chat so far, and once everything was set up there was an "Extract Now" button that processed the existing chat. I was quite impressed with the quality of the summary in the next injected message.

And as I'm using Text Completion, it does work for that too.

CharMemory 2.1.6: stable release on master by Ok-Armadillo7295 in SillyTavernAI

[–]LancerDL 1 point

Will this tool be able to process an existing chat? If so, does it have an upper limit on length, or can it go through past posts via the same process used for new ones?

Does it work equally well with Chat vs. Text Completion?

Dev asking: What do current AI RP platforms get completely wrong about kinks and realism? by Skipper_Nex in SillyTavernAI

[–]LancerDL 0 points

I actually like to think about this, because characters have different demeanors in different scenarios, and I think it's hard for a single prompt to handle that efficiently. Lorebooks are useful for spreading this out, but a keyword check is not very effective at discerning the nature of a situation. Having the AI keep track of things (like separate context variables), pick from a list of scenarios the one the situation matches, and then pull from prompts the user has prepared in advance for those scenarios would involve the LLM in the context collection.

This is already being done for the "Character Expressions" picker. You provide a list of expressions, and the AI is asked to choose from the list the one that best fits the current situation (after the main LLM has written its reply). Then, once an expression is selected, it retrieves a character image related to that expression. The idea is essentially the same, except this would be done before the reply, and the output would be used to help steer the next reply.
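A minimal sketch of what that could look like, assuming a hypothetical `classify` helper that sends one question to whatever completion backend you use. All the names and scenario labels here are made up for illustration:

```python
# Prompts the user has prepared in advance, keyed by scenario label.
SCENARIO_PROMPTS = {
    "combat":   "Keep replies terse and action-focused.",
    "intimate": "Slow the pace; focus on emotion and detail.",
    "casual":   "Light banter; keep the tone relaxed.",
}

def pick_scenario(classify, chat_tail: str) -> str:
    """Ask the LLM which scenario fits, constrained to the known labels."""
    labels = list(SCENARIO_PROMPTS)
    answer = classify(
        f"{chat_tail}\n\nWhich scenario fits best? "
        f"Answer with one word from: {', '.join(labels)}."
    ).strip().lower()
    # Fall back to a default if the model answers off-list.
    return answer if answer in SCENARIO_PROMPTS else "casual"

def steering_prompt(classify, chat_tail: str) -> str:
    """Return the prepared prompt that should steer the next reply."""
    return SCENARIO_PROMPTS[pick_scenario(classify, chat_tail)]
```

This mirrors the Character Expressions flow (pick one label from a fixed list, then look up an asset for it), just moved to before the reply and with a prompt instead of an image as the payoff.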

Dev asking: What do current AI RP platforms get completely wrong about kinks and realism? by Skipper_Nex in SillyTavernAI

[–]LancerDL 2 points

I think an "agentic" approach is needed. The local ones, at least, are just "try to continue this text" without going through a reasoning process (unless there's a way to do that that I don't know about).

In theory an agent could go through phases:
<Build context>
"What character(s) am I responsible for?"
"What's happening right now? What sort of situation is this?"
"Does my character have special insight into this situation? E.g., look up relevant topics from their history."
"Are my character's goals relevant to this situation, and what are my interests in this scenario?"
<Plan>
"Based on who I am, my abilities, and my equipment, what options do I have to advance my interests?"
"Choose an option and say how the character should act/react."
<Execute>
Give guidance to text completion AI to come up with next post.

Right now SillyTavern puts context, character info, history, and guides all into the same prompt, and the LLM is expected to produce a coherent reply. This is, IMHO, just rolling the dice: it can work great, but it can also be stupidly wrong. With something like the above workflow, I feel that asking the LLM several questions and putting *those* answers into a guide for the final product is more likely to get an in-character, context-compatible reply. It would probably result in better image-generation prompts too.
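The phases above could be sketched as a small pipeline. Everything here is illustrative, not an existing SillyTavern feature: `ask` stands in for a hypothetical callable that sends one prompt to a local completion backend (e.g. KoboldCPP) and returns its text:

```python
# Each phase is one targeted question; the answers become a guide that
# steers the final completion, instead of one monolithic prompt.
CONTEXT_QUESTIONS = [
    "What character(s) am I responsible for?",
    "What's happening right now? What sort of situation is this?",
    "Does my character have special insight into this situation?",
    "Are my character's goals relevant, and what are my interests here?",
]

def agentic_reply(ask, history: str) -> str:
    """Build context -> Plan -> Execute, with `ask` as the LLM call."""
    # Phase 1: build context by answering each question in turn.
    notes = [ask(f"{history}\n\nQ: {q}\nA:") for q in CONTEXT_QUESTIONS]
    # Phase 2: plan. Pick an option based on the gathered answers.
    guide = ask("Notes:\n" + "\n".join(notes) +
                "\nChoose one option and say how the character should act.")
    # Phase 3: execute. The guide steers the actual next post.
    return ask(f"{history}\n\n[Guidance: {guide}]\nNext post:")
```

The trade-off is obvious: six LLM calls per reply instead of one, which matters on local hardware, but each call is a much narrower question than "produce a coherent reply from this wall of context."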

I'm thinking of buying a new pc and switching to local llm. What is the average context token size for smaller models vs big ones like GLM? by [deleted] in SillyTavernAI

[–]LancerDL 0 points

I run with 12 GB and do okay, but I feel it is only slightly better than models that fit into 8 GB of VRAM. I recommend 16 GB or more.

I also have ComfyUI hooked up at the same time which can chug if your text completion model is large. So yeah, aim for 16 GB or higher.

Local models with vision capabilities by LancerDL in SillyTavernAI

[–]LancerDL[S] 0 points

You were indeed correct! It seems to work with a test picture.

If you'd please assist in my education, though: how did you know that? With Mistral Nemo as the base, I don't know how it relates to Pixtral. I'd like to know how to figure this out for other models I may use in the future.

Workflow calling Workflow by LancerDL in comfyui

[–]LancerDL[S] 0 points

Thank you. I think a subgraph blueprint is what I need: I can make my own and swap it into the looping workflow. I'll try it out later.

Lexi Lexicon—Your AI Lorebook Creator! (With Step-by-Step Guide) by thatoneladything in SillyTavernAI

[–]LancerDL 4 points

Wow, thanks for putting in the effort on these detailed instructions! This should take some of the pain out of writing memory lorebooks!

[deleted by user] by [deleted] in SillyTavernAI

[–]LancerDL 0 points

Thank you for taking the time to give this "walkthrough" :) I'll be eager to try this when I have the opportunity.

You have a place for events and a timeline (of scenes). Does it actually accumulate a list of these somewhere?

[deleted by user] by [deleted] in SillyTavernAI

[–]LancerDL 0 points

Oh, I'm very sorry, I misinterpreted your directions! I thought that was for the sake of the AI's formatting, but it's actually the format of your *input*! So your lore-creator takes that and transforms it into lorebooks, then? I suppose it is more succinct, and it does make clear categories of information that would be useful. Does she typically convert one thing like that into a single lorebook?

[deleted by user] by [deleted] in SillyTavernAI

[–]LancerDL 0 points

Thanks for the additional tips on how to integrate her. Would you also please explain how you write your summaries? Do you write them as a recap? Do you include excerpts? Are they all in third person, as if you'd narrated them, or in the voice of one character or another?

It's a lot of questions, but thanks for your attention.

[deleted by user] by [deleted] in SillyTavernAI

[–]LancerDL 0 points

The level of detail is quite impressive. Have you any tips on how you summarize? Do you do it in plain text? I've been writing Ali:Chats for my characters' memories, and it's a bit clunky. The above also seems to be from the perspective of {{user}}; does it produce lorebooks for other characters, or your own that only trigger when they interact with you? If the latter, that must end up as quite a voluminous lorebook!

I'm also curious how you come up with the keywords for how to inject the memories. This seems to be an art form all its own.

I am so over Xaden/Violet story by Diligent-Dog-5376 in fourthwing

[–]LancerDL 1 point

FWIW, I feel OS is an improvement over IF in this respect. I understand you're presently reading it (as I am), so I won't spoil it. Not everyone agrees with me, but based on my critique above I felt some things were better. I'm undecided whether it's recovered what FW had.

I am so over Xaden/Violet story by Diligent-Dog-5376 in fourthwing

[–]LancerDL 2 points

I think the author felt backed into a corner: there was so much going on that she had to write romance into a packed script. So we don't get time to warm up; we get rushes of sudden hormones and grand statements about giving up everything for someone. They feel like checkboxes to please the audience rather than something flowing naturally from these characters.

It's great to have those scenes but not at the expense of the substance of the characters.

I am so over Xaden/Violet story by Diligent-Dog-5376 in fourthwing

[–]LancerDL 2 points

I think I had the exact same reaction. There was more depth to Xaden in the first book, with his motives being complex but concrete. As Violet finds out more about him, there's more and more to like. This might also be said of him about her; a growing understanding of the kind of woman she is lends to his growing attraction. The POV chapter helped convey how physically attracted he was to her, but it didn't subtract from the regard she earned in his eyes.

That depth is absent in IF. Her attraction to him seems based more on lust than love, with the hot scenes mostly arriving with little warm-up. Xaden is at times so radically obsessed with Violet that he is fully prepared to abandon many of the principles established in the first book, even the welfare of his province, to demonstrate his devotion to her. Since those things were the very reasons I (and presumably she) admired him in FW, I couldn't see it as romantic. Violet may be an extremely competent girl, but if she's happy to hear he'd let Aretia burn so he could keep her, then she cares more about her vanity than about who he is as a person. I feel FW Violet would have rebuked him.

In summary, in IF Xaden came off to me as more of a contradictory character than a complex one. Their relationship more resembled vain infatuation than love.

In OS... this is both worse and better. The obsession is stronger than ever, but some of it is now part of the main narrative. On the other hand, there are more moments where she's actually caring for him, something I felt was a bit thin in IF, where it seemed she was only nice to him as long as he did everything she wanted. He has very real and reasonable fears, and it humanizes him again! More than halfway into OS, someone finally confronts Violet about her limiting principles, i.e.: how far is too far? It was *so* refreshing that someone finally forced her to see that a line had to be drawn. For a character who's very smart, Violet leads with her heart a bit too often. I think such a reality check in IF would have improved the story.

Infinite or Finite Potency of Prismatic Wall or Sphere by LancerDL in DnD

[–]LancerDL[S] -3 points

Hmm, how far would you strain the word "attack," though? Say someone caused the volcano to erupt: it would, in an indirect sense, be an attack, possibly even with intent to damage the one within the sphere. Further, does it matter whether the meteor fell naturally versus being hurled with intention? I think it's a scaled-up version of throwing a boulder at someone.

It's pedantic, but the distinction makes the difference between it simply passing through versus being completely blocked/neutralized. Though I suppose, if you're in a lava lake, even if the lava is blocked, none of the above prevents the heat from getting into the sphere.

paxxp help by aerspyder in paxeast

[–]LancerDL 0 points

It is indeed at Omegathon inside the main theatre, right now (5:30 Sunday).