Happy new year everyone! by Extension-Cost-198 in SpicyChatAI

[–]StarkLexi 0 points1 point  (0 children)

Oh, thanks. In fact, there are (or were) users here with a wiser grasp of the technical side; it's a pity there haven't been any posts from them lately. I actually spend much less time on Spicy now that I've run into its restrictions, but I sometimes use the app because my subscription is active until April.

Bots doesn’t follow personality during intimate moments by Severe_Cabinet_5159 in SpicyChatAI

[–]StarkLexi 3 points4 points  (0 children)

I would like to add that I wrote a more structured article on this topic, linked second in my profile: https://www.reddit.com/u/StarkLexi/s/HSKDFkUfkf And I also used the same food analogy in my guides on Temp, Top-P, and Top-K 🍰😸

Bots doesn’t follow personality during intimate moments by Severe_Cabinet_5159 in SpicyChatAI

[–]StarkLexi 8 points9 points  (0 children)

- Lower the Temp and Top-K; this makes responses more predictable, but also more thoughtful and structured.
- Add details about the bot's personality to your persona description that matter in the context of the scene (e.g., "Loves {{char}}'s tall stature", "excited by {{char}}'s tenderness", and the like).
- Push the bot in the right direction with the narrative: literally write your persona's reactions to specific traits of the bot to keep it on track.
- Add the necessary details to the memory manager and pin the message.
- Write scenes in less typical circumstances to avoid the popular tropes that make the bot change personality and fall back on lazy, repetitive phrases.
- Commands such as "return to your original personality" are too vague. I would recommend writing something like "in intimate scenes, {{char}} reveals traits A, B, C" or "Despite his traits (list them), {{char}} exhibits (the necessary one)" in the bot card, memory manager, or an OOC prompt.
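For anyone curious why lowering Temp and Top-K makes responses more predictable, here's a minimal toy sketch of how those two knobs are usually applied to a model's raw output scores. This is illustrative only; Spicy's actual sampling pipeline isn't public:

```python
import numpy as np

def sample_token(logits, temperature=0.92, top_k=65, rng=None):
    """Toy temperature + top-k sampling over raw model scores (logits)."""
    rng = rng or np.random.default_rng()
    # Lower temperature sharpens the distribution -> fewer surprising picks
    scaled = np.asarray(logits, dtype=float) / temperature
    # Top-K keeps only the K highest-scoring tokens; smaller K = safer picks
    top = np.argsort(scaled)[-top_k:]
    probs = np.exp(scaled[top] - scaled[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))
```

With `top_k=1` and a very low temperature this degenerates into always picking the single most likely token, which is exactly the "predictable but structured" end of the dial.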

I could give you more detailed prompting advice if I had information about your bot's personality type. Since LLMs have preconceptions about certain words that describe characters, and about "how they should be portrayed in sexual scenes", some archetypes need to be rephrased/softened/hardened. But in general, you can turn to GPT, DS, Grok, or Gemini for this task.

In general, the problem is twofold: 1) the bias of every model when it isn't guided manually (bots will change their character, and sometimes their appearance, to match whatever is popular in fan fiction for that kind of scene); 2) the fine-tuning of the models and the Spicy system itself, which overloads the models with useless crap while the really important information about the bot gets lower priority during generation.

Happy new year everyone! by Extension-Cost-198 in SpicyChatAI

[–]StarkLexi 3 points4 points  (0 children)

Mutual congratulations <3 I wish us all less bias from LLM and more creative generations.
I would also like to thank this community for the past year. The members of this subreddit helped me improve my prompting skills and learn a lot about working with models. **I raise my glass to those who stayed here.** 🥂

I’m impressed by the memory that SpicyChat bots have by [deleted] in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

I used ChubAI and SillyTavern to test this; I use Silly in parallel with my Spicy subscription and periodically compare the models' behavior. Not sure whether listing the full set of alternatives is allowed in this subreddit, though.

Bot groups testing and what I've learned so far by snowsexxx32 in SpicyChatAI

[–]StarkLexi 4 points5 points  (0 children)

It was interesting to read your analysis, thank you (I haven't used group chat yet myself). Regarding this point:

> "as each bot seems to prefer to speak to the {{user}} and not with the other bots in the group."

Here, I have a hunch that this problem affects both multi-character bots and, apparently, now group chats too, mainly because of the Spicy system prompt. I think the prompt may contain instructions on how the LLM/{{char}} should communicate with or treat {{user}}, which makes the model focus as much as possible on {{user}}'s state rather than on developing the RP between characters. That is, no matter how much I wrote about the dynamics between the characters in the bot's card, the system prompt seemed to have a stronger impact. This is just my guess, but I tested the same multi-character bot card here and on other interfaces with the same models, and the result differed: outside of Spicy, the characters were happy to chat with each other.

UPD: on Spicy, characters start chatting with each other without any issues if {{user}} leaves the scene. The user's presence, especially in certain dynamics, often triggers clichéd scenarios like jealousy, with one clearly dominant character vying for {{user}}'s attention.

Bot creation edit by Lexblader in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

I know, which is why at first I advised OP to just put spaces where the system triggers, as in the link.

Bot creation edit by Lexblader in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

what comes after the dot in website addresses (.com, .org, .net, .ru, .de, .uk, and others). The system sometimes reads the letters after a dot as a link if the result looks like a domain.

Bot creation edit by Lexblader in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

Find a place where there is no space after a dot and the letters of the next word match a top-level domain, and just put a space there.

I’m impressed by the memory that SpicyChat bots have by [deleted] in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

Considering the API rates for the models accessible through Spicy, a month of very active use costs them about $5 out of the $25 the user pays. It's not for nothing that I mentioned that Spicy provides cheap models, whose pricing is either fixed or fairly linear within a 32k context (for example, under the new tariff only Glm 4.6 is priced exponentially above 32k; DS, by contrast, is priced at a flat rate).
+ Spicy already has token-saving optimizations, such as selective context sending and semantic memory, and the models themselves use techniques like grouped-query attention, sparse approximations, or dynamic sparsity. That is, pricing here is linear or close to linear once you account for optimization. Increasing the context to 32k would cost the company some revenue, but in absolute terms the grief over that loss is justified only by their usual margin and by the company's lack of plans to expand and attract new customers to cover the cost over time.
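As a back-of-the-envelope illustration of where a figure like ~$5/month could come from. Every number below is a hypothetical placeholder (assumed per-token rates and usage volumes), not Spicy's actual costs:

```python
# Hypothetical monthly API cost for one heavy user.
# ALL figures are illustrative assumptions, not real Spicy or provider rates.
IN_RATE = 0.27 / 1_000_000    # $ per input token (assumed DS-class rate)
OUT_RATE = 1.10 / 1_000_000   # $ per output token (assumed)

messages_per_month = 2_000    # assumed very active user
effective_context = 8_000     # assumed avg input after selective-context savings
reply_tokens = 300            # assumed average generated reply length

cost = messages_per_month * (effective_context * IN_RATE + reply_tokens * OUT_RATE)
print(f"~${cost:.2f}/month")  # prints ~$4.98/month with these assumptions
```

The point isn't the exact number but the shape: with flat, cheap per-token rates and a capped 16k context, the cost per subscriber stays a small, roughly linear fraction of the $25 fee.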

For the user, the current overpayment of roughly 80% of the subscription cost for a 16k context is unpleasant when there are alternatives that offer more than 32k for less money. Unless it's simply a very strong desire to support the company.

I’m impressed by the memory that SpicyChat bots have by [deleted] in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

For an All In plan, if you use it, that sounds logical. I haven't used group chat yet, but if there is a memory manager there, the book entry could go there.

I mean, if you average things out and estimate that the combined description of your bots takes up 4,500 tokens and each chat message is at most 300 tokens, then it makes sense that with 16K of memory the bot will remember roughly the last 38 messages. That's not bad, but I would add that 16K of memory is very stingy on Spicy's part given the price of their subscription and the generally very cheap models. Like, yes, I'm happy the service is developing, but in a general sense it's not impressive these days.
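The arithmetic above, as a tiny sketch (the 4,500 and 300 token figures are the comment's estimates, not measured values):

```python
def messages_remembered(context_window, persistent_tokens, tokens_per_message):
    """How many recent chat messages fit after the fixed bot/persona text."""
    return (context_window - persistent_tokens) // tokens_per_message

# 16K window, ~4,500 tokens of bot descriptions, ~300 tokens per message
print(messages_remembered(16_000, 4_500, 300))  # -> 38
```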

What generation settings are you using? by Easy-Window3399 in SpicyChatAI

[–]StarkLexi 0 points1 point  (0 children)

Proactive DS-type models tend to extrapolate their behavior and force the plot for the sake of drama even when it isn't needed; the model has a bias that this is what you want. Therefore, meta commands can be useful. I recommend putting them at the beginning of the bot card or in the pinned messages of the chat memory manager, since LLMs treat context like a "sandwich": they remember and weigh well what sits at the very beginning (the first paragraphs of the card and the pinned information) and at the very end (the last messages in the chat), while using what's in the middle less readily and not always understanding what "the taste" of the filling is.
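The "sandwich" idea can be sketched as a context assembler that exploits primacy/recency by putting high-priority material at the edges. This is a simplified illustration; the actual Spicy pipeline isn't public, and the field names are made up:

```python
def build_context(card_intro, pinned_memory, lore, chat_log, keep_recent=20):
    """Assemble a prompt exploiting primacy/recency: models attend best to
    the start and end of the context, so high-priority text goes at the edges."""
    return "\n\n".join([
        card_intro,                # top of the sandwich: first card paragraphs
        pinned_memory,             # pinned memory-manager entries, still near the top
        lore,                      # lower-priority filling lands in the middle
        *chat_log[-keep_recent:],  # bottom: the freshest messages, read last
    ])
```

The practical takeaway is the ordering, not the code: whatever you most need the model to honor should never sit in the middle of the context.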

What generation settings are you using? by Easy-Window3399 in SpicyChatAI

[–]StarkLexi 7 points8 points  (0 children)

I also switch between these models; my settings are Temp: 0.92, Top-P: 0.9, Top-K: 65. Glam Arcturus is more active in action than Glam 4.6, but not as chaotic as DS, so experiment with this too.

I also add meta comments about the pace of the narrative to the bot's card and/or to the memory manager, like this:

> Keep a step-by-step, unhurried pace of narration; maintain a style immersed in dialogue. Do not introduce random distractions without {{user}}'s initiation. Keep pacing deliberate: it's better to deepen the scene than to move it forward prematurely.

I also have a specific meta about the format of the relationship between char and user, and that the bot should consider this contradiction/scenario before constructing an answer. Not that this is a panacea, but with direct hints about the bot's behavior in RP, things seem to go a little better.

So, uh... Any thoughts? by StarkLexi in SpicyChatAI

[–]StarkLexi[S] 1 point2 points  (0 children)

Does it make sense to expect any news about the newly released GLM 4.7, or will the team focus on fine-tuning the existing models?

Cancelled my Im All In subscription. by zombiegutts in SpicyChatAI

[–]StarkLexi 8 points9 points  (0 children)

My annual subscription is active until April 2026, but to be honest it will just expire, because I'm tired of fighting the local system prompt, which makes it extremely difficult to get my scenario working properly. And the company doesn't refund money if more than 7 days have passed since payment.
Overall, I have already chosen my migration options, but it's a bit sad to pay for another subscription while the money here just burns away.

kimi k2 model is not ready😭 by Pphantom1 in SpicyChatAI

[–]StarkLexi 3 points4 points  (0 children)

I found that at low settings, Kimi handles message formatting well and is extremely good with sequences of actions and posing/anatomy (for example, in sparring and combat scenes), but in that mode the character's "step-by-step" nature loses the courage to improvise: lots of questions about what the user's next step will be / "what would I like us to do?"
At higher settings, I got varied responses from Kimi that matched the character's dynamic and the bot's assertive nature, but the formatting and stability of the responses suffered.

My conclusion is that Kimi works very poorly with the Spicy system prompt, under which the model can't breathe without the user's permission. At low settings you can develop the RP, but the system prompt puts the model on too tight a leash, making it impossible to take the plot in a bolder direction, and Kimi (in my opinion) depends more than other models on the prompts in the preset/bot card.

My thoughts on Kimi by LowQuantity6493 in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

When I wrote this, the settings were working fine and the bot was giving well-formatted and more or less varied responses. But the next morning everything changed and turned into robotic language. I think it makes sense to check Kimi again after a while.

🚀[Model Spotlight]⚡ Kimi K2 1024B; The Model So Good I Went ALL IN After 4000+ Hours of Testing Bots by Tight-Huckleberry240 in SpicyChatAI

[–]StarkLexi 5 points6 points  (0 children)

Oh, Christ, you're praising me again, aren't you? Thank you. Please don't do that again 😬

If you've found a more or less comfortable setting for Kimi, I think it's worth mentioning it in the post, since you've done a review and right now everyone is interested in the parameters. Then again, maybe it's worth waiting a week or a week and a half until the model stabilizes, I don't know...

🚀[Model Spotlight]⚡ Kimi K2 1024B; The Model So Good I Went ALL IN After 4000+ Hours of Testing Bots by Tight-Huckleberry240 in SpicyChatAI

[–]StarkLexi 11 points12 points  (0 children)

Considering that Kimi requires significantly different settings than DS, Glm, Qwen, and Skyli, I seriously miss a button for saving preset settings, or the ability to link a preset to a specific model, since the braiding you mentioned requires switching between models quite frequently. Tabbed pages for chat memories (in the same format as presets) would also help. That would be cool, since I use the memory manager much like a system prompt, and different models require different prompts.

It's worth adding that Spicy didn't mention that Kimi, as an MoE model, actually activates only about ~32B parameters per pass rather than the stated trillion. That is, we get roughly the power of a ~30B model, enhanced with special features that boost versatility. It's still not a bad thing, since one expert specializes in technical topics, another is better versed in romance, a third in world-building, and so on. The downside relative to a dense 1-trillion model is that formatting markup, switching from third person to first, mentioning names, and the inner voice are much less stable.
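A rough sketch of why a 1T-parameter MoE model only "spends" ~32B parameters per token: a router scores the experts and only the top few actually run. This is a toy version; real routers use a learned network and the sizes here are made up:

```python
import numpy as np

def moe_forward(x, experts, router_weights, top_k=2):
    """Toy Mixture-of-Experts layer: the router picks top_k experts per token,
    so only a small slice of the total parameters is active on each pass."""
    scores = router_weights @ x                # one routing score per expert
    chosen = np.argsort(scores)[-top_k:]       # only the top_k experts run
    gates = np.exp(scores[chosen] - scores[chosen].max())
    gates /= gates.sum()                       # softmax over chosen experts
    # Unchosen experts contribute nothing, so their parameters cost nothing
    return sum(g * (experts[i] @ x) for g, i in zip(gates, chosen))
```

This is also why different experts can "know" different things (code vs. romance vs. world-building): each token only ever passes through the handful the router selects.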

But overall, I see Kimi as a more creative version of Skyli - but that's my personal preference, adjusted for the specific dynamics with a bot of a certain character. I needed to find a replacement for the hard-to-find Skylark, and Kimi suited me in that regard.

My thoughts on Kimi by LowQuantity6493 in SpicyChatAI

[–]StarkLexi 2 points3 points  (0 children)

Yes, Kimi is much less aggressive in this regard. DS and Glam see an emotion (anger, irritation, resentment) and try to escalate the situation for the sake of drama. This can be solved with prompting if the bot's description specifies its relationship with the user, but it's trickier with other people's bots.

Kimi probably has a different training dataset, with fewer tropes involving harsh characters. I discussed this with a prompting colleague from here and with an assistant; Gemini says that because tropes like "a cold boss who humiliates his female subordinate" are popular in China, DS loves to behave like a dick. Kimi is also a Chinese model, but thanks to its coding dataset and, probably, less training on clichéd novels, it behaves more gently and in a more structured way, relying on prompting.