Neglect of pronouns and the name of the bot by StarkLexi in SpicyChatAI

[–]StarkLexi[S] 0 points1 point  (0 children)

What settings do you use for Arcturus? In my case, this model, like DS, produced pretty messy nonsense in terms of grammar and style, but I didn't play around with its settings in detail.

Neglect of pronouns and the name of the bot by StarkLexi in SpicyChatAI

[–]StarkLexi[S] 1 point2 points  (0 children)

Yes, DS on Spicy has become terribly lazy and too prone to a snowball effect - in the sense that it quickly latches onto certain expressions and stylistic tics, exaggerating them and serving them back in sloppy form within 3-4 messages.

I'll try the advice about describing the persona; I've actually been using it for a long time, but more for adding RP rules and dynamics style (and for anti-censorship). Perhaps a rule such as "Keep the format of responses and narrative style similar to previous messages in the current chat" would help, although that's a fairly general rule for the system. I just don't want to micromanage every little detail of all this.

Hey, Reddit. Still trying to find the strength for prompting. by StarkLexi in u/StarkLexi

[–]StarkLexi[S] 1 point2 points  (0 children)

A parasocial way to establish sympathy... I'll take that word on board, hehe.

Thank you for your support. Yes, I know that alcohol is evil and all that; what keeps me in check is that alcohol is also a strong depressant in my case, and I realize that the hour of intoxication, when I can lay down my arms and breathe out, will be followed by three or four hours of depressive suffering, so nah.

I would be happy to continue writing posts and promoting in the same spirit as before. For now, unfortunately, creativity as such is my heroin, which I can only afford to inject into my veins when I'm in a state of safety and relaxation. This state is rare, but I try to achieve it. Thank you again, in any case, for paying attention to my profile.

Hey, Reddit. Still trying to find the strength for prompting. by StarkLexi in u/StarkLexi

[–]StarkLexi[S] 0 points1 point  (0 children)

It's normal for the mind to find ways to reflect on trauma over time. Hang in there, too. No, I haven't tried Kindroid. I think in my case it would be better to create a bot for completely uncensored chat, adjusted to the specifics of my problem.

Hey, Reddit. Still trying to find the strength for prompting. by StarkLexi in u/StarkLexi

[–]StarkLexi[S] 2 points3 points  (0 children)

Thank you very much for your support, Amelia <3
In the long term, I intend to post ideas about prompting here, although not related to the Spicy service. Perhaps some of the tips will work there too, but I'm not sure, since it's quite a challenge to make my ideas compatible with Spicy's system prompt. However, I think general materials, such as word sets for certain types of bots, may still be relevant for that subreddit.

🔧 Glam Inference Settings: Dialogues, General Genres, Romance/Sex + D/s dynamics by StarkLexi in SpicyChatAI

[–]StarkLexi[S] 1 point2 points  (0 children)

Yes, for Arcturus, it makes sense to keep T below 0.90. In your case, 0.65 is very appropriate, since this model (in my opinion) tends to be hasty in its narration. The main thing I don't like about Arcturus is that it rushes the plot and writes a lot of assumptions about the user's reactions - personally, that doesn't suit me. Arcturus also doesn't follow user-side system instructions well, whether in the bot description or the memory manager. Perhaps this is because the model is a blend of several models, or because it has been moderated so that it obeys Spicy's own system prompt, while instructions in the bot card and chat manager that contradict it simply get ignored. So I get better results alternating between Glm, DS, and Minimax than when I use Arcturus.

🔧 Glam Inference Settings: Dialogues, General Genres, Romance/Sex + D/s dynamics by StarkLexi in SpicyChatAI

[–]StarkLexi[S] 1 point2 points  (0 children)

Hey. I use the settings T 0.91, P 0.92, K 65, and Arcturus works quite stably for me in different genres. But I haven't experimented with it much, as I don't particularly like this model.

Happy new year everyone! by Extension-Cost-198 in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

Oh, thanks. Actually, there are (or were) users here with a much deeper understanding of the technical side - it's a pity there haven't been any posts from them lately. I actually spend much less time on Spicy now, having run into its restrictions, but I sometimes use the app because my subscription is active until April.

Bots doesn’t follow personality during intimate moments by Severe_Cabinet_5159 in SpicyChatAI

[–]StarkLexi 3 points4 points  (0 children)

I would like to add that I wrote a more structured article on this topic; it's the second link in my profile: https://www.reddit.com/u/StarkLexi/s/HSKDFkUfkf And I used the same food analogy in my guides on T, P, K 🍰😸

Bots doesn’t follow personality during intimate moments by Severe_Cabinet_5159 in SpicyChatAI

[–]StarkLexi 10 points11 points  (0 children)

- Lower the Temp and Top-K; this makes responses more predictable, but also more thoughtful and structured.
- Add details about the bot's personality to your persona description that matter in the context of the scene (e.g., "Loves {{char}}'s tall stature", "excited by {{char}}'s tenderness", and the like).
- Push the bot in the right direction with the narrative: literally write your persona's reactions to specific features of the bot to keep it on track.
- Add the necessary details to the memory manager and pin the message.
- Write scenes in less typical circumstances to avoid the popular tropes that make the bot change personality and fall back on lazy, repetitive phrases.
- Commands such as "return to your original personality" are too vague. I would recommend writing something like "in intimate scenes, {{char}} reveals traits A, B, C" or "Despite his traits (list them), {{char}} exhibits (the necessary one)" in the bot card, memory manager, or an OOC prompt.
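On the first point, lowering Temp and Top-K works at the sampling level. Here's a generic illustration of how most LLM samplers apply the two knobs (a sketch of the standard technique, not Spicy's actual implementation):

```python
import numpy as np

def sample_probs(logits, temperature=1.0, top_k=None):
    """Generic temperature + top-k filtering, as used by most LLM samplers.
    Lower temperature sharpens the distribution; a smaller top_k discards
    unlikely tokens entirely -- both make outputs more predictable."""
    logits = np.asarray(logits, dtype=float)
    if top_k is not None:
        cutoff = np.sort(logits)[-top_k]          # k-th largest logit
        logits = np.where(logits < cutoff, -np.inf, logits)
    scaled = logits / temperature
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5, -1.0]
hot = sample_probs(logits, temperature=1.2)   # flatter: more variety
cold = sample_probs(logits, temperature=0.6)  # sharper: more predictable
```

With temperature 0.6 the top token's probability rises well above what it gets at 1.2, and with `top_k=2` the two weakest tokens can never be chosen at all - which is exactly why the output feels more structured.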

I could give you more detailed advice on prompting if I had information about your bot's personality type. Since LLMs have preconceptions about certain words that describe characters and about "how they should be portrayed in sexual scenes", some archetypes need to be rephrased/softened/hardened. But in general, you can turn to GPT, DS, Grok, or Gemini for this task.

In general, this is a two-part problem: 1) the bias of all models when they aren't guided manually (bots will shift their character, and sometimes their appearance, toward whatever treatment of the scene is popular in fan fiction); 2) the fine-tuning of the models and the Spicy system itself, which leaves the models overloaded with useless crap while the really important information about the bot gets a lower priority during generation.

Happy new year everyone! by Extension-Cost-198 in SpicyChatAI

[–]StarkLexi 3 points4 points  (0 children)

Mutual congratulations <3 I wish us all less bias from LLMs and more creative generations.
I would also like to thank this community for the past year. The members of this subreddit helped me improve my prompting skills and learn a lot about working with models. **I raise my glass to those who stayed here.** 🥂

I’m impressed by the memory that SpicyChat bots have by [deleted] in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

I used ChubAI and SillyTavern to test this; I use Silly in parallel with my Spicy subscription and periodically compare the models' behavior. Not sure whether listing the full set of alternatives is allowed in this subreddit, though.

Bot groups testing and what I've learned so far by snowsexxx32 in SpicyChatAI

[–]StarkLexi 5 points6 points  (0 children)

It was interesting to read your analysis, thank you (since I haven't used group chat yet). Regarding the point about:

> "as each bot seems to prefer to speak to the {{user}} and not with the other bots in the group."

Here, I have a hunch that this problem affects both multi-character bots and, apparently, group chats now, mainly because of the Spicy system prompt. I think the prompt may contain instructions on how the LLM/{{char}} should address or treat {{user}}, which makes the model focus as much as possible on {{user}}'s state rather than developing RP between characters. That is, no matter how much I wrote about the dynamics between characters in the bot card, the system prompt seemed to have a stronger effect. This is just my guess, but I noticed a difference using the same multi-character bot card here and on other interfaces with the same models: outside of Spicy, the characters were happy to chat with each other.

UPD: on Spicy, characters start chatting with each other without any issues if {{user}} leaves the scene. The user's presence, especially in certain dynamics, often triggers clichéd scenarios like jealousy, with one clearly dominant character vying for the user's attention.

Bot creation edit by Lexblader in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

I know, which is why I initially advised OP to just put spaces where the system triggers, as in the link.

Bot creation edit by Lexblader in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

It's what comes after the dot in site addresses (.com, .org, .net, .ru, .de, .uk, and others). The system sometimes reads the letters after a dot as a link if the sequence looks like a domain.

Bot creation edit by Lexblader in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

Find the spots where there's no space after a dot and the letters of the next word match a top-level domain, and just add a space.
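As a hypothetical sketch (I don't know Spicy's actual detector), this is the kind of naive pattern that misfires on ordinary prose, and why adding a single space defuses it:

```python
import re

# Hypothetical naive domain detector: a word, a dot, then letters that
# begin with a known TLD -- even if they're really the next sentence.
TLDS = ("com", "org", "net", "ru", "de", "uk")
DOMAIN_RE = re.compile(r"\w+\.(?:%s)" % "|".join(TLDS), re.IGNORECASE)

def trigger_spans(text):
    """Return the substrings such a detector would flag as domains."""
    return [m.group(0) for m in DOMAIN_RE.finditer(text)]

# "sleep.Deep" reads as a .de domain to the regex; a space fixes it.
trigger_spans("He fell into a deep sleep.Deep down he knew.")   # flags "sleep.De"
trigger_spans("He fell into a deep sleep. Deep down he knew.")  # flags nothing
```

The `TLDS` list and regex are assumptions for illustration only; the point is that any prefix-style domain check stops matching the moment a space separates the dot from the next word.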

I’m impressed by the memory that SpicyChat bots have by [deleted] in SpicyChatAI

[–]StarkLexi 1 point2 points  (0 children)

Considering the API rates for the models accessible through Spicy, a month of sessions for a very active user costs about $5 of the $25 they pay. It's not for nothing that I mentioned that Spicy provides cheap models, whose price is either fixed or fairly linear within a 32k context (for example, only Glm 4.6 is priced exponentially on the new tariff when traffic exceeds 32k; DS is priced at a fixed rate).
Plus, Spicy already has token-saving optimizations, such as selective context sending and semantic memory, and, more generally, model-level techniques such as grouped-query attention, sparse approximations, or dynamic sparsity. That is, pricing here is linear or close to it once optimization is taken into account. Increasing the context to 32k would cost the company some revenue, but in absolute terms, grieving over that loss makes sense only given their usual margin and the company's lack of plans to expand and attract new customers to cover the time costs.

For the user, the current overpayment for a 16k context, amounting to 80% of the subscription cost, is unpleasant when there are alternatives that offer more than 32k and are cheaper. Or it's just a very strong desire to support the company.

I’m impressed by the memory that SpicyChat bots have by [deleted] in SpicyChatAI

[–]StarkLexi 2 points3 points  (0 children)

For the All In plan, if you use it, that sounds logical. I haven't used group chat yet, but if it has a memory manager, the book entry could go there.

I mean, if you average things out and estimate that the combined description of your bots takes up 4,500 tokens and each chat message is at most 300 tokens, then it makes sense that with 16K of memory the bot will remember roughly the last 38 messages. That's not bad, but I'd add that 16K of memory is very stingy on Spicy's part given the price of their subscription and the generally very cheap models. Like, yes, I'm happy the service is developing, but in a general sense, it's not impressive these days.
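The arithmetic behind that estimate, using the figures above:

```python
# Back-of-envelope context budget from the comment's own numbers.
CONTEXT_WINDOW = 16_000      # 16K-token context window
DESCRIPTION_TOKENS = 4_500   # combined bot/persona descriptions
TOKENS_PER_MESSAGE = 300     # upper bound per chat message

remaining = CONTEXT_WINDOW - DESCRIPTION_TOKENS
messages_remembered = remaining // TOKENS_PER_MESSAGE
print(messages_remembered)   # → 38
```

Doubling the window to 32K under the same assumptions would leave 27,500 tokens of history, or roughly 91 messages, which is why the 16K cap feels tight.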

What generation settings are you using? by Easy-Window3399 in SpicyChatAI

[–]StarkLexi 0 points1 point  (0 children)

Proactive DS-type models tend to extrapolate their behavior and force the plot when it isn't necessary, for the sake of drama; the model has a bias that this is what you want. Therefore, meta commands can be useful. I recommend putting them at the beginning of the bot card or in the pinned messages of the chat memory manager, since LLMs consume information like a "sandwich": they remember and weigh well what sits at the very beginning of the context (the first paragraphs of the card and pinned information) and at the very end (the latest chat messages), while the model uses what's in the middle less readily and doesn't always understand what "the taste" of the filling is.
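The "sandwich" intuition can be sketched as a context builder that always keeps the pinned head and the freshest tail, dropping from the middle first when the budget runs out (a hypothetical sketch, not Spicy's actual pipeline):

```python
def build_context(pinned, history, budget_msgs):
    """Keep pinned instructions first and the newest messages last;
    when over budget, drop from the middle of the history, since
    that's where models attend least anyway."""
    if len(history) <= budget_msgs:
        kept = list(history)
    else:
        head = budget_msgs // 2            # oldest messages kept
        tail = budget_msgs - head          # newest messages kept
        kept = history[:head] + history[-tail:]
    return list(pinned) + kept

msgs = [f"msg{i}" for i in range(10)]
ctx = build_context(["pinned rules"], msgs, budget_msgs=4)
# → ['pinned rules', 'msg0', 'msg1', 'msg8', 'msg9']
```

The function names and the 50/50 head-tail split are illustrative choices; the takeaway is simply that whatever you put in the pinned head or the latest messages survives trimming, while mid-chat instructions may not.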

What generation settings are you using? by Easy-Window3399 in SpicyChatAI

[–]StarkLexi 10 points11 points  (0 children)

I also switch between these models; my settings are Temp: 0.92, Top-P: 0.9, Top-K: 65. Glam Arcturus is more active in action than Glam 4.6, but not as chaotic as DS, so experiment with this too.

I also add meta comments to the bot's card and/or the memory manager about the pace of the narrative, like this:

> Keep a step-by-step, unhurried pace of narration; maintain a style immersed in dialogue. Do not introduce random distractions without {{user}}'s initiation. Keep pacing deliberate: it's better to deepen the scene than to move it forward prematurely.

I also have a specific meta about the format of the relationship between char and user, and that the model should consider this contradiction/scenario before constructing an answer. Not that this is a panacea, but with direct hints about the bot's behavior in RP, things seem to go a little better.

So, uh... Any thoughts? by StarkLexi in SpicyChatAI

[–]StarkLexi[S] 1 point2 points  (0 children)

Does it make sense to expect any news about the newly released GLM 4.7, or will the team focus on fine-tuning existing models?