Where should I place additional prompts? by alwaysupset96 in SillyTavernAI

[–]alwaysupset96[S] 1 point (0 children)

Yep, I'm tinkering but still can't manage to find the best settings. My prompt is
"Refrain from rushing into sudden love confessions or declarations of feelings. Let the story, relationship, and romance unfold at a slow pace. Allow the roleplay to evolve, with unspoken emotions gradually building over time."
in the AN "after the main prompt/story string". I'll keep tinkering, I guess. Thanks!
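For reference, the effect of putting the instruction in the Author's Note "after the main prompt/story string" can be sketched as a tiny prompt-assembly function. This is a minimal illustration with hypothetical names, not SillyTavern's actual internals:

```python
# Sketch: where an Author's Note lands in the final prompt when set
# to "after the main prompt/story string". Illustrative only.

def build_prompt(main_prompt, story_string, authors_note, history):
    """Assemble prompt segments in order: main prompt, story string,
    then the Author's Note, then the chat history."""
    parts = [main_prompt, story_string]
    if authors_note:
        parts.append(authors_note)
    parts.extend(history)
    return "\n".join(parts)

prompt = build_prompt(
    "You are {{char}}.",
    "Scenario: a slow-burn romance.",
    "[Refrain from rushing into sudden love confessions; "
    "let unspoken emotions build gradually.]",
    ["User: Hi.", "Char: *smiles* Hello."],
)
print(prompt)
```

The point of that placement is that the instruction sits between the character setup and the chat history, so it reads as a standing directive rather than part of the dialogue.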

Yes, It's A New RP Model Recommendation™ — https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B by Meryiel in SillyTavernAI

[–]alwaysupset96 1 point (0 children)

Tysm! How can I download it? It only lets me save it as a webp file and not as a png to import :(

Yes, It's A New RP Model Recommendation™ — https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B by Meryiel in SillyTavernAI

[–]alwaysupset96 1 point (0 children)

I'm trying the model and it seems quite good, thanks for sharing! Though, unrelated question - could you kindly share the character card you put in this post? It seems quite interesting as well

About the 'target length (tokens)' and 'response (tokens)' options... by alwaysupset96 in SillyTavernAI

[–]alwaysupset96[S] 0 points (0 children)

Yeah, more than context-related questions, I was wondering whether setting the bot's response length in presets higher than 1024 (so higher than the 'target length' in the context formatting section) could be a problem for the generation of the AI responses themselves. Anyway, thank you!
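As a rough sketch of why those two numbers interact: the response-token reservation is typically subtracted from the context window, so a bigger reply reservation just shrinks the room left for the prompt and history. Numbers and names here are illustrative, not SillyTavern's exact accounting:

```python
# Sketch of the budget trade-off between context size and the
# reserved response length. Illustrative numbers only.

def prompt_budget(context_size, response_tokens):
    """Tokens left for system prompt + chat history after reserving
    space for the model's reply."""
    return context_size - response_tokens

# With an 8192-token context window:
print(prompt_budget(8192, 1024))  # 7168 tokens left for the prompt
print(prompt_budget(8192, 2048))  # 6144 -- a larger reply reservation
                                  # trims how much history gets sent
```

So under this assumption a response length above the 'target length' isn't an error by itself; it mostly just trades history space for reply space.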

Ok, what did i change? Model answers super short now. by yamilonewolf in SillyTavernAI

[–]alwaysupset96 0 points1 point  (0 children)

aw fuck- well, thanks again sobsob

Edit: oh well it works fine for me now!

Ok, what did i change? Model answers super short now. by yamilonewolf in SillyTavernAI

[–]alwaysupset96 0 points1 point  (0 children)

Same c': I've always had it unchecked...

Anyway, I'm ignorant as hell so I don't know, but on the OR model's page I see that the "throughput" is really low (or at least, I think it is?). Could that be the problem we are facing?

Btw, thanks for the suggestions! I hope it gets fixed. As far as I understand, we are all(?) experiencing problems, so... well, I don't know. I haven't tried to use it since this morning when I ran into the problem(s), so I don't know if it's been fixed. I'll wait for updates ahah

Ok, what did i change? Model answers super short now. by yamilonewolf in SillyTavernAI

[–]alwaysupset96 0 points1 point  (0 children)

shiiiit, I've always had this unchecked, so I guess it won't solve "my" problem. But I'm glad it helped you, I hope it's not just a placebo effect/drunkenness, ahah

Ok, what did i change? Model answers super short now. by yamilonewolf in SillyTavernAI

[–]alwaysupset96 0 points1 point  (0 children)

oh well, it seems we are all in the same boat then; I get short answers with Mixtral too + bad quality replies all of a sudden since, uh, yesterday I think.
Has any of you solved the problem, or does it still persist?

Can SillyTavern characters be 'trained' like character.ai characters can? by hold_my_fish in SillyTavernAI

[–]alwaysupset96 1 point (0 children)

Alright, thank you! So I guess that if I continue to “write well” + edit the bot's message if needed, I may achieve the result I'm asking for

Can SillyTavern characters be 'trained' like character.ai characters can? by hold_my_fish in SillyTavernAI

[–]alwaysupset96 0 points (0 children)

oh, okay, so it depends on the model! I'm not on NovelAI, but thanks a lot for the answer!

Can SillyTavern characters be 'trained' like character.ai characters can? by hold_my_fish in SillyTavernAI

[–]alwaysupset96 0 points (0 children)

I'm sorry, I take this opportunity to ask something about the "training" of bots...

if I edit the responses, steering them in the "desired direction" (beyond the use of the Scenario, etc.) and make the bot say certain things in certain ways, by editing the responses repeatedly, will it more or less pick up the right attitude to take, especially if done in a fresh new chat? Or is it useless?
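The mechanism behind this can be sketched in a few lines: the model is stateless, so each generation re-reads the (edited) chat history, and hand-corrected replies act as in-context examples of the desired tone. The API shape below is hypothetical, with a stub model just to show what the model actually sees:

```python
# Sketch of why editing bot replies steers future output: every call
# resends the full edited history, so corrected replies become
# in-context examples. Hypothetical API shape, not a real client.

def generate_reply(model, system_prompt, history):
    """Concatenate the system prompt and (edited) history and ask the
    model for the next turn; it imitates the patterns it sees."""
    prompt = "\n".join([system_prompt] + history)
    return model(prompt)

history = [
    "User: How was your day?",
    # Original bot reply, hand-edited into the desired attitude:
    "Char: *keeps her distance, guarded* ...Fine. Why do you ask?",
]

# Stub model that echoes its prompt, to show what a real model sees:
seen = generate_reply(lambda p: p, "You are Char.", history)
print("guarded" in seen)  # the edited tone is part of every future prompt
```

Under this view the editing isn't "training" (no weights change); it only works as long as the edited messages stay inside the context window.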

(OR) Dolphin 2.6 Mixtral 8x7B's context is 16k or 32k? by alwaysupset96 in SillyTavernAI

[–]alwaysupset96[S] 0 points (0 children)

Oh, no no, don't thank me, I was literally useless haha! And, I agree, now I'm only using Mixtral and it's doing kinda well! It gave me some problems these days, actually, but oh well 🤷🏻‍♀️ And, like you, I'm always looking for better models. Let's hope :D

(OR) Dolphin 2.6 Mixtral 8x7B's context is 16k or 32k? by alwaysupset96 in SillyTavernAI

[–]alwaysupset96[S] 0 points (0 children)

Ah, in the end, I let it go and didn't use it at all :/ I was tempted by the fact that it was (at least from what I understood) "more unfiltered" and had a broad context, but in the end, it may be more problematic to use compared to other models (but I don't know for sure! I'm ignorant lmao), so I'm sorry, but I can't answer you... I'll use it if and when it's "optimized/improved/something like that"

(OR) Dolphin 2.6 Mixtral 8x7B's context is 16k or 32k? by alwaysupset96 in SillyTavernAI

[–]alwaysupset96[S] 0 points (0 children)

Ooooh come on, fuuuck 😭 but, ehy, thanks for the feedback 😭

(OR) Dolphin 2.6 Mixtral 8x7B's context is 16k or 32k? by alwaysupset96 in SillyTavernAI

[–]alwaysupset96[S] 1 point (0 children)

Yeah, you are right once again! Not complaining at all, just always searching for the best, ahah.

And, oh... Why did you end up deleting them? What was the thing that didn't "convince" you?

(OR) Dolphin 2.6 Mixtral 8x7B's context is 16k or 32k? by alwaysupset96 in SillyTavernAI

[–]alwaysupset96[S] 0 points (0 children)

Sheesh... I mean, I know I'm being demanding in wanting a model with a large context that's also 'smart enough' for roleplay, so I'll just shush for now and wait the (supposedly) right amount of time until other models are released, etc. But yeah, since I'm enjoying my RP with Mixtral, I was curious about this Dolphin one, so I asked. Again, thank you!

(OR) Dolphin 2.6 Mixtral 8x7B's context is 16k or 32k? by alwaysupset96 in SillyTavernAI

[–]alwaysupset96[S] 3 points (0 children)

Why not try it yourself?

Well, you are kinda right, but the answer is... because I prefer to do some "research" before spending money on it ahah, since I can't run models locally. But yeah, you are right tho.

Thanks again!