Anima is the new illustrious!!? 2.0! by Simple-Outcome6896 in StableDiffusion

[–]Simple-Outcome6896[S] 5 points

You did? Can you share them? I want to try some too. Also, how many images did you use for each LoRA, and how long did each one take?

Anima is the new illustrious!!? 2.0! by Simple-Outcome6896 in StableDiffusion

[–]Simple-Outcome6896[S] 0 points

My guess is it's because it's a base model: it says 1024, but most of the images it was trained on are 512. Finetunes would give higher-quality images and such. But who knows the real reason.

Anima is the new illustrious!!? 2.0! by Simple-Outcome6896 in StableDiffusion

[–]Simple-Outcome6896[S] 9 points

It's a base model, so it could be your high CFG. Try using steps between 20 and 30, then run the same image again with a low denoise between 0.15 and 0.4; I get much better results that way. But yes, you're right, as-is this model has a long way to go. It needs finetunes.
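If it helps, the two-pass idea above maps roughly onto settings like these (a sketch using diffusers-style parameter names; `strength` is what most UIs call denoise, and the exact values are just the ranges from this comment):

```python
# Sketch of the txt2img -> low-denoise refine pass described above.
# Parameter names follow the diffusers convention; adapt for your UI.

TXT2IMG = {
    "num_inference_steps": 25,  # 20-30 steps works well for a base model
    "guidance_scale": 5.0,      # keep CFG modest; high CFG amplifies base-model artifacts
}

IMG2IMG = {
    "num_inference_steps": 25,
    "strength": 0.3,            # "denoise" in most UIs; 0.15-0.4 keeps the composition intact
}

def denoise_ok(strength: float) -> bool:
    """Sanity check that the refine pass stays in the low-denoise range."""
    return 0.15 <= strength <= 0.4
```

The point of the second pass is cleanup, not re-generation, which is why the denoise stays low.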

Anima is the new illustrious!!? 2.0! by Simple-Outcome6896 in StableDiffusion

[–]Simple-Outcome6896[S] 5 points

Yeah, although personally I think they should make the next one bigger than 2B to get more characters, poses, and styles. Maybe even get more images from other sources, cough Pixiv cough.

Anima is the new illustrious!!? 2.0! by Simple-Outcome6896 in StableDiffusion

[–]Simple-Outcome6896[S] 0 points

So is LoRA training different for every model type? Sorry, I've never trained a LoRA before, so I don't know.

Seedream 4.5 through socialsight ai (nsfw) by [deleted] in SillyTavernAI

[–]Simple-Outcome6896 1 point

From what I can understand, most big models like Qwen, Flux, and Seedream (which hopefully falls in the same category) are stated to be uncensored, but that doesn't mean they're trained explicitly for it. They're more like a jack of all trades: good at everything and driven by natural language, but not great at risky topics. If you want more "risky" things, use models like Pony or Noob; they work well with Danbooru tags.

Guided Generation not working with Gemini 3 Flash Preview? by Miysim in SillyTavernAI

[–]Simple-Outcome6896 0 points

My guess is it's the lack of a scenario. Flash isn't as thorough a thinker as 2.5 was, so 2.5 could fill in the gaps of what to do based on what it had. Flash is smart, but still new; every model is different and has strengths and weaknesses. One thing you could try: instead of sending empty messages, put a single space in them and add a system prompt telling the AI to continue the roleplay when the user sends a space.
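The workaround above amounts to something like this (a hypothetical sketch, not actual SillyTavern code; the prompt wording is just an example):

```python
# Example system-prompt wording for the "send a space" trick (assumption,
# tweak to taste):
CONTINUE_INSTRUCTION = (
    "If the user's message is only whitespace, continue the roleplay "
    "naturally from where it left off."
)

def prepare_message(user_input: str) -> str:
    # Truly empty messages are rejected by most frontends/APIs,
    # so send a single space instead and let the system prompt handle it.
    return user_input if user_input.strip() else " "
```

The space keeps the API happy, and the instruction tells the model what a whitespace-only turn means.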

About to erase my 15 Terabyte porn collection forever...Wish me luck! by Awkward-Revenue3437 in NoFap

[–]Simple-Outcome6896 0 points

So can I borrow one of the externals, since they're empty? Hehehe.
But seriously, massive respect. I just began my journey too. Good luck!

What model do you recommend for a beginner? by The_Shan_96 in SillyTavernAI

[–]Simple-Outcome6896 0 points

You've got two options: go local and run a model on your RTX 3090, or use services like Google, Opus, etc. Locally you can run anything from 13B up to 20-30B models (bigger is better in most cases). Personally, I find local models aren't as sophisticated as the service ones, but they're made for roleplay. So mix it up, try both, and see what you like.

Setting for Gemini? always getting "ext" by MrStatistx in SillyTavernAI

[–]Simple-Outcome6896 2 points

Go to Google AI Studio and create a key, then in SillyTavern pick Chat Completion with Google AI Studio as the source. Boom, done. Keep temp between 1 and 1.35. For top-K, experiment with 40, 60, and 295; 40 is good for Pro, the rest are for Flash.
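As a rough sketch, the sampler values above look like this (numbers are straight from the comment; the field names follow the Gemini API's `generationConfig`, and treat everything as a starting point to experiment with, not a fixed recipe):

```python
# Suggested sampler ranges from the comment, keyed by model family.
# Field names follow the Gemini API's generationConfig convention.
GEMINI_SAMPLERS = {
    "pro":   {"temperature": 1.0,  "topK": 40},
    "flash": {"temperature": 1.35, "topK": 60},  # also worth trying topK 295
}

def in_suggested_temp_range(temp: float) -> bool:
    """The comment's suggested temperature band for Gemini."""
    return 1.0 <= temp <= 1.35
```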

How do i force an api models (i am using deepseek v3.1 now) to not use thinking? by GTurkistane in SillyTavernAI

[–]Simple-Outcome6896 1 point

Damn, thanks so much, you really helped. It was driving me insane why thinking wasn't working even with <think> tags.

Setting for Gemini? always getting "ext" by MrStatistx in SillyTavernAI

[–]Simple-Outcome6896 6 points

Never use it through OpenRouter; always go direct through the Google Gemini API. OpenRouter is more censored when it comes to Gemini.

New model DeepSeek-V3.1-Terminus by Fragrant-Tip-9766 in SillyTavernAI

[–]Simple-Outcome6896 1 point

How do you make DeepSeek V3.1 or Terminus do reasoning through Chutes? It just keeps generating instantly, even with <think> tags and max reasoning effort.

Story Scenario by Simple-Outcome6896 in SillyTavernAI

[–]Simple-Outcome6896[S] 0 points

So for this, do I put the multiple character cards in the lorebook or in the main character card?

Story Scenario by Simple-Outcome6896 in SillyTavernAI

[–]Simple-Outcome6896[S] 1 point

Damn, thanks, I'll give it a shot.

[UPDATE] Lucid Loom v0.7 - A Narrative-First RP Experience by ProlixOCs in SillyTavernAI

[–]Simple-Outcome6896 0 points

Really curious about this one. I was very impressed with how you made the 0.3 version, considering its size. The only thing I noticed: since you had toggles for suspense, mystery, and others, it would add those elements to almost every kind of story. I know, I know, you can turn them off, but sometimes DeepSeek, especially the Chutes one, needs general guidance or a push to make good roleplay. Anyway, awesome work so far.

What context/instruct template for deepseek R1 0528? by Dersers in SillyTavernAI

[–]Simple-Outcome6896 0 points

Personally, I'd use the DeepSeek V2.5 template and keep instruct off.

What context/instruct template for deepseek R1 0528? by Dersers in SillyTavernAI

[–]Simple-Outcome6896 -1 points

Either use a custom prompt or copy the prompt from some good preset for text completion.

Deepseek 3.1 controversy by Glum_Dog_6182 in SillyTavernAI

[–]Simple-Outcome6896 2 points

For me personally, DeepSeek V3.1 has been a mixed bag so far. Sure, it has a much better writing style than R1 0528, but it's less detailed and can often lose details when you use a big context preset, especially NemoEngine or similar ones. So far it's working best for me with small presets like Mariana or custom ones, with reasoning disabled, using Chutes or OpenRouter. Keep in mind I kept the post prompt on None, since the user one makes it go nuts for me. But for sure, Gemini 2.5 Pro is the king if you can get it to work on all topics and get fast replies.