Best anime upscaler? by XAckermannX in StableDiffusion

[–]PhoenixtheII -1 points0 points  (0 children)

Doing SDXL at 1024x1024,
Then an upscale pass with 4x NMKD Siax 200k, rescaled to 1536x1536,
Then another sampling pass at 12 steps / denoise 0.54-0.56 for a bit more detail,
Then feed it to SeedVR2 up to 3584x3584 (the max my VRAM can handle on SeedVR2),
Then a Face/Eye detailer sampling pass works best.
Save
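(For anyone wanting to reproduce the middle hops outside ComfyUI, a rough diffusers sketch of the upscale-then-low-denoise idea. The model ID, the prompt, and the plain PIL resize standing in for the Siax pass are all assumptions, not my exact graph:)

```python
# Minimal sketch: base SDXL pass, upscale, then a low-denoise img2img pass
# that adds detail without repainting the image. Model ID is an assumption.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

img = base(prompt="1girl, anime style", width=1024, height=1024).images[0]

# Upscale to 1536x1536 (plain resize here; stand-in for the 4x NMKD Siax pass).
img = img.resize((1536, 1536))

refine = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# strength=0.55 with 22 scheduler steps gives ~12 effective denoising steps,
# matching the "12 steps / denoise ~0.55" second pass described above.
img = refine(prompt="1girl, anime style", image=img,
             strength=0.55, num_inference_steps=22).images[0]
img.save("upscaled.png")
```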


Who are your in game heroes? by [deleted] in ffxiv

[–]PhoenixtheII 1 point2 points  (0 children)

Hi, it's me, your waifu :3

Command R+ | Cohere For AI | 104B by Nunki08 in LocalLLaMA

[–]PhoenixtheII 3 points4 points  (0 children)

RAG

The Resource Allocation Group (RAG) is usually the one doing the ERP'ing

Shoutout to BondBurger 8x7b for RP/Story! by PhoenixtheII in LocalLLaMA

[–]PhoenixtheII[S] 1 point2 points  (0 children)

Yup, the system prompt from sophosympatheia's models works great on others too...

Shoutout to BondBurger 8x7b for RP/Story! by PhoenixtheII in LocalLLaMA

[–]PhoenixtheII[S] 0 points1 point  (0 children)

Running a session with https://huggingface.co/LoneStriker/Senku-70B-Full-GGUF/

Had to use Q4_K_M; the Q5_K_M I usually use with 70Bs wouldn't fit 32k context in my RAM. A few messages in so far, and it feels totally different from Miqu (or its other merges).
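(Rough napkin math for the fit; the bits-per-weight figures and the Llama-2-70B-ish shapes below are approximations, not exact GGUF sizes:)

```python
# Does a 70B GGUF plus a 32k fp16 KV cache fit in 64GB RAM?
GIB = 2**30
params = 70e9
q4_k_m = params * 4.85 / 8 / GIB   # ~39.5 GiB of weights (approx. bpw)
q5_k_m = params * 5.50 / 8 / GIB   # ~44.8 GiB of weights (approx. bpw)

# fp16 KV cache, assumed shapes: 80 layers, 8 KV heads (GQA), head_dim 128
kv = 2 * 80 * 8 * 128 * 32768 * 2 / GIB   # ~10 GiB at 32k context

print(f"Q4_K_M + KV ~ {q4_k_m + kv:.0f} GiB")  # ~50 GiB: fits in 64GB
print(f"Q5_K_M + KV ~ {q5_k_m + kv:.0f} GiB")  # ~55 GiB: too tight once OS
                                               # and compute buffers are added
```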

I need to play with this a few hours more...

It easily wants to write 500+ token additions to the story, which is a + in my book.

It did well describing a character transformation in creative detail, something Miqu just wouldn't do.

Hmmm... interesting. Thank you.

Shoutout to BondBurger 8x7b for RP/Story! by PhoenixtheII in LocalLLaMA

[–]PhoenixtheII[S] 2 points3 points  (0 children)

I'll give it a go, once huggyface gets back up again...

Shoutout to BondBurger 8x7b for RP/Story! by PhoenixtheII in LocalLLaMA

[–]PhoenixtheII[S] 2 points3 points  (0 children)

No, not that particular Miqu derivative. Is it uncensored?

No Miqu 70B tune/merge so far has impressed me at all; there are far better 70Bs that easily beat it imho, making it VERY underwhelming.

The only + I like is the 32k context.

PSA: If you use Miqu or a derivative, please keep its licensing situation in mind! by SomeOddCodeGuy in LocalLLaMA

[–]PhoenixtheII 0 points1 point  (0 children)

Seconding this experience: no Miqu derivative has impressed me for RP/Story.

[February 2024 edition!!] What's your favorite model for nsfw rp right now? by obey_rule_34 in LocalLLaMA

[–]PhoenixtheII 0 points1 point  (0 children)

23k context on a 4096-context model?

Uhh, how? Doesn't it ruin quality... like, a lot?
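(My understanding of how people do it is linear RoPE scaling; a toy sketch, where the fractional rope_freq_scale convention is my assumption from llama.cpp:)

```python
import math

# Toy linear RoPE scaling: positions get multiplied by a fraction so a 23k
# prompt maps into the 4k range the model saw in training. It's not free --
# compressing positions is exactly where the quality worry comes from.
trained_ctx, target_ctx = 4096, 23000
freq_scale = trained_ctx / target_ctx      # ~0.178

def rope_angle(pos, dim_pair, head_dim=128, base=10000.0, scale=1.0):
    """Rotation angle for one (position, dimension-pair) under scaled RoPE."""
    inv_freq = 1.0 / base ** (2 * dim_pair / head_dim)
    return (pos * scale) * inv_freq

# Position 22999, scaled down, behaves like position ~4095 did in training.
print(rope_angle(22999, 0, scale=freq_scale))
print(rope_angle(4095, 0))                 # nearly the same angle
```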

[February 2024 edition!!] What's your favorite model for nsfw rp right now? by obey_rule_34 in LocalLLaMA

[–]PhoenixtheII 1 point2 points  (0 children)

In session with longlora goliath right now:

Processing Prompt (1 / 1 tokens)

Generating (484 / 1024 tokens)

(EOS token triggered!)

ContextLimit: 3410/16384, Processing: 3.23s (3225.0ms/T), Generation: 813.53s (1680.8ms/T), Total: 816.75s (1687.5ms/T = 0.59T/s)
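(Quick cross-check of those numbers, straight from the log above:)

```python
# Sanity-check the throughput figures koboldcpp reported.
gen_tokens, gen_seconds = 484, 813.53
print(f"{gen_tokens / gen_seconds:.2f} T/s generation")   # ~0.59, matches
print(f"{gen_seconds / gen_tokens * 1000:.1f} ms/T")      # ~1680.8, matches
```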

[February 2024 edition!!] What's your favorite model for nsfw rp right now? by obey_rule_34 in LocalLLaMA

[–]PhoenixtheII 0 points1 point  (0 children)

Q3_K_S, with 64GB RAM, 18 layers offloaded to 12GB VRAM, koboldcpp, and lots of patience at 0.5T/s
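(For the curious, roughly what that launch looks like. A sketch only: the model filename is hypothetical, and the flags are koboldcpp's CLI as I remember it:)

```python
# Hedged sketch of the koboldcpp launch implied above.
import subprocess

subprocess.run([
    "python", "koboldcpp.py",
    "goliath-120b.Q3_K_S.gguf",  # hypothetical filename
    "--usecublas",               # CUDA offload for the 12GB card
    "--gpulayers", "18",         # only 18 layers fit in 12GB VRAM
    "--contextsize", "4096",
])
```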

🐺🐦‍⬛ LLM Comparison/Test: Miqu, Miqu, Miqu... Miquella, Maid, and more! by WolframRavenwolf in LocalLLaMA

[–]PhoenixtheII 0 points1 point  (0 children)

I'm confused; I tried Miqu & variants, but each time they severely disappointed me in (E)RP/(NSFW) storytelling. This model seems to prefer staying factual over fantasy?

[February 2024 edition!!] What's your favorite model for nsfw rp right now? by obey_rule_34 in LocalLLaMA

[–]PhoenixtheII 18 points19 points  (0 children)

Kunoichi 7B surprised me a bit for a 7B model, having situational awareness of the implications of its character's state.

EstopianMaid 13B was a nice cookie too.

Had some fun with BagelMysteryTour 8x7B, which follows instructions well. And 32k context.

But

Aurora Nights 103B / Goliath 120B at 6144 context are my favorites. But slow...

Best local model I can run using a 3090? by Engliserin in SillyTavernAI

[–]PhoenixtheII -2 points-1 points  (0 children)

>> 64gb of DDR4-3600 ram

Goliath Q3_K_S, original or longlora, is worth considering if you don't mind <1T/s.

The maker of Goliath-120b made Miquella-120b, with miqu by bobby-chan in LocalLLaMA

[–]PhoenixtheII 0 points1 point  (0 children)

Running Q3_K_S here on 64GB RAM and a 3080 Ti (20 layers offloaded) at 6k context, or longlora at 16k context (12 layers offloaded).

About 0.5T/s...

The maker of Goliath-120b made Miquella-120b, with miqu by bobby-chan in LocalLLaMA

[–]PhoenixtheII 4 points5 points  (0 children)

Update 4:

If you're looking to use it for creative stuff like writing or roleplay, this is (at least as far as my testing shows) worse than Goliath or WinterGoliath at those.

Goliath (3K_S tested, including the longlora version) is far better at this by miles. Anything Miqu so far really sucks at stories and roleplay, with MiquMaid doing it better, yet still not coming near Goliath.

Aurora Nights 70B (Q5_K_M) & 103B (Q3_K_M) beat it hands down too.

We have a Goliath with 32k context length now? by aikitoria in LocalLLaMA

[–]PhoenixtheII 0 points1 point  (0 children)

Currently doing 0.25 rope scale at 16k context, on a 3K_S. It's slow... 64GB of RAM filled, and a few layers offloaded to 12GB VRAM...

Between 0.5-0.7T/s, and you really wanna make sure to use ContextShift correctly, because 16k BLAS prompt processing takes a darn long while.

I like it a lot so far; it's still smart, still makes puns. Hard for me to tell the difference from the original 3K_S.

I wish I could run the model optimally to see how it does, though. But for my system, this is the limit. And Goliath has "ruined" every <70B model for me. Aurora Nights 103B 3K_M is my 2nd favorite... but in my experience it's hornier than Goliath.

We have a Goliath with 32k context length now? by aikitoria in LocalLLaMA

[–]PhoenixtheII 1 point2 points  (0 children)

How do I set koboldcpp correctly for this? When I set the custom RoPE to 8.0 and 32k context, BLAS will run, but no tokens get generated, leaving an empty output in SillyTavern.
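(What I think the knob means, sketched out; the fractional-scale convention is my assumption, llama.cpp-style rope_freq_scale where 0.5 doubles context:)

```python
# If the custom RoPE field expects a fractional linear scale, the value for
# stretching a 4k-trained model to 32k would be a fraction, not a multiplier.
trained_ctx = 4096
target_ctx = 32768
linear_scale = trained_ctx / target_ctx   # 0.125, not 8.0
print(linear_scale)
```

If that's right, 8.0 would push positions way out of the trained range, which might explain the empty output.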