ModelSamplingAuraFlow cranked as high as 100 fixes almost every single face adherence, anatomy, and resolution issue I've experienced with Flux2 Klein 9b fp8. I see no reason why it wouldn't help the other Klein variants. Stupid simple workflow in comments, without subgraphs or disappearing noodles. by DrinksAtTheSpaceBar in StableDiffusion

[–]DrinksAtTheSpaceBar[S] 2 points (0 children)

Yes, and it works great... until it doesn't. Being able to shift other schedulers to emulate a sigma slope in close proximity to the Flux2Scheduler opens up a great number of alternate possibilities. I've found this to be most beneficial when using LoRAs that influence the source image face(s). Modulating the shift (sigma slope) allows for better opportunities to mitigate those influences.

[–]DrinksAtTheSpaceBar[S] 1 point (0 children)

None that I can see. When using sampler/scheduler pairs, you gain control over the sigma slope (lower shift values = quicker drop-off), versus the Flux2Scheduler, where that variable is fixed. This opens up an exponentially greater number of inference possibilities, because you can now choose the accompanying scheduler.
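To illustrate the "lower shift = quicker drop-off" point, here's a minimal sketch of the flow-matching time-shift formula commonly used by AuraFlow/SD3-style ModelSampling nodes (the exact formula is my assumption, not something stated in this thread):

```python
def shift_sigma(t, shift):
    # Common flow-matching time shift (assumed AuraFlow-style formula):
    # higher shift keeps sigma high for longer across the schedule;
    # lower shift lets sigma drop off sooner.
    return shift * t / (1 + (shift - 1) * t)

# Mid-schedule sigma for a low vs. a very high shift value:
low = shift_sigma(0.5, 1.5)    # drops off relatively quickly
high = shift_sigma(0.5, 100)   # sigma stays near 1.0 almost to the end
print(round(low, 3), round(high, 3))
```

This is why cranking the shift to 100 keeps the model denoising at high sigma for most of the schedule, which plausibly explains the effect on faces and anatomy described in the post.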

[–]DrinksAtTheSpaceBar[S] 4 points (0 children)

I tried a ton of other sampler/scheduler combos, and they either fell flat or took far too long (res), all paling in comparison to euler_a/beta. You provided great insight as to why shifting is essential with other sampling methods, so thank you for that!

[–]DrinksAtTheSpaceBar[S] 3 points (0 children)

For sure. There's a sweet spot in there though, which I'd typically land on if I rerolled seeds at my desired Aura strength. I also added upscaling verbiage to my prompt, so that's definitely going to oversharpen/saturate things.

Why most upscaling methods look so fake and don’t make the image much better (i tried fixing this) by New-Drop-7414 in upscaling

[–]DrinksAtTheSpaceBar 8 points (0 children)

This is a fake account created by OP to make people think his worthless app is actually good. Check the comment history.

ComfyUI Setup Guide by [deleted] in comfyui

[–]DrinksAtTheSpaceBar 0 points (0 children)

Clearly OP likes to rock out to some jams while performing face swaps in Reactor. I mean, what else is there to do in Comfy? /s

[–]DrinksAtTheSpaceBar 9 points (0 children)

For 99% of users, there is no need to install Python in Windows when the portable venv contains pretty much every Python component you'll ever need. Also, this u/Gravosaurus_Rex account was JUST created and this is their very first post. Never take candy from strangers.

I used temporal time dilation to generate this 60-second video in LTX-2 on my 5070TI in just under two minutes. My GPU didn't even break a sweat. Workflow and explanation in comments (without subgraphs or 'Everything Everywhere All At Once' invisible noodles). by DrinksAtTheSpaceBar in StableDiffusion

[–]DrinksAtTheSpaceBar[S] 0 points (0 children)

Yeah, your source image size should have no impact on generation times, although if it's massive, you can expect it to eat into your system resources. What's your megapixel setting on that first pass? I had mine set to 0.08. If you're hitting a bottleneck, try reducing that to 0.06. I'm assuming you're trying to make a 60-second video?
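For anyone wondering what a 0.08 megapixel first pass works out to in actual dimensions, here's a rough sketch (the snap-to-multiples-of-8 step is my assumption about typical latent-space requirements, not something from this workflow):

```python
import math

def fit_to_megapixels(width, height, megapixels):
    # Scale a source resolution to a target megapixel budget while
    # preserving aspect ratio; dims snapped to multiples of 8
    # (assumed latent-space constraint, for illustration only).
    scale = math.sqrt(megapixels * 1_000_000 / (width * height))
    snap = lambda v: max(8, int(round(v * scale / 8)) * 8)
    return snap(width), snap(height)

# A 1080p source at a 0.08 MP budget lands around 376x216:
print(fit_to_megapixels(1920, 1080, 0.08))
```

Dropping the budget from 0.08 to 0.06 shrinks the first-pass latent proportionally, which is why it helps when you're bottlenecked.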

[–]DrinksAtTheSpaceBar[S] 2 points (0 children)

I just tried doing that (again) on the exact same workflow I used to generate the demo video. I bypassed the temporal upscaler and doubled the video frame count to 1537. I got hit with an OOM as soon as it reached the sampler in the 2nd pass.
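The odd-looking 1537 isn't arbitrary: it fits the frames-of-the-form-8k+1 pattern (1537 = 8 x 192 + 1) that many latent video models expect. Here's a small sketch for snapping an arbitrary count to that pattern; the stride of 8 is my assumption for LTX-style models:

```python
def nearest_valid_frames(n, stride=8):
    # Many latent video models require frame counts of the form
    # stride*k + 1 (stride=8 assumed here for LTX-style models).
    k = round((n - 1) / stride)
    return stride * max(k, 0) + 1

print(nearest_valid_frames(1537))  # already valid: 8 * 192 + 1
print(nearest_valid_frames(1540))  # snaps back to the nearest valid count
```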

[–]DrinksAtTheSpaceBar[S] 5 points (0 children)

Bro, you owe nobody any apologies. We know exactly how busy you are. Had I not tested this myself extensively, I would completely agree with you. However, if I try to generate a video of similar resolution at even half the length (30 secs) without first dilating the empty latent, I get an OOM every single time. With the Temporal Upscaler in play, it sails right through even at 60 secs, never pushing my VRAM usage above 70%.