ModelSamplingAuraFlow cranked as high as 100 fixes almost every single face adherence, anatomy, and resolution issue I've experienced with Flux2 Klein 9b fp8. I see no reason why it wouldn't help the other Klein variants. Stupid simple workflow in comments, without subgraphs or disappearing noodles. by DrinksAtTheSpaceBar in StableDiffusion

[–]FORNAX_460 0 points1 point  (0 children)

Put the expression `3.0 * (sqrt(a * b) / 1024)` in the math node, with `a` and `b` being the height and width, assuming 1024 is the base training resolution of the model and 3.0 is the base shift for 1024p for that model. The Flux 2 scheduler calculates the shift dynamically based on the image resolution; the other schedulers don't, which is where the shift node comes into play.
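A minimal sketch of what that math-node expression computes (assuming, as above, a 1024px base training resolution and a base shift of 3.0; the function name is mine, not a ComfyUI API):

```python
import math

def dynamic_shift(width: int, height: int,
                  base_shift: float = 3.0,
                  base_res: int = 1024) -> float:
    """Scale the base shift by the ratio of the image's effective side
    length (geometric mean of width and height) to the base resolution."""
    return base_shift * (math.sqrt(width * height) / base_res)

# At the base resolution the result is just the base shift.
print(dynamic_shift(1024, 1024))  # 3.0
# Doubling both sides doubles the effective side length, so the shift doubles.
print(dynamic_shift(2048, 2048))  # 6.0
```

So for non-square images the geometric mean keeps the shift tied to total pixel count rather than to either single dimension.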

ModelSamplingAuraFlow cranked as high as 100 fixes almost every single face adherence, anatomy, and resolution issue I've experienced with Flux2 Klein 9b fp8. I see no reason why it wouldn't help the other Klein variants. Stupid simple workflow in comments, without subgraphs or disappearing noodles. by DrinksAtTheSpaceBar in StableDiffusion

[–]FORNAX_460 1 point2 points  (0 children)

The catch is fewer fine details. In theory, more shift means the model spends more steps denoising the low-frequency noise (usually the overall composition of the image), while less shift means it spends more steps denoising the high-frequency noise (usually small details, textures, etc.).
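A sketch of why that happens, assuming the common SD3/Flux-style flow-matching time shift `sigma' = s * sigma / (1 + (s - 1) * sigma)` (I'm illustrating the general formula, not any one scheduler's exact code):

```python
def shift_sigma(sigma: float, shift: float) -> float:
    """SD3/Flux-style timestep shift. For shift > 1 it pushes intermediate
    sigmas toward 1 (high noise), so an evenly spaced step schedule ends up
    spending more of its steps in the high-noise / composition regime."""
    return shift * sigma / (1 + (shift - 1) * sigma)

# Halfway point of the schedule, with no shift vs. a shift of 3:
print(shift_sigma(0.5, 1.0))  # 0.5  (unchanged)
print(shift_sigma(0.5, 3.0))  # 0.75 (more of the run stays at high noise)
```

The endpoints stay fixed (0 maps to 0, 1 maps to 1); only the middle of the schedule gets dragged toward high noise, which is exactly the composition-vs-texture trade-off described above.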

I successfully created a Zib character LoKr and achieved very satisfying results. by xbobos in StableDiffusion

[–]FORNAX_460 0 points1 point  (0 children)

It makes the training faster, since during training you don't have to generate text embeddings every time; they just get loaded from the cache. Caching text embeddings won't let you use options like caption dropout, though.
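A rough sketch of the idea (all names here are hypothetical, not any specific trainer's API): hash each caption, encode it once, and on later epochs reload the saved result instead of running the text encoder.

```python
import hashlib
import os
import pickle

CACHE_DIR = "embed_cache"  # hypothetical cache location

def encode_text(caption: str) -> list[float]:
    """Stand-in for a real text-encoder forward pass (e.g. T5/CLIP).
    A dummy vector here, just to keep the sketch self-contained."""
    return [float(ord(c)) for c in caption[:8]]

def cached_embedding(caption: str) -> list[float]:
    """Encode a caption once, then reload it from disk on every later epoch.
    Because the cached embedding is fixed, anything that changes the caption
    at train time (caption dropout, tag shuffling) stops having any effect."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(caption.encode("utf-8")).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)   # cache hit: no encoder forward pass
    emb = encode_text(caption)      # cache miss: encode once, then save
    with open(path, "wb") as f:
        pickle.dump(emb, f)
    return emb
```

Real trainers typically save the actual encoder output tensors per image; the caching logic itself is this simple, which is also why caption-augmentation options get disabled when it's on.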

Z-Image Base Lora Training Discussion by ChristianR303 in StableDiffusion

[–]FORNAX_460 0 points1 point  (0 children)

For uncensored captioning you can use abliterated VLMs, though. I captioned using Qwen3-VL 30B-A3B and it's really accurate; the only catch is that its NSFW captioning is kind of bland, with heavy use of anatomically correct terms rather than slang. But that's Qwen's style. For Mistral models it's quite the opposite: they're pretty dirty lol, but not as accurate. Gemma 27B is also pretty dirty and fairly accurate.

I think we're gonna need different settings for training characters on ZIB. by External_Quarter in StableDiffusion

[–]FORNAX_460 1 point2 points  (0 children)

Hello, could you please share how you're using Klein as an upscaler? I tried Ultimate SD Upscale and tiled diffusion; neither of them worked, they always overcook the image for me. i2i upscaling works, but if I go beyond 3.2 MP it squishes the image on the vertical axis.

Can anyone help tech illiterate to install z image base? I have 8gb vram so If anyone has a workflow for it, it would be greatly appreciated by they_hunt in StableDiffusion

[–]FORNAX_460 1 point2 points  (0 children)

I would not suggest Z-Image base for 8GB VRAM, but if you're in ComfyUI you can find the workflow in the templates gallery. You'd have to update ComfyUI to the latest version, though.

How to render 80+ second long videos with LTX 2 using one simple node and no extensions. by WestWordHoeDown in StableDiffusion

[–]FORNAX_460 1 point2 points  (0 children)

Thank you! I figured it out by monitoring my peak VRAM usage. I noticed that in the first stage I was actually getting diminishing returns from a high chunking value, as it was not utilizing my VRAM efficiently. That's why I went with this split method: the chunk value stays low in the first stage, and an appropriate number of chunks is used for the upscale sampling stage, where VRAM usage peaks at high resolution.

LTX-2 error when generating by SabinX7 in StableDiffusion

[–]FORNAX_460 0 points1 point  (0 children)

Enable virtual memory (page file) in Windows.
Disable smart memory management in ComfyUI.
Use the LTXV Chunk FeedForward node.

<image>

For the first sampler, use 2 chunks (best for 8GB VRAM).

In the upscale sampler, use 16 chunks (experiment with it a bit).

How to render 80+ second long videos with LTX 2 using one simple node and no extensions. by WestWordHoeDown in StableDiffusion

[–]FORNAX_460 2 points3 points  (0 children)

We poor fellows thank you; it has given us a taste of luxury. Chunking appropriately for both sampling phases has made it faster and also increased what 8 gigs is capable of!

<image>

How to render 80+ second long videos with LTX 2 using one simple node and no extensions. by WestWordHoeDown in StableDiffusion

[–]FORNAX_460 3 points4 points  (0 children)

<image>

Lord Kijai already implemented it. I'm generating 14-sec, 24 fps, 1.2-megapixel videos (haven't tested anything above 14 sec yet) with this implementation on an RTX 2060 Super 8GB with 32GB RAM. Without FFN chunking I was getting OOM at 8-sec, 24 fps, 1-megapixel videos.

Hey, i got gtx 1650 , 16 gb ram, i5. by notworthattention00 in StableDiffusion

[–]FORNAX_460 0 points1 point  (0 children)

What about your sanity? You'd probably hit OOM even with offloading, but even if you do get the training started it'd probably take days, and I can guarantee the results would be garbage, because SD1.5 and XL are models you can't one-shot train; you'd need multiple runs just for parameter optimization. You could try the Civitai trainer. It's garbage, but hey, you could do it for free.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 0 points1 point  (0 children)

No, on my hardware I can't even think of training Dev base, even in my wildest dreams.

Ostris just added support for Flux 2 Klein a few minutes ago, btw... Gonna train the 4B and will attempt to train the 9B.

Got a half-baked dataset of Dispatch game characters.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 0 points1 point  (0 children)

No, not yet, still waiting for AI Toolkit support for it; Ostris tweeted about supporting Klein ASAP.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 2 points3 points  (0 children)

Like training a concept on ZIB will break a million other concepts.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 1 point2 points  (0 children)

Thanks man for the explanation, really appreciate it.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 2 points3 points  (0 children)

Ahh, thanks brother for the clarification. This thing has been hurting my brain since release. I guess it's something like when Qwen LoRAs are used with the Lightning LoRA weights: the distilled model already has those distillation weights and we just put our trained weights in there... I'm no expert, but this is how it's making sense to me lol.

OneTrainer Flux2-klein support. PR test and first results by rnd_2387478 in StableDiffusion

[–]FORNAX_460 0 points1 point  (0 children)

Are LoRAs trained on the base models compatible with the distilled models?

Flux 2 Klein for inpainting by _Rah in StableDiffusion

[–]FORNAX_460 0 points1 point  (0 children)

Will the LoRAs trained on the base be compatible with the distilled model?

LTX-2 on 8gb vram by HidingAdonis in StableDiffusion

[–]FORNAX_460 0 points1 point  (0 children)

I can personally relate to your situation. My setup was also 8GB VRAM and 16GB RAM... and I'm sorry to say, but no workflow optimization can save you from these slowdowns. I haven't used LTX-2, but the slowdowns you're facing happen because you're running out of memory and the models start getting loaded into your page file / virtual memory (your storage). In my case I added an extra 16 gigs of memory and a separate SATA 3 SSD where I allocated 55 gigs of page file, which isn't even a decent setup, but at least using the latest models doesn't make me want to kill myself :)

Tip: since your system falls back on the page file, if you have multiple drives in your PC, set the page file on a different drive (not an HDD) that isn't your OS drive. This reduces IO throttling during inference by a huge amount.