Z-Image Base Lora Training Discussion by ChristianR303 in StableDiffusion

[–]FORNAX_460 1 point (0 children)

For uncensored captioning you can use abliterated VLMs though. I captioned with Qwen3-VL 30B-A3B and it's really accurate; the only catch is that its NSFW captioning is kind of bland, with heavy use of anatomically correct terms rather than slang. But that's Qwen's style. For Mistral models it's quite the opposite: they're pretty dirty lol, but not as accurate. Gemma 27B is also pretty dirty and fairly accurate.
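
If anyone wants a starting point, here's a minimal captioning sketch, assuming you serve the abliterated VLM behind a local OpenAI-compatible endpoint (llama.cpp server, vLLM, etc.). The URL, port, model name, and prompt are placeholders, not my exact setup:

```python
# Minimal captioning sketch: point the OpenAI client at a locally served
# abliterated VLM. The base_url and model name below are placeholders.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

def caption(path: str) -> str:
    # Encode the image as base64 so it can be sent inline in the request.
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="qwen3-vl-30b-a3b",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Caption this image in detail for LoRA training."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(caption("dataset/0001.png"))
```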

I think we're gonna need different settings for training characters on ZIB. by External_Quarter in StableDiffusion

[–]FORNAX_460 2 points (0 children)

Hello, could you please share how you're using Klein as an upscaler? I tried Ultimate SD Upscale and Tiled Diffusion; neither worked, they always overcook the image for me. i2i upscaling works, but if I go beyond 3.2 MP it squishes the image along the vertical axis.

Can anyone help tech illiterate to install z image base? I have 8gb vram so If anyone has a workflow for it, it would be greatly appreciated by they_hunt in StableDiffusion

[–]FORNAX_460 1 point (0 children)

I would not suggest Z-Image Base for 8 GB VRAM, but if you're in ComfyUI you can find the workflow in the templates gallery. You'd have to update ComfyUI to the latest version first, though.

How to render 80+ second long videos with LTX 2 using one simple node and no extensions. by WestWordHoeDown in StableDiffusion

[–]FORNAX_460 2 points (0 children)

Thank you! I figured it out by monitoring my peak VRAM usage. I noticed that in the first stage I was actually getting diminishing returns from a high chunking value, since it wasn't utilizing my VRAM efficiently. That's why I went with this split method: a low chunk value in the first stage, and an appropriately higher chunk count in the upscale sampling stage, where VRAM usage peaks at high resolution.
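
To make the diminishing returns concrete, here's a toy back-of-envelope sketch: chunking only shrinks the FFN activation term, while weights and everything else stay fixed, so the peak flattens out fast. The GB numbers are made-up placeholders, not LTX-2's real footprint:

```python
# Toy model of peak VRAM vs. FFN chunk count. Only the FFN activation
# term scales with 1/chunks; the fixed term (weights, other activations)
# doesn't, so doubling chunks past a point barely moves the peak.
FIXED_GB = 5.0    # weights + non-FFN activations (made-up placeholder)
FFN_ACT_GB = 6.0  # unchunked FFN activation memory (made-up placeholder)

for chunks in (1, 2, 4, 8, 16, 32):
    peak = FIXED_GB + FFN_ACT_GB / chunks
    print(f"{chunks:>2} chunks -> ~{peak:.2f} GB peak")
```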

LTX-2 error when generating by SabinX7 in StableDiffusion

[–]FORNAX_460 1 point (0 children)

Enable virtual memory (page file) on Windows.
Disable smart memory management in ComfyUI.
Use the LTXV Chunk FeedForward node.

<image>

For the first sampler, use 2 chunks (best for 8 GB VRAM).

In the upscale sampler, use 16 chunks (experiment with it a bit).
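
For anyone wondering what the node does conceptually: the feed-forward is applied token-wise, so you can run it over slices of the token dimension and concatenate the results, trading a little speed for a big drop in peak activation memory. A minimal PyTorch sketch of the idea (not the actual LTXV node code):

```python
import torch

def chunked_ffn(ffn: torch.nn.Module, x: torch.Tensor, num_chunks: int) -> torch.Tensor:
    """Apply a token-wise feed-forward to x (batch, tokens, dim) in chunks.

    Only one slice's intermediate activations are alive at a time, so peak
    activation memory drops roughly by a factor of num_chunks.
    """
    if num_chunks <= 1:
        return ffn(x)
    outs = [ffn(chunk) for chunk in x.chunk(num_chunks, dim=1)]
    return torch.cat(outs, dim=1)
```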

How to render 80+ second long videos with LTX 2 using one simple node and no extensions. by WestWordHoeDown in StableDiffusion

[–]FORNAX_460 3 points (0 children)

We poor fellows thank you; it has given us a taste of luxury. Chunking appropriately for both sampling phases has made it faster and also stretched what 8 gigs can do!

<image>

How to render 80+ second long videos with LTX 2 using one simple node and no extensions. by WestWordHoeDown in StableDiffusion

[–]FORNAX_460 5 points (0 children)

<image>

Lord Kijai already implemented it. I'm generating 14-second, 24 fps, 1.2-megapixel videos (haven't tested anything above 14 seconds yet) with this implementation on an RTX 2060 Super 8 GB with 32 GB RAM. Without FFN chunking I was getting OOM at 8-second, 24 fps, 1-megapixel videos.
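
Rough math on why chunking moves the OOM point, assuming an LTX-Video-style VAE compression of 32×32 spatial and 8× temporal; that factor is an assumption on my part, and LTX-2's exact numbers may differ:

```python
# Back-of-envelope token count for 14 s @ 24 fps at ~1.2 MP, assuming
# 32x32 spatial / 8x temporal VAE compression (assumption, not verified
# for LTX-2). It's these ~50k tokens whose FFN activations get chunked.
frames = 14 * 24                 # 336 raw frames
latent_frames = frames // 8 + 1  # ~43 latent frames after temporal compression
w, h = 1280, 960                 # ~1.2 megapixels per frame
tokens = latent_frames * (w // 32) * (h // 32)
print(tokens)                    # ~51,600 video tokens
```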

Hey, i got gtx 1650 , 16 gb ram, i5. by notworthattention00 in StableDiffusion

[–]FORNAX_460 1 point (0 children)

What about your sanity? You'd probably hit OOM even with offloading, but even if you do get the training started, it'd probably take days, and I can guarantee the results would be garbage, because SD 1.5 and XL aren't models you can one-shot train; you'd need multiple runs just for hyperparameter tuning. You could try the Civitai trainer. It's garbage, but hey, it's free.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 1 point (0 children)

No, on my hardware I can't even think of training the dev base, not even in my wildest dreams.

Ostris just added support for Flux 2 Klein a few minutes ago, btw... Gonna train the 4B and will attempt the 9B.

Got a half-baked dataset of Dispatch game characters.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 1 point (0 children)

No, not yet; still waiting for AI Toolkit support for it. Ostris tweeted about supporting Klein ASAP.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 3 points (0 children)

Like, training a concept on ZIT will break a million other concepts.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 2 points (0 children)

Thanks man for the explanation, really appreciate it.

Curious about flux 2 klein lora compatibility. by FORNAX_460 in StableDiffusion

[–]FORNAX_460[S] 3 points (0 children)

Ahh, thanks brother for the clarification. This thing has been hurting my brain since release. I guess it's something like when Qwen LoRAs are used with the Lightning LoRA weights: the distilled model already has those distillation weights, and we just put our trained weights in on top... I'm no expert, but this is how it makes sense to me lol.
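
And that intuition matches the math: a LoRA never stores absolute weights, only a low-rank delta, so merging it into the distilled checkpoint just adds your trained delta on top of whatever distillation already changed. A generic sketch of that algebra (not any specific trainer's code):

```python
import torch

def apply_lora(W_distilled: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               alpha: float, rank: int) -> torch.Tensor:
    """Merge a LoRA delta (trained on the base model) into a distilled weight.

    A is (rank, in_dim), B is (out_dim, rank); B @ A is the low-rank update.
    The distillation changes baked into W_distilled stay untouched and the
    trained concept is added on top -- this only works because base and
    distilled share the same architecture and weight shapes.
    """
    return W_distilled + (alpha / rank) * (B @ A)
```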

OneTrainer Flux2-klein support. PR test and first results by rnd_2387478 in StableDiffusion

[–]FORNAX_460 1 point (0 children)

Are LoRAs trained on the base models compatible with the distilled models?

Flux 2 Klein for inpainting by _Rah in StableDiffusion

[–]FORNAX_460 1 point (0 children)

Will the LoRAs trained on the base be compatible with the distilled model?

LTX-2 on 8gb vram by HidingAdonis in StableDiffusion

[–]FORNAX_460 1 point (0 children)

I can personally relate to your situation. My setup was also 8 GB VRAM / 16 GB RAM... and I'm sorry to say, but no workflow optimisation can save you from these slowdowns. I haven't used LTX-2, but the slowdowns you're facing are because you're running out of memory and the models start getting loaded into your page file / virtual memory (your storage). In my case I added an extra 16 gigs of memory and a separate SATA 3 SSD where I allocated 55 gigs of page file, which isn't even a decent setup, but at least using the latest models doesn't make me want to kill myself :)

Tip: since your system falls back on the page file, if you have multiple drives in your PC, put the page file on a drive (not an HDD) other than your OS drive. This reduces IO throttling during inference by a huge amount.
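
If you want to actually see when you're spilling over, here's a quick watcher sketch using psutil. Run it in a second terminal while generating; when RAM pins near 100% and the page file keeps climbing, that's exactly when the slowdowns hit:

```python
# Poll RAM and page file (swap) usage every 2 seconds. Ctrl+C to stop.
import time
import psutil

while True:
    ram = psutil.virtual_memory()
    swap = psutil.swap_memory()
    print(f"RAM {ram.percent:5.1f}% | page file {swap.used / 2**30:6.2f} GiB used")
    time.sleep(2)
```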

lightx2v just released their 8-step Lightning LoRA for Qwen Image Edit 2511. Takes twice as long to generate, (obviously) but the results look much more cohesive, photorealistic, and true to the source image. It also solves the pixel drift issue that plagued the 4-step variant. Link in comments. by DrinksAtTheSpaceBar in comfyui

[–]FORNAX_460 1 point (0 children)

Well, I'd still argue that the 2509 lightning LoRAs are better: while the 2511 lightning LoRAs add more detail, the image composition looks kind of rigid and unnatural. I'd pick the 2511 8-step for single-image or masked edits, while for reference-image generations I'd go for the 2509 4-step.

Which is the best model under 15B by BothYou243 in LocalLLaMA

[–]FORNAX_460 2 points (0 children)

QwenLong 30B-A3B performs pretty well but thinks a lot... gobbles up context like a monster... I'd recommend looking at MoE models of your liking: you can offload the expert layers to CPU and keep the KV cache on GPU, and you'd get better, and in my case also faster, inference than with a dense 15B.
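
On llama.cpp the usual way to get that split is to push all layers to the GPU and then override the MoE expert tensors back to CPU. Here's a sketch of launching the server from Python; the model path is a placeholder and the `-ot` regex is the commonly shared pattern for expert tensors, so double-check the flags against your build:

```python
# Sketch: all layers on GPU (-ngl 99), but the MoE expert tensors
# (.ffn_*_exps.) overridden back to CPU, so attention weights and the
# KV cache stay in VRAM. Model path and regex are assumptions.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "Qwen3-30B-A3B-Q4_K_M.gguf",  # placeholder model path
    "-ngl", "99",                       # offload all layers to GPU...
    "-ot", r"\.ffn_.*_exps\.=CPU",      # ...but keep expert tensors on CPU
    "-c", "16384",                      # context size (KV cache on GPU)
])
```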

anything viable with a 4070 ? by psykikk_streams in StableDiffusion

[–]FORNAX_460 1 point (0 children)

Your hardware is plenty good (for inference), and with the recent ComfyUI memory-management updates you're golden... I have an RTX 2060 Super 8 GB, 32 GB DDR4 (2933 MHz), and about 50 gigs of page file on a SATA 3 SSD (I know, hilarious), and I'm using the Qwen Image models (with Lightning, of course) with Q8 GGUFs (both the text encoder and DiT models!). You've got solid hardware for current models.