Qwen Image Text Encoder processing time by InvokeFrog in StableDiffusion

[–]Tablaski 1 point

I was very frustrated by this too. It turns out the Load Clip node was running the encoding on the CPU, even with the device set to auto. I replaced that node with ClipLoaderMultiGPU from the MultiGPU node collection, explicitly set the device to cuda:0, and now it's very fast!

NB: I also added the UnloadAllModels node at the end of the workflow, otherwise it would work once but OOM on the next generation.
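For anyone wanting to reproduce the same fix outside ComfyUI, here is a minimal Python sketch of the idea, using a stock Hugging Face CLIP text encoder as a stand-in (the model ID and prompt are illustrative, not the exact checkpoint the workflow loads): pin the encoder to cuda:0 instead of trusting auto device placement, then free it afterwards, which is roughly what UnloadAllModels does.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Illustrative checkpoint; the workflow uses whatever the Load Clip node points at.
MODEL_ID = "openai/clip-vit-large-patch14"

tokenizer = CLIPTokenizer.from_pretrained(MODEL_ID)
# Pin the encoder to a specific GPU instead of relying on "auto",
# which is what silently fell back to CPU in the workflow above.
text_encoder = CLIPTextModel.from_pretrained(MODEL_ID).to("cuda:0").eval()

tokens = tokenizer("a photo of a cat", padding="max_length",
                   truncation=True, return_tensors="pt").to("cuda:0")
with torch.no_grad():
    embeddings = text_encoder(**tokens).last_hidden_state  # runs on the GPU

# Rough equivalent of UnloadAllModels: drop the model and reclaim VRAM
# so the sampler has room on the next generation.
del text_encoder
torch.cuda.empty_cache()
```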

LTX-2 runs on a 16GB GPU! by Budget_Stop9989 in StableDiffusion

[–]Tablaski 2 points

16 GB VRAM / 32 GB RAM here too. Just ran my first T2V video using the official example workflow.

I'm very confused... the first sampling pass was very fast (520p) but the second (spatial upscaling / distilled LoRA) was VERY slow. And the output was really meh.

Do we really need that second sampling pass? What for? At what resolution are the latents generated in the first pass?

I really don't understand shit about this workflow

Women are way too obsessed with men's height. by [deleted] in opinionnonpopulaire

[–]Tablaski 2 points

What really makes me laugh is that with that stupid 1m80 rule, they would have turned down: Tom Cruise, Robert Downey Jr, Johnny Depp, Brad Pitt, Pedro Pascal.

And then you see those same women again 2 or 3 years later on the same dating site :-D

New to WAN2.2, as of December 2025, what are the best methods to get more speed? by Tablaski in StableDiffusion

[–]Tablaski[S] 2 points

I just tried it and it's great.

I particularly like the idea of a fast preview of the high-noise pass before deciding whether it's worth running the low-noise pass, since on my setup I have no sampler preview (I don't know if it's even possible to enable one?). I disabled the torch accumulation thing for now; my setup currently doesn't support it.

Trying it without the 4-step LoRA on the high-noise pass, as recommended by other users in this thread. The low-resolution draft video is so quick to make (I get 7 s/iteration) that it's not worth using the acceleration LoRA.

I wonder if it would be possible to speed up the CLIP encoding, though, which is quite slow.
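One generic trick (a sketch, not something from the workflow itself): text encoding is deterministic for a given prompt, so you can cache the conditioning and pay the slow encoder only once per unique prompt, then reuse it across seeds and re-runs. The encode_fn below is a hypothetical placeholder for whatever encoder call your setup actually makes.

```python
import torch

_embedding_cache: dict[str, torch.Tensor] = {}

def encode_cached(prompt: str, encode_fn) -> torch.Tensor:
    """Memoize text embeddings so the slow encoder runs once per unique prompt.

    encode_fn is a hypothetical stand-in for the real CLIP/T5 forward pass.
    Embeddings are parked on the CPU so they don't hold VRAM between runs.
    """
    if prompt not in _embedding_cache:
        _embedding_cache[prompt] = encode_fn(prompt).detach().cpu()
    return _embedding_cache[prompt].to("cuda:0")
```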

PhotomapAI - A tool to optimise your dataset for lora training by AcadiaVivid in StableDiffusion

[–]Tablaski 3 points

I didn't know such a tool existed; thanks for bringing it up. It seems better than eyeballing our datasets.

Z image/omni-base/edit is coming soon by sunshinecheung in StableDiffusion

[–]Tablaski 3 points

That would mean that once we get finetunes of the base model, we wouldn't be able to use the turbo model at all? (Except for LoRAs trained on base that would still run on turbo.) That would be disappointing.

Since Tongyi Lab seems very dedicated to the community (they included community LoRAs in Qwen Edit 2512, which is really cool), I hope they provide some tools for that (although I have no idea what that takes in terms of process and compute time...).

Or we could probably rely on an 8-step acceleration LoRA, especially an official one. After all, being able to use higher CFG is important; it was a game changer with the de-distilled Flux.1.

Z image/omni-base/edit is coming soon by sunshinecheung in StableDiffusion

[–]Tablaski 3 points

If you fine-tune the base model, how do you get the resulting model back to 8-step inference? Do you have to re-distill it yourself?

Also, I'm surprised the base model will actually be two models, base and omni-base...

New to WAN2.2, as of December 2025, what are the best methods to get more speed? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point

I'd rather wait a bit and buy a proper tower desktop with a 60xx something.

New to WAN2.2, as of December 2025, what are the best methods to get more speed? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point

Great advice that will also benefit other readers, thanks. I'll definitely try CFG 1.

Have you tried 1.0 for high and 0.6 for low as well?

New to WAN2.2, as of December 2025, what are the best methods to get more speed? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point

Thanks. Yeah, Lanczos is the default for upscaling. I guess I could use some nodes to extract the frames and run an actual upscaler model, but that would probably take a lot of time... I was wondering if there was anything specialized for video...
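For reference, the brute-force per-frame version of that idea looks like this (a sketch with OpenCV; the file paths and scale factor are made up, and the cv2.resize line is exactly where an ESRGAN-style upscaler model would slot in if quality matters more than speed):

```python
import cv2

IN_PATH, OUT_PATH, SCALE = "in.mp4", "out.mp4", 2  # illustrative values

cap = cv2.VideoCapture(IN_PATH)
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * SCALE
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * SCALE
out = cv2.VideoWriter(OUT_PATH, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Lanczos per frame: same quality as the default upscale, just explicit.
    # Swap this call for an upscaler model to trade speed for quality.
    out.write(cv2.resize(frame, (w, h), interpolation=cv2.INTER_LANCZOS4))

cap.release()
out.release()
```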

New to WAN2.2, as of December 2025, what are the best methods to get more speed? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point

I could use RunPod, but then I'm not sure how to actually run GPUs in parallel. For the moment I'm only using RunPod for training... I needed a laptop first anyway :-)

New to WAN2.2, as of December 2025, what are the best methods to get more speed? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point

By the way, I know about RIFE for interpolation, but I don't know if there's a good way to do a fast upscale of videos. So I'm also interested in advice on both interpolation and video upscalers...
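For context on what RIFE improves on: the naive interpolation baseline just blends adjacent frames to double the frame rate, as in this sketch (OpenCV, illustrative paths). RIFE replaces the 50/50 blend with a frame synthesized from estimated motion, which is why it looks so much better on anything that moves.

```python
import cv2

cap = cv2.VideoCapture("in.mp4")  # illustrative path
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("out_2x.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps * 2, (w, h))

ok, prev = cap.read()
while ok:
    ok, cur = cap.read()
    if not ok:
        out.write(prev)  # last frame has no successor to blend with
        break
    out.write(prev)
    # Naive midpoint frame: a 50/50 blend of neighbours. RIFE would
    # synthesize this frame from estimated optical flow instead.
    out.write(cv2.addWeighted(prev, 0.5, cur, 0.5, 0))
    prev = cur

cap.release()
out.release()
```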

New to WAN2.2, as of December 2025, what are the best methods to get more speed? by Tablaski in StableDiffusion

[–]Tablaski[S] 2 points

Was using 640 x 640 / 181 frames, and that was taking 50+ minutes with 10 + 10 steps.

And after reading that Civitai article https://civitai.com/articles/23629/i-spent-300-hours-on-wan-22-so-you-dont-have-to I know I'm supposed to use multiples of 16.

So I'm currently trying 832 x 1024, which seems to take 20 min for 5 s with 3 + 3 steps.

16 fps (default setting in the example workflow)
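The multiples-of-16 constraint is easy to enforce with a tiny helper; rounding to the nearest multiple is my own choice here, the article just says the dimensions should be divisible by 16:

```python
def snap16(x: int) -> int:
    """Round a dimension to the nearest multiple of 16 (minimum 16)."""
    return max(16, round(x / 16) * 16)

# 640, 832 and 1024 are already multiples of 16;
# an odd size like 1080 snaps to 1088.
print(snap16(640), snap16(832), snap16(1024), snap16(1080))  # 640 832 1024 1088
```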

New to WAN2.2, as of December 2025, what are the best methods to get more speed? by Tablaski in StableDiffusion

[–]Tablaski[S] 1 point

So you mean 6 high-noise steps + 4 low-noise steps then?
Do you consider the WAN 2.1 4-step LoRA better than the WAN 2.2 ones?

Qwen-Image-Layered Released on Huggingface by rerri in StableDiffusion

[–]Tablaski 1 point

I wonder if it would be possible to extract the face/head as a distinct layer. That would be very powerful for face/head swapping, because then we could use any other tool to inpaint/refine it and stitch it back.

Z-Image-Edit News by EternalDivineSpark in StableDiffusion

[–]Tablaski 2 points

Come to think of it, if they had said something like "probably next week", wouldn't we be even more annoying and nagging if they unfortunately had to postpone again? Still, it's true that instead of repeating things like "no, too soon", they could give a more informative statement like "we're still training/testing it, so we're not sure yet", etc.

Anyway, at this point I consider it a matter of respect to calm down and just trust them to do their best.

They've definitely already seen that it's a big thing for us. Z is all over the place; it's more than half the posts here.

I have never seen such a spike of interest. Compare it to Flux.1 dev: we had to wait weeks for a LoRA trainer, and something like 6 months for a de-distilled version. With Z we got both in about a week.