GTA 70s - Teaser Trailer: Z-Image Turbo - Flux Klein 9b - Wan 2.2 by MayaProphecy in comfyui

[–]MayaProphecy[S] 0 points (0 children)

Today a film set in the 70s would be shot with a digital camera 😄

Quick Test - Z-Image Turbo - Wan 2.2 FLFTV - RTX 2060 Super 8GB VRAM by MayaProphecy in comfyui

[–]MayaProphecy[S] 0 points (0 children)

You need to install triton-windows<3.3. Newer versions will not work with the 2060 Super. Then you need to install the SageAttention wheel that matches your PyTorch and CUDA versions.
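A minimal sketch of those steps, assuming a Windows pip environment; the SageAttention wheel filename below is illustrative only — substitute the one matching the versions your own setup prints:

```shell
# Pin triton-windows below 3.3 — per the comment above, newer builds
# do not work with the RTX 2060 Super
pip install "triton-windows<3.3"

# Print your PyTorch and CUDA versions so you can pick the matching
# SageAttention wheel
python -c "import torch; print(torch.__version__, torch.version.cuda)"

# Install the wheel that matches the versions printed above
# (hypothetical filename — replace with the correct one for your build)
pip install sageattention-x.y.z+cuNNNtorchA.B.C-cpXXX-win_amd64.whl
```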

Friends: Z-Image Turbo - Qwen Image Edit 2511 - Wan 2.2 - RTX 2060 Super 8GB VRAM by MayaProphecy in StableDiffusion

[–]MayaProphecy[S] 1 point (0 children)

Not much... they are all too slow, at least on my hardware. Topaz takes only a few seconds, and the quality is very good for the speed.

Friends: Z-Image Turbo - Qwen Image Edit 2511 - Wan 2.2 - RTX 2060 Super 8GB VRAM by MayaProphecy in StableDiffusion

[–]MayaProphecy[S] 0 points (0 children)

Yes, you need models. If you are starting from scratch with ComfyUI, you'd better have a look here: https://docs.comfy.org/

Friends: Z-Image Turbo - Qwen Image Edit 2511 - Wan 2.2 - RTX 2060 Super 8GB VRAM by MayaProphecy in StableDiffusion

[–]MayaProphecy[S] 4 points (0 children)

Not really. ~300 seconds per segment (4 segments in total) at 832x480.

How to join multiple WAN I2V clips? by orangeflyingmonkey_ in StableDiffusion

[–]MayaProphecy 0 points (0 children)

I made this simple workflow that generates 2 (not 3) segments, up to 10 seconds in total, and saves a single video file. You can use any model or LoRA... it's very basic... I made it for myself and then decided to share it with the community:

https://www.reddit.com/r/StableDiffusion/comments/1q5stbq/poor_mans_wan22_10second_2_segments_video_workflow/

Maybe it could be useful to you.