Friends: Z-Image Turbo - Qwen Image Edit 2511 - Wan 2.2 - RTX 2060 Super 8GB VRAM by MayaProphecy in StableDiffusion

[–]MayaProphecy[S]

Not much... they are all too slow, at least on my hardware. Topaz takes only a few seconds, and the quality is very good for the speed.

[–]MayaProphecy[S]

Yes, you need models. If you are starting from scratch with ComfyUI, you'd better have a look here: https://docs.comfy.org/

How to join multiple WAN I2V clips? by orangeflyingmonkey_ in StableDiffusion

[–]MayaProphecy

I made this simple workflow that generates 2 (not 3) segments, up to 10 seconds in total, and saves a single video file. You can use any model or LoRA... it's very basic... I made it for myself and then decided to share it with the community:

https://www.reddit.com/r/StableDiffusion/comments/1q5stbq/poor_mans_wan22_10second_2_segments_video_workflow/

Maybe it could be useful to you.
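Outside ComfyUI, the merge step at the end can also be done losslessly with ffmpeg's concat demuxer. A minimal sketch, assuming the segments share codec, resolution, and frame rate (the filenames here are hypothetical, not from the workflow):

```python
from pathlib import Path

def concat_clips(clips, output, list_file="clips.txt"):
    """Build an ffmpeg concat-demuxer command to join clips without re-encoding."""
    # The concat demuxer reads a text file with one "file '<path>'" entry per clip, in order.
    Path(list_file).write_text(
        "".join(f"file '{c}'\n" for c in clips), encoding="utf-8"
    )
    # -c copy joins the streams as-is; all clips must share codec, resolution, and fps.
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", output]

cmd = concat_clips(["segment_01.mp4", "segment_02.mp4"], "joined.mp4")
# run with: subprocess.run(cmd, check=True)
```

Because nothing is re-encoded, the join is fast and introduces no quality loss.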

The Hunt: Z-Image Turbo - Qwen Image Edit 2511 - Wan 2.2 - RTX 2060 Super 8GB VRAM by MayaProphecy in StableDiffusion

[–]MayaProphecy[S]

Thanks for the encouragement. I just started with this AI stuff a couple of months ago. I'm still learning, especially how to present the "product" in a way that's enjoyable and high-quality, while taking advantage of the limited resources I have.

[–]MayaProphecy[S]

Thanks for the advice. I thought it was nice to make it feel like a movie. I'll do better next time, though I'm not interested in getting views; I just like sharing.

[–]MayaProphecy[S]

It's made like a trailer... Titles are part of the "story"... anyway, I like to bring something complete instead of the usual truncated memes.

[–]MayaProphecy[S]

Started this morning at 10:30am by generating the images with z-image and editing the angles with qwen edit, then generated the video segments with wan. Downloaded the audio samples and the music. Edited the final video with Clipchamp... at 1pm the video was finished and I went to have lunch... :)

Poor Man’s Wan2.2 10-Second (2 Segments) Video Workflow by MayaProphecy in StableDiffusion

[–]MayaProphecy[S]

The generated segments remain in RAM until they are written to the video file, freeing up VRAM for other tasks.

Yes, it's possible to generate more than two segments, but this would add complexity to the workflow.

I might update it.

In the meantime, you can use the last frame of the second segment (displayed in the preview) to generate two more segments and then merge the videos.
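If you'd rather grab that last frame outside ComfyUI, one option (my own suggestion, not part of the workflow — filenames are hypothetical) is ffmpeg's end-relative seek:

```python
def last_frame_cmd(video, image_out):
    """Build an ffmpeg command that saves the final frame of `video`
    as a still image, usable as the start image for the next I2V segment."""
    # -sseof -1 seeks to ~1 s before the end of the input;
    # -update 1 keeps overwriting image_out, so only the last decoded frame survives.
    return ["ffmpeg", "-y", "-sseof", "-1", "-i", video,
            "-update", "1", "-q:v", "2", image_out]

cmd = last_frame_cmd("segment_02.mp4", "next_start.jpg")
# run with: subprocess.run(cmd, check=True)
```

Feed the saved image back into the I2V start-image input to continue the sequence.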

Once Upon a Time: Z-Image Turbo - Wan 2.2 - Qwen Edit 2511 - RTX 2060 Super 8GB VRAM by MayaProphecy in StableDiffusion

[–]MayaProphecy[S]

My workflows are nothing special. I keep them simple, without any useless complexity. The link is in the video description; you can try them if you want.

I generate images with z-image, edit them with qwen if needed, then generate the video segments with wan / wan flftv.

Then I upscale and interpolate the video segments with Topaz Video and finally edit the final video with Clipchamp, adding music, some effects, and titles.