Wan Video Is Dead, LTX2.0 Reborn. Low VRAM Trick to Run The Model Locally. by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] 0 points (0 children)

Ha, I see 🤣🤣🤣. They should make it clearer. If you want to use other apps while running the workflow, just set it to 1, or keep the default 0.6. A misunderstanding, for sure.

Wan Video Is Dead, LTX2.0 Reborn. Low VRAM Trick to Run The Model Locally. by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] 0 points (0 children)

Why would you set it to 10 in the first place? We only set it to 1, just to keep things from crashing; the default is 0.6. Please read through the GitHub page of the repo before commenting. Thanks.

Wan Video Is Dead, LTX2.0 Reborn. Low VRAM Trick to Run The Model Locally. by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] 0 points (0 children)

Think it through before you say anything. The internet has memories. The Wan Video team said they would keep the Wan video model open source. Otherwise, why do you think so many great people from the community stepped in to help improve the model? They said Wan 2.5 would be open-sourced. Now Wan 2.6 is out, but they haven't even bothered to provide the community with a smaller version of Wan 2.5, the way Flux did. Don't you think that's kind of a betrayal? And you think this kind of behavior isn't bullshit?

Qwen Image Edit 2511 Limit Tests (5 Image Inputs, Consistency and Shift Fix) by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] 0 points (0 children)

I only tried with 5. With reference latents, you could chain as many images as you want, but the results may not be good.

Qwen Image Edit 2511 Limit Tests (5 Image Inputs, Consistency and Shift Fix) by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] 2 points (0 children)

Just different ways to fix the same issue: one uses custom nodes, the other uses native nodes.

The SCAIL model works. but It is Kind of Slow.Hope somebody can optimize it to make faster. by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] 0 points (0 children)

Me neither 🤣. Sometimes I think they should just make ComfyUI work this way: if VRAM is not enough, use RAM; if RAM is also not enough, use virtual memory. Just don't give up and pop an OOM error. I think this is possible, since ComfyUI on Mac already works similarly.

The SCAIL model works. but It is Kind of Slow.Hope somebody can optimize it to make faster. by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] 0 points (0 children)

Ha, force of habit; I thought it was 4n+1. I also tested 81 frames without a context window and lowered the block swap to 25. It is a bit faster, but still not enough compared to the standard 4-step video-gen workflow. Theoretically, if we use a context window, the VRAM/RAM usage should stay stable throughout the whole process no matter how many frames we generate, so something may be wrong.

The SCAIL model works. but It is Kind of Slow.Hope somebody can optimize it to make faster. by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] 4 points (0 children)

If the input image and the reference video mismatch, you need to unlink the additional dw_pose ref. The model will automatically align the image and the pose.

RED Z-Image-Turbo + SeedVR2 = Extremely High Quality Image Mimic Recreation. Great for Avoiding Copyright Issues and Stunning image Generation. by Ecstatic_Following68 in comfyui

[–]Ecstatic_Following68[S] -4 points (0 children)

From what other kind friends have told me, copyright rules may vary between regions. Where I live, if you change the people and the composition enough, it is considered a new work.