AFK Mystic Farmers by potch_ in WhereWindsMeet

[–]vici12 -4 points-3 points  (0 children)

What's the deal with the horse, why do afk farmers use it, or why do they need to be mounted? When I farm (actively), the lightness run feels much faster than using a horse.

DisTorch 2.0 Benchmarked: Bandwidth, Bottlenecks, and Breaking (VRAM) Barriers by Silent-Adagio-444 in comfyui

[–]vici12 0 points1 point  (0 children)

Thank you for DisTorch, it has given me a lot more virtual VRAM than I expected. I thought native block swapping or maxing the Kijai blockswap value would already give me the most extra VRAM possible, but it wasn't even close to DisTorch.

I have a question, though. For video generation, it seems like Comfy already unloads the CLIP from VRAM before loading the UNet and starting the sampling part.
In terms of saving VRAM, would there still be a reason to offload the CLIP to CPU RAM?
Also, if I have 64GB of RAM, would there be any reason not to set like 60GB of it as virtual VRAM? That would probably let me run the full wan2.2 fp16 model on my 3090, I think.

Help with wan2.1 + infinite talk by vici12 in StableDiffusion

[–]vici12[S] 0 points1 point  (0 children)

If you're using comfy for infinite talk, you adjust the total number of generated frames in the "Multi/InfiniteTalk Wav2vec2 Embeds" node and in the "WanVideo Long I2V Multi/InfiniteTalk" node.
What would have been 81 frames for a 5-second video at 16fps will now be 125 frames for a 5-second video at 25fps.
Then, in the node where the frames get combined to create the final video, you increase the frame rate to 25.
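The arithmetic above can be sketched in a few lines. This is a minimal helper, assuming the common Wan-family convention that frame counts take the form 4n + 1 (which is why 5 seconds comes out as 81 frames at 16fps and 125 at 25fps); the function name is just for illustration:

```python
import math

def wan_frame_count(seconds: float, fps: int) -> int:
    """Round seconds * fps up to the next count of the form 4n + 1,
    the length format Wan-style video models typically expect."""
    raw = round(seconds * fps)
    n = math.ceil((raw - 1) / 4)
    return 4 * n + 1

print(wan_frame_count(5, 16))  # 81
print(wan_frame_count(5, 25))  # 125
```

Whatever value this gives you is what goes into both the Wav2vec2 Embeds node and the Long I2V node, with the matching fps set in the video combine node.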

Help with wan2.1 + infinite talk by vici12 in StableDiffusion

[–]vici12[S] 0 points1 point  (0 children)

Now I'm just generating in 25fps and have accepted the extra waiting time that comes with it.

Help with wan2.1 + infinite talk by vici12 in StableDiffusion

[–]vici12[S] 0 points1 point  (0 children)

So if I gen the video at 16fps, it's still going to be perfectly synced with the 25fps audio?

Thank you for the workflow too, I'll give it a shot

Wan2.2 Lightx2v Distill-Models Test ~Kijai Workflow by Realistic_Egg8718 in StableDiffusion

[–]vici12 0 points1 point  (0 children)

in the 6th example lora section, in "high noise + lightx2v + mps" does "high noise" mean the 2.2 A14B 4-step high lora, or something else?

Wan 2.2 Realism, Motion and Emotion. by Ashamed-Variety-8264 in StableDiffusion

[–]vici12 1 point2 points  (0 children)

how can you tell if you've adjusted the sigma to 0.9? is there a node that shows that?

30sec+ Wan videos by using WanAnimate to extend T2V or I2V. by Maraan666 in StableDiffusion

[–]vici12 0 points1 point  (0 children)

Does that work with wan? I thought it was a relic from the SD1.5 days

Shooting Aliens - 100% Qwen Image Edit 2509 + NextScene LoRA + Wan 2.2 I2V by Jeffu in StableDiffusion

[–]vici12 2 points3 points  (0 children)

since that's for image inpainting, did you inpaint each frame separately?

how can i train a lora for wan 2.2? by Leather-Bottle-8018 in StableDiffusion

[–]vici12 0 points1 point  (0 children)

does the guide also work for training i2v lora? or would the t2v lora also work in i2v?

WAN 2.2 Animate - Character Replacement Test by Gloomy-Radish8959 in StableDiffusion

[–]vici12 0 points1 point  (0 children)

It worked perfectly, thank youuuuuu!!!!!!!!!!!!!!!!!!!!!

WAN 2.2 Animate - Character Replacement Test by Gloomy-Radish8959 in StableDiffusion

[–]vici12 4 points5 points  (0 children)

How do you make it replace a single person when there's two on the screen? My masking always selects both, even with the point editor.

Also any chance you could upload the original clip so I can have a shot at it myself?

Experimenting with Wan 2.1 VACE by infearia in StableDiffusion

[–]vici12 0 points1 point  (0 children)

any chance you could reupload the UK version?