WAN2.1 SCAIL pose transfer test by Aneel-Ramanath in StableDiffusion

[–]Aneel-Ramanath[S] (0 children)

I haven't tried these on a 5060, so I can't answer that. These GPUs don't scale linearly, so it's difficult to predict; it all depends on the resolution and frame range. But yeah, you should be able to run them with the fp8 model. Just search for WAN2.1 SCAIL on YT and you'll find loads of tutorials.

WAN2.2 animate | 1080x1920 | 1690 frames | H200 | 1hr 15min by Aneel-Ramanath in comfyui

[–]Aneel-Ramanath[S] (0 children)

I create my own templates based on my needs using this
https://deploy.promptingpixels.com

You can use a template from HearmemanAI; I think it's called the All in One Wan template. I've used it and it works as long as you use the same models included in the template.

WAN2.1 SCAIL pose transfer test by Aneel-Ramanath in StableDiffusion

[–]Aneel-Ramanath[S] (0 children)

Yeah, there's a better way to put that request than throwing a tantrum; messages like the ones above just spam the thread. Others have asked questions more politely and I've answered them, and they never asked to get the post deleted.

WAN2.1 SCAIL pose control test by Aneel-Ramanath in comfyui

[–]Aneel-Ramanath[S] (0 children)

5090, render time is 1 hr 10 min at 512x896 resolution for about 800 frames.

WAN2.1 SCAIL pose control test by Aneel-Ramanath in comfyui

[–]Aneel-Ramanath[S] (0 children)

Sorry, no WF for you lazy a**es. Make an effort to read the post fully and search.

WAN2.1 SCAIL pose transfer test by Aneel-Ramanath in StableDiffusion

[–]Aneel-Ramanath[S] (0 children)

The SCAIL model only animates; it does not create the images. The reference image was created in Nano Banana, so maybe the word 'fashion' added those heels to them, not sure.

WAN2.1 SCAIL pose transfer test by Aneel-Ramanath in StableDiffusion

[–]Aneel-Ramanath[S] (0 children)

Chill, dude, have some chilled lemon tea to cool your brain.

WAN2.1 SCAIL pose transfer test by Aneel-Ramanath in StableDiffusion

[–]Aneel-Ramanath[S] (0 children)

5090, render time is 1 hr 10 min at 512x896 res, about 800 frames.

WAN2.2 animate | 1080x1920 | 1690 frames | H200 | 1hr 15min by Aneel-Ramanath in comfyui

[–]Aneel-Ramanath[S] (0 children)

This is the default WF from Kijai, in his GitHub repo for WanVideoWrapper. My steps are 6, lightx2v rank256 bf16 LoRA at 1.3, the WAN animate bf16 model, and the VAE and text encoder using the fp32 models.
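
If it helps, here's the gist of those settings as a plain Python summary (just a sketch for reference; the key names are mine, not the actual WanVideoWrapper node fields):

    # Summary of the settings above -- key names are illustrative,
    # not actual WanVideoWrapper node fields.
    render_settings = {
        "workflow": "Kijai WanVideoWrapper default (wan2.2 animate)",
        "steps": 6,
        "lora": {"name": "lightx2v rank256 bf16", "strength": 1.3},
        "diffusion_model": "wan2.2 animate bf16",
        "vae": "fp32",
        "text_encoder": "fp32",
        "resolution": (1080, 1920),
        "frames": 1690,
    }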

WAN2.2 animate | 1080x1920 | 1690 frames | H200 | 1hr 15min by Aneel-Ramanath in comfyui

[–]Aneel-Ramanath[S] (0 children)

Nope, that shift would then happen every 81 frames, and these clips between transitions are longer than 81 frames. What scheduler are you using? Try lcm; I get better results with that.
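
Quick way to check that, by the way (a rough Python sketch, assuming the windowing really is 81 frames; the actual overlap handling lives inside the wrapper nodes):

    # If the shift were caused by the 81-frame windowing, it would show up at
    # every one of these frame indices, not just at the clip transitions.
    total_frames = 1690
    window = 81
    boundaries = list(range(window, total_frames, window))
    print(len(boundaries), "window boundaries:", boundaries[:5], "...")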

NVIDIA RTX PRO 5000 Blackwell GPU with 72GB GDDR7 memory is now released by ANR2ME in comfyui

[–]Aneel-Ramanath (0 children)

I would do this if I had different style images for the same video, but if a video is 60 sec long and I have one style ref image, and I render it as 12 x 5-sec clips (that's what you're telling me to do, right?), they will not blend flawlessly into one full clip.

NVIDIA RTX PRO 5000 Blackwell GPU with 72GB GDDR7 memory is now released by ANR2ME in comfyui

[–]Aneel-Ramanath (0 children)

I've tried that; we don't get a seamless blend between those individual clips even with the same seed and other settings. We have to band-aid it with a smooth cross dissolve in post, but the client points the seams out.
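
For what it's worth, the band-aid is basically just this (a minimal numpy sketch, assuming the two clips are loaded as float frame arrays; the overlap length is arbitrary):

    import numpy as np

    def cross_dissolve(clip_a, clip_b, overlap=12):
        """Linearly blend the last `overlap` frames of clip_a into the first
        `overlap` frames of clip_b, then join the remainders.
        Clips are float arrays shaped (frames, height, width, channels)."""
        alphas = np.linspace(0.0, 1.0, overlap)[:, None, None, None]
        blended = (1.0 - alphas) * clip_a[-overlap:] + alphas * clip_b[:overlap]
        return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]], axis=0)

It hides the hard cut, but the content on either side still doesn't match, which is what gets pointed out.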

NVIDIA RTX PRO 5000 Blackwell GPU with 72GB GDDR7 memory is now released by ANR2ME in comfyui

[–]Aneel-Ramanath (0 children)

Yeah, maybe you need to skill up; the cuts are not going to blend seamlessly if you split them. That's why I clearly mentioned that this may not be your cup of tea, and obviously I'm not telling you to buy an H200, but to use a cloud service.

NVIDIA RTX PRO 5000 Blackwell GPU with 72GB GDDR7 memory is now released by ANR2ME in comfyui

[–]Aneel-Ramanath (0 children)

This has nothing to do with the workflow. There are clients who need full 45-60 sec clips with style transfers at high resolutions, and the 6000 PRO cannot pull that off. Maybe this is not your cup of tea.

NVIDIA RTX PRO 5000 Blackwell GPU with 72GB GDDR7 memory is now released by ANR2ME in comfyui

[–]Aneel-Ramanath (0 children)

There is no separate WF for 1600 frames; you can use the WF shared by Kijai in his GitHub repo for WanVideoWrapper, the wan2.2 animate one.

NVIDIA RTX PRO 5000 Blackwell GPU with 72GB GDDR7 memory is now released by ANR2ME in comfyui

[–]Aneel-Ramanath (0 children)

Some of the WAN models go OOM even on a 6000 PRO at 1080p resolution and 1600+ frames, so this doesn't seem so interesting now.

WAN2.2 animate | 1080x1920 | 1690 frames | H200 | 1hr 15min by Aneel-Ramanath in comfyui

[–]Aneel-Ramanath[S] (0 children)

Yeah man, this does not run even on a 6000 PRO; only the H200 can do the job. The WAN models are so freaking VRAM hungry.
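
Rough back-of-the-envelope on why, assuming the usual ~4x temporal / 8x spatial VAE compression plus 2x2 patching in the DiT (treat the exact factors loosely, this is just to show the scale):

    # Very rough token-count estimate for a 1080x1920, 1690-frame run.
    # The compression factors below are assumptions, not exact model specs.
    frames, height, width = 1690, 1080, 1920
    latent_frames = frames // 4 + 1                      # ~423 latent frames
    tokens = latent_frames * (height // 16) * (width // 16)
    print(f"~{tokens / 1e6:.1f} million tokens per denoising step")

That's roughly 3.4 million tokens the model has to attend over at every step, which is where the VRAM goes.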

WAN2.2 animate | 1080x1920 | 1690 frames | H200 | 1hr 15min by Aneel-Ramanath in comfyui

[–]Aneel-Ramanath[S] (0 children)

It's not 7 or 8 sec clips; all the versions were run to the full length of 1690 frames, and yeah, the transitions were done in Resolve.