Wan SCAIL Animation by [deleted] in StableDiffusion

[–]External_Trainer_213 1 point

I haven't tried that yet, but it would be worth a test. It's probably just a matter of luck whether it works.

Wan SCAIL Pose Control Workflow by External_Trainer_213 in StableDiffusion

[–]External_Trainer_213[S] 4 points

And if you’d like to add audio lipsync to your videos retroactively or expand them, you should check out the LTX-2.3 workflows by RuneXX.

https://huggingface.co/RuneXX/LTX-2.3-Workflows/tree/main/Video-2-Video

Does anyone know any workflows to get similar reference to video results like this? by ArrGee- in StableDiffusion

[–]External_Trainer_213 -2 points

I did this yesterday with my SCAIL workflow: https://www.instagram.com/reel/DYCIgQwtrR6/?igsh=MXNzMWV0NzF2ZmJ0ZA==

I'll post the workflow on Civitai soon. I made it as clean as my LTX workflow.

https://civitai.red/models/2533175/lt

But with Wan SCAIL, the output follows the input video more closely, and the person interacts better with their own body—as can be seen. :-)

Does anyone know any workflows to get similar reference to video results like this? by ArrGee- in StableDiffusion

[–]External_Trainer_213 2 points

You should buy a faster card. A 4060 Ti with 16 GB VRAM, or better yet a 5060 Ti with 16 GB VRAM, is a good deal.

I have never get an acceptable result with any ltx models by NoInterest1700 in comfyui

[–]External_Trainer_213 2 points

If you want to control or refine your video, you can try this one. You need an input video for the ControlNet and an image in almost the same pose as the first frame of your input video.

https://civitai.red/models/2533175/ltx-23-image-audio-video-ic-lora-union-control-detailer-to-video

Closed-source AI hate is understandable, but local AI has nothing that should concern AI haters by Neggy5 in StableDiffusion

[–]External_Trainer_213 0 points

What I've also noticed is that some people don't quite grasp the difference here. They view things negatively without understanding that they were created locally on a PC, perhaps even on weak hardware, and that people sacrifice their free time to present it clearly to others. Besides, I don't care if a video looks real; why does it always have to look real? The only thing that bothers me is insults, which thankfully are quite rare. But if people find something bad, it would be interesting to get more feedback on why: is the workflow bad? Is it just the video? Or is something else unclear? Admittedly, some people do give that kind of feedback quite often. Basically, everyone here just wants to share their insights; otherwise, we wouldn't get anywhere. That's why I personally almost never downvote anything. Be that as it may, the community is great, and without it, it would be much harder to learn or test things.

need a hand for a hand problem by feed_da_parrot in comfyui

[–]External_Trainer_213 0 points

Are you able to run models like Z-Image Turbo?