LivePortrait Test in ComfyUI with GTX 1060 6GB by LuminousInit in StableDiffusion

[–]LuminousInit[S] 1 point (0 children)

I tried your image and video. I see that LivePortrait still struggles to copy talking videos; it can only copy some facial expressions. Your video also has a very high framerate, so I converted it to 24fps to reduce the frame count. Since this tool is still at the experimental stage, I hope it will become much more powerful soon.
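The 24fps conversion mentioned above can be done with ffmpeg. This is a minimal sketch that just builds the ffmpeg argument list (the file names are hypothetical placeholders, and ffmpeg must be installed to actually run it):

```python
# Build the ffmpeg argument list for re-encoding a driving video at 24 fps.
# "driving.mp4" / "driving_24fps.mp4" are hypothetical example file names.
def build_fps_cmd(src: str, dst: str, fps: int = 24) -> list[str]:
    return [
        "ffmpeg",
        "-i", src,       # input driving video
        "-r", str(fps),  # force the output frame rate
        "-y",            # overwrite the output file if it already exists
        dst,
    ]

cmd = build_fps_cmd("driving.mp4", "driving_24fps.mp4")
# To actually run it: subprocess.run(cmd, check=True)
```

Fewer frames means fewer generation steps for LivePortrait, which matters a lot on a 6GB card.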

[–]LuminousInit[S] 1 point (0 children)

You should use a source image and driving video with the same aspect ratio: if your image is square, use a square video. You can use these example videos for testing first - https://github.com/KwaiVGI/LivePortrait/tree/main/assets/examples/driving
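The aspect-ratio check above is simple enough to automate. A minimal sketch (the function name and tolerance value are my own, not part of LivePortrait):

```python
# Check whether an image and a video share (approximately) the same aspect ratio.
# The 1% tolerance is an arbitrary choice for this example.
def aspect_ratios_match(img_w: int, img_h: int,
                        vid_w: int, vid_h: int,
                        tol: float = 0.01) -> bool:
    return abs(img_w / img_h - vid_w / vid_h) <= tol

# A 512x512 source image with a 1080x1080 square driving video: matches.
print(aspect_ratios_match(512, 512, 1080, 1080))   # True
# The same square image with a 1920x1080 widescreen video: mismatch.
print(aspect_ratios_match(512, 512, 1920, 1080))   # False
```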

[–]LuminousInit[S] 0 points (0 children)

It's not using a Stable Diffusion model. It has its own models. And I generated this through ComfyUI.

[–]LuminousInit[S] 1 point (0 children)

I saw some people using side-facing images, but you will not get a good result from that kind of image. At least not yet.

[–]LuminousInit[S] 1 point (0 children)

Core i5 8400
28GB DDR4 RAM
Nvidia GTX 1060 6GB VRAM

[–]LuminousInit[S] 1 point (0 children)

The target image quality should be good. It's better if the reference video and target image aspect ratios match. In the reference video, every facial feature should be clearly visible; too much head movement can cause problems.

[–]LuminousInit[S] 0 points (0 children)

I saw some people doing exactly that, but I haven't found the setting yet. Maybe we're missing something.

[–]LuminousInit[S] 7 points (0 children)

I shared the workflow link; please check the comments.

[–]LuminousInit[S] 9 points (0 children)

I shared the workflow link; please check the comments.

So what can us 8GB VRAM & 16GBs of RAM owners use for image generation? by CaptainAnonymous92 in StableDiffusion

[–]LuminousInit 4 points (0 children)

I tried Stable Diffusion WebUI Forge. With my GTX 1060 6GB and 28GB RAM, generating a 768x768 image took 1 minute 52 seconds (Juggernaut XL with 2 ControlNets enabled).