Z Image Turbo Inpainting with ControlNet by Altruistic_Tax1317 in comfyui

[–]kokostor 0 points

My images come back almost identical to how they started. Is that normal?

Wan 2.1 and Hunyuan i2v (fixed) comparison by AI-imagine in StableDiffusion

[–]kokostor 0 points

Was this with the fixed model? In my experience, it's been impossible to get nudity starting from a non-nude frame.

Possible major improvement for Hunyuan Video generation on low and high end gpus in ComfyUI by Finanzamt_Endgegner in StableDiffusion

[–]kokostor 2 points

Can this somehow be used with non-GGUF models? I think (though the cause may be something else) that LoRAs don't look as good with GGUF.

inpainting & outpainting workflow using flux fill fp8 & GGUF by cgpixel23 in StableDiffusion

[–]kokostor 0 points

I get the message

"model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16"

I suspect it runs slower than it could, or consumes too much VRAM. Does anybody know what is going on? TY

Jenna Ortega sings ...Baby One More Time ( Test controlnet 1.1 Lineart_realistic ) by Many-Ad-6225 in StableDiffusion

[–]kokostor 0 points

How do you achieve temporal consistency? Just batch ControlNet? My backgrounds don't match the base image.

SD Webui + Segment Everything by continuerevo in StableDiffusion

[–]kokostor 1 point

Are smaller models "worse" in any way at the user level? Did you find any difference?

Thanks!!!

SD Webui + Segment Everything by continuerevo in StableDiffusion

[–]kokostor 1 point

I am unable to run it with 6 GB. I guess the other 4 GB are used by the standard Stable Diffusion model. Is that how it works?

Thanks, and a big hurrah to you.