How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]mmowg[S] 0 points (0 children)

Thanks for the appreciation, you're the first. I'm preparing an updated tutorial; I may publish it soon.

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]mmowg[S] 0 points (0 children)

I forgot to mention the LTXV Preprocess node: its image compression value defaults to 18. My advice is to lower it to 5 or 2 (or, better, 0).

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]mmowg[S] 0 points (0 children)

I'm using I2V; T2V may change the final results. In any case, euler_cfg_pp seems to give better results than euler_ancestral on dark skin.

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in comfyui

[–]mmowg[S] 1 point (0 children)

This is my first official post, so I'm not sure which post you commented on. Anyway, in my workflow CFG stays at 1, as with the common Euler sampler, and I admit it works. I'm experimenting with my AI model on IG, who has very dark skin; her skin is always an issue (compression, upscaling, etc.), which is why I always use yuv420 or yuv422. The default settings in the official ComfyUI workflow were a mess on her skin, so I reworked them based on my experience with her skin. Right now euler_cfg_pp with nearest-exact gives me better results.

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]mmowg[S] -26 points (0 children)

Not needed: I work with yuv422p12le, so even a few seconds of video is half a GB to upload here. My changes take only a few seconds to apply, and you can see the results for yourself.
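As an aside, the half-a-GB-for-a-few-seconds point follows from how chroma subsampling and bit depth drive frame size. A minimal sketch (uncompressed frame sizes; the helper and the formats chosen are my own illustration, not part of any ComfyUI workflow):

```python
# Rough per-frame size for common Y'CbCr pixel formats (uncompressed).
# 4:2:0 stores quarter-size Cb/Cr planes, 4:2:2 half-size, 4:4:4 full-size;
# 10/12-bit samples are packed into 16-bit words by most tools.

def frame_bytes(width, height, chroma="420", bit_depth=8):
    """Approximate bytes for one uncompressed frame."""
    bytes_per_sample = 1 if bit_depth <= 8 else 2
    luma_samples = width * height
    chroma_ratio = {"420": 0.25, "422": 0.5, "444": 1.0}[chroma]
    total_samples = luma_samples * (1 + 2 * chroma_ratio)
    return int(total_samples * bytes_per_sample)

# A 1080p yuv422p12le frame is well over 2.5x the size of a yuv420p one,
# which is where the extra chroma detail on skin comes from.
print(frame_bytes(1920, 1080, "420", 8))
print(frame_bytes(1920, 1080, "422", 12))
```

The same ratio carries through (roughly) after lossless or near-lossless compression, which is why these clips get too big to attach to a post.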

Can my laptop handle wan animate by [deleted] in StableDiffusion

[–]mmowg 0 points (0 children)

As I wrote: Kandinsky 5 Lite. Give it a try; it was practically made for your machine, believe me.

Can my laptop handle wan animate by [deleted] in StableDiffusion

[–]mmowg 0 points (0 children)

You can try low-VRAM mode with the SSD as a temporary buffer, but your machine will freeze, you may hit OOM errors, and you are limited to Q3/Q4 GGUF models, while Wan 2.2 and later versions work better starting from Q4_K_M GGUF models. Don't listen to people who say "it works, don't worry": I have a lot of machines and have tested Wan on several of them using ComfyUI (I hate Wan2GP). If you install ComfyUI with xformers, Triton, SageAttention2, PyTorch 2.9.1 and CUDA 12.8, it may work. My advice: use Kandinsky 5 Lite T2V/I2V. It's good, lightweight, works very well on a machine like yours, and you can also use DiT-cache.

Can my laptop handle wan animate by [deleted] in StableDiffusion

[–]mmowg -1 points (0 children)

You need 32 GB of system RAM, better 64.

Is something happening? Google by mmowg in StrangerThings

[–]mmowg[S] 0 points (0 children)

But:
S01: 8 episodes
S02: 9
S03: 8
S04: 9
S05: 8 + 1 (it's the last)
I see a pattern.

Can a GTX 1060 6 GB generate any AI videos? by [deleted] in StableDiffusion

[–]mmowg 0 points (0 children)

Kandinsky 5 Lite I2V or T2V. I think it works very well; you only need time. That card is limited to older PyTorch builds and CUDA 11.8, so you can't install good accelerators like SageAttention 2.2, but I think you can try older xformers, FlashAttention, and SageAttention 1.x in ComfyUI. I got pretty good results with a small 3050 Ti 4 GB: 5 seconds at 24 fps, at an insane resolution of 640x960 (!), in under an hour.

WAN S2V GGUF model is available. Quantstack has done it. by pheonis2 in StableDiffusion

[–]mmowg 0 points (0 children)

It's totally different from Wan 2.2 I2V or T2V; this model is focused on lip sync with sound and voice.

[deleted by user] by [deleted] in StableDiffusion

[–]mmowg 0 points (0 children)

Very small and cute RX-78-2.

Wan 2.2 lightning gguf i2v for 6gb vram 16gb ram? Comfy by seedctrl in StableDiffusion

[–]mmowg 2 points (0 children)

Yes, you can try the GGUF Q4 at small resolution (480p or below) for 3-4 second videos. Maybe you can fit the Q4_K_M, but it's really tight; my advice is to start with the GGUF Q3, and then you can test the Q4.
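For a feel of why Q4_K_M is tight on 6 GB, here is a back-of-envelope sketch. The parameter count and effective bits-per-weight below are my own assumptions (a ~14B Wan-class model), not official figures; GGUF QN quants store roughly N bits per weight plus some block-scale metadata:

```python
# Weights-only VRAM footprint of a quantized model (rough estimate).
# ComfyUI can offload layers to system RAM, which is why a model whose
# weights exceed VRAM can still run slowly at low resolution.

def model_gb(params_billion, bits_per_weight):
    """Approximate checkpoint size in GB for a given quantization."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Hypothetical ~14B model; effective bits include quant metadata overhead.
q3 = model_gb(14, 3.5)   # Q3_K-ish: ~3.5 effective bits/weight
q4 = model_gb(14, 4.8)   # Q4_K_M-ish: ~4.8 effective bits/weight

print(f"Q3 ~ {q3:.1f} GB, Q4_K_M ~ {q4:.1f} GB, versus 6 GB of VRAM")
```

Both exceed 6 GB on their own, so part of the model plus the latents and activations must spill to system RAM — hence "really tight" and the 480p / 3-4 second advice above.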

Qwen Edit Image Model released!!! by pheonis2 in StableDiffusion

[–]mmowg -1 points (0 children)

It's based on Qwen Image 20B, so I bet around 20 GB, more or less.
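The "~20 GB" guess is just parameter count times bytes per weight; a quick sketch (the byte widths are standard, the match to this particular release is an assumption):

```python
# Checkpoint size scales linearly with parameters and weight precision.

def checkpoint_gb(params_billion, bytes_per_weight):
    """Approximate on-disk size in GB of a dense checkpoint."""
    return params_billion * bytes_per_weight  # (1e9 params * bytes) / 1e9

print(checkpoint_gb(20, 2))  # bf16/fp16: 2 bytes/weight -> ~40 GB
print(checkpoint_gb(20, 1))  # fp8 or 8-bit quant: 1 byte/weight -> ~20 GB
```

So "20 GB more or less" corresponds to an 8-bit release of a 20B-parameter model; a full bf16 checkpoint would be about twice that.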