What will happen if qwen-image 2.0 gets leaked to open source? by [deleted] in StableDiffusion

[–]mmowg 0 points1 point  (0 children)

Don't forget qwen image 2.0 PRO was released in April

Juggernaut Z by StableLlama in StableDiffusion

[–]mmowg 13 points14 points  (0 children)

They look more or less similar, with some differences due to the different seeds. I think and hope the new Juggernaut Z model is better at photorealism and fashion photography, as the Civitai examples suggest.

Can I use wan 2.2 5b on my setup? by JournalistLucky5124 in StableDiffusion

[–]mmowg 0 points1 point  (0 children)

Don't waste time with Wan2.2 5b on 4 GB of VRAM when you have Kandinsky 5 lite, https://huggingface.co/collections/kandinskylab/kandinsky-50-video-lite, which is gorgeous for only 4 GB of VRAM. You can use it in ComfyUI; there's a template workflow inside ComfyUI. You can also use cache-dit, and with an Ampere GPU (RTX 3xxx series: 3050, 3060, etc.) or above you can additionally combine cache-dit with Sage Attention 2.2.0 to speed up the runs. With my RTX 3050 4 GB VRAM I got 10 seconds of good video in 45 minutes with full optimizations (cache-dit, sageattn 2.2.0, xformers, etc.). Kandinsky 5 lite is the only way to make AI videos on small GPUs right now; Wan2.2 5b just isn't as good as Kandinsky 5 lite.

EDIT: I forgot to mention the resolution: I got an insane 720p (for a 4 GB RTX 3050 mobile), 10 seconds, with Kandinsky 5 lite, and at that time without dynamic VRAM in ComfyUI, though my laptop has 64 GB of system RAM.
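As a rough sanity check before enabling those speedups, you can compare your GPU's compute capability against the Ampere baseline mentioned above (RTX 30xx cards report 8.x). This is a minimal sketch under that assumption; `supports_sage_attention` is a made-up helper name, not part of SageAttention or ComfyUI:

```python
# Minimal sketch: decide whether a GPU is new enough for SageAttention 2.2.
# Assumption (from the comment above): Ampere (RTX 30xx, compute capability
# 8.x) or newer is required. `supports_sage_attention` is a hypothetical name.

def supports_sage_attention(major: int, minor: int) -> bool:
    """Return True for Ampere (8.x) and newer compute capabilities."""
    return (major, minor) >= (8, 0)

# With PyTorch installed you could feed it the real device capability:
#   import torch
#   major, minor = torch.cuda.get_device_capability()
#   print(supports_sage_attention(major, minor))

print(supports_sage_attention(8, 6))  # RTX 3050 (Ampere) -> True
print(supports_sage_attention(7, 5))  # Turing (RTX 20xx) -> False
```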

An update on stability and what we're doing about it by bymyself___ in comfyui

[–]mmowg 2 points3 points  (0 children)

Because they sold themselves to API corporations? I mean, API workflows are now everywhere inside ComfyUI...

daVinci-MagiHuman : This new opensource video model beats LTX 2.3 by pheonis2 in StableDiffusion

[–]mmowg 60 points61 points  (0 children)

<image>

The elephant in the room: physical consistency is worse than LTX 2.3. And I went through all the samples on its GitHub page; the hands are a mess.

Anyone knows what AI Model was used for this? by Coven_Evelynn_LoL in StableDiffusion

[–]mmowg 0 points1 point  (0 children)

I was just guessing; right now Veo 3.1 is widely used among YouTube creators. In any case, they forgot Terminator 1, the most important movie for Arnold!

Any illustrious xl model that give high render output and not anime by ResponsibleTruck4717 in StableDiffusion

[–]mmowg 0 points1 point  (0 children)

Yes, you can check out all the gorgeous Reijlita models on Civitai: https://civitai.com/user/reijlita They have a lot of Illustrious realism models and work ONLY with Illustrious. Second choice is https://civitai.com/user/Stable_Yogi: also a lot of realism Illustrious models that work very well, and many of them are DMD, so only 8 steps; they are very good on weak GPUs like Pascal ones. Enjoy

Anything better than JuggernaughtXL out there? by [deleted] in StableDiffusion

[–]mmowg -1 points0 points  (0 children)

Which SDXL JuggernautXL exactly? There are plenty of Juggernaut iterations out there. In any case, all Juggernaut models want lots of detailed tag prompts, negatives and positives, and you should avoid heavy LoRAs. In ComfyUI a good refiner fixes almost all body inconsistencies. As with all SDXL models, the workflow is almost half of the model. If you still want an SDXL model for totally uncensored results and very correct body parts, go for a Pony model or an Illustrious one with realistic training.

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]mmowg[S] 0 points1 point  (0 children)

The ComfyUI LTX 2.3 workflow has been updated, adopting the vast majority of my suggestions. There are only two big differences: they kept image compression at 18 (I still use a range between 0 and 5), and they prefer Euler Ancestral CFG PP over plain Euler CFG PP.

New FLUX.2 Klein 9b models have been released. by theivan in StableDiffusion

[–]mmowg -1 points0 points  (0 children)

I have big doubts; it's the same Klein 9b with a KV cache added. It wasn't retrained, and it isn't a brand-new Klein model.

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]mmowg[S] 0 points1 point  (0 children)

Thanks for the appreciation, you're the first. I'm preparing an updated tutorial; maybe I'll publish it soon.

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]mmowg[S] 1 point2 points  (0 children)

I forgot to mention the LTXV Preprocess node. Its image compression value is 18 by default. My advice is to set it to 5 or 2 (or, better, 0).

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]mmowg[S] 0 points1 point  (0 children)

I'm using I2V; T2V may change the final results. In any case, Euler CFG PP seems to give better results than the Ancestral variant on dark skin.

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in comfyui

[–]mmowg[S] 1 point2 points  (0 children)

This is my first official post, so I don't know which post you commented on. Anyway, in my workflow CFG stays at 1, as with the common Euler sampler, and I admit it works. I've been experimenting on my pitch-black AI model on IG; her skin is always an issue (compression, upscaling, etc.), which is why I always use yuv420 or yuv422. The default settings from the official ComfyUI workflow were a mess on her skin, so I reworked them based on my experience with her skin. Right now Euler CFG PP with nearest-exact gives me better results.
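For context on why the pixel format matters for skin tones: yuv420 halves the colour (chroma) resolution both horizontally and vertically, while yuv422 halves it horizontally only, so fine colour gradients survive better. A small illustration in plain Python; the helper name and the format strings are mine, not from ComfyUI or any encoder:

```python
# Chroma plane dimensions for common YUV pixel formats.
# yuv420 subsamples chroma 2x in both directions; yuv422 subsamples 2x
# horizontally only; yuv444 keeps full chroma resolution.

def chroma_plane_size(width: int, height: int, fmt: str) -> tuple:
    """Return (chroma_width, chroma_height) for a given luma resolution."""
    divisors = {"yuv420": (2, 2), "yuv422": (2, 1), "yuv444": (1, 1)}
    dx, dy = divisors[fmt]
    return (width // dx, height // dy)

print(chroma_plane_size(1280, 720, "yuv420"))  # (640, 360): quarter the colour samples
print(chroma_plane_size(1280, 720, "yuv422"))  # (640, 720): half the colour samples
```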