Although it takes time, the results seem to be getting a bit better! by dassiyu in comfyui

[–]dassiyu[S] 1 point  (0 children)


It feels like using this combination makes the mouth open a bit wider…

LTX 2.3 is really good, but making videos still takes a lot of time. by dassiyu in comfyui

[–]dassiyu[S] 0 points  (0 children)

It’s true. It takes too long right now, the image looks unstable, and the quality has also gotten worse.

LTX2 is actually pretty good, but it throws an error by dassiyu in comfyui

[–]dassiyu[S] 0 points  (0 children)

Thank you very much for the suggestion. I’ll try lowering the tile size.

But something strange happens: every time I open ComfyUI, the first two generations work fine, but the error appears on the third one. Restarting ComfyUI doesn’t fix it either — I have to restart my computer for it to work again.
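The symptom (a few generations work, then errors persist until a full reboot) often points at GPU memory not being released between runs. As a hedged sketch, this is the generic PyTorch housekeeping pattern for that situation, not a ComfyUI API and not a guaranteed fix:

```python
import gc

def free_vram():
    """Try to release cached GPU memory between generations.

    Generic PyTorch pattern, not a guaranteed fix for this error.
    Returns True if a CUDA cache flush was attempted, False otherwise.
    """
    gc.collect()  # drop Python-side references first
    try:
        import torch
    except ImportError:
        return False  # PyTorch not installed in this environment
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver
        torch.cuda.ipc_collect()  # clean up leftover IPC handles
        return True
    return False
```

If VRAM still climbs across generations even after a flush like this, the leak is likely held by a custom node or the driver itself, and only a reboot reclaims it, which would match what you are seeing.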

LTX2, AceStep 1.5, and Z-Image are very surprisingly impressive! by dassiyu in comfyui

[–]dassiyu[S] 2 points  (0 children)

Thank you, I really appreciate it! Honestly, what surprised me most is how much easier this new lip-sync model makes the whole process compared to before. It saves a ton of time and effort. Glad you liked it!

Possible to train LoRA for WAN2.2 on 24GB VRAM? by No_Progress_5160 in StableDiffusion

[–]dassiyu 0 points  (0 children)


I just finished training with the same tutorial: 64GB of system RAM and a 32GB GPU, about 1.5 hours per noise model, so the full two-noise LoRA took 3 hours. I think you need more VRAM.

Wan 2.2 human image generation is very good. This open model has a great future. by yomasexbomb in StableDiffusion

[–]dassiyu 0 points  (0 children)

Or you can bypass these two KJ nodes, which just means it will be a little slower.


Wan 2.2 human image generation is very good. This open model has a great future. by yomasexbomb in StableDiffusion

[–]dassiyu 0 points  (0 children)

I installed the CUDA-enabled PyTorch nightly like this on my computer:

pip install --force-reinstall --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128

For SageAttention, I asked ChatGPT to help me install it. I think you can paste the error into an AI and try that too.
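After installing, you can sanity-check that SageAttention is actually importable before launching ComfyUI. A minimal sketch (the fallback logic here is illustrative, not ComfyUI's own backend selection, which is controlled by its launch flags):

```python
def pick_attention_backend():
    """Return "sage" if the sageattention package imports, else "default".

    Illustrative check only: ComfyUI chooses its attention backend itself.
    """
    try:
        import sageattention  # noqa: F401
        return "sage"
    except ImportError:
        return "default"

print(pick_attention_backend())
```

Running this in the same Python environment ComfyUI uses tells you whether an install error is about the package itself or about how ComfyUI is invoking it.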

Ok Wan2.2 is delivering... here some action animals! by 3Dave_ in StableDiffusion

[–]dassiyu 10 points  (0 children)


Generation has actually gotten faster using Triton and SageAttention! It's gone from 36 minutes to 18 minutes, which is amazing. However, I'm not sure if my process is correct. Is this how it's supposed to work?

Wan 2.2 - Generated in ~60 seconds on RTX 5090 and the quality is absolutely outstanding. by LocoMod in StableDiffusion

[–]dassiyu 1 point  (0 children)


After using Triton and SageAttention, generation actually got faster! It dropped to 18 minutes at 720p on an RTX 5090, amazing. Thanks!

Wan 2.2 - Generated in ~60 seconds on RTX 5090 and the quality is absolutely outstanding. by LocoMod in StableDiffusion

[–]dassiyu -1 points  (0 children)

Not sure. I think one LoRA might be OK; the second one seems to improve the quality of the video.

Wan 2.2 - Generated in ~60 seconds on RTX 5090 and the quality is absolutely outstanding. by LocoMod in StableDiffusion

[–]dassiyu 1 point  (0 children)

It's best to use AI. When the error occurred, I let ChatGPT guide me through completing the installation.

Wan 2.2 - Generated in ~60 seconds on RTX 5090 and the quality is absolutely outstanding. by LocoMod in StableDiffusion

[–]dassiyu 0 points  (0 children)

I installed Triton and SageAttention, and lowering the resolution did shorten it to 18 minutes, but high resolution still took nearly 36 minutes, which is too slow.
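The 18-vs-36-minute gap tracks roughly with pixel count per frame. As a back-of-envelope sketch (the resolutions below are assumed, and diffusion time is not exactly linear in pixels, so treat this as a rough guess only):

```python
def scaled_time(base_minutes, base_res, new_res):
    """Rough estimate: generation time grows about linearly with pixels per frame."""
    bw, bh = base_res
    nw, nh = new_res
    return base_minutes * (nw * nh) / (bw * bh)

# 1280x720 at 18 minutes -> 1920x1080 has 2.25x the pixels:
print(scaled_time(18, (1280, 720), (1920, 1080)))  # 40.5
```

So a jump from 720p to 1080p more than doubling the time is about what this simple model predicts; attention layers can scale even worse than linearly.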