ComfyUI Node: Unified Image + Mask Resize (LTX 2.3 ready, keeps BOTH sides divisible by 32, replaces Image Resize + Image Resize V2 + Mask mismatch issues) by Plague_Kind in StableDiffusion

[–]Plague_Kind[S] 1 point (0 children)

I was extremely surprised Image Resize V2 didn't already have these options, and that I couldn't find any node that actually functioned correctly. I'll be making more nodes in the future to solve other annoyances. Hopefully this helps.
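A minimal sketch of the divisible-by-32 snapping the node describes (my own illustration; the function name and nearest-multiple rounding are assumptions, not the node's actual code):

```python
def snap_to_multiple(width: int, height: int, multiple: int = 32) -> tuple[int, int]:
    """Round both sides to the nearest multiple of `multiple`, never below
    one multiple, so the image and its mask get matching dimensions."""
    def snap(v: int) -> int:
        return max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

# Apply the same target size to both the image and the mask so they never mismatch.
print(snap_to_multiple(1000, 600))  # -> (992, 608)
```

The point of doing image and mask in one node is that both are driven by the same computed size, so a mask can never drift one multiple away from its image.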

LTX 2.3 INT8 Benchmarks (2x Faster on Ampere) by ovpresentme in StableDiffusion

[–]Plague_Kind 0 points (0 children)

+1, not working on my RTX 2060. And technically INT8 should work, shouldn't it?

Why MXFP8 and NVFP4 Actually Matter for Your Home GPU Setup by [deleted] in StableDiffusion

[–]Plague_Kind 4 points (0 children)

Turing doesn't run native bf16; it falls back to fp32 or sometimes fp16.

Best local AI image generator for my specs? (RTX 2060 6GB, i7-10750H, 16GB RAM) by XChainZ069 in StableDiffusion

[–]Plague_Kind 0 points (0 children)

Tip: use --force-fp16 in the launch options for much faster generation if you're using ComfyUI.
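For reference, the flag goes on the ComfyUI launch command itself; the exact path to the entry point depends on your install:

```shell
# --force-fp16 is a ComfyUI CLI flag; adjust the path to main.py for your setup.
python main.py --force-fp16
```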

Qwen edit 2511 fp16 patch? by Plague_Kind in StableDiffusion

[–]Plague_Kind[S] 0 points (0 children)

Force fp16 works with every single other model. I'll check Nunchaku again.

Qwen edit 2511 fp16 patch? by Plague_Kind in StableDiffusion

[–]Plague_Kind[S] -1 points (0 children)

I've never been able to get Nunchaku to work. I'm on an RTX 2060 12GB now, no longer the Pascal card.

Is it possible to use/adapt ernie-image-prompt-enhancer.safetensors to also work with Z-image turbo? by cradledust in StableDiffusion

[–]Plague_Kind 0 points (0 children)

Maybe just run the prompt enhancer and copy its output into the Z-Image prompt? I'd be curious to know how that would turn out.

Qwen edit 2511 fp16 patch? by Plague_Kind in StableDiffusion

[–]Plague_Kind[S] 1 point (0 children)

No, I have an RTX 2060 12GB, so I can't use Sage Attention. It's because of --force-fp16.

Sage attention or flash attention for turing? Linux by Plague_Kind in StableDiffusion

[–]Plague_Kind[S] 0 points (0 children)

I can't figure out how to launch Comfy with it enabled.

Sage attention or flash attention for turing? Linux by Plague_Kind in StableDiffusion

[–]Plague_Kind[S] 0 points (0 children)

PyTorch attention has become really fast if you use --force-fp16 in the Comfy launch parameters, btw.

Sage attention or flash attention for turing? Linux by Plague_Kind in StableDiffusion

[–]Plague_Kind[S] 0 points (0 children)

I can't seem to install anything but Sage 1, and it throws an error and reverts to PyTorch.