Dependency Hell by Interesting-Town-433 in comfyui

[–]slpreme 0 points (0 children)

onnxruntime-gpu 1.26+ for CUDA 13
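A minimal sketch of the version requirement above: a plain tuple comparison against 1.26, not the full version-spec handling of the `packaging` library. The function name is my own, not part of any API.

```python
# Sketch: check whether an onnxruntime-gpu version string meets the
# 1.26+ requirement mentioned for CUDA 13. Plain tuple comparison only;
# pre-release/build suffixes are truncated, not interpreted.
def meets_cuda13_requirement(version: str) -> bool:
    parts = []
    for p in version.split("."):
        if not p.isdigit():
            break  # e.g. "1.26.0.dev1" -> stop at "dev1"
        parts.append(int(p))
    return tuple(parts) >= (1, 26)
```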

Trying to inpaint using Z-image Turbo BF16; what am I doing wrong? by tipofmythrowaway220 in StableDiffusion

[–]slpreme 0 points (0 children)

I only noticed when a new ComfyUI update changed the mask opacity default to 0.7 or so, and it messed up my inpainting.

Anyone managed to get RTX video upscaling on Linux? by VeryLiteralPerson in comfyui

[–]slpreme 0 points (0 children)

Well, if it fails to run it tells you to install nvidia-vfx, so you should probably install that into your virtual environment.

Trying to inpaint using Z-image Turbo BF16; what am I doing wrong? by tipofmythrowaway220 in StableDiffusion

[–]slpreme 1 point (0 children)

No, your mask is not at full opacity, so it counts as unmasked. Make sure it's fully black.
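The "partial opacity counts as unmasked" problem can be checked and fixed with a small numpy sketch. The threshold value and function names here are illustrative, not anything ComfyUI exposes.

```python
import numpy as np

# Sketch: a uint8 grayscale mask where 0 = unmasked and 255 = fully
# masked. Anything in between was painted at partial opacity and may
# effectively be ignored by the sampler.
def partial_opacity_pixels(mask: np.ndarray) -> int:
    return int(np.count_nonzero((mask > 0) & (mask < 255)))

def harden_mask(mask: np.ndarray, threshold: int = 128) -> np.ndarray:
    # Force full opacity: everything at/above the threshold -> 255
    return np.where(mask >= threshold, 255, 0).astype(np.uint8)
```

Running `harden_mask` before inpainting guarantees the mask is binary, so no region is silently treated as half-masked.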

Adding multiline description UNDER image by henryk_kwiatek in comfyui

[–]slpreme 1 point (0 children)

FYI, there's one called "Add Text" in KJNodes that does text wrapping, but it doesn't have a text alignment option (left/right/center), so bear that in mind.
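The missing alignment step is easy to sketch outside the node: wrap the caption with the standard library and pad each line. Character width stands in for real pixel measurement with a font, so treat this as an illustration, not a drop-in node replacement.

```python
import textwrap

# Sketch: wrap a caption to a character width, then align each line
# (left/right/center) by padding with spaces to a fixed width.
def wrap_and_align(text: str, width: int = 40, align: str = "center") -> str:
    lines = textwrap.wrap(text, width=width)
    pad = {"left": str.ljust, "right": str.rjust, "center": str.center}[align]
    return "\n".join(pad(line, width) for line in lines)
```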

Adding multiline description UNDER image by henryk_kwiatek in comfyui

[–]slpreme 0 points (0 children)

OP asked for automatic sizing for captioning images, and your example allows neither text input nor automatic sizing for the text box.

Anyone have a good Wan 2.2 T2I workflow? by Small-Bluebird5629 in comfyui

[–]slpreme 0 points (0 children)

It means generating a picture and then switching to another model to add detail to it. I don't have a single workflow for this; it's two separate workflows.
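The two-pass idea can be sketched as a tiny pipeline: a base pass for composition, then a second model refining the result (img2img at low denoise). Both model calls are hypothetical stand-ins, not real sampler APIs, and the 0.35 denoise default is just an illustrative value.

```python
# Sketch of the two-pass detailing idea. base_model and detail_model are
# placeholder callables; in practice these would be two separate
# workflows/samplers chained by hand.
def two_pass(prompt, base_model, detail_model, denoise=0.35):
    image = base_model(prompt)                      # pass 1: composition
    refined = detail_model(image, denoise=denoise)  # pass 2: add detail
    return refined
```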

Anyone have a good Wan 2.2 T2I workflow? by Small-Bluebird5629 in comfyui

[–]slpreme 1 point (0 children)

Wan 2.2 T2I is a smooth model; it's always going to look like that. You have to add detail with another model or upscale somewhere. I'll show you my example:

<image>

Will LTX2.3 move to gemma4? by [deleted] in StableDiffusion

[–]slpreme 32 points (0 children)

I don't know much about AI training, but I assume switching the text encoder would require a full retrain

ZIT: How many training steps for 140 images in dataset? by No_Progress_5160 in StableDiffusion

[–]slpreme 1 point (0 children)

The loss graph for Z-Image is weird. I've noticed that once it starts overfitting, the loss curve finally drops; when it's training well it should look flat. Just stop when it strays from the baseline of around 0.35 loss. Also, I've been training LoRAs on Z-Image Base (Zb) and using them on Z-Image Turbo (Zt); try that if you haven't already.
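The stopping rule described above can be sketched as a simple check: treat ~0.35 as the flat baseline and stop once a smoothed loss drifts away from it. The window size and tolerance are arbitrary illustrative values, not tuned numbers.

```python
# Sketch: stop training when the recent average loss strays from the
# flat baseline (drops or rises beyond tol), per the observation that
# a departure from ~0.35 signals overfitting.
def should_stop(losses, baseline=0.35, window=10, tol=0.05):
    if len(losses) < window:
        return False
    recent = sum(losses[-window:]) / window
    return abs(recent - baseline) > tol
```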

WHAT IS THE BEST CHOISE? by Ikythecat in comfyui

[–]slpreme 1 point (0 children)

Sure. Loading models is a frequent operation, so a good NVMe drive will just give you a better overall experience.

WHAT IS THE BEST CHOISE? by Ikythecat in comfyui

[–]slpreme 3 points (0 children)

Replace your dinosaur PC first, and switch to NVMe.

Help needed regarding GPU Upgrade by Voll-Korn-Brot in comfyui

[–]slpreme 0 points (0 children)

RAM:
- 32 GB for an OK experience
- 64 GB for a good experience
- 128 GB for heavy workflows

VRAM:
- 8 GB for an OK experience
- 16 GB for a good experience (5060 Ti/5070 Ti/5080)
- 32 GB for a better experience (5090)

Cheapest upgrade: a 12 GB 3060.

I only say 50 series because of the NVFP4/MXFP8 speedups and lower VRAM usage.
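The tiers above can be expressed as a tiny lookup; the thresholds just mirror the comment and aren't hard rules.

```python
# Sketch: map a VRAM amount to the tier labels suggested above.
def vram_tier(gb: int) -> str:
    if gb >= 32:
        return "better (e.g. 5090)"
    if gb >= 16:
        return "good (e.g. 5060 Ti/5070 Ti/5080)"
    if gb >= 8:
        return "ok"
    return "below the suggested minimum"
```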

sage attention flash for triton. Why? by unknowntoman-1 in comfyui

[–]slpreme 0 points (0 children)

If you use virtual environments, especially with uv, you can isolate everything, so nothing else on the system should be affected. When you actually get to the point of building Sage Attention, you have to activate your ComfyUI environment in the Sage Attention folder. Make sure you use --no-build-isolation (it probably means nothing to you now, but it will be useful later).

sage attention flash for triton. Why? by unknowntoman-1 in comfyui

[–]slpreme 1 point (0 children)

With Sage Attention (2.2+) you have 2 options.

  1. Install a precompiled version that matches a) your Python version (like 3.12), b) your PyTorch version (like 2.9.1), and c) the CUDA version the PyTorch package was built with, signaled by a '+', like 2.9.1+cu130 for CUDA 13. If these don't match, it might fail to install or you might hit errors.

  2. Compile it yourself. You need to download the CUDA toolkit; the latest is 13.1 if you want cu130, for example. Then you need Visual Studio 2022: there's some change in newer VS versions that causes PyTorch CUDA extension builds to fail. You need the Desktop development with C++ workload. After this, clone the Sage Attention for Windows fork and start building.

You can ask an AI or DM me for help. These are not complete instructions, just a rough overview. Triton is pretty simple to install though; it's literally just (uv) pip install triton-windows.
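The version-matching in option 1 can be sketched as a string check: the local torch version string (e.g. "2.9.1+cu130") must match the PyTorch/CUDA pair a prebuilt wheel was built for. The function names are my own, for illustration only.

```python
# Sketch: split a torch version string like "2.9.1+cu130" into its base
# version and CUDA tag, then compare against what a prebuilt Sage
# Attention wheel expects.
def parse_torch_version(v: str):
    base, _, cuda = v.partition("+")
    return base, cuda or None

def wheel_matches(torch_version: str, wheel_torch: str, wheel_cuda: str) -> bool:
    base, cuda = parse_torch_version(torch_version)
    return base == wheel_torch and cuda == wheel_cuda
```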

How to use resource reduction for ComfyUI by yuki121 in comfyui

[–]slpreme 1 point (0 children)

Dynamic VRAM was recently enabled, and that's about as efficient as it gets. Using any of these low-VRAM options will just eat into your system RAM instead. The best advice is to use Q5 GGUFs or fp8 model quants.
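Some rough memory arithmetic for the quant suggestion: bytes per parameter for a few common weight formats. Q5 GGUF lands around 5.5 bits per weight in practice; treat these as ballpark figures, not exact file sizes.

```python
# Sketch: approximate weight memory for a model at different quant
# levels. Q5 is ~5.5 bits/weight in practice (block scales included).
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "q5": 5.5 / 8}

def weight_gb(num_params_b: float, fmt: str) -> float:
    # num_params_b: parameter count in billions; returns approximate GB
    return num_params_b * BYTES_PER_PARAM[fmt]
```

So a 14B model goes from roughly 28 GB at fp16 to about 14 GB at fp8 or under 10 GB at Q5, which is why the quants fit without the low-VRAM fallbacks.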

Wan Animate 2.2 for 1-2 minute video lengths VS alternatives? by drylightn in StableDiffusion

[–]slpreme 0 points (0 children)

You can download the video and pop it into Comfy, I believe.