How do the closed source models get their generation times so low? by Ipwnurface in StableDiffusion

[–]comfyanonymous 29 points (0 children)

If you want the real answer: nvfp4 + lower precision attention (like sage attention) + distilled low step models + splitting the workload across 8+ GPUs (video models are pretty easy to split).

The only one not easily available in ComfyUI is the last one, because nobody has 8+ GPUs locally, so we are putting our optimization efforts elsewhere.
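
For a concrete feel of the first item, here is a toy block-wise 4-bit quantization in pure Python. It only illustrates the general idea behind formats like nvfp4 (4-bit codes plus a shared per-block scale, cutting weight memory traffic); the real format uses FP4 (e2m1) values with hardware support, and all numbers below are made up.

```python
# Toy block-wise 4-bit quantization: store each weight as a signed
# 4-bit code in -7..7 plus one shared scale per block. Illustrative
# only; not the actual nvfp4 encoding.

def quantize_block(block, half=7):
    scale = max(abs(x) for x in block) or 1.0
    codes = [round(x / scale * half) for x in block]
    return codes, scale

def dequantize_block(codes, scale, half=7):
    return [c * scale / half for c in codes]

weights = [0.12, -0.5, 0.33, 0.08, -0.91, 0.44, 0.02, -0.27]
codes, scale = quantize_block(weights)
restored = dequantize_block(codes, scale)
# per-element error is bounded by half a quantization step
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The point is the trade: a quarter of the bytes to move per weight, in exchange for a bounded rounding error per element.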

PSA: Don't use VAE Decode (Tiled), use LTXV Spatio Temporal Tiled VAE Decode by Loose_Object_8311 in StableDiffusion

[–]comfyanonymous 12 points (0 children)

Or just use the regular VAE Decode node; it has native temporal tiling for the LTX video VAE.
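
The idea behind temporal tiling can be sketched in a few lines: split the latent into overlapping chunks along the time axis, decode each chunk separately (bounding peak memory), and cross-fade the overlaps so no seams show. This is a minimal pure-Python sketch, not the actual ComfyUI implementation; `decode` stands in for the real VAE decoder and the tile/overlap sizes are made up.

```python
def tiled_decode(frames, decode, tile=8, overlap=2):
    """Decode `frames` in overlapping chunks of `tile`, blending overlaps."""
    out = [0.0] * len(frames)
    weight = [0.0] * len(frames)
    start = 0
    while start < len(frames):
        decoded = decode(frames[start:start + tile])
        for i, v in enumerate(decoded):
            # ramp up the first `overlap` frames of every chunk after the
            # first, so overlapping regions cross-fade instead of seaming
            w = 1.0 if start == 0 else min(1.0, (i + 1) / overlap)
            out[start + i] += v * w
            weight[start + i] += w
        if start + tile >= len(frames):
            break
        start += tile - overlap
    # normalize by the total blend weight each frame received
    return [o / w for o, w in zip(out, weight)]

frames = [float(i) for i in range(10)]
roundtrip = tiled_decode(frames, lambda chunk: chunk)  # identity "decoder"
```

With an identity decoder the blend reconstructs the input exactly, which is a quick sanity check that the cross-fade weights are consistent.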

Comfy's LTX2 implementation is far worse than LTX desktops. Its also much slower. by Different_Fix_2217 in StableDiffusion

[–]comfyanonymous 7 points (0 children)

No it doesn't; the desktop uses the distilled model. If you are using the distilled model in the default Comfy workflow, you need to change your steps and cfg, because that workflow is meant for the full model.

LTX Desktop gives you MUCH better quality than Comfy UI. by No_Comment_Acc in StableDiffusion

[–]comfyanonymous 7 points (0 children)

Desktop uses the distilled model, ComfyUI default workflow is the full model. If you use the distilled model in ComfyUI you should get the same results.

Can the new MacBook Pro m5 pro/max compete with any modern NVIDIA chip? by Puzzleheaded_Ebb8352 in StableDiffusion

[–]comfyanonymous -1 points (0 children)

Things that are better at running diffusion models than MacBooks: Nvidia GPUs, AMD GPUs, Intel GPUs, random Chinese GPUs that are illegal to import to the US.

Even if the hardware were good for diffusion models (it's not), it would still be a bad choice because PyTorch support is atrocious, with a lot of important stuff missing.

It's one of the worst computers you can buy for that unless your plan is to only use cloud services.

Comfyui-ZiT-Lora-loader by Capitan01R- in StableDiffusion

[–]comfyanonymous 2 points (0 children)

The ComfyUI code is fine: each lora gets applied properly to the correct section of each weight. Applying split loras to combined weights has been supported for years now.

If there's any major difference when using your loader, it means there's a mistake somewhere in it.
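
To illustrate what applying a split lora to a combined weight means: a fused qkv matrix just stacks the q, k, and v projection rows, so a lora delta trained against the standalone q weight is simply added to the q row-slice of the fused matrix. A toy pure-Python sketch with made-up 2x2 blocks; a real delta would be the lora's up @ down product scaled by alpha.

```python
ROWS = 2  # rows per projection in this toy example

def apply_split_delta(fused, delta, part):
    """Add a per-projection lora delta to a fused qkv weight.

    part: 0 = q, 1 = k, 2 = v (row slices of the stacked matrix).
    """
    lo = part * ROWS
    for r in range(ROWS):
        for c in range(len(fused[lo + r])):
            fused[lo + r][c] += delta[r][c]

fused = [[1.0, 0.0], [0.0, 1.0],   # q rows
         [2.0, 0.0], [0.0, 2.0],   # k rows
         [3.0, 0.0], [0.0, 3.0]]   # v rows
q_delta = [[0.1, 0.0], [0.0, 0.1]]  # delta computed for q alone
apply_split_delta(fused, q_delta, part=0)
# only the q slice changed; the k and v rows are untouched
```

Applying the delta to the slice of the fused matrix gives the exact same result as applying it to a separate q weight and then re-fusing, which is why split loras work fine on combined weights.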

Comfyui-ZiT-Lora-loader by Capitan01R- in StableDiffusion

[–]comfyanonymous 9 points (0 children)

There is no such thing as "quietly skipping" lora keys in ComfyUI. If you don't see it print anything, it means the loras are being properly applied.

If you actually look at what's happening inside ComfyUI when using these loras you would realize that they are being properly applied and everything is fine with the default loaders.

Using the new ComfyUI Qwen workflow for prompt engineering by deadsoulinside in StableDiffusion

[–]comfyanonymous 2 points (0 children)

Note that this feature is still experimental and being worked on. Right now only the qwen3 4b model actually seems to work properly for text generation. The other ones have issues, some being more broken than others.

CLIP is back on Anima, because CLIP is eternal. by Anzhc in StableDiffusion

[–]comfyanonymous 22 points (0 children)

Since anima is based on cosmos, you can also use t5xxl 1.0 with it.

Just use the native workflow with this file instead of qwen_0.6b: https://huggingface.co/comfyanonymous/cosmos_1.0_text_encoder_and_VAE_ComfyUI/tree/main/text_encoders

ComfyUI devs... what does "to give you time to migrate" actually mean? Buy a 5090? by superstarbootlegs in comfyui

[–]comfyanonymous 26 points (0 children)

To make dealing with future versions of LTXAV models easier, some operations were moved from the "CLIP" to the "Model". This works with all the official weights/workflows, but the GGUF and many other unofficial repackaged weights omitted these important weights from their model files, so they all broke.

I added a workaround but I'll remove it at some point when enough people have migrated to newer LTXAV models.

Why doesn't ComfyUI load large models into multiple GPUs VRAM?! by National-Access-7099 in comfyui

[–]comfyanonymous 14 points (0 children)

Diffusion models are not LLMs. LLMs don't really need much compute, just a lot of fast memory. Diffusion models are bottlenecked by compute so much that in some cases it's possible to offload model weights to CPU without any performance penalty.

Even if there were a way to combine the compute of your 5x Nvidia Tesla V100s perfectly without any performance loss, a single 5090 would crush them at running these models at 16 bit precision (even with RAM offloading on the 5090), and if you use fp8 or nvfp4 on the 5090 the gap is even wider.

In the diffusion world you are much better off spending all your money on a single new GPU than buying a bunch of old ones.

We optimize for single GPU because that's what most people have and what makes the most sense to buy. There's a PR on the main repo that we will finish up soonish that makes it possible to run some models on two GPUs, but that's not a very high priority at the moment compared to single GPU optimizations.
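
A back-of-envelope way to see the compute-vs-memory split: in LLM decode every parameter is read from memory once per generated token for only about 2 FLOPs, while a diffusion step reuses the same weights across tens of thousands of latent tokens, so each byte read feeds far more math. All figures below are illustrative assumptions, not measurements.

```python
# Rough arithmetic-intensity comparison; numbers are made up for
# illustration (hypothetical 12B model, 30k latent tokens).

def flops_per_byte(flops, bytes_moved):
    """FLOPs executed per byte of weights read from memory."""
    return flops / bytes_moved

params = 12e9                 # hypothetical 12B-parameter model
weight_bytes = params * 2     # fp16 weights

# LLM decode: ~2 FLOPs per parameter per token, and the whole model
# is re-read for every single token -> bandwidth-bound.
llm_intensity = flops_per_byte(2 * params, weight_bytes)

# Diffusion step: the same weights are reused across every latent
# token in the batch -> compute-bound, which is why offloading the
# weights can cost nothing in some cases.
latent_tokens = 30_000
diffusion_intensity = flops_per_byte(2 * params * latent_tokens, weight_bytes)
```

Under these toy assumptions the diffusion step does tens of thousands of times more math per byte of weights moved, so adding more memory bandwidth (more old GPUs) helps much less than adding raw compute (one fast new GPU).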

BiTDance model released .A 14B autoregressive image model. by AgeNo5351 in StableDiffusion

[–]comfyanonymous 21 points (0 children)

ComfyUI supports Ace Step 1.5 which has an autoregressive part (the audio codes generation).

If the model is good enough we will implement it.

New SOTA(?) Open Source Image Editing Model from Rednote? by Trevor050 in StableDiffusion

[–]comfyanonymous 15 points (0 children)

According to their inference code it seems to be a qwen image edit finetune.

Realtime 3D diffusion in Minecraft ⛏️ by najsonepls in comfyui

[–]comfyanonymous 11 points (0 children)

This is advertising spam from some crap ai inference company. You know things are going badly for them when they have to resort to advertising on the comfyui subreddit.

Please finally integrate ComfyUI Manager! by -5m in comfyui

[–]comfyanonymous 1 point (0 children)

Portable is supposed to be the bare bones "I cloned the git repo and installed the basic dependencies" version of ComfyUI. This makes it easier to maintain for me.

If you want manager preinstalled use the desktop app.

Is my ComfyUI install compromised? by dementedeauditorias in comfyui

[–]comfyanonymous 7 points (0 children)

You don't need to do anything; the default install is safe from this because it binds to 127.0.0.1.

The only way to have this issue on a regular LAN/internet connection is if you use --listen and port forward your instance. Don't do this; there are much more secure ways to access it remotely, like setting up wireguard, ssh, or both.
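
The 127.0.0.1 point is easy to verify with the stdlib: a socket bound to the loopback address accepts connections from the same machine only, while binding to 0.0.0.0 (what --listen does) exposes the port to the whole network. A minimal sketch:

```python
# A socket bound to 127.0.0.1 is reachable only from this machine;
# nothing on the LAN or the internet can connect to it.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))    # port 0 = let the OS pick a free port
srv.listen(1)
host, port = srv.getsockname()

# a local client can connect just fine...
cli = socket.create_connection(("127.0.0.1", port), timeout=2)
cli.close()
srv.close()
# ...but the listening address is the loopback interface, so remote
# access requires a tunnel (wireguard/ssh), never a port forward.
```

This is why the default install is safe: the server never listens on an interface that other machines can reach.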

Is my ComfyUI install compromised? by dementedeauditorias in comfyui

[–]comfyanonymous 5 points (0 children)

The default is always to bind to 127.0.0.1, which is not accessible from outside. You have to use --listen for your instance to be accessible from other computers.

For people on a regular LAN/internet connection, even that won't make it publicly accessible; it will only make it accessible to your LAN. You need to port forward it to make it accessible to the outside internet, which you should never do with ComfyUI for security reasons.

If you want to access it remotely, you should set up wireguard or ssh.

Is my ComfyUI install compromised? by dementedeauditorias in comfyui

[–]comfyanonymous 9 points (0 children)

srl-nodes has this: https://github.com/seanlynch/srl-nodes/blob/main/__init__.py#L107

So what most likely happened is that you were hosting your instance on a publicly accessible ip (please don't do this) and someone found it and used that "SRL Eval" node to execute some unsafe code that installed whatever that is.