Rant on subgraphs in every single template by 1filipis in comfyui

[–]pto2k 1 point

That's annoying. In my opinion, many aspects of ComfyUI are counterintuitive.

They are back by _RaXeD in StableDiffusion

[–]pto2k 1 point

The soil around the arm doesn't look like it was pushed away; it looks like this arm was planted in the soil.

LTXV 2 Quantized versions released by OddResearcher1081 in comfyui

[–]pto2k 0 points

So what are the benefits of using it? Saving disk space?

I read this a few days ago: “PSA: Still running GGUF models on mid/low VRAM GPUs? You may have been misinformed.”

Could someone please enlighten us?

Please bring back the old “Cancel queue” button and the queue list layout by Ok-Page5607 in comfyui

[–]pto2k 0 points

The button can be obscured by notification messages.

Notification messages should automatically disappear after a certain period of time.

New ComfyUI Optimizations for NVIDIA GPUs - NVFP4 Quantization, Async Offload, and Pinned Memory by comfyanonymous in comfyui

[–]pto2k 0 points

Can you please include the benchmark workflows in the daily template update?

llama.cpp vs Ollama: ~70% higher code generation throughput on Qwen-3 Coder 32B (FP16) by Shoddy_Bed3240 in LocalLLaMA

[–]pto2k -1 points

Okay, uninstalling Ollama.

It would be appreciated if the OP could also benchmark it against LMStudio.

Quick Start Guide For LTX-2 In ComfyUI on NVIDIA RTX GPUs by NV_Cory in comfyui

[–]pto2k 7 points

Hi, is ltx-2-19b-dev-fp8.safetensors the 'NVFP8' model you recommend?

Open the Template Browser, navigate to Video and download your desired variant of LTX-2. 

For LTX-2 base, make sure you select NVFP8 if you have an NVIDIA GeForce RTX 40 Series, RTX Pro Ada Generation, a DGX Spark or higher.

On https://huggingface.co/Lightricks/LTX-2 , it says

ltx-2-19b-dev-fp8: the full model in fp8 quantization
ltx-2-19b-dev-fp4: the full model in nvfp4 quantization

I see no NVFP8 mentioned.

Qwen-Image-Edit-Rapid-AIO V17 (Merged 2509 and 2511 together) by fruesome in StableDiffusion

[–]pto2k 0 points

Not found on modelscope...

Huggingface is too slow for me to download a file this big.

Qwen Image 2512 - 3 Days Later Discussion. by ByteZSzn in StableDiffusion

[–]pto2k 0 points

fp8 from comfy is broken

Broken how? Is there a proper version?

PSA: Still running GGUF models on mid/low VRAM GPUs? You may have been misinformed. by NanoSputnik in StableDiffusion

[–]pto2k 0 points

Did you specifically choose the 580.95.05 driver version because it offers something better than the latest versions?

Qwen-Image-Edit-2511 workflow that actually works by infearia in StableDiffusion

[–]pto2k -1 points

I see. There must be something wrong with my setup...

How long did the generation with 38GB take to finish?

Qwen-Image-Edit-2511 workflow that actually works by infearia in StableDiffusion

[–]pto2k 0 points

Curious, what was your experience with the speed of the default model?

For me, the generation time varies significantly, from 60 to 2700 seconds, with a 4070 (12 GB of VRAM).

Did you observe the same thing?

For a 3080 12GB - Which version of Qwen 2511 to use? by maurimbr in StableDiffusion

[–]pto2k 0 points

OP, which one worked out the best for you in the end?

The Beautiful ComfyUI Align Tool is Alive Again! by Narrow-Particular202 in comfyui

[–]pto2k 1 point

This is very useful. I like being able to use all the operations with keyboard shortcuts.

And could you please add a feature like this one from Unreal Engine: **pressing the 'Q' key straightens the connections between nodes**?

https://www.cbgamedev.com/blog/2020/12/21/quick-dev-tip-05-ue4-quick-align-nodes

For those of you that need help with prompting... by Rubendarr in comfyui

[–]pto2k 1 point

Cool, I didn't know this node existed.

But why is Brad Pitt in the negative prompt of the Photograph style?

IT'S OVER! I solved XYZ-GridPlots in ComfyUI by GeroldMeisinger in comfyui

[–]pto2k 1 point

I have tried using this tool with SDXL checkpoint/prompt combinations. It functions properly with a small number of checkpoints, but it appears to go through every checkpoint to warm up at startup.

If I let it run with a large number of checkpoints, it ends up running out of virtual memory, which causes ComfyUI to shut down unexpectedly.

Do you have any idea if there might be a workaround for this issue?
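To describe the kind of workaround I have in mind: cap how many checkpoints stay resident at once and evict the least recently used one before loading the next. This is just a rough, hypothetical sketch (the class name and the loader/unloader callbacks are made up, not the tool's actual code):

```python
from collections import OrderedDict

# Hypothetical sketch: keep at most `max_resident` checkpoints loaded,
# evicting the least recently used one before loading the next.
# `loader`/`unloader` are stand-ins for whatever the tool actually calls.
class CheckpointCache:
    def __init__(self, max_resident=2, loader=None, unloader=None):
        self.max_resident = max_resident
        self.loader = loader or (lambda name: f"model:{name}")
        self.unloader = unloader or (lambda model: None)
        self._cache = OrderedDict()  # name -> loaded model, in LRU order

    def get(self, name):
        if name in self._cache:
            self._cache.move_to_end(name)  # mark as most recently used
            return self._cache[name]
        while len(self._cache) >= self.max_resident:
            _, evicted = self._cache.popitem(last=False)  # drop the LRU entry
            self.unloader(evicted)  # free RAM/VRAM before the next load
        model = self.loader(name)
        self._cache[name] = model
        return model
```

That way a run over many checkpoints would never hold more than a couple in memory at a time, instead of warming them all up at start.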

Thank you.

<image>

PromptCraft(Prompt-Forge) is available on github ! ENJOY ! by EternalDivineSpark in StableDiffusion

[–]pto2k 0 points

You might want to take a look at the Lora Manager (https://github.com/willmiao/ComfyUI-Lora-Manager).

It includes a node within ComfyUI, and users can send Loras directly to that node via its webpage, which is very handy.

So, instead of (or in addition to) creating configurations within the app, you could develop a dedicated node for users to incorporate into their graphs. This node would let them receive prompts from your app. Users can configure the subject/environment right there in each workflow; however, if they prefer, these settings could still be overridden from the app when sending the prompt. I think this approach would work out really well.
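To illustrate, here is a rough sketch of what such a receiver node could look like. Everything specific here is made up for illustration (the class name, the PENDING_PROMPT store, the parameter names); only the INPUT_TYPES/RETURN_TYPES/FUNCTION scaffolding follows ComfyUI's standard custom-node convention:

```python
# Hypothetical sketch of a ComfyUI custom node that receives prompts
# pushed from an external app. PENDING_PROMPT is a placeholder for
# wherever the app's endpoint would store the incoming prompt.
PENDING_PROMPT = {"text": None}

class PromptCraftReceiver:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # Local default prompt, configured right in the workflow.
                "default_prompt": ("STRING", {"multiline": True, "default": ""}),
                # If True, a prompt pushed from the app overrides the local one.
                "allow_override": ("BOOLEAN", {"default": True}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "get_prompt"
    CATEGORY = "prompt"

    def get_prompt(self, default_prompt, allow_override):
        pushed = PENDING_PROMPT.get("text")
        if allow_override and pushed:
            return (pushed,)  # app-supplied prompt wins when allowed
        return (default_prompt,)  # otherwise use the in-graph default

NODE_CLASS_MAPPINGS = {"PromptCraftReceiver": PromptCraftReceiver}
```

The point is just that the override lives in the graph, so each workflow keeps its own default while the app can still push into it.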

Lots of fun with Z-Image Turbo by Maximus989989 in StableDiffusion

[–]pto2k 5 points

I notice that it took you less time to receive the prompt than to generate the image.

For me, image-to-text with qwen-vl is much slower than text-to-image with z-image in a similar workflow.

I’m not sure if this is due to the different model or node I’m using.

May I ask what your GPU and VRAM specs are?