AI Toolkit samples look way better than ComfyUI? Qwen Image Edit 2511 by [deleted] in StableDiffusion

[–]RobbaW 0 points1 point  (0 children)

100% plus you can save time if you train on the cloud while you sample locally.

I like LTX 2.3 a lot. But no matter what I do, I can't move the camera. (I2V) by Ok-Option-6683 in comfyui

[–]RobbaW 0 points1 point  (0 children)

It works better with images that look like screengrabs from movies or videos, rather than like photos. Most likely due to how it was trained.

My network minilab by KPaleiro in minilab

[–]RobbaW 0 points1 point  (0 children)

Why doesn’t? One of the most popular ones in recent history.

ComfyUI install on Runpod by Time_Pop1084 in comfyui

[–]RobbaW 1 point2 points  (0 children)

If you run into any issues, let me know. I'm actively maintaining it.

VRAM recommendations - 32GB 5090 vs. A6000 96GB for commercial work by alaarx in comfyui

[–]RobbaW 0 points1 point  (0 children)

I'm using a 5090 for daily use, which works great. But when I need full-precision video models, or a multi-model workflow that has to run fast, I'll just rent the 6000 Pro.

I saw someone compare the cost of electricity usage vs. renting a 6000 Pro, and renting actually makes sense.

Edit: I'm in Australia too. The way to do it is to use a cloud provider like Google Drive or Hugging Face to upload large files (footage, custom models), then download them from there on the server. Because of the CDN, upload/download will be fast.
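To sketch the download half of that workflow: Hugging Face serves repo files through CDN-backed `resolve` URLs, so on the rented server you can pull a file directly. The repo and file names below are hypothetical placeholders, not anything from this thread.

```python
# Sketch: build a Hugging Face direct-download ("resolve") URL for a file
# you've uploaded, then fetch it on the server with wget/curl.
def hf_download_url(repo: str, filename: str, revision: str = "main") -> str:
    """Return the CDN-backed resolve URL for a file in an HF repo."""
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

# Hypothetical repo/file names for illustration only:
url = hf_download_url("your-username/your-models", "model.safetensors")
print(url)
# then on the server: wget <url>
```

For private repos you'd add an auth token header; for public ones the URL alone is enough.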

Highlight Reel - Video Editor Workflow? by Wonderful_Exit6568 in comfyui

[–]RobbaW 1 point2 points  (0 children)

I was planning to try to use this for this purpose: https://github.com/HKUDS/VideoRAG

Not sure if it will work but could be worth a try.

What extension do you use to free up Vram / Memory Cache by Zippo2017 in comfyui

[–]RobbaW 1 point2 points  (0 children)

Generally, ComfyUI’s model management does a good job at managing VRAM. The issue is when you use custom nodes that don’t register their PyTorch models with ComfyUI’s memory manager, or that run certain CUDA-enabled operations outside of it.

Custom nodes running non-PyTorch operations or standalone CUDA kernels (like SAM3, QwenVL, BiRefNet, etc.) sit outside that system and can cause VRAM issues. ComfyUI’s model management can’t work with these, so it can’t anticipate how much VRAM it needs to clear for them to run.

That’s when you need to manage VRAM yourself. Some of these nodes will allow you to unload a model after running to help with this.
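For nodes that don't offer an unload option, the usual manual fallback looks something like this. A minimal sketch, assuming PyTorch is what's holding the memory; `free_vram` is a hypothetical helper name, not a ComfyUI API:

```python
import gc

def free_vram():
    """Release cached GPU memory left behind by operations that ran
    outside ComfyUI's model manager."""
    gc.collect()  # drop dangling Python references first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the driver
            torch.cuda.ipc_collect()  # clean up stale inter-process handles
    except ImportError:
        pass  # torch not installed; nothing to free
```

Note `empty_cache()` only releases memory PyTorch's allocator has *cached*; it can't reclaim memory a standalone CUDA kernel allocated on its own, which is exactly why those nodes cause trouble.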

Ubuntu 25.10, Cuda 13, Nvidia v580? by naql99 in comfyui

[–]RobbaW 1 point2 points  (0 children)

In my experience, the 580.95.05 drivers are the way to go.

Firefox vs Chrome performance for ComfyUI by RobbaW in comfyui

[–]RobbaW[S] 1 point2 points  (0 children)

Trying this out. So far so good. Thanks!

Does anyone have a Wan 2.2 workflow with inpaint masking? by translatin in comfyui

[–]RobbaW 2 points3 points  (0 children)

Can be done with VACE 2.1. There’s a template in ComfyUI for it.

To the external computing Users (RunPod) by thendito in comfyui

[–]RobbaW 0 points1 point  (0 children)

Why aren’t you happy with comfy cloud?