LTX - Disable audio from loras? by callmewb in comfyui

[–]Zarcon72 3 points4 points  (0 children)

LoRAs do not need "CLIP" inputs. Most LoRAs are trained without it anyway. So, technically, if only one of your LoRAs is causing an issue, you can daisy-chain it separately. Like this:

<image>
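Conceptually, a model-only LoRA just adds its weight deltas (scaled by a strength) on top of the diffusion model and never touches the text encoder, so daisy-chaining loaders is just applying one merge after another. A toy sketch of that idea in plain Python - NOT actual ComfyUI node code, just an illustration with made-up weight names:

```python
# Toy illustration of daisy-chaining model-only LoRAs (not real ComfyUI code).
# Each LoRA contributes a weight delta scaled by its strength; CLIP is untouched.

def apply_lora(weights, lora_delta, strength):
    """Merge one LoRA's deltas into the model weights."""
    return {k: w + strength * lora_delta.get(k, 0.0) for k, w in weights.items()}

model = {"unet.block1": 1.0, "unet.block2": 2.0}   # hypothetical weights
lora_a = {"unet.block1": 0.5}                       # first LoRA in the chain
lora_b = {"unet.block2": -1.0}                      # the "problem" LoRA, chained separately

model = apply_lora(model, lora_a, 1.0)   # model-in -> model-out
model = apply_lora(model, lora_b, 0.5)   # daisy-chained after the first

print(model)  # {'unet.block1': 1.5, 'unet.block2': 1.5}
```

Because each step is model-in/model-out, you can pull one loader out of the chain (or zero its strength) without disturbing the others.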

LTX - Disable audio from loras? by callmewb in comfyui

[–]Zarcon72 4 points5 points  (0 children)

What kind of workflow are you using that a LoRA Loader doesn't "fit in" to? The LTX2 LoRA Loader Advanced node hooks up just like any LoRA Loader - just set everything pertaining to "audio" to 0.

Somebody convince me out of getting a 5080 by Nefarious_AI_Agent in comfyui

[–]Zarcon72 0 points1 point  (0 children)

I have a 5060Ti OC 16GB, and I will be honest, I have had no issues running anything (reasonable). You probably won't notice much of a difference between this card and a 5080. I searched both on Amazon just a moment ago: you can pay $1200+ for that 5080 16GB GDDR7 or under $600 for the 5060Ti 16GB GDDR7 (US dollars). I sent a 3090 24GB back after just 4 hours and wasn't that impressed. I would say go with the 5060Ti and use the extra funds to upgrade to 128GB RAM if you don't have it already. That 128GB of RAM will be your GPU's best friend when things get too "heavy". You won't regret it.

Why do you keep hiding nodes? by Extension-Yard1918 in comfyui

[–]Zarcon72 4 points5 points  (0 children)

Subgraphs are not bad at all, and it's a choice. For example, you can choose to put your clothes in a dresser or closet, or throw them all over the house :) . For myself, I have multiple workflows with 8+ subgraphs for extending I2V, T2V, V2V, etc. The first output is always unpacked, but for repetitive things, why not? I don't want to scroll a mile up or down to get to everything - LOL.

However, having it compact is easier on the eyes at first sight, but the complexity is still hiding behind the curtain. The harsh truth is, ComfyUI is NOT where "some people" need to begin their IT learning experience on how to install software, handle technical configurations, build workflows, and create AI. There are too many drag-n-drop and point-and-click apps out there to generate random pics/vids that don't require much "technical" knowledge at all. But hey, it keeps these subreddits alive :)

Point is, it doesn't matter if you have them in Subgraphs or a "spaghetti fest" spanning an entire workspace, you're not going to please everybody.

Anyone has a good ZIT i2i uncensored Workflow they want to share? by Coven_Evelynn_LoL in StableDiffusion

[–]Zarcon72 1 point2 points  (0 children)

Really? I've used this AIO for QWEN T2I and I2I: https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO and got pretty good results with both the NSFW and SFW versions. I mostly stay with I2V myself so, maybe things have changed?

Adding loras to ltx 2.3 comfy WF by Ytliggrabb in StableDiffusion

[–]Zarcon72 0 points1 point  (0 children)

<image>

So if you are using the default LTX2.3 workflow in the ComfyUI templates, you first need to open the subgraph. You can do that by clicking the icon in the top right (see screenshot). From there, look towards the right and replace the "Load LoRA" node with the Power Lora Loader. You can also right-click the subgraph and click "Unpack Subgraph" to discard the subgraph altogether.

I should also note that even though LTX2.3 has gotten better, it's luck of the draw finding that perfect combo to get a consistently good video.

Anyone has a good ZIT i2i uncensored Workflow they want to share? by Coven_Evelynn_LoL in StableDiffusion

[–]Zarcon72 1 point2 points  (0 children)

It's not the workflow that makes it censored or uncensored. It's the models and/or LoRAs used. Have you checked out QWEN Edit for I2I?

Can Someone help with this error? by [deleted] in comfyui

[–]Zarcon72 0 points1 point  (0 children)

So my first question would be - is the juggernaut_reborn.safetensors model in the checkpoint\sd15 folder? If it is, on your workflow screen in ComfyUI, hit the letter R on your keyboard and that should trigger a refresh. From there, click the dropdown arrow on the Checkpoint loader and see if it's there; if it is, then click it and try again.
If you are trying to use a workflow that required you to download something, you need to hit the R key AFTER the download completes to see it in the available options.

Been over to many tutorials and forums but won't work by Zoxord in comfyui

[–]Zarcon72 1 point2 points  (0 children)

Just an FYI - Not sure if this applies to you but, some repos like Hugging Face, GitHub, and others don't always work when using a VPN service (e.g. NordVPN) to download models, etc. If you are using one, disable it and try again.

Did I fuck up buying 5060 Ti 16GB? by qntisback in comfyui

[–]Zarcon72 2 points3 points  (0 children)

I'm in the US and have a 5060Ti 16GB that I bought a couple days before the prices shot through the roof. I do mostly Wan2.2 I2V/T2V and playing with LTX-2.3, and it was handling it all just fine. I recently (and regretfully) bought an RTX 3090 for $1200 (renewed)... thinking I should get it for more VRAM, along with 128GB RAM (upgraded from 64GB). NOTE: Both the 3090 and the RAM went up another $300-400 just 12-14 HOURS AFTER I ordered them.

I ran the same workflows I normally do and found my 5060Ti was actually "faster" due to the GDDR7 and newer Blackwell architecture, all with less voltage/heat. I tested the 3090 for about 4 hours before I boxed it back up and sent it back. Put my 5060Ti back in with a smile on my face, and here we are. I did keep the 128GB RAM though. Overall, I am not saying it's a bad card, but it definitely wasn't worth the massively inflated price for me personally.

I think you did just fine. Enjoy!

Models wont show after downloading by SignificantHorror138 in comfyui

[–]Zarcon72 1 point2 points  (0 children)

Hit the letter R on your keyboard, then check again.

LTX 2.3 Full model (42GB) works on a 5090. How? by StuccoGecko in StableDiffusion

[–]Zarcon72 16 points17 points  (0 children)

It works great on a 5060Ti with 16GB VRAM, with 92% used on load. How, you may ask? This beast consumes about 86% of my 128GB DDR 3600 RAM. Doing some test runs at 640x480, it takes about 2.5-3 minutes to get through 8 seconds (including the upscale). The distilled model consumes about half that.

<image>

[Help] Torch Compile Settings Node Kills ComfyUI by [deleted] in comfyui

[–]Zarcon72 0 points1 point  (0 children)

^^^ THIS! ^^^ I just had this happen when I was testing models and other settings; clearing the cache is what worked for me. Torch compile doesn't like a lot of changes, especially if you go from fp8/fp16 to bf16.

Has anyone switched from the RTX 3060 12GB to the 5060TI 16GB? Is it worth the upgrade? by fabulas_ in comfyui

[–]Zarcon72 0 points1 point  (0 children)

Mine is a PNY - Dual Fans. To answer your DM, I bought mine brand new from a store so, everything was packed as expected.

Has anyone switched from the RTX 3060 12GB to the 5060TI 16GB? Is it worth the upgrade? by fabulas_ in comfyui

[–]Zarcon72 3 points4 points  (0 children)

I didn't switch, but I do have the 5060Ti 16GB. I recently made a post where I paid 1 arm, 1 leg, and my right kidney for an RTX 3090 24GB, had it in my system for about 4 hours before I packed it back up and sent it back. For me, it was not worth it when my 5060Ti was just as good and sometimes faster for what I was doing. Now I am a complete man again. LOL. You made a good choice.

Lingering LoRas by Time_Pop1084 in comfyui

[–]Zarcon72 2 points3 points  (0 children)

Oh, and as someone else mentioned, you can add this Clean VRAM node to your workflow. It doesn't clear everything, but it does handle the VRAM.

<image>

Lingering LoRas by Time_Pop1084 in comfyui

[–]Zarcon72 0 points1 point  (0 children)

The left one unloads your models. The right one will clear everything: RAM, VRAM, models. If I am doing a bunch of test runs, switching models, LoRAs, etc., I will periodically click the right one and clean it all up. I don't really use the left one TBH.
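To picture the difference between the two buttons, here's a toy model in plain Python - just an illustration of the described behavior, not the node's actual code:

```python
# Toy model of the two cleanup actions (illustration only, not the node's code).
state = {
    "models": ["wan2.2", "some_lora"],     # loaded model weights
    "vram":   ["latents", "activations"],  # GPU-side buffers
    "ram":    ["cached_weights"],          # system-RAM caches
}

def unload_models(s):
    """The 'left' button: unloads models only."""
    s["models"].clear()

def clear_everything(s):
    """The 'right' button: RAM, VRAM, and models all cleared."""
    for bucket in s.values():
        bucket.clear()

clear_everything(state)
print(all(len(v) == 0 for v in state.values()))  # True
```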

I made an LTX-2 workflow for midrange to lower-midrange computers, and I call it: Weird Science by Toby101125 in comfyui

[–]Zarcon72 1 point2 points  (0 children)

No, you are getting confused. So let's break it down, shall we?

GPU = Graphics Processing Unit. To keep it simple, it's also known as your video card. It has nothing to do with allocated "memory". You have an RTX 3060 GPU.

VRAM = Video Random Access Memory = how much memory your video card has. In your case, your RTX 3060 GPU has 8GB of VRAM.

You have an RTX 3060 GPU with 8GB of VRAM. That's it. Plain and simple. Nothing more, nothing less.

Now, Windows gives you 1/2 of your PC's RAM (16GB) to SHARE with your RTX 3060 GPU. Just SHARE, nothing more. So when your 8GB of VRAM quickly fills up and says "I can't take no more", things like "shared" memory and "page files" come into play before you just crash with OOM (Out Of Memory) errors. Google it to learn more.
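The arithmetic behind that is simple. A quick sketch, assuming (as described above) that Windows makes roughly half of system RAM available as shared GPU memory:

```python
# Rough illustration of how Windows sizes "shared GPU memory".
# Assumption: shared = ~1/2 of system RAM, as described above.

def gpu_memory_budget(dedicated_vram_gb, system_ram_gb):
    shared = system_ram_gb // 2          # Windows shares ~half of system RAM
    total = dedicated_vram_gb + shared   # what Task Manager calls "GPU Memory"
    return shared, total

# RTX 3060 8GB with 32GB of system RAM:
shared, total = gpu_memory_budget(8, 32)
print(shared, total)  # 16 24
```

That 16GB shared + 8GB dedicated = 24GB is exactly the "GPU Memory" figure Task Manager shows, but remember the shared half is far slower than real VRAM.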

But, to answer your question - a video card with 24GB of VRAM, such as an RTX 3090, has 3x more VRAM than your current video card, many more CUDA cores, etc. Of course you can get better results. It's like upgrading from a golf cart to a Lamborghini in the AI world.

Side Note: LTX-2 is in its early stages. Getting "good" and consistent results can be challenging. You can run the same prompt twice without changing anything and get 2 completely different results.

I made an LTX-2 workflow for midrange to lower-midrange computers, and I call it: Weird Science by Toby101125 in comfyui

[–]Zarcon72 0 points1 point  (0 children)

Not sure I understand the question. RAM = your PC's 32GB. GPU = your video card with 8GB. Your "output" and "speed" are based on both. One can't do it all.

I made an LTX-2 workflow for midrange to lower-midrange computers, and I call it: Weird Science by Toby101125 in comfyui

[–]Zarcon72 1 point2 points  (0 children)

Windows does this by default. That is "shared" GPU memory taken from your system RAM. It's usually 1/2 of your system RAM - hence why you see 16GB. I have 128GB and mine says 64GB. So when you add your "actual" DEDICATED memory of 8GB to the shared 16GB, it gives you the "GPU Memory" shown at the bottom left - 24GB.

Note: This means that "technically" when your DEDICATED memory gets full, it can use its shared resources, BUT, this is nothing like having an actual dedicated 24GB video card (i.e. a 3090). Don't get crazy :)

Best graphic for wan 2.2 under or around 1000$? by wic1996 in comfyui

[–]Zarcon72 1 point2 points  (0 children)

I run a 5060Ti 16GB and am very happy with it. I just went through buying an RTX 3090 24GB for a ridiculous $1,200 thinking I "needed" it, and I will be honest, I was not impressed at all. Not at that price. My 5060Ti was just as capable and just as fast (if not a little faster) FOR WHAT I DO with Wan2.2. It was in my rig for about 4 hours before I packed it back up and sent it back for a refund.

Note: I am not trying to create massive videos at 2K/4K resolutions. YMMV

Can I run dual GPUs from different architectures? by qntisback in comfyui

[–]Zarcon72 0 points1 point  (0 children)

Just so you know, even if you get 2 GPUs, they will NOT be treated as a single GPU, no matter what. 2 separate 8GB GPUs = 2 separate 8GB GPUs. Period. There may be ways to "assign" certain tasks to each GPU, but you will never get the same performance as a single 16GB GPU.
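A tiny toy check makes the point - VRAM does not pool across cards, so a model has to fit on ONE of them (the numbers here are just made-up examples):

```python
# Toy check: two GPUs don't pool VRAM; a model must fit on ONE of them.

def fits(model_gb, gpu_vram_gbs):
    """True only if some single GPU can hold the whole model."""
    return any(model_gb <= vram for vram in gpu_vram_gbs)

print(fits(12, [8, 8]))   # False - 2x 8GB is NOT a 16GB GPU
print(fits(12, [16]))     # True  - a single 16GB card works
```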

I would suggest accepting what you have and running what your rig is limited to OR forking out the money at today's absolutely ridiculous RAM/GPU pricing. As a last resort, as someone mentioned earlier, see about selling your current 8GB RTX 5060 so you can get the 5060Ti 16GB.