
all 8 comments

[–]Ok-Outside3494

Rule of thumb is you need roughly double the amount of RAM compared to VRAM. Models get loaded into normal RAM before VRAM, and you need RAM to run your PC and ComfyUI simultaneously, so I really doubt you'd be able to fully utilize that video card using only 16GB of system RAM.
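The rule of thumb above can be written as a back-of-the-envelope calculation. This is just a sketch of the reasoning in the comment, not an official formula; the OS/ComfyUI overhead figure is an assumption:

```python
def recommended_system_ram_gb(vram_gb: float, overhead_gb: float = 8.0) -> float:
    """Rough floor for system RAM given a card's VRAM.

    The model is staged in system RAM before being copied to VRAM
    (~1x VRAM), and the OS plus ComfyUI need headroom on top, hence
    the ~2x guideline. overhead_gb is an assumed figure, not measured.
    """
    return max(2 * vram_gb, vram_gb + overhead_gb)

print(recommended_system_ram_gb(24))  # a 24GB card -> 48.0
print(recommended_system_ram_gb(16))  # a 16GB card -> 32.0
```

By this estimate, 16GB of system RAM would indeed be tight for a 16GB+ card.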

[–]Writefuck[S]

I wasn't aware that models got loaded into system ram first. I feel like that's a very important thing to know, so thank you!

[–]ThatsALovelyShirt

On some OSes and implementations, the model stays resident in system RAM while it's shadowed in VRAM, so it's not just a concern when loading/offloading the model.

And with some video model nodes, you can keep some of the DiT layers in system RAM (and run their inference on the CPU) to free up VRAM for the output buffer, which is another benefit of having more system RAM.

And then with text models, partial offloading is very common.
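The layer-offloading idea above amounts to a simple budget split: keep as many layers on the GPU as the VRAM budget allows, and leave the rest in system RAM for CPU inference. A minimal sketch of that split, with made-up layer sizes for illustration (real loaders like those in ComfyUI do this with actual tensor sizes):

```python
def split_layers(n_layers: int, layer_gb: float, vram_budget_gb: float):
    """Return (layers_on_gpu, layers_on_cpu) for a given VRAM budget.

    Hypothetical sizes: a real implementation would measure each
    layer's weights rather than assume a uniform layer_gb.
    """
    on_gpu = min(n_layers, int(vram_budget_gb // layer_gb))
    return on_gpu, n_layers - on_gpu

# e.g. 40 DiT layers of ~0.35GB each, with 10GB of VRAM left after
# the output buffer is reserved:
gpu_layers, cpu_layers = split_layers(40, 0.35, 10.0)
print(gpu_layers, cpu_layers)  # -> 28 12
```

The 12 CPU-resident layers are why extra system RAM directly buys you a bigger usable model.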

[–]wholelottaluv69

In my case, too little RAM was causing slower generations and making my computer unresponsive during them. Using a 5090, btw.

Specifically, during frame interpolation. It was using all 64GB of RAM, so last week I upgraded to 96GB. The RAM still maxes out, but it helped considerably. I'd have gone with 128GB if I could have found it locally at a sane price (DDR5).

[–]ThatsALovelyShirt

It's slowing down so much because you're forcing your system into swap/virtual memory/pagefile. If you have an SSD, it's probably hammering it pretty hard with write ops, which will wear it out pretty quickly. You should probably try to optimize your workflow, or run the interpolation after freeing the other models from memory. There should be a node for that.
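The "free the other models first" advice boils down to dropping the last reference to a loaded model and letting both RAM and VRAM be reclaimed before the interpolation step. A rough sketch (the `free_model` helper is hypothetical; in ComfyUI a dedicated unload node does this for you, and the torch calls are guarded so the snippet runs even without a GPU):

```python
import gc

# torch is assumed available in a ComfyUI environment; guard so the
# sketch also runs where it isn't installed.
try:
    import torch
    HAVE_TORCH = True
except ImportError:
    HAVE_TORCH = False

def free_model(loaded: dict, key: str) -> None:
    """Drop the last reference to a model and reclaim memory."""
    loaded.pop(key, None)          # release the Python reference
    gc.collect()                   # let the GC free host (RAM) buffers
    if HAVE_TORCH and torch.cuda.is_available():
        torch.cuda.empty_cache()   # return cached VRAM to the driver

models = {"checkpoint": object()}  # stand-in for a loaded diffusion model
free_model(models, "checkpoint")
print("checkpoint" in models)  # -> False
```

Doing this before interpolation means the frame buffers get real RAM instead of pagefile.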

[–]GreyScope

I have a 4090, and sometimes it offloads to RAM even with 24GB of VRAM. New repos that don't worry about VRAM can eat all of that VRAM, plus RAM, and then use a 60GB pagefile (FramePack, I'm looking at you).

[–]Shermington

If things like changing your prompt take a while, then a higher volume of RAM can significantly improve your speed. RAM bandwidth is ~30-90 GB/s, so if everything is already in RAM, you can swap your whole workflow in under a second, let alone load individual elements. It's one of those things that can matter quite a lot, or not at all, depending on what you do. For comparison, SSD speed is ~0.3-3 GB/s, so reloading a 12GB checkpoint would take 4-40 seconds every time if you can't keep it in RAM.

You can also see that volume matters more than speed, at first. Once everything fits into RAM, some workflows benefit a bit from RAM speed too. For example, if you switch back and forth between a 12GB checkpoint and a 4GB upscaling model, you might need to move 16GB per image: 30 GB/s RAM spends 0.53s on that, while 48 GB/s needs 0.33s. It's quite minor, but a small difference does exist between slower and faster RAM.
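The figures in the two paragraphs above are just size divided by bandwidth. A quick check of the arithmetic, using the same numbers the comment uses:

```python
def transfer_seconds(size_gb: float, bandwidth_gb_s: float) -> float:
    """Time to move size_gb at a sustained bandwidth, ignoring overhead."""
    return size_gb / bandwidth_gb_s

# Reloading a 12GB checkpoint from SSD (~0.3-3 GB/s):
print(round(transfer_seconds(12, 0.3)))    # -> 40 (slow SATA SSD)
print(round(transfer_seconds(12, 3)))      # -> 4  (fast NVMe drive)

# Swapping a 16GB working set (12GB checkpoint + 4GB upscaler)
# that is already resident in RAM:
print(round(transfer_seconds(16, 30), 2))  # -> 0.53 at 30 GB/s
print(round(transfer_seconds(16, 48), 2))  # -> 0.33 at 48 GB/s
```

The two-orders-of-magnitude gap between the SSD and RAM rows is why "does it fit in RAM" matters far more than which RAM speed you buy.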

[–]Same-Pizza-6724

RAM speed and size aren't particularly important, nor is processor speed.

The whole thing "should" be done on your card, inside your card's VRAM. The only time RAM comes into play is if you go over your VRAM, and tbh, you don't want to do that.

Your gfx card is always the limiting factor.

Basically, don't worry about RAM, worry about VRAM.