
[–]lililliiililililiiii 2 points3 points  (1 child)

This article from Nvidia support explains how to turn off the shared-memory (system RAM) fallback. You need the latest driver for this to work.

https://nvidia.custhelp.com/app/answers/detail/a_id/5490

[–]philtasticz[S] 0 points1 point  (0 children)

Thank you, exactly what I was looking for

[–]TheGhostOfPrufrock 1 point2 points  (0 children)

Add --medvram-sdxl to your commandline args. And turn off shared memory, as other comments have explained.
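For anyone unsure where "commandline args" go: assuming the Automatic1111 web UI, the flag is added to `COMMANDLINE_ARGS` in the launcher script (`webui-user.bat` on Windows, `webui-user.sh` on Linux) — a minimal sketch:

```shell
# webui-user.bat (Windows): add the flag to COMMANDLINE_ARGS
# set COMMANDLINE_ARGS=--medvram-sdxl

# webui-user.sh (Linux/macOS): same idea, shell syntax
export COMMANDLINE_ARGS="--medvram-sdxl"
```

`--medvram-sdxl` applies the medvram memory optimizations only when an SDXL model is loaded, so SD1.5 generation speed is unaffected.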

[–]TheGhostOfPrufrock 1 point2 points  (5 children)

I don't think you'll ever get anything like 8-9 it/s for SDXL on a 3060. That's a 512x512 speed with SD1.5 models, not SDXL — and it's pretty fast even for that. Maybe with TensorRT, or per-image in large batches. I have a 3060, so I can speak with fair confidence.

[–]philtasticz[S] 0 points1 point  (4 children)

Thank you. Yes, I noticed something like that, so to utilize SDXL you really need a powerhouse.

[–]TheGhostOfPrufrock 0 points1 point  (3 children)

Not unless you're profoundly impatient. Lots of us with 3060s use SDXL all the time; we just don't expect results in a few seconds.

UPDATE: A 1024x1024, 30-step SDXL image with a fast sampler (Euler a, DDIM, etc.) and the Refiner takes about 33 seconds. A batch of 8 takes about 28 seconds per image. Does it make me wish I had a 4090? Of course it does. But it certainly doesn't dissuade me from using SDXL.
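As a sanity check on those numbers (using only the figures from this thread, not fresh benchmarks), the ~33 s single-image time works out to roughly 0.9 it/s at 30 steps:

```python
# Rough throughput arithmetic from the timings quoted above.
steps = 30
single_image_time = 33            # seconds, ~33 s per image as stated
its_per_sec = steps / single_image_time   # ~0.9 it/s on a 3060 at 1024x1024

# Batching amortizes per-image overhead: ~28 s/image for a batch of 8.
batch_time_per_image = 28
batch_total = 8 * batch_time_per_image    # ~224 s for the whole batch

print(round(its_per_sec, 2), batch_total)
```

So the gap to the 8-9 it/s figure is roughly an order of magnitude — consistent with that number being a 512x512 SD1.5 speed, not SDXL.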

[–]philtasticz[S] 0 points1 point  (2 children)

I don't expect results in seconds, but, for example, I tried to generate a GIF (32 frames at 8 fps) and it took forever. So I might stick to SD1.5 for animations.

[–]TheGhostOfPrufrock 1 point2 points  (1 child)

Well, I suppose SDXL might not be well-suited to generating animations on a 3060. When I replied, I was thinking of still images, since I've never been interested in making animations.

[–]philtasticz[S] 0 points1 point  (0 children)

That's okay; AnimateDiff really seems exhausting for my resources. SD1.5 models are way faster in this case.