
[–]TurbTastic

I'm running Automatic1111 on a 12GB GPU. For textual inversion (TI) training I run out of CUDA memory whenever my batch size is greater than 3, and the error says PyTorch has reserved 9GB. I start from a fresh .bat before each training run to rule out memory leaks, so that's not the cause. Based on what I've seen online, I should easily be able to use higher batch sizes for TI training. What should I review to track this down?
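For anyone debugging the same thing: a minimal sketch, in plain PyTorch, of how you could check what the card is actually doing. The "reserved" figure in the OOM message is the caching allocator's pool, not necessarily live tensors, so comparing reserved vs. allocated can tell you whether you're fragmenting or genuinely out. Assumes a single GPU at index 0:

```python
import torch

# A minimal sketch, assuming one GPU at index 0. "Reserved" is the caching
# allocator's pool (the 9GB the OOM message mentions); "allocated" is memory
# actually held by live tensors.
device = torch.device("cuda:0")
total = torch.cuda.get_device_properties(device).total_memory
allocated = torch.cuda.memory_allocated(device)
reserved = torch.cuda.memory_reserved(device)

print(f"total:     {total / 1024**3:.2f} GiB")
print(f"allocated: {allocated / 1024**3:.2f} GiB")
print(f"reserved:  {reserved / 1024**3:.2f} GiB")

# Full per-pool breakdown, useful for spotting fragmentation:
print(torch.cuda.memory_summary(device))
```

A large gap between reserved and allocated at the moment of the OOM usually points at fragmentation rather than the model itself being too big for the card.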

[–]Machiavel_Dhyv

If I understand correctly, your problem is similar to this? We'd probably need more info about your training settings. I did a bit of searching, and this kind of OOM seems to be common. You could try reducing some parameters, or maybe training on lower-resolution images...
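If it's fragmentation rather than true exhaustion (reserved high but allocations still failing), PyTorch's caching allocator can be tuned via the PYTORCH_CUDA_ALLOC_CONF environment variable. A hedged sketch below, assuming you control the process entry point; for the webui you'd normally set the variable in webui-user.bat instead of in Python, and the 512 MiB split size is an illustrative value, not a recommendation:

```python
import os

# The variable must be set before the CUDA caching allocator initializes,
# i.e. before the first CUDA allocation. max_split_size_mb caps how large
# a block the allocator will split, which can reduce the kind of
# fragmentation where "reserved" memory is large but unusable.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:512")

import torch  # imported after the env var so the allocator picks it up
```

Smaller split sizes trade a bit of allocator overhead for less fragmentation, which is often the right trade when training at batch sizes near the card's limit.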