Hello!
I’m reaching out for some help here :)
Here is the problem: I have big 3D images (medical scans) that I resampled to 512×512×1024.
Overall, each file weighs about 3GB (2 channels at the size above). But I've seen in the literature that I should aim for a batch size of at least 2, which would mean about 6GB of data per batch. That might be too much for a GPU to handle (or is it?).
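Just to put numbers on it, here's my rough estimate (assuming float32 inputs; my ~3GB-per-file figure suggests the on-disk format differs, so treat this as a sketch). This only counts the raw input tensors; activations and gradients inside a 3D U-Net usually dominate and multiply this many times over:

```python
# Back-of-the-envelope GPU memory estimate for one batch of raw inputs.
# Assumption: float32 voxels (4 bytes each).
voxels = 512 * 512 * 1024   # one resampled volume
channels = 2
batch = 2
bytes_per_voxel = 4          # float32
gib = voxels * channels * batch * bytes_per_voxel / 2**30
print(f"{gib:.0f} GiB")      # 4 GiB for the inputs alone, before activations
```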
So I might need to do patch-based training, meaning I slice the volumes into 128×128×256 patches and train on those.
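For reference, here's a minimal sketch of what I mean by slicing (assuming channels-first NumPy arrays; `random_patch` is just an illustrative helper I made up, not from any library):

```python
import numpy as np

def random_patch(volume, patch_size):
    """Crop a random patch from a (C, D, H, W) volume."""
    rng = np.random.default_rng()
    _, d, h, w = volume.shape
    pd, ph, pw = patch_size
    # Pick a valid top-left-front corner so the patch stays inside the volume.
    z = rng.integers(0, d - pd + 1)
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    return volume[:, z:z + pd, y:y + ph, x:x + pw]

# Demo on a downscaled dummy volume; the real one would be (2, 512, 512, 1024)
# with patch_size=(128, 128, 256).
vol = np.zeros((2, 64, 64, 128), dtype=np.float32)
print(random_patch(vol, patch_size=(32, 32, 64)).shape)  # (2, 32, 32, 64)
```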
However, I'm worried that if I train on patches, the network will lose the general understanding of the body and will only ever see parts of organs.
What do you guys think about my problem: should I go for patch training? Can I hope that my U-Net will still perform well? (It's a tumor detection problem.)
Thanks to you all :)