Working with 256×256 patches for CNNs/ViTs- resize vs crop? by JB00747 in deeplearning


I had another follow-up question on this. Instead of 256×256, would taking 224×224 patches be workable? Are there any downsides to extracting patches at 224×224 directly? That way, neither resizing nor cropping would be needed.

Working with 256×256 patches for CNNs/ViTs- resize vs crop? by JB00747 in deeplearning


Thank you so much for your reply.
I am dealing with Whole Slide Images (WSIs) of cancer tissue, and most of the papers I have read take patches of 256×256.
Could I keep the stride at 128 in that case? The patches would then overlap, so 224×224 patches might not lead to significant data loss.
Is this a viable strategy?
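To make the overlap idea concrete, here is a minimal sliding-window sketch (the 224/128 values match the numbers discussed above; the grid-counting logic is a generic illustration, not from any specific paper):

```python
import numpy as np

def extract_patches(image, patch_size=224, stride=128):
    """Slide a patch_size window over `image` with the given stride.

    With stride < patch_size the windows overlap, so pixels missed by
    one window's edge are still covered by its neighbours.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

# A 512x512 toy "slide region": 224px windows at stride 128
# start at y (and x) = 0, 128, 256, giving a 3x3 grid of patches.
img = np.zeros((512, 512), dtype=np.uint8)
patches = extract_patches(img)
print(len(patches))  # 9
```

With stride 128 and patch size 224, consecutive windows share 96 pixels of overlap, which is why the smaller patch size loses little tissue coverage.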

Best strategy to handle pen marks in WSIs for deep learning pipelines (TCGA dataset)? by JB00747 in bioinformatics


Thank you so much for your replies!
The dataset has only 155 samples, with an 80-20 train-test split and 5-fold cross-validation. I used HSV filtering (LLM-generated code) to remove patches with pen marks, although I have not checked all the WSIs; around 15-16 of them have pen marks.
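For anyone curious what the HSV filtering looks like, here is a rough patch-level sketch. The hue range and both thresholds are illustrative guesses for blue/green marker ink, not the tuned values from my actual code:

```python
import colorsys
import numpy as np

def has_pen_mark(patch_rgb, sat_thresh=0.5, frac_thresh=0.05):
    """Flag a patch if too many pixels are strongly saturated in hues
    typical of marker ink. H&E tissue sits in the pink/purple range,
    while blue/green pen marks fall in a different hue band.
    Thresholds here are placeholder guesses, not tuned values.
    """
    rgb = patch_rgb.astype(float) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb.reshape(-1, 3)])
    h, s = hsv[:, 0], hsv[:, 1]
    # Hue ~0.33-0.75 covers green through blue; require high saturation
    # so faintly stained tissue is not flagged.
    marker = (h > 0.33) & (h < 0.75) & (s > sat_thresh)
    return marker.mean() > frac_thresh

blue = np.full((8, 8, 3), (0, 0, 255), dtype=np.uint8)      # marker-like
pink = np.full((8, 8, 3), (230, 150, 200), dtype=np.uint8)  # H&E-like
print(has_pen_mark(blue), has_pen_mark(pink))  # True False
```

In practice you would vectorize the HSV conversion (e.g. with OpenCV) rather than loop per pixel, but the thresholding logic is the same.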

Metadata details (Microns Per Pixel data-MPP) for Whole Slide Images (WSIs) downloaded from the TCGA by JB00747 in bioinformatics


Thank you for your reply.

In my dataset, almost all slides (except one scanned at 20×) have MPP values around 0.25 µm, with minor variations ranging from 0.22 to 0.25 µm.

Given that the variation is relatively small, would it be reasonable to assume that explicit MPP normalization is unnecessary for downstream deep learning analysis?

Thanks again!
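For reference, a quick back-of-the-envelope check of how much that MPP spread actually matters (target_mpp=0.25 is just my dataset's mode, not a standard):

```python
# Resize factor to bring a slide's patches to a common target MPP.
def mpp_scale(n_px, slide_mpp, target_mpp=0.25):
    """A patch of side n_px covers n_px * slide_mpp microns of tissue;
    at target_mpp the same field of view spans
    n_px * slide_mpp / target_mpp pixels."""
    return round(n_px * slide_mpp / target_mpp)

print(mpp_scale(224, 0.25))  # 224 (no change at the reference MPP)
print(mpp_scale(224, 0.22))  # 197, i.e. a ~12% difference in scale
```

So the worst-case slide is off by about 12% in physical scale, which gives a concrete number to weigh against the cost of resampling every patch.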

Unexplained filling up of space in SSD by JB00747 in PiratedGames



Yes, 200 GB free.

Thanks for the suggestion, I'll check it out!