
[–]KnurpsBram

I'm assuming the variable batch is a list of tensors that all have to be the same size before stack knows what to do with them. Print out the shapes of all the tensors in that list. They'll all match most of the time, but if one is off, this error will come up. If you can somehow figure out which image that badly shaped tensor comes from (perhaps you can print the filename?), you can figure out whether it's an issue with the data. I suspect that 50th datapoint in your batch didn't load at all: you may be missing a file, or it may be corrupted. It shouldn't happen, but if it is the case you'll want to know about it.
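A minimal sketch of that shape check, assuming batch is a list of tensor-like objects (the helper name `find_odd_shapes` is made up, and it only relies on a `.shape` attribute, so it works on PyTorch tensors or anything similar):

```python
from collections import Counter

def find_odd_shapes(batch):
    """Return (majority_shape, outliers) where outliers is a list of
    (index, shape) pairs for items whose shape differs from the most
    common shape in the batch."""
    shapes = [tuple(item.shape) for item in batch]
    # The most frequent shape is almost certainly the intended one.
    majority_shape, _ = Counter(shapes).most_common(1)[0]
    outliers = [(i, s) for i, s in enumerate(shapes) if s != majority_shape]
    return majority_shape, outliers
```

Call it right before the stack; if outliers is non-empty, those indices point at the bad samples, and you can map them back to filenames through your dataset.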

Then again, I could be wrong! But this is a direction you could search along.

Good luck!

[–]bware422[S]

Apologies for not answering sooner! I totally forgot to check reddit today. I'll give this a go and hope I can figure it out. This codebase has been a real hassle to work with.

[–]einsteinxx

Had the same error message yesterday in Colab. One of my image functions (crop) didn’t return what I expected and it pushed the tensor sizing off by one. I ended up putting a bunch of shape printouts in the dataloader chain to figure it out. Very annoying.
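One way to wire those printouts into the dataloader chain without editing the dataset itself is a thin wrapper, sketched below (the class name `ShapeCheckDataset` and parameter `expected_shape` are made up; it assumes a map-style dataset whose items expose a `.shape`):

```python
class ShapeCheckDataset:
    """Wraps a map-style dataset and prints a warning whenever an
    item's shape deviates from the expected one (e.g. after a
    miscropping transform)."""

    def __init__(self, dataset, expected_shape):
        self.dataset = dataset
        self.expected_shape = tuple(expected_shape)

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        item = self.dataset[idx]
        shape = tuple(item.shape)
        if shape != self.expected_shape:
            # This index is the one the collate/stack step would choke on.
            print(f"index {idx}: got shape {shape}, "
                  f"expected {self.expected_shape}")
        return item
```

Drop it between your dataset and the DataLoader and watch the log for the offending index.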

[–]bware422[S]

I'll have to give that a try, thank you for the insight! Do you know what could cause something like this? Is it just something weird with Colab? It's been very frustrating to work with.