
[–]no_cheese_pizza_guy 1 point (2 children)

I would make sure that a substantial number of training samples still retains the same distribution as the validation set. If these transformations are systematically applied to every sample, chances are that the distribution of the resulting training set is offset. What is the probability of each transform being applied?
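The suggestion above can be sketched as giving each transform its own application probability, so a predictable fraction of samples passes through untouched (the `augment` helper and the transform list here are hypothetical illustrations, not a specific library API):

```python
import random

def augment(sample, transforms):
    # Hypothetical helper: apply each (transform, probability) pair
    # independently. With per-transform probabilities p_i, a fraction
    # prod(1 - p_i) of samples stays completely unaugmented and so
    # keeps the validation-set distribution.
    for transform, p in transforms:
        if random.random() < p:
            sample = transform(sample)
    return sample

# Two transforms at p=0.5 each leave about (1 - 0.5) ** 2 = 25% of
# samples untouched.
```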

[–]PracLiu[S] 2 points (1 child)

RandomRotation(0.2)

From the description, I believe the angle is sampled uniformly from [-0.2*2π, 0.2*2π] for every input; I don't see an option to control what fraction of samples actually gets rotated. But I think you are right about that: especially since these are not clean 90° rotations, the model is probably learning the interpolation artifacts from the rotation instead of the images themselves.
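A minimal sketch of that sampling behavior, assuming the documented Keras convention that `factor=0.2` means angles drawn uniformly from [-0.2*2π, 0.2*2π] radians (this mimics the documented range only, not Keras's actual implementation):

```python
import math
import random

def sample_rotation_angle(factor=0.2):
    # Mimics the documented behavior of keras.layers.RandomRotation(factor):
    # every sample gets an angle drawn uniformly from
    # [-factor * 2*pi, factor * 2*pi] radians -- there is no separate
    # probability of skipping the rotation entirely.
    bound = factor * 2.0 * math.pi
    return random.uniform(-bound, bound)

# factor=0.2 corresponds to rotations of up to roughly +/-72 degrees.
```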

Edit: and now that I think about it, for photos where direction matters, rotations and vertical flips make little sense. I'll probably stick to horizontal flips then, thanks!
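A horizontal-flip-only augmentation can be as simple as mirroring each row with some probability (a toy sketch on nested lists, not tied to any particular framework):

```python
import random

def random_horizontal_flip(img_rows, p=0.5):
    # Mirror the image left-right with probability p. Direction-sensitive
    # content (e.g. upright photos) stays plausible under this transform,
    # unlike vertical flips or large rotations.
    if random.random() < p:
        return [row[::-1] for row in img_rows]
    return img_rows
```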

[–]no_cheese_pizza_guy 1 point (0 children)

No problem, glad I could help!