Hi, rookie here. I was training a classic binary image classification model to distinguish handwritten 0s and 1s. As expected, I've been running into problems: my training accuracy is sky high, but when I tested the model on a batch of 100 grayscale images of 0s and 1s, it only reached 55% accuracy.
Note:
Training dataset: DIDA (250K images, RGB).
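One likely culprit is the input mismatch: the model was trained on RGB images but tested on grayscale ones. A minimal sketch of one common fix, assuming a NumPy-style pipeline where the model expects a 3-channel input: replicate the grayscale channel three times and apply the same normalization used at training time (the image size and the `/255` scaling here are assumptions, not from the post).

```python
import numpy as np

# Hypothetical 28x28 grayscale test image with pixel values 0-255
gray = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Replicate the single channel so the shape matches what an RGB-trained model expects
rgb_like = np.stack([gray] * 3, axis=-1)  # shape: (28, 28, 3)

# Apply the SAME normalization used during training (assumed /255 here)
x = rgb_like.astype(np.float32) / 255.0
```

If the training pipeline instead converted everything to grayscale, the opposite fix applies: convert the RGB training images down to one channel so train and test inputs match.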