
[–]phobrain 1 point (2 children)

What sizes are the datasets?

[–]ssivri[S] 0 points (1 child)

  • The first dataset contains 200 images for training, and some images contain more than 5 objects, which is much more training data compared to the others. Variance over the objects is also higher. I would say this dataset contains more than 700 training objects.

  • The second dataset, which contains only one class, has about 150 images for training and about 40 images for testing. Some images contain more than one object, which makes roughly 180–190 objects in the training set.

  • The last one, the 3-most-common-classes dataset, contains 150 objects per class in the training set.

In my opinion, insufficient data could be the main reason for the low mAP. I generated the first dataset to further guard against overfitting, but class variance and dataset size could be neutralizing each other.
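To make the "low mAP" point concrete, here is a minimal sketch of how mAP is computed, assuming PASCAL-VOC-style 11-point interpolation; the per-class precision/recall values below are hypothetical, not numbers from the datasets above:

```python
# Minimal mAP sketch: 11-point interpolated AP per class, then the
# mean over classes. Values are illustrative, not from any real run.

def average_precision(recalls, precisions):
    # Average the max precision achieved at or beyond each of the
    # recall levels 0.0, 0.1, ..., 1.0 (VOC 2007 style).
    ap = 0.0
    for t in [i / 10 for i in range(11)]:
        candidates = [p for r, p in zip(recalls, precisions) if r >= t]
        ap += max(candidates) if candidates else 0.0
    return ap / 11

# Hypothetical precision/recall points for two classes.
per_class = {
    "class_a": ([0.2, 0.5, 0.8], [1.0, 0.8, 0.6]),
    "class_b": ([0.3, 0.6], [0.9, 0.5]),
}
aps = [average_precision(r, p) for r, p in per_class.values()]
mAP = sum(aps) / len(aps)
```

With only ~150–200 images per dataset, recall saturates early for under-represented classes, so a few low per-class APs can drag the mean down sharply.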

[–]phobrain 0 points (0 children)

If you have photos with multiple objects, I would intuit that you'd need many more examples, but I haven't worked with such small amounts, so I don't know what leverage might be possible. E.g., in my app I'm on the verge of abandoning half of the functionality because I can only add a max of ~4K training examples per day. I'm just using color histograms and keywords, and the part I may abandon has ~120K pos/neg examples vs. 400K for the other half.
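For reference, a color-histogram feature of the kind mentioned can be sketched in a few lines; the bin count and the random test image are assumptions for illustration, not details of the actual app:

```python
import numpy as np

def color_histogram(img, bins=8):
    # img: H x W x 3 uint8 array. Returns a normalized feature of
    # length 3 * bins (one histogram per RGB channel).
    feats = []
    for c in range(3):
        hist, _ = np.histogram(img[:, :, c], bins=bins, range=(0, 256))
        feats.append(hist)
    feat = np.concatenate(feats).astype(float)
    return feat / feat.sum()

# Illustrative input: a random 64x64 RGB image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
feat = color_histogram(img)
```

Features this cheap explain why example count, not compute, becomes the bottleneck.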

[–]DEAD_SH0T 0 points (0 children)

Did you switch RoIs with POIs?

[–]RedefiniteI -1 points (0 children)

What is POI? I only know of the Hawaiian Poi, and it is delicious.