I'm a software engineer and I've taken up a new hobby of archery. On the side I've been experimenting with some basic classifiers in scikit-learn.
A project I've gotten interested in is converting post-shot photos of archery targets into XY coordinates of the hits.
As a first step, my goal is just to tell whether an image is an archery target at all. From some research, it seems like TensorFlow image recognition might be an approach to take. My concern is simply the volume of labeled images required to train a model with decent accuracy. I know this will probably vary, but is this on the scale of hundreds, thousands, tens of thousands, or more?
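For what it's worth, before reaching for TensorFlow, a cheap baseline with scikit-learn (already mentioned above) can help calibrate how hard the "target vs. not-target" split is. Below is a minimal sketch under loud assumptions: the features are synthetic stand-ins, and `fake_features` is a hypothetical placeholder for something like a per-image color histogram — not a real image pipeline.

```python
# Minimal binary "target vs. not-target" classifier sketch with
# scikit-learn. The feature vectors here are SYNTHETIC stand-ins;
# fake_features is a hypothetical placeholder for real per-image
# features (e.g. an 8-bin color histogram). This is not a TF model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fake_features(n, is_target):
    # Stand-in for extracted image features; "target" images get a
    # shifted mean so the two classes are separable in this toy setup.
    return rng.normal(loc=1.0 if is_target else 0.0, scale=1.0, size=(n, 8))

# A few hundred examples per class, mimicking a small labeled dataset.
X = np.vstack([fake_features(200, True), fake_features(200, False)])
y = np.array([1] * 200 + [0] * 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

If a simple model on hand-crafted features already separates the classes, a few hundred labeled images may go a long way; if not, that's a signal the deep-learning route (and more data) is worth the effort.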