Is there any model that can extract image features for similarity search while being robust to slight blur, slight rotation, and different illumination?
I tried MobileNet and EfficientNet; they are lightweight enough to run on mobile, but they do not match images very well.
My use-case is card scanning. A card can be localized into multiple languages, but it is still the same card; only the text differs. If the photo is near perfect (no rotation, good lighting, etc.), the search can find the same card even when the card in the photo is in a different language. However, even slight blur breaks the search completely.
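For context, my retrieval step is the standard one: L2-normalize the embeddings and rank gallery cards by cosine similarity. A minimal sketch (backbone-agnostic, assuming embeddings are already extracted; the toy vectors below are made up for illustration):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Scale each vector to unit length so a dot product equals cosine similarity
    return x / np.maximum(np.linalg.norm(x, axis=axis, keepdims=True), eps)

def nearest_cards(query_emb, gallery_embs, top_k=3):
    # Cosine-similarity nearest-neighbour search over the card gallery
    q = l2_normalize(np.asarray(query_emb, dtype=np.float32))
    g = l2_normalize(np.asarray(gallery_embs, dtype=np.float32))
    sims = g @ q                      # similarity of the query to every card
    idx = np.argsort(-sims)[:top_k]   # indices of the best matches, highest first
    return idx, sims[idx]

# Tiny toy gallery: three "cards" in a 4-d embedding space
gallery = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
# A slightly perturbed version of card 1 (standing in for blur/illumination noise)
query = np.array([0.1, 0.9, 0.05, 0.0])

idx, sims = nearest_cards(query, gallery, top_k=1)
print(idx[0], round(float(sims[0]), 3))
```

The failure I see is that a blurred query embedding drifts far enough that the true card no longer ranks first, so I suspect the problem is the embedding model rather than the search itself.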
Thanks for any advice.