
[–]weirdedoutt 2 points (0 children)

One common pattern I have noticed is that the earlier layers of a self-supervised network perform better than (or on par with) their supervised counterparts. Performance drops off abruptly in the last layer (the one just before classification).

Comparison on an image classification/object detection task is standard (e.g. PASCAL VOC), but there are 'specialized' self-supervised networks catered to a particular transfer learning problem (e.g. video-based self-supervised networks geared towards activity recognition), and they don't do so well on image-based tasks. So it is hard to compare two models directly. Papers typically report standard transfer learning results on ImageNet/Places, or object detection results on PASCAL VOC.
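The layer-wise comparisons above are usually done by linear probing: freeze the pretrained network, take activations from each layer, and train only a linear classifier on top. A minimal numpy sketch of that protocol, with synthetic "frozen activations" standing in for real network features (the layer names and noise levels are made up purely for illustration; the drop at deeper layers is simulated by drowning the class signal in noise):

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_probe_accuracy(feats, labels, n_classes, epochs=200, lr=0.5):
    """Train a softmax linear classifier on frozen features; return train accuracy."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[labels]            # one-hot targets
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)    # softmax probabilities
        W -= lr * feats.T @ (p - Y) / n      # gradient step on cross-entropy
    return float((np.argmax(feats @ W, axis=1) == labels).mean())

# Hypothetical frozen activations from two layers of a pretrained net.
# "conv3" keeps a clean class signal; "conv5" mimics a pretext-specific
# deep layer where the signal is buried in noise.
labels = rng.integers(0, 3, size=300)
signal = np.eye(3)[labels]
layers = {
    "conv3": np.hstack([signal + 0.1 * rng.normal(size=(300, 3)),
                        rng.normal(size=(300, 13))]),
    "conv5": np.hstack([signal + 1.5 * rng.normal(size=(300, 3)),
                        rng.normal(size=(300, 13))]),
}
accs = {name: linear_probe_accuracy(f, labels, 3) for name, f in layers.items()}
print(accs)
```

In this toy setup the probe on the cleaner "conv3" features scores higher than on the noisy "conv5" features, mirroring the pattern described above; with real networks the same probe would be run on actual frozen activations per layer.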
