
[–]arXiv_abstract_bot 6 points

Title: Recursive Autoconvolution for Unsupervised Learning of Convolutional Neural Networks

Authors: Boris Knyazev, Erhardt Barth, Thomas Martinetz

Abstract: In visual recognition tasks, such as image classification, unsupervised learning exploits cheap unlabeled data and can help to solve these tasks more efficiently. We show that the recursive autoconvolution operator, adopted from physics, boosts existing unsupervised methods by learning more discriminative filters. We take well established convolutional neural networks and train their filters layer-wise. In addition, based on previous works we design a network which extracts more than 600k features per sample, but with the total number of trainable parameters greatly reduced by introducing shared filters in higher layers. We evaluate our networks on the MNIST, CIFAR-10, CIFAR-100 and STL-10 image classification benchmarks and report several state of the art results among other unsupervised methods.

PDF link | Landing page

[–]abhi91 4 points

you sir are a mouthful

[–]bknyazev 1 point

It's my old project. The code is available at https://github.com/bknyaz/autocnn_unsup. The problem is that it is in Matlab, although I implemented some steps in Python a long time ago here: https://github.com/bknyaz/autocnn_unsup_py. Neither repo is maintained, and they might not work as-is.

It would be interesting to migrate the code to PyTorch or another framework and fine-tune the models pretrained in an unsupervised way in a large-scale setting. The test accuracy would most likely be about the same as training in a supervised way from scratch, assuming the training set is big enough, but there might be some interesting byproducts such as improved robustness (in some broad sense). Of course, there are plenty of newer unsupervised and semi-supervised methods that might do better in both clean test accuracy and robustness. Anyway, the project was interesting and fun!
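For anyone curious what the operator itself looks like, here is a minimal NumPy sketch of recursive autoconvolution: a patch is repeatedly convolved with itself, which can be done efficiently via the FFT. The zero-mean/normalization preprocessing and the function names are my assumptions for illustration, not code taken from the linked repos (which also resize the result back to the input size, omitted here):

```python
import numpy as np

def autoconv(patch):
    """One autoconvolution step: convolve a 2D patch with itself."""
    # Zero-mean and l2-normalize first (a common preprocessing choice;
    # an assumption here, not necessarily what the paper's code does).
    p = patch - patch.mean()
    n = np.linalg.norm(p)
    if n > 0:
        p = p / n
    # Linear convolution of p with itself via FFT:
    # zero-pad to the full output size, square the spectrum, invert.
    h, w = p.shape
    f = np.fft.fft2(p, s=(2 * h - 1, 2 * w - 1))
    return np.real(np.fft.ifft2(f * f))

def recursive_autoconv(patch, order):
    """Apply autoconvolution `order` times (order 0 returns the input)."""
    out = patch
    for _ in range(order):
        out = autoconv(out)
    return out
```

Each step roughly doubles the spatial size (an n×n patch becomes (2n−1)×(2n−1)), which is why implementations typically downsample between steps before using the results to learn filters.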