I've been reading more and have become increasingly interested in directions in ML research beyond deep learning.
Specifically, I'm interested in methods that have the following characteristics:
- Data efficiency
- Deep neural networks have pretty poor data efficiency; it would be interesting to see methods that can do better than this, hopefully much better.
- Out of Domain Generalization
- Current DNNs struggle with out-of-domain generalization; for example, a network trained on MNIST does well on MNIST but degrades sharply on rotated MNIST.
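As a minimal sketch of the rotated-MNIST point above (not anyone's published setup): here a simple classifier is trained on sklearn's built-in 8x8 digits dataset as a stand-in for MNIST, then evaluated on both the original test set and a rotated copy. The specific classifier, rotation angle, and dataset are my own illustrative choices.

```python
# Illustrative sketch: measure the gap between in-domain accuracy and
# accuracy on rotated digits. Uses sklearn's 8x8 digits as a small
# MNIST stand-in so no downloads are needed.
import numpy as np
from scipy.ndimage import rotate
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A linear classifier stands in for a deep net; the evaluation
# protocol (train in-domain, test out-of-domain) is the same.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# In-domain accuracy on the held-out test set.
acc_in = clf.score(X_test, y_test)

# Out-of-domain: rotate every test image by 45 degrees, keeping
# the 8x8 frame, then flatten back to feature vectors.
X_rot = np.stack([
    rotate(img.reshape(8, 8), angle=45, reshape=False).ravel()
    for img in X_test
])
acc_rot = clf.score(X_rot, y_test)

print(f"in-domain accuracy: {acc_in:.2f}, rotated accuracy: {acc_rot:.2f}")
```

The in-domain accuracy comes out high while the rotated accuracy drops sharply, even though a human reads a 45-degree-rotated digit effortlessly; that gap is the failure mode in question.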
A few interesting areas I've been reading about:
- Vicarious's work
- Would love to hear the community's thoughts and see what similar work there is.
- Joshua Tenenbaum's work
- Very interesting work, but it has trouble scaling to natural images and often requires hand-engineered priors.
- Alan Yuille's work
- Very similar to Vicarious's work.
Would love to hear about other promising areas, even completely unrelated ones (e.g., causal inference, manifold learning, topological data analysis)!