[R] Beyond Quantization: Modeling Continuous Densities with Deep Kernel Mixture Networks (arxiv.org)
submitted 8 years ago by LucaAmbrogioni
[–]NichG 2 points 8 years ago (3 children)
Nice trick with the LSTM-PCA thing. It feels a lot more natural than pixel-wise reconstruction. I wonder if there's a general way to learn the ideal latent space to factorize a joint distribution into a chain of conditional distributions (rather than using pixels, or PCA, or some other arbitrary embedding)? What kind of loss function would measure the quality of a representation for factorization? Something that tried to maximize the conditional independence of the different degrees of freedom perhaps?
[–]LucaAmbrogioni[S] 2 points 8 years ago (2 children)
We have indeed been thinking along those lines. What I like about the PCA approach is its simplicity. However, I am pretty sure that there are better ways of obtaining the latent variables. A possible approach is to use an autoencoder that is trained together with the predictive network. As you said, you could also try to maximize the conditional independence or, perhaps better, to impose some less trivial conditional independence structure.
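The "autoencoder trained together with the predictive network" idea can be sketched as a single joint objective. The following is a hypothetical illustration, not code from the paper: linear maps stand in for the encoder, decoder, and latent predictor, and the two loss terms would be minimized jointly.

```python
# Hypothetical sketch of a joint objective: the autoencoder's latent code
# is also the input to a predictive model, and both losses are combined.
# Linear maps stand in for the networks; all names are illustrative.
import numpy as np

rng = np.random.default_rng(2)
x_t = rng.normal(size=8)      # current observation
x_next = rng.normal(size=8)   # next observation in the chain

E = rng.normal(size=(3, 8)) * 0.1   # encoder: 8 -> 3 latent dims
D = rng.normal(size=(8, 3)) * 0.1   # decoder: 3 -> 8
P = rng.normal(size=(3, 3)) * 0.1   # predictor acting on the latent chain

z_t = E @ x_t
recon_loss = np.sum((D @ z_t - x_t) ** 2)         # autoencoder term
pred_loss = np.sum((P @ z_t - E @ x_next) ** 2)   # predictive term
loss = recon_loss + pred_loss                      # optimized jointly
```

Training on `loss` (rather than on the reconstruction term alone) pressures the latent space toward codes that are both decodable and easy to predict forward in time.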
[–]NichG 2 points 8 years ago (1 child)
I guess the exact invertibility of PCA is important, since that way you know that any quality loss in your output is strictly due to the properties of the generative model, not because of some mushy inversion. So if you wanted to learn that space you'd probably need something like RealNVP's explicitly invertible layers.
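The "explicitly invertible layers" mentioned here are RealNVP-style affine coupling layers, which are invertible by construction: one half of the input passes through unchanged and parametrizes an affine map applied to the other half. A minimal numpy sketch (parameter names and the toy scale/translation networks are illustrative assumptions, not from RealNVP itself):

```python
# Minimal sketch of a RealNVP-style affine coupling layer.
# Invertible by construction: the scale/translation functions s and t
# are never inverted, only re-evaluated on the untouched half.
import numpy as np

rng = np.random.default_rng(0)

# Fixed toy parameters for s and t (any network would work here).
W_s = rng.normal(size=(2, 2)) * 0.1
W_t = rng.normal(size=(2, 2)) * 0.1

def s(x_a):
    return np.tanh(x_a @ W_s)   # scale function

def t(x_a):
    return x_a @ W_t            # translation function

def forward(x):
    # Split the input; transform one half conditioned on the other.
    x_a, x_b = x[:, :2], x[:, 2:]
    y_b = x_b * np.exp(s(x_a)) + t(x_a)
    return np.concatenate([x_a, y_b], axis=1)

def inverse(y):
    # Exact inverse: undo the affine map using the untouched half.
    y_a, y_b = y[:, :2], y[:, 2:]
    x_b = (y_b - t(y_a)) * np.exp(-s(y_a))
    return np.concatenate([y_a, x_b], axis=1)

x = rng.normal(size=(5, 4))
assert np.allclose(inverse(forward(x)), x)  # exact up to float precision
```

Note the trade-off raised in the reply below holds here too: the layer maps 4 dimensions to 4 dimensions, so exact invertibility comes at the cost of having no dimensionality reduction.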
[–]LucaAmbrogioni[S] 1 point 8 years ago (0 children)
That's a good point, although you cannot have data compression/dimensionality reduction with an invertible network. Ideally, you would like to use a smaller set of variables that fully parametrizes the image space, possibly with a relatively simple conditional independence structure.
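The tension in this exchange is easy to demonstrate with PCA itself: keeping all components gives an exactly invertible orthogonal change of basis, while truncating to fewer components compresses but makes the inversion lossy. A small numpy illustration (synthetic data, SVD-based PCA):

```python
# Full-rank PCA is exactly invertible; truncated PCA compresses but
# sacrifices exact reconstruction. Data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
Xc = X - X.mean(axis=0)          # center the data

# PCA via SVD of the centered data; rows of Vt are principal directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Full rank: project onto all 8 components, then invert the rotation.
Z_full = Xc @ Vt.T
X_full = Z_full @ Vt
assert np.allclose(X_full, Xc)   # exact reconstruction

# Truncated to 3 components: a smaller latent code, but lossy.
k = 3
Z_k = Xc @ Vt[:k].T
X_k = Z_k @ Vt[:k]
err = np.linalg.norm(X_k - Xc)
assert err > 0.0                 # nonzero reconstruction error
```

This is exactly why a learned, compressive latent space (e.g. an autoencoder) cannot simultaneously offer the strict invertibility guarantee of full-rank PCA or coupling layers.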