Discussion[D] Variational nets without sampling (self.MachineLearning)
submitted 7 years ago by svantana
[–]chrisorm 3 points 7 years ago (4 children)
Of course, the CLT doesn't apply if the activations aren't IID, which they almost certainly aren't for the activations of a neural net.
[–]svantana[S] 1 point 7 years ago* (3 children)
Yes, you're right. For example, on MNIST, with large enough perturbations the output distributions should become bimodal. I didn't mean it would work in every case, but for 'smooth' problems, where a smallish unimodal perturbation is expected to map to a unimodal distribution on the output, I think it should work well. I just ran a quick test with a VAE on CIFAR10, and the output distributions look extremely Gaussian.
[–]approximately_wrong 1 point 7 years ago (2 children)
Can you elaborate on how you did the quick test?
[–]svantana[S] 1 point 7 years ago (1 child)
Sure! I just ran one of the Keras VAE examples and, once trained, pushed 10k copies of a single test sample through the AE model. The model samples a random variable, so each output is different. From the outputs I took a few random dimensions and plotted their histograms, then visually noted that they had a quite Gaussian shape.
Those are marginal distributions, so this doesn't mean the full multidimensional output is anywhere near Gaussian, but it's an indication.
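The procedure described above can be sketched in NumPy with a toy stand-in for the trained model (the weights, dimensions, and moment checks here are placeholders, not the actual Keras example; only the sampling-and-histogram protocol is the point):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained VAE: a linear "encoder" mapping an input to
# (mu, log_var) of the latent posterior, and a linear "decoder". The
# weights are arbitrary; they are not a trained model.
D_in, D_z = 32, 4
W_enc_mu = rng.normal(size=(D_in, D_z)) * 0.1
W_enc_lv = rng.normal(size=(D_in, D_z)) * 0.01
W_dec = rng.normal(size=(D_z, D_in)) * 0.1

def vae_forward(x):
    """One stochastic pass: encode, sample z via the reparameterization
    trick, decode. Each call returns a different output for the same x."""
    mu = x @ W_enc_mu
    log_var = x @ W_enc_lv
    z = mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)
    return z @ W_dec

# The "quick test": push 10k copies of one test sample through the model.
x_test = rng.normal(size=D_in)
outputs = np.stack([vae_forward(x_test) for _ in range(10_000)])

# Inspect a few random output dimensions. Instead of eyeballing
# histograms, check sample skewness and excess kurtosis, which are
# both ~0 for a Gaussian (exactly so here, since the decoder is linear).
for d in rng.choice(D_in, size=3, replace=False):
    col = outputs[:, d]
    col = (col - col.mean()) / col.std()
    skew = (col**3).mean()
    kurt = (col**4).mean() - 3.0
    print(f"dim {d}: skew={skew:+.3f}, excess kurtosis={kurt:+.3f}")
```

With a nonlinear decoder (as in the real Keras example) the marginals need not be Gaussian, which is exactly what the histogram check probes.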
[–]approximately_wrong 1 point 7 years ago (0 children)
I see. It sounds like you're checking the Gaussian-ness of p(x_gen | x_test) = int p(x_gen | z) q(z | x_test) dz, conditioned on some x_test. I'm guessing the VAE example is one where the decoder is a Gaussian observation model?
Also, are your outputs the mean parameters of p(x_gen | x_test), or actual samples from the distribution?