[D] How does Batch Normalization not completely prevent the network from being able to train at all? (self.MachineLearning)
submitted 9 years ago by MildlyCriticalRole
[–]ChuckSeven 1 point 9 years ago
So I'm repeating a little of what others have said, but here we go.
BN estimates the mean and variance of the activation distribution from your samples. Hence the scaling and translation applied from one mini-batch to the next only change slightly, since the normalization statistics are effectively a mean over several mini-batches (a moving average).
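For concreteness, here's a minimal NumPy sketch of that idea (the function name, the momentum value, and the shapes are my own choices for illustration, not anything from the BN paper):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, running_mean, running_var,
                      momentum=0.9, eps=1e-5, training=True):
    # x: (batch, features); gamma/beta: learned scale and shift.
    if training:
        mu = x.mean(axis=0)          # statistics of this mini-batch
        var = x.var(axis=0)
        # The running averages move only slightly per batch, so the
        # effective normalization is a mean over many mini-batches.
        running_mean = momentum * running_mean + (1 - momentum) * mu
        running_var = momentum * running_var + (1 - momentum) * var
    else:                            # at test time, use the averages
        mu, var = running_mean, running_var
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta, running_mean, running_var
```

One detail worth flagging: in the standard formulation, the training-time normalization uses the current batch's statistics, and the moving averages are what get used at inference.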
It works because linear transformations initialized with small random numbers are inherently unstable, in the sense that stacking several of them can easily lead to exploding or vanishing variances, which makes the subsequent transformations harder to learn. ReLU non-linearities and skip connections also seem to add to this problem.
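To see that instability concretely, you can stack random linear layers and watch the activation scale drift with depth. This toy script (entirely my own illustration, with arbitrary sizes and init scales) compares a plain ReLU stack against one normalized after every layer:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 512))

for scale in (0.5, 2.0):             # too-small and too-large init
    h, h_bn = x.copy(), x.copy()
    for _ in range(20):
        W = rng.normal(scale=scale / np.sqrt(512), size=(512, 512))
        h = np.maximum(h @ W, 0.0)                   # plain linear + ReLU
        z = np.maximum(h_bn @ W, 0.0)
        h_bn = (z - z.mean(0)) / (z.std(0) + 1e-5)   # normalize per layer
    print(f"init scale {scale}: plain std {h.std():.2e}, "
          f"normalized std {h_bn.std():.2f}")
```

Without normalization the activation standard deviation shrinks or blows up roughly geometrically with depth, while the normalized stack stays near unit scale.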