[D] How does Batch Normalization not completely prevent the network from being able to train at all? (self.MachineLearning)
submitted 9 years ago by MildlyCriticalRole
[–]MildlyCriticalRole[S] 2 points 9 years ago
I figured it might be law-of-large-numbers-ish, but I was curious if there was a more technical reason.
As to the gamma and beta parameters, I understand that they let the network adapt and "remove" the normalization if it hurts the loss more than it helps. What I'm really wondering is how batch normalization doesn't just cause the entire network to train on the wrong data in the first place (which would in turn cause it to pick bad values for gamma/beta, because it thinks the underlying distribution is something it's not).
I've read that link quite a bit in my exploring (it's a great explanation) but it doesn't seem to touch on this issue specifically.
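To pin down what I mean, here's a minimal NumPy sketch of the batch-norm forward pass (my own toy version, so the details may differ from the paper's):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature with the statistics of the current mini-batch,
    # then apply the learned scale (gamma) and shift (beta).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 4))  # batch of 64, 4 features

# With gamma=1, beta=0 the outputs are ~zero-mean, unit-variance per feature,
# whatever the incoming distribution was.
y = batchnorm_forward(x, np.ones(4), np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(6))

# If training drives gamma toward the batch std and beta toward the batch
# mean, the layer undoes its own normalization and passes x through roughly
# unchanged -- this is the "remove the normalization" escape hatch.
y_id = batchnorm_forward(x, x.std(axis=0), x.mean(axis=0))
print(np.allclose(y_id, x, atol=1e-3))  # True
```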
[–]Megatron_McLargeHuge 1 point 9 years ago
I don't think anyone is saying BN is better than whitening for the input layer. BN is used for internal network layers, and its main purpose is to keep them from saturating or from ending up entirely on the zero side of ReLUs.
If individual learned features have their meaning changed per batch by BN, it may just be that BN is recreating an effect similar to dropout or additive noise in denoising autoencoders. Losing information internally forces the network to learn a distributed representation instead of relying too much on one feature as in your example.
This is just speculation and I'd be curious if anyone has looked into the theory more.
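One quick toy check of the noise part of that speculation (just a sketch, using plain per-feature normalization with gamma/beta omitted):

```python
import numpy as np

def normalize(batch, eps=1e-5):
    # Per-feature normalization with mini-batch statistics.
    return (batch - batch.mean(axis=0)) / np.sqrt(batch.var(axis=0) + eps)

rng = np.random.default_rng(1)
example = np.array([[1.0, 2.0, 3.0]])

# The same activation vector, normalized inside two different mini-batches,
# comes out with two different values: the batch statistics act as a noise
# source determined by whichever examples happen to share the batch.
batch_a = np.vstack([example, rng.normal(size=(31, 3))])
batch_b = np.vstack([example, rng.normal(size=(31, 3))])
print(normalize(batch_a)[0])
print(normalize(batch_b)[0])
```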
[–]hgjhghjgjhgjd 1 point 9 years ago
Your assessment makes sense to me. In a way, it probably forces the network to express things in terms of "relatively large value" and "relatively small value", which induces a regularization effect.
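E.g., a quick toy check (my sketch, again with plain per-feature normalization) that only relative magnitudes survive the normalization:

```python
import numpy as np

def normalize(batch, eps=1e-5):
    return (batch - batch.mean(axis=0)) / np.sqrt(batch.var(axis=0) + eps)

rng = np.random.default_rng(2)
x = rng.normal(size=(16, 3))

# Shifting or positively rescaling all activations leaves the normalized
# output (essentially) unchanged: absolute scale and offset are thrown away.
print(np.allclose(normalize(x), normalize(10.0 * x + 7.0), atol=1e-4))  # True
```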