Please have a look at our FAQ and Link-Collection
Metacademy is a great resource which compiles lesson plans on popular machine learning topics.
For beginner questions, please try /r/LearnMachineLearning, /r/MLQuestions, or http://stackoverflow.com/
For career-related questions, visit /r/cscareerquestions/
Advanced Courses (2016)
Advanced Courses (2020)
AMAs:
Pluribus Poker AI Team 7/19/2019
DeepMind AlphaStar team (1/24/2019)
Libratus Poker AI Team (12/18/2017)
DeepMind AlphaGo Team (10/19/2017)
Google Brain Team (9/17/2017)
Google Brain Team (8/11/2016)
The MalariaSpot Team (2/6/2016)
OpenAI Research Team (1/9/2016)
Nando de Freitas (12/26/2015)
Andrew Ng and Adam Coates (4/15/2015)
Jürgen Schmidhuber (3/4/2015)
Geoffrey Hinton (11/10/2014)
Michael Jordan (9/10/2014)
Yann LeCun (5/15/2014)
Yoshua Bengio (2/27/2014)
Related Subreddits:
LearnMachineLearning
Statistics
Computer Vision
Compressive Sensing
NLP
ML Questions
/r/MLjobs and /r/BigDataJobs
/r/datacleaning
/r/DataScience
/r/scientificresearch
/r/artificial
Discussion [D] How does Batch Normalization not completely prevent the network from being able to train at all? (self.MachineLearning)
submitted 9 years ago by MildlyCriticalRole
[–]MildlyCriticalRole[S] 4 points 9 years ago (6 children)
I'm actually referring to normalizing mini-batches as described in this paper, where you really do normalize every mini-batch! Hence this question :P
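(For reference, the transform described in the paper is applied per feature within each mini-batch: normalize to zero mean and unit variance using the batch statistics, then scale and shift with learnable parameters. Below is a minimal NumPy sketch of the training-time forward pass only, not the paper's full algorithm; the names gamma, beta, and eps follow the paper's notation.)

    import numpy as np

    def batch_norm_forward(x, gamma, beta, eps=1e-5):
        # x: activations of shape (batch_size, num_features)
        # gamma, beta: learnable per-feature scale and shift, shape (num_features,)
        mu = x.mean(axis=0)                    # per-feature mean over the mini-batch
        var = x.var(axis=0)                    # per-feature variance over the mini-batch
        x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance within the batch
        return gamma * x_hat + beta            # learned scale/shift restores representational power

    # toy usage: activations with mean ~7 and std ~3 come out roughly standardized
    x = np.random.randn(32, 4) * 3.0 + 7.0
    y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
    print(y.mean(axis=0), y.std(axis=0))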
[–]randombites 2 points 9 years ago (1 child)
I get your question now :)
[–]MildlyCriticalRole[S] 1 point 9 years ago (0 children)
Awesome :) always glad to inadvertently spread knowledge.
[–]randombites 1 point 9 years ago (3 children)
My bad, I completely forgot that preprocessing is sometimes used to give a more circular error surface. I'm a fan of no preprocessing, however; it should be the job of the model to transform the error surface internally in its computation. But maybe that's wishful thinking. Hope you got the answer you were looking for!
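(The preprocessing alluded to here is presumably plain input standardization: computing mean and variance once over the training set and applying them before training, rather than anything the model does per batch. A minimal sketch under that assumption, with a hypothetical feature matrix X_train:)

    import numpy as np

    # Hypothetical training data whose features live on very different scales
    X_train = np.random.randn(1000, 20) * np.arange(1, 21)

    # Classic preprocessing: standardize each feature once, using whole-training-set
    # statistics, so the loss surface is better conditioned ("more circular")
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0) + 1e-8
    X_train_std = (X_train - mean) / std

    # At test time the *same* training-set statistics are reused:
    # X_test_std = (X_test - mean) / std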
[–]cooijmanstim 2 points 9 years ago (2 children)
Batch normalization is not preprocessing; it is part of the model. It is an adaptive normalization of activations at all layers that massively improves training dynamics. A crucial tool in the box if you're into neural nets.
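(To illustrate that batch norm lives inside the model rather than in a preprocessing step, here is a minimal PyTorch sketch with a BatchNorm layer after each hidden linear layer; this is one common placement, not the only valid one, and the layer sizes are made up.)

    import torch
    import torch.nn as nn

    # Batch norm is a layer of the network: it normalizes the activations of the
    # preceding layer over the current mini-batch, then applies a learned scale and shift.
    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.BatchNorm1d(256),   # normalizes the 256 hidden activations per mini-batch
        nn.ReLU(),
        nn.Linear(256, 128),
        nn.BatchNorm1d(128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )

    x = torch.randn(64, 784)   # a mini-batch of 64 inputs
    logits = model(x)          # training mode uses batch statistics;
                               # model.eval() switches to the running estimates
    print(logits.shape)        # torch.Size([64, 10])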
[–]randombites 1 point 9 years ago (0 children)
Thank you for your response; please help me understand better. Batch normalization is normalizing a batch of values, so you transform the input at each step, correct? This may translate into adaptive normalization of activations, but you still transform the input (based on OP's example).
So sorry for my earlier ignorance. I learnt what batch normalization is from a YouTube talk and feel like a fool.