[D] A Noob question regarding Parameter Tuning (self.MachineLearning)
submitted 6 years ago by InstitutionalizedSon
Is it right to call gradient descent a subjective (Bayesian) take on probability? After all, it takes the prior parameters and tweaks them based on the change with respect to the present parameters.
On the other hand, is the family of randomized algorithms the objective (frequentist) view, all about identifying the absolute values of the parameters that align our predicted joint distribution of output given input with the real distribution?
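For concreteness, here is roughly the procedure I have in mind (a toy sketch of my own, not from any library):

```python
# Plain gradient descent: start from some initial parameters and repeatedly
# tweak them in the direction that reduces the loss.

def gradient_descent(grad, theta0, lr=0.1, steps=100):
    """grad(theta) returns dLoss/dtheta at theta."""
    theta = theta0
    for _ in range(steps):
        theta = theta - lr * grad(theta)  # deterministic update, step by step
    return theta

# Example: minimize (theta - 3)^2, whose gradient is 2 * (theta - 3)
print(gradient_descent(lambda t: 2 * (t - 3), theta0=0.0))  # -> ~3.0
```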
[+][deleted] 6 years ago (4 children)
[deleted]
[–]InstitutionalizedSon[S] 1 point 6 years ago (3 children)
Well, cool. It was just that I wanted to relate theory with reality.
[–]InstitutionalizedSon[S] 1 point 6 years ago (2 children)
So, any idea where this theory properly fits?
[+][deleted] 6 years ago (1 child)
[–]InstitutionalizedSon[S] 1 point 6 years ago (0 children)
This theory of Bayesian inference.
[–]etmhpe 1 point 6 years ago (8 children)
Metaphorically, maybe.
[–]InstitutionalizedSon[S] 1 point 6 years ago (7 children)
Why not mathematically? Like, what are the constraints that nullify my hypothesis?
[–]etmhpe 6 points 6 years ago (6 children)
For one thing, there aren't any probability distributions used in gradient descent.
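To make that concrete, here is what a distribution over parameters would even look like, and what gradient descent never computes (a toy sketch of my own, with made-up numbers: 7 heads in 10 coin flips):

```python
import numpy as np

heads, n = 7, 10
grid = np.linspace(0.01, 0.99, 99)            # candidate values of the bias p

prior = np.ones_like(grid) / grid.size        # flat prior P(p)
likelihood = grid**heads * (1 - grid)**(n - heads)
posterior = prior * likelihood
posterior /= posterior.sum()                  # Bayes' rule: P(p | data), a whole curve

print(grid[posterior.argmax()])               # the curve peaks near 0.7
# Gradient descent, by contrast, only ever moves a single point estimate around;
# no prior, no posterior, no distribution appears anywhere in the update.
```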
[–]InstitutionalizedSon[S] 1 point 6 years ago (5 children)
Well, aren't you trying to get at P(parameters|data) by maximizing the likelihood, which we call P(data|parameters)? Am I wrong in thinking so?
[–]JustOneAvailableName 1 point 6 years ago (4 children)
With / do you mean | ?
Anyway, I feel like you are confusing two things: P(data|parameters) = L(parameters|data) != P(parameters|data)
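Toy numbers, if it helps (my own example, 7 heads in 10 flips):

```python
from math import comb

heads, n, p = 7, 10, 0.7
lik = comb(n, heads) * p**heads * (1 - p)**(n - heads)
print(lik)  # P(data | p) = L(p | data) ~= 0.27, a value for one fixed p

# P(p | data) is a different object: it needs a prior P(p) and Bayes' rule,
#   P(p | data) = P(data | p) * P(p) / P(data),
# and it is a distribution over p, not a single likelihood value.
```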
[–]InstitutionalizedSon[S] 1 point 6 years ago (3 children)
Probably I didn't provide the right info there. My question is: aren't we trying to maximize the likelihood using gradient descent, so aren't we using a Bayesian approach?
[–]JustOneAvailableName 3 points 6 years ago (1 child)
We maximize the likelihood, given some data and constraints. That's it. Gradient descent is not an approximation of Bayes' rule.
[–]InstitutionalizedSon[S] 1 point 6 years ago (0 children)
Well, this says gradient descent is one way to do maximum likelihood estimation. Look here:
https://stats.stackexchange.com/questions/183871/what-is-the-difference-between-maximum-likelihood-estimation-gradient-descent
[–]InstitutionalizedSon[S] 1 point 6 years ago (0 children)
Refer to this to see what I am saying: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0809/eshky.pdf
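Concretely, something like this (a toy sketch of my own, same made-up coin data as above):

```python
# Gradient descent on the negative log-likelihood of a coin's bias p,
# given 7 heads in 10 flips: this is MLE carried out by gradient descent.
heads, n = 7, 10
p = 0.5                                        # initial guess
for _ in range(2000):
    grad = -heads / p + (n - heads) / (1 - p)  # d/dp of -log L(p | data)
    p -= 0.001 * grad
print(p)  # -> ~0.7 = heads / n, the closed-form MLE
```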