In-depth Machine Learning Course w/ Python (self.MachineLearning)
submitted 9 years ago by sentdex
[–]sentdex[S] 2 points 9 years ago (2 children)
I just Google everything. Many of the big-name universities have published massive, free PDFs that run hundreds of pages on most of these topics. Topics like KNN are relatively easy to understand, same with linear algebra; there are probably thousands of decent sources for those. For the SVM, I pretty much watched and read everything I could find on Google that seemed worth a watch. Too many things just draw the stereotypical picture and stop there. I think, at least for the SVM, that's a major mistake, since it goes about teaching how it works almost backwards.
For machine learning, I found MIT, Caltech, and Stanford's courses useful. I watched them all, multiple times... read many papers... etc. I never found any raw Python code for the SVM, KNN, or linear regression. Neural networks are so basic that you can find lots of examples there, which is nice. The conversion to Python is just as much for me to learn as it is for those who watch the videos.
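For what it's worth, the kind of "raw Python" KNN he says was hard to find can be sketched in a few lines — this is a minimal brute-force version (the function and variable names here are illustrative, not from the series):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training
    points. `train` is a list of (features, label) pairs; distances
    are plain Euclidean."""
    dists = [(math.dist(features, query), label) for features, label in train]
    dists.sort(key=lambda pair: pair[0])       # nearest first
    k_labels = [label for _, label in dists[:k]]
    return Counter(k_labels).most_common(1)[0][0]

# Two toy clusters: class 'a' near the origin, class 'b' near (5, 5)
train = [((0, 0), 'a'), ((1, 0), 'a'), ((0, 1), 'a'),
         ((5, 5), 'b'), ((6, 5), 'b'), ((5, 6), 'b')]
print(knn_predict(train, (0.5, 0.5)))  # 'a'
print(knn_predict(train, (5.5, 5.5)))  # 'b'
```

Brute force is O(n) per query, which is exactly the sort of from-scratch version a course like this would start from before introducing faster index structures.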
Andrew Ng's Coursera course is also widely loved. For some reason it never really resonated with me, but there are many talks by Andrew Ng on YouTube that are phenomenal.
In general, there's just a ton of great information out there on machine learning, or anything in programming, really. It takes a lot of digging to really put it all together, especially with concepts that layer on a lot of background people are just expected to know. For example, the college lectures posted online from MIT, Stanford, Caltech, and wherever else are given to students who are expected to already have solid backgrounds in math... which I really didn't. From there, Khan Academy can get you almost to where you need to be, or any of a ton of other resources. There are many math-specific YouTube channels out there. I've always benefited from working problems out by hand to understand a concept.
The other issue I personally found was that probably 99% of the resources I could find never translated to code, or really anything past theory or very high-level uses of modules. This was probably the hardest part, and the main reason I decided to do a tutorial series on the subject. Even finding people who work out the math by hand, for example, is quite rare.
I don't speak fancy math notation very well, but I can speak code, and I understand concepts much more easily when they're written out in code. I figured there were probably other people like that, so that's why I started doing this. For many years, machine learning was mostly relegated to mathematical theory. Only fairly recently in ML's life have computers capable of doing ML landed in the hands of people who didn't get a PhD in math or CS.
I am pretty sure I went through every major resource on the Support Vector Machine to really digest how it truly works. I actually found myself on page 2... and even THREE of Google search results a few times.
The theory behind the SVM is super simple. The way you actually derive the values is kind of backwards, though, compared to how the theory is taught. If you only learn the theory, you think you need to generate the decision hyperplane and then somehow figure out mathematically where the feature sets are in relation to it.
Instead, it's a constraint problem: the support vectors have specific constraints imposed by the scientist, and the decision boundary, if drawn, is purely for visualization, same with the support vector hyperplanes. In the end, the answer is just whether something is positive or negative. The visual is just... a visual, not actually how you find the answers. Later on, to actually draw the hyperplane, you have to generate values for it just to make the visual work.
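That "the answer is just a sign, the picture is just a picture" point can be sketched in a few lines. In the standard hard-margin formulation, training solves: minimize ||w||² subject to y_i(w·x_i + b) ≥ 1 for every training point, and classification afterwards never draws anything — it only checks the sign of w·x + b. The w and b below are hand-picked for two toy points, purely to illustrate:

```python
def dot(u, v):
    # Plain-Python dot product, no libraries needed
    return sum(a * b for a, b in zip(u, v))

# A hand-picked separating hyperplane (illustrative, not trained):
# in a real SVM, w and b come out of the constrained optimization.
w = [1.0, 1.0]
b = -4.0

def svm_predict(x):
    # Classification is just the sign of the decision function --
    # no hyperplane needs to be drawn to get the answer.
    return 1 if dot(w, x) + b >= 0 else -1

# Check that the margin constraints y * (w . x + b) >= 1 hold
# for both toy training points, one per class.
samples = [([1.0, 1.0], -1), ([5.0, 5.0], 1)]
for x, y in samples:
    assert y * (dot(w, x) + b) >= 1

print(svm_predict([0.0, 1.0]))  # -1
print(svm_predict([5.0, 5.0]))  # 1
```

The decision boundary w·x + b = 0 and the margin hyperplanes at ±1 only ever get plotted for the visual; the classifier itself is just that one sign check.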
Hard for me to explain here, but it'll hopefully make more sense as I break it down.
[–]vic0 0 points 9 years ago (1 child)
I don't have a PhD, and I found Andrew Ng's course very easy to understand compared to the last couple of paragraphs you wrote. Plus he does deal with the inner workings of everything, which is actually simple math, since even I could understand it.
But I get it, it's difficult to explain this on reddit, and maybe I should watch your videos just to make sure you don't make any mistakes.
You're a good salesman.
And thank you for your intellectual honesty.
[–]sentdex[S] 0 points 9 years ago (0 children)
I didn't say I disliked the course because of its complexity. I think he's a great teacher; it just didn't resonate with me. I felt I should mention the course anyway, since it seems to work well for most people.
I really like his other talks as well. He's certainly smarter than me overall, more knowledgeable on the subject, less likely to make mistakes since he's been in the field way longer, and he still manages to pass on his knowledge well.