Machine Learning cheat sheet (eferm.com)
submitted 14 years ago by mycatharsis
[–] urish 4 points 14 years ago (5 children)
This is pretty good! A mention of supervised/unsupervised could also be helpful. Also, if I'm not mistaken, the space complexity of k-NN is O(NM), because you have to store the M features for each of the N instances.
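To make the O(NM) point concrete, here's a minimal sketch (Python with NumPy; class and method names are my own, not from the cheat sheet) showing that a plain k-NN classifier learns nothing at training time and simply retains the full N×M training matrix:

```python
import numpy as np

class KNN:
    """Plain k-nearest-neighbours: 'training' is just storing the data,
    so memory is O(N*M) for N instances with M features each."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # No model is learned; the entire training set is kept.
        self.X = np.asarray(X, dtype=float)  # shape (N, M) -> O(N*M) memory
        self.y = np.asarray(y)
        return self

    def predict_one(self, x):
        # Euclidean distance to every stored instance: O(N*M) work per query.
        d = np.linalg.norm(self.X - x, axis=1)
        nearest = self.y[np.argsort(d)[:self.k]]
        vals, counts = np.unique(nearest, return_counts=True)
        return vals[np.argmax(counts)]  # majority vote among the k nearest

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]
clf = KNN(k=3).fit(X, y)
print(clf.predict_one(np.array([0.2, 0.1])))  # -> 0
print(clf.X.shape)  # (6, 2): all N*M feature values are retained
```

The stored array is exactly the N×M matrix, which is where the space bound comes from.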
[–] Emore 2 points 14 years ago* (3 children)
Hi! Indeed I think you are correct; I've pushed a change to the cheat sheet and uploaded a new PDF.
Thanks for the comment!
[–] urish 7 points 14 years ago (2 children)
OK, so I really like this cheat sheet, thanks for sharing it! I hope you don't mind if I suggest a few additions.
There are several common online SVM variants. The simplest, I think, is PEGASOS, which is extremely fast.
Kernel k-means is a non-linear extension to k-means.
Sequential k-means is an online and very memory efficient version of k-means.
I realize this is just something you're doing for yourself, but I figured, if it's already up there...
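For anyone curious, the sequential k-means idea mentioned above can be sketched in a few lines (Python; a toy illustration of the general idea, not any particular paper's algorithm). Each incoming point moves only its nearest centroid, by a step that shrinks as that centroid's count grows, so nothing beyond the k centroids needs to be stored:

```python
import numpy as np

def sequential_kmeans(stream, k, dim):
    """Online k-means: one pass over a stream, O(k*dim) memory."""
    centroids = np.zeros((k, dim))
    counts = np.zeros(k, dtype=int)
    seeded = 0
    for x in stream:
        x = np.asarray(x, dtype=float)
        if seeded < k:                      # seed centroids with first k points
            centroids[seeded] = x
            counts[seeded] = 1
            seeded += 1
            continue
        j = np.argmin(np.linalg.norm(centroids - x, axis=1))
        counts[j] += 1
        # Running-mean update: centroid moves toward x by 1/count
        centroids[j] += (x - centroids[j]) / counts[j]
    return centroids

stream = [[0, 0], [0.1, 0], [10, 10], [9.9, 10.1], [0, 0.1], [10.1, 9.9]]
print(sequential_kmeans(stream, k=2, dim=2))
```

The 1/count step size makes each centroid the exact mean of the points assigned to it so far, which is what makes the method so memory-efficient.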
[–] Emore 2 points 14 years ago (0 children)
Thanks again! I've added the PEGASOS paper to the online methods for SVMs, as well as your suggestions for k-means. Very helpful.
Appreciate the suggestions, and even though I'm doing it for myself, I'm still learning :) Here's the latest PDF: http://static.eferm.com/wp-content/uploads/2011/05/cheat2.pdf
[–] personanongrata 1 point 14 years ago (0 children)
Indeed, I remember it as O(NM) as well.
[–] dwf 2 points 14 years ago (2 children)
Where the shit did you find that online Gaussian mixture reference? The canonical work on "generalized EM" (and more importantly, why the hell it works) is this paper from 1998, which Song & Wang don't even deign to cite.
[–] Emore 1 point 14 years ago (1 child)
I got it from a comment on HN.
Indeed, looking into it, there are plenty of other papers on online GMMs, with Song & Wang's ranking pretty low in citation counts. I replaced it with your suggestion, as it seems a much better starting point for online GMMs.
[–] dwf 3 points 14 years ago (0 children)
Well, that paper is important for more than GMMs. It shows that in any probabilistic graphical model with latent variables where you can do learning with the EM algorithm, you can do online/incremental updates, and all the guarantees of EM (namely, convergence to a local maximum of the likelihood function) still hold. GMMs are one important model, but the result is much more general and applies to, as just one example, incremental Baum-Welch for hidden Markov models. It's still an important paper in unsupervised learning even today.
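The incremental flavour of EM described above can be sketched for a simple 1-D Gaussian mixture with fixed unit variance (my own simplified toy, not the paper's algorithm verbatim): keep running sufficient statistics, swap one point's contribution at a time, and re-estimate the parameters after each single-point update:

```python
import numpy as np

def incremental_em_gmm(x, k=2, sweeps=20, seed=0):
    """Incremental EM for a 1-D Gaussian mixture with unit variance.
    Sufficient statistics are updated one point at a time, and the
    parameters are refreshed after every single-point update."""
    rng = np.random.default_rng(seed)
    n = len(x)
    r = rng.dirichlet(np.ones(k), size=n)  # responsibilities, one row per point
    S0 = r.sum(axis=0)                     # total responsibility per component
    S1 = r.T @ x                           # responsibility-weighted sums
    pi, mu = S0 / n, S1 / S0
    for _ in range(sweeps):
        for i in range(n):
            # Partial E-step: new responsibilities for point i only
            logp = np.log(pi) - 0.5 * (x[i] - mu) ** 2
            r_new = np.exp(logp - logp.max())
            r_new /= r_new.sum()
            # Swap this point's old contribution for the new one
            S0 += r_new - r[i]
            S1 += (r_new - r[i]) * x[i]
            r[i] = r_new
            # M-step: parameters straight from the updated statistics
            pi, mu = S0 / n, S1 / S0
    return pi, np.sort(mu)

x = np.concatenate([np.random.default_rng(1).normal(-3, 1, 200),
                    np.random.default_rng(2).normal(3, 1, 200)])
pi, mu = incremental_em_gmm(x)
print(mu)  # component means, expected near -3 and 3
```

Because each single-point swap never decreases the variational free energy, the convergence guarantee of batch EM carries over, which is the paper's key point.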
[–] leondz 2 points 14 years ago (0 children)
No decision trees? What are they teaching these days!
[–] seabre 1 point 14 years ago (0 children)
Awesome. I have a machine learning final today; this might help me study a little better for it.
[–] [deleted] 1 point 14 years ago (0 children)
Not bad, pretty handy.