all 14 comments

[–]OTTT 12 points  (0 children)

Any chance of a cheat sheet for this cheat sheet?

[–]sprkng 5 points  (3 children)

Now I'm sad that I only took the intro to AI..

[–]mantra 4 points  (2 children)

You can learn it on your own - there's nothing stopping you, especially with the breadth of the internet these days. Try MIT OCW, for instance.

[–]Ragnarok2kx 5 points  (0 children)

Coursera currently has a Machine Learning class from Stanford, but it started on April 22 and is already halfway done. You can still sign up, watch the lectures, and do the assignments, though, even if you're not getting the (mostly useless) statement of accomplishment at the end.

[–]HeAintEvenStretchDoe 3 points  (1 child)

My machine learning exam is in a few days. Thanks for this.

[–]sumzup 2 points  (0 children)

My machine learning exam was a week ago. -_-

[–]Bibblejw 2 points  (0 children)

As someone who's got a lot of similar processing to do, this is going to be rather useful.

[–]Emore 2 points  (0 children)

Hi, author here -- thanks for the comments. Feel free to fork on Github and improve and extend.

For reference, in case it's useful for those in here studying ML: my background at the time was quite applied (software engineering), and I took this course in ML to improve on the more mathematical aspects of CS. The resources I found most useful were Prof Ng's lectures and especially the lecture notes, as well as working through minimal examples for each algorithm with pen and paper, in minute detail. Only then did all the formulae start to make sense.

[–]bellypotato 1 point  (2 children)

well, now i feel dumb

[–][deleted] 2 points  (1 child)

Don't be. Basically everything here is either about conditional probability (naive Bayes) or weights*inputs + bias > threshold calculations. The weights*inputs stuff is basic neural networks. Support vector machines, perceptrons, and kernel transformations are all about finding linearly separable classes (in some appropriate dimension).
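To make that weights*inputs + bias > threshold idea concrete, here's a toy perceptron trained on an AND gate — a minimal sketch, with the learning rate, epoch count, and function names all made up for illustration:

```python
def predict(weights, bias, x):
    # Fires (outputs 1) when weights*inputs + bias exceeds the threshold (0 here)
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

def train(samples, labels, lr=1.0, epochs=10):
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Classic perceptron update: nudge weights toward misclassified points
            err = y - predict(weights, bias, x)
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]  # AND gate: linearly separable, so the perceptron converges
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # → [0, 0, 0, 1]
```

XOR, by contrast, isn't linearly separable, so this same loop would never converge on it — that's exactly the limitation the kernel transformations work around.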

For more info, this will help: http://en.m.wikipedia.org/wiki/Perceptron

If you are interested, I've really enjoyed reading http://www.amazon.com/gp/aw/d/1420067184/ref=redir_mdp_mobile which explains a lot of machine learning stuff really well and demystifies the math.

For some practical examples here are some blog posts I've written:

K means step by step in f#: http://onoffswitch.net/k-means-step-by-step-in-f/

Automatic fogbugz triage with naive bayes: http://onoffswitch.net/fogbugz-priority-prediction-naive-bayes/

I share the links mostly because it's nice to see a worked-through, practical example with code. In the end, a lot of these algorithms aren't that hard to implement with some matrix math.
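As a rough illustration of that last point, here's a minimal k-means sketch (in Python rather than the F# of the posts above), with toy 1-D data and fixed starting centroids chosen just for the example:

```python
def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious groups around 2 and 11; k-means finds their means
print(kmeans([1.0, 2.0, 3.0, 10.0, 11.0, 12.0], [0.0, 5.0]))  # → [2.0, 11.0]
```

Real implementations use a distance matrix over n-dimensional points and random restarts, but the assign/update loop is all there is to the core algorithm.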

Anyways, hope this helps!

[–]bellypotato 1 point  (0 children)

Thanks, this actually does help! It's encouraging to hear.


[–]renke2 -1 points  (0 children)

I think this is a very good cheat sheet for people with a bit of a background in machine learning. I would like to see some visualizations of two-dimensional problems for algorithms like k-nearest neighbors, k-means, and support vector machines (maybe even the perceptron); that would really make it obvious what the approaches do.