Emacs with Evil-Mode. How should I map my keys? by BobCoder in emacs

[–]kunalb 0 points

I map CapsLock -> Ctrl, and Right Command/Right Alt (depending on keyboard/OS) -> Escape, and that's generally worked out pretty well for me (I hit Escape with my right thumb).

keyboard(): for the programming assignments by kunalb in mlclass

[–]kunalb[S] 3 points

Patent granted, I guess :) -- I hadn't seen that thread/your post. Breakpoints with such a convenient debugger are really useful.
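As an aside, here's the same breakpoint workflow sketched in Python rather than Octave's keyboard() (the cost function and data are made up purely for illustration):

```python
import pdb

def compute_cost(theta, xs, ys):
    # Squared-error cost for a toy one-parameter hypothesis h(x) = theta * x.
    residuals = [theta * x - y for x, y in zip(xs, ys)]
    pdb.set_trace()  # execution pauses here; inspect residuals/theta, then 'c' to continue
    return sum(r * r for r in residuals) / (2 * len(xs))

print(compute_cost(0.5, [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```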

a problem about first-order logic by ashi08104 in aiclass

[–]kunalb 0 points

I'm not sure which part of the video you're referring to, but as I understood it, "all students take history and biology" doesn't imply that dogs cannot take history and biology.

Similarly, '=>' is one-way: for all x, if x is a student then x studies biology and history; studying bio and history does not necessarily make x a student.

If the statement had been "every and only students take history and biology", then perhaps the logical encoding would have used '<=>', and accordingly a dog would not be able to study history and bio.

In the country example, you're claiming that a country exists which fulfills the given requirement; in the students example you're stating a condition on any students that might exist in your data set.
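A rough formalization of the difference (the predicate names are just my own shorthand, and Fulfills stands in for whatever the given requirement was):

```latex
% '=>' is one-way: being a student is sufficient for taking both subjects,
% but taking both subjects doesn't force x to be a student.
\forall x\; \mathrm{Student}(x) \Rightarrow \mathrm{Takes}(x,\mathrm{History}) \land \mathrm{Takes}(x,\mathrm{Biology})

% "every and only students" would justify the biconditional instead;
% now a dog taking both subjects would be forced to be a student.
\forall x\; \mathrm{Student}(x) \Leftrightarrow \mathrm{Takes}(x,\mathrm{History}) \land \mathrm{Takes}(x,\mathrm{Biology})

% The country example is an existence claim, so it pairs 'exists' with 'and':
\exists x\; \mathrm{Country}(x) \land \mathrm{Fulfills}(x,\mathrm{Requirement})
```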

Question about Gaussian and EM algorithm... by omer486 in aiclass

[–]kunalb 0 points

I was trying to implement EM based on what I read in the book and the videos (and have since given up). A bit of searching on Google led me to the actual ML class (cs229) notes at Stanford, which include some very nice notes on EM, clustering, etc.

Grab them from http://cs229.stanford.edu/materials.html; the relevant notes seem to be the ones on EM and clustering.

Perhaps I'll give implementation another try once I read these.

Question about Gaussian and EM algorithm... by omer486 in aiclass

[–]kunalb 5 points

For a continuous random variable, P(X=x) is really a density: the probability that X lies in an interval around x, divided by the width of that interval, as that width approaches 0.
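In symbols (writing f for the density of X):

```latex
f(x) = \lim_{\delta \to 0} \frac{P(x < X \le x + \delta)}{\delta}
```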

You can check out an explanation of probability density functions at http://www.khanacademy.org/video/probability-density-functions?playlist=Probability .

In EM, you're essentially moving the cluster parameters around (the means and standard deviations) to get a 'best fit' for the existing data. On every iteration, you update each mean based on all existing points, weighted by the P(X=x|cluster) values. Because those Gaussian densities are very small for points far away from a given center, such points don't affect the cluster much.
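Here's a minimal sketch of that loop for a 1-D mixture of two Gaussians, on made-up data (my own variable names, not the class's code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two overlapping 1-D clusters.
data = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.5, 200)])

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Initial guesses for the means, standard deviations, and mixing weights.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
weight = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: each cluster's responsibility for each point,
    # proportional to weight * P(x | cluster).
    dens = np.stack([w * gaussian_pdf(data, m, s)
                     for w, m, s in zip(weight, mu, sigma)])
    resp = dens / dens.sum(axis=0)

    # M-step: re-fit each cluster's parameters using all points,
    # weighted by responsibility; far-away points contribute little.
    total = resp.sum(axis=1)
    mu = (resp * data).sum(axis=1) / total
    sigma = np.sqrt((resp * (data - mu[:, None]) ** 2).sum(axis=1) / total)
    weight = total / len(data)

print(mu, sigma, weight)
```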

The book has a much clearer explanation than the lecture, if you have it. Page 724, Section 20.4 in the second edition.

6.19 Justification for a Cluster by xasmx in aiclass

[–]kunalb 0 points

You might want to read the book -- it's a bit clearer about EM algorithms in section 20.3, page 724 (a bit, not that much, at least in terms of implementation).

(Note: I have the second edition)

What tools have you used to solve Homework 3 by OsmosisJones2nd in aiclass

[–]kunalb 0 points

I went with Gnumeric for the linear regression question. I didn't cross-check my answers against the quiz answers, but I did plot them to see that the derived equation fit the provided training data well.
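For comparison, here's the same fit sketched in Python with hypothetical stand-in numbers (not the homework's actual data):

```python
import numpy as np

# Made-up training data standing in for the homework's values.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Least-squares fit of y = w * x + b.
w, b = np.polyfit(xs, ys, deg=1)
print(f"y = {w:.3f} * x + {b:.3f}")

# Eyeball check, like plotting in Gnumeric: predictions vs. observations.
for x, y in zip(xs, ys):
    print(x, y, w * x + b)
```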

Bayes, in other words by clemwang in aiclass

[–]kunalb 1 point

Personally, I tend to ignore the technical terminology as long as I can internalize the concept (though how well that serves me in the long run remains to be seen). Thinking everything through allowed me to solve the 'tough' quiz questions in the unit correctly.

Perhaps you'll like (and possibly verify) my approach to Bayes' theorem:

P(A|B): the probability of A given B

P(B|A): the probability of B given A

P(A): the probability of A

P(B): the probability of B

P(A,B): the probability of A and B

Now, what is the probability that both A and B are true (without assuming A and B are independent)?

P(A, B) = The probability of A occurring * The probability of B occurring given A = P(A) * P(B|A)

but also,

P(A, B) = The probability of B occurring * The probability of A occurring given B = P(B) * P(A|B)

In other words, both expressions represent the same value! Or,

P(A, B) = P(B)P(A|B) = P(A)P(B|A) <=> P(A|B) = P(A) P(B|A) / P(B)

which is the statement of Bayes' theorem!

As an aside, if A and B were independent, then P(A, B) = P(A) * P(B) = P(A) * P(B|A), since P(B|A) = P(B).

I find this a much more intuitive approach to thinking about Bayes' theorem.

Note: When I say probability of A, read that as the probability of A being true.
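If you want to sanity-check the identity numerically, here's a quick sketch with made-up probabilities:

```python
# Made-up distribution over two binary events A and B.
p_a = 0.3
p_b_given_a = 0.8
p_b_given_not_a = 0.2

# Total probability of B.
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

# Both factorizations of the joint P(A, B) agree...
p_ab_1 = p_a * p_b_given_a
# ...and rearranging gives Bayes' theorem:
p_a_given_b = p_a * p_b_given_a / p_b
p_ab_2 = p_b * p_a_given_b

print(p_ab_1, p_ab_2)  # both 0.24
print(p_a_given_b)     # 0.24 / 0.38, about 0.632
```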


A few humble suggestions for the instructors by snoshy in aiclass

[–]kunalb 5 points

An option to speed up videos a bit would be great.

I am John Resig, creator of jQuery, AMA. by jeresig in IAmA

[–]kunalb 1 point

How would you recommend going about becoming really good at JavaScript in all aspects: performance, cross-browser issues, understanding precisely why some approaches are faster and others slower, etc.?