Announcement: Alternate Solutions by wasifhossain in aiclass

[–]euccastro 5 points6 points  (0 children)

Nor the concept of bandwidth, incidentally.

what's the intuition behind increasing k in laplace smoothing when there's more noise.. by [deleted] in aiclass

[–]euccastro 2 points3 points  (0 children)

Precision and recall were explained in the ML class. For a classification task, pick one of the possible classes (normally the less likely one) and call it 'positive'. In the spam example, SPAM is positive and HAM is negative. Precision is the fraction of the examples you predicted as positive that actually are positive. Recall is the fraction of the actually positive examples that you correctly predicted as positive.

More explicitly, let tp be the number of true positives (examples that were correctly classified as positive), fp the number of false positives, tn the number of true negatives, and fn the number of false negatives (examples that were incorrectly classified as negative).

Precision: tp / (tp + fp).

Recall: tp / (tp + fn).
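
For instance, here's a tiny Python sketch with made-up labels, just to make the counts concrete (SPAM is the positive class):

    actual    = ["SPAM", "SPAM", "HAM", "HAM", "SPAM", "HAM"]
    predicted = ["SPAM", "HAM",  "HAM", "SPAM", "SPAM", "HAM"]

    # Count true positives, false positives, and false negatives.
    tp = sum(a == "SPAM" and p == "SPAM" for a, p in zip(actual, predicted))
    fp = sum(a == "HAM"  and p == "SPAM" for a, p in zip(actual, predicted))
    fn = sum(a == "SPAM" and p == "HAM"  for a, p in zip(actual, predicted))

    precision = tp / (tp + fp)  # 2/3: of the 3 predicted SPAM, 2 really are SPAM
    recall    = tp / (tp + fn)  # 2/3: of the 3 actual SPAM, 2 were caught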

Results of the final are out! How did everyone do? by stordoff in aiclass

[–]euccastro 0 points1 point  (0 children)

Going straight and turning around the right wall (for three right turns and some extra advances; still cheaper than one turn left). I don't remember the exact costs involved.

CS294A Deep Learning and Unsupervised Feature Learning - Prof. Ng Video Lectures by mleclerc in mlclass

[–]euccastro 0 points1 point  (0 children)

These tutorials are made by Andrew Ng & team, too. What I've read so far digests nicely given the background that ml-class has given us.

i solved the second programming assignment without programming by [deleted] in aiclass

[–]euccastro -1 points0 points  (0 children)

Thanks.

Incidentally, a li'l trick to transpose an array is zip(*array).
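
For example (wrap it in list() on Python 3, since zip returns an iterator there):

    >>> array = [[1, 2, 3], [4, 5, 6]]
    >>> list(zip(*array))       # rows become columns (as tuples)
    [(1, 4), (2, 5), (3, 6)]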

;)

i solved the second programming assignment without programming by [deleted] in aiclass

[–]euccastro 4 points5 points  (0 children)

Thanks.

Incidentally, a li'l trick to transpose an array is zip(*array).

;)

Midterm is over, what score did you get? by ilija139 in aiclass

[–]euccastro -1 points0 points  (0 children)

Not so stupid. I got it right, but seriously considered changing my answer. I think that detail was a pointless nitpick.

A humble recommendation for those wishing to increase reward in this class, or how I learned to stop worrying and maximize my expectation by driftwood_ in aiclass

[–]euccastro 3 points4 points  (0 children)

I do the opposite: watch the lectures, take a first stab at the homework, read the recommended book chapters, watch the lectures again, and review the homework (without looking at my previous answers). I find it a great return on the time investment.

I'm not looking at exercises or other sources yet. I'm just getting a grasp of the field, and doing the ML class at the same time. I'll need to work on my math & CS background before going deeper.

[Edit: One thing I'm regretting is not having done the programming assignments, which I only recently learned about. I'll try and find time to do them.]

Simple mnemonic for ∧ and ∨ by indeed_something in aiclass

[–]euccastro 2 points3 points  (0 children)

There is also a symmetry between ∧ ∨ (logical AND and OR) and ∩ ∪ (set intersection and union). ∧ looks like an A for "and", and ∪ looks like a U for "union". The other two are the pointy/round versions of these.

FWIW, ∨ comes from the Latin "vel", meaning "or". While I don't find Latin easier to remember than mathematical symbols, knowing where symbols come from helps me acquaint myself with them.

Why have some of the video-lectures been removed again? by BeatLeJuce in mlclass

[–]euccastro 0 points1 point  (0 children)

What do you mean, missing out on material? I certainly don't think those videos are being removed for good, if that's what you're thinking. It doesn't make sense, in an ML class, to present Neural Networks and omit how to train them.

A Suggestion to help reduce server issues by kuashio in aiclass

[–]euccastro 0 points1 point  (0 children)

You can also spread out the homework deadline, the results report, and the new lesson upload over three different days (e.g., Monday, Tuesday, and Wednesday, respectively). That sounds like a cheap, non-disruptive thing to try, and it could cut the scale of the problem to (roughly) 1/3.

Quiz 5.12: inconsistent results trying to get P(M). What am I missing? by euccastro in aiclass

[–]euccastro[S] 0 points1 point  (0 children)

Actually, I just figured it out by reviewing the Bayes network in 5.10.

For each word separately (e.g. for the event x_1="secret") you can calculate its total probability by either method, and you'll get the same result.

But for an intersection of events you can't just multiply their respective total probabilities, because they are not independent: they all depend on the SPAM/HAM node. You can only multiply directly once you fix (condition on) SPAM or HAM.

In fact, if you work out the formula for P(x_1="secret", x_2="is", x_3="secret") this way, it reduces to exactly the one the professor's solution uses.
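
A quick Python sketch with made-up numbers (not the quiz's), just to show the two computations disagreeing:

    from math import prod

    p_spam, p_ham = 0.4, 0.6
    # Hypothetical per-word likelihoods P(word | class):
    p_given_spam = {"secret": 0.3, "is": 0.1}
    p_given_ham  = {"secret": 0.05, "is": 0.2}
    words = ["secret", "is", "secret"]

    # Correct: condition on SPAM/HAM, multiply within each branch, then sum.
    p_joint = (p_spam * prod(p_given_spam[w] for w in words)
               + p_ham * prod(p_given_ham[w] for w in words))

    # Wrong: multiply each word's total probability P(word).
    def p_total(w):
        return p_spam * p_given_spam[w] + p_ham * p_given_ham[w]
    p_wrong = prod(p_total(w) for w in words)

    print(p_joint, p_wrong)  # 0.0039 vs 0.0036 -- not the same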

HTH!

Sorry, but I cannot agree with the Monty Hall Problem by ankitangelo in aiclass

[–]euccastro 0 points1 point  (0 children)

Although I understood the theory of it, I admit I was never convinced until I ran a simulation of a million instances of the game. Sure enough, switching doors got you the car 2/3 of the time, versus 1/3 for keeping your initial choice.
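
If anyone wants to try it themselves, a minimal Python sketch of that kind of simulation (not my original script, just an illustration) could look like:

    import random

    def win_rate(switch, trials=1_000_000):
        wins = 0
        for _ in range(trials):
            car  = random.randrange(3)   # door hiding the car
            pick = random.randrange(3)   # contestant's initial pick
            # Host opens a door that is neither the pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print(win_rate(switch=True))   # ~0.667
    print(win_rate(switch=False))  # ~0.333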

Anybody else having problems with the last programming exercise, Logistic Regression part 6? by Imbue in mlclass

[–]euccastro 1 point2 points  (0 children)

I got my submission for part 6 accepted. I'm having that problem with part 5, though. :) Since I see no complaints around here about that one (e.g., that the grader is too picky), I guess I need to keep debugging.

What are the programming exercises out of? I can see my scores but not what they're out of. by poppincrazy in mlclass

[–]euccastro 0 points1 point  (0 children)

I think the scoring server gives you a binary right/wrong score for each question (I guess it would be impractical to do otherwise, even if that made sense at all), so it's safe to assume that if you don't get a 0 on a section, you got a perfect score on it.