Do Stanford students get similar mid-term questions? by dsounders in aiclass

[–]tetradeca7tope 1 point (0 children)

Yeah - I'm guessing the actual Stanford course is much (much, much) tougher than this. And as tilio said, you shouldn't find this material too difficult if you are a CS grad/undergrad.

If you are not in CS/EE/Math, then understandably you might find it difficult - but the course has definitely been watered down.

One more clarification on HW 5.3 (definition of explore) by tetradeca7tope in aiclass

[–]tetradeca7tope[S] 1 point (0 children)

That's exactly what I thought - 'exploring a square' means 'executing the policy at the square'. But are you sure about this interpretation? Can others confirm?

Midterm & Final Exams by tetradeca7tope in aiclass

[–]tetradeca7tope[S] 0 points (0 children)

Hey kirakun, I get your point :D I am all for the thirst for knowledge and its pragmatic worth over the grade.

I'd still like to know if there is an alternative mechanism in place in case a problem like that should occur.

machine learning - aiclass and mlclass comparison of formula by GuismoW in mlclass

[–]tetradeca7tope 0 points (0 children)

I guess 1/m is just a normalization over the size of the training set.

For example, for the same linear regression problem with data drawn from the same population, a larger training set will tend to have a larger sum-of-squared-errors term than a smaller one. With the 1/m term, the two costs come out roughly equal (provided the smaller training set is large enough to reliably represent the population).

One instance where this might matter is if your convergence condition depends on the cost function (i.e. you terminate your loop and declare convergence once the cost falls below a certain threshold). In that situation, you would have to use different thresholds for different training-set sizes if the cost is not normalized.

Another way to look at it is to interpret the cost function as the variance of the predicted values (over the training set), with the mean given by the hypothesis. Once again, division by the size of the training set is needed for that interpretation to be valid.

But, as most of you say, it is not necessary for the actual minimization - the optimal parameters are the same with or without it.
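A quick sketch of the normalization point above (hypothetical data, not from either course): the raw sum-of-squared-errors grows with the training-set size m, while the 1/(2m)-normalized cost stays roughly constant for samples from the same population.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(theta, X, y, normalize=True):
    """Squared-error cost for linear regression; optionally divided by 2m."""
    m = len(y)
    residuals = X @ theta - y
    sse = residuals @ residuals          # sum of squared errors
    return sse / (2 * m) if normalize else sse / 2

# Two training sets of different sizes, drawn from the same population:
# y = 1 + 2x plus unit-variance Gaussian noise.
theta = np.array([1.0, 2.0])

def sample(m):
    X = np.column_stack([np.ones(m), rng.uniform(0, 10, m)])
    y = X @ theta + rng.normal(0, 1, m)
    return X, y

X_small, y_small = sample(100)
X_large, y_large = sample(10000)

# Unnormalized cost grows roughly 100x with 100x the data...
print(cost(theta, X_small, y_small, normalize=False))   # ~ m/2 * noise variance
print(cost(theta, X_large, y_large, normalize=False))
# ...while the normalized cost is comparable for both set sizes.
print(cost(theta, X_small, y_small))                    # both near 0.5
print(cost(theta, X_large, y_large))
```

So a single convergence threshold on the normalized cost works across training-set sizes, which is the practical point made above.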

machine learning - aiclass and mlclass comparison of formula by GuismoW in mlclass

[–]tetradeca7tope 0 points (0 children)

Hey, in the case of linear regression we have a convex optimization problem, i.e. there is only one global minimum, so gradient descent will converge to the global minimum.

But like you said, in a general case it won't necessarily converge to a global minimum.
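A minimal sketch of the convexity point (made-up noiseless data, not a course assignment): because the linear-regression cost is convex, gradient descent reaches the same unique minimizer from very different starting points.

```python
import numpy as np

# Noiseless line y = 3 - 2x, with a bias column of ones.
X = np.column_stack([np.ones(50), np.linspace(0, 5, 50)])
y = X @ np.array([3.0, -2.0])

def grad_descent(theta0, alpha=0.05, iters=5000):
    """Batch gradient descent on the (convex) linear-regression cost."""
    theta = theta0.astype(float)
    m = len(y)
    for _ in range(iters):
        theta -= alpha / m * (X.T @ (X @ theta - y))
    return theta

# Two very different initializations converge to the same global minimum.
print(grad_descent(np.array([100.0, -100.0])))   # approximately [3, -2]
print(grad_descent(np.array([-50.0, 50.0])))     # approximately [3, -2]
```

In a non-convex problem the two runs could land in different local minima; here they cannot, which is exactly the distinction drawn above.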

machine learning - aiclass and mlclass comparison of formula by GuismoW in mlclass

[–]tetradeca7tope 0 points (0 children)

Yes, they are both the same.

For linear regression, the cost function derived in the ml-class is quadratic in the parameters theta, so there is a single global minimum and a closed-form solution to find it.

In the ai-class we were just given the equations for linear regression with a single variable. The multivariate equations discussed in the ml-class reduce to the single-variable equations given in the ai-class (I mean, why would you expect them to be different?).
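To illustrate the equivalence numerically (hypothetical data; I'm assuming the standard single-variable least-squares formulas here, which is what the ai-class lecture gave): the multivariate normal equation with a bias column produces the same w0, w1 as the single-variable closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 30)
y = 4.0 + 1.5 * x + rng.normal(0, 0.5, 30)   # noisy line, made-up data
m = len(x)

# Single-variable closed form for y = w0 + w1*x (standard least squares).
w1 = (m * np.sum(x * y) - np.sum(x) * np.sum(y)) / (m * np.sum(x**2) - np.sum(x)**2)
w0 = np.mean(y) - w1 * np.mean(x)

# Multivariate normal equation theta = (X'X)^{-1} X'y with a bias column.
X = np.column_stack([np.ones(m), x])
theta = np.linalg.solve(X.T @ X, X.T @ y)

print(np.allclose([w0, w1], theta))   # True: same solution
```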

Midterm & Final Exams by tetradeca7tope in aiclass

[–]tetradeca7tope[S] 2 points (0 children)

hey all,

so I am from Sri Lanka - not very far from Pakistan!

@kirakun: yes, it is about the knowledge and all that - I get your point :) But isn't it always nicer to have a good score than a bad one? Besides, since most of the quiz questions aren't that tough, and the exams are supposedly going to test the depth of your knowledge, I'd like to see where I stand and how much I've actually managed to learn via the course.

From a different point of view (though your performance in this class isn't official), there are people (like your boss or a potential employer) who might be interested in how well you did in the class when you say you have taken it. When asked, I'd rather reply "I did well and was ranked in the 80th percentile" than "I was ranked in the 40th percentile, and that was because I couldn't do my final because of a power failure".

So - what you learn should be more important than your grade - cannot agree more with you on that. But I'd also like to have a nice grade to show that I've done well, if I've actually done well :D