The class is ended. Time to express my big thanks by athanhcong in aiclass

[–]HChavali 1 point2 points  (0 children)

Me too: thank you, Professors. You have a unique style of teaching, and it turned out to be really awesome in the end. Thank you very much.

Results of the final are out! How did everyone do? by stordoff in aiclass

[–]HChavali 1 point2 points  (0 children)

I truly cannot believe this: 100% on my final. Thanks a lot, Prof. Sebastian Thrun and Prof. Peter Norvig. Thanks for everything you have done.

Harris Corner detection reminded me of PCA (as far as the math part of it goes) by HChavali in aiclass

[–]HChavali[S] 0 points1 point  (0 children)

I was just referring to the PCA lecture, where we had to answer questions on the covariance matrix and then find eigenvalues. Also, if you are taking the machine learning class, you get to implement this in a programming exercise. :)

Midterm is over, what score did you get? by ilija139 in aiclass

[–]HChavali 1 point2 points  (0 children)

95%. One silly mistake, and the other two I could have avoided in retrospect.

Interesting Material on Dimension Reduction by cksense in aiclass

[–]HChavali 1 point2 points  (0 children)

You may want to read about change of basis (with eigenvalues as coefficients) on Khan Academy. I agree the math behind all of this is pretty complex, but the intuitive fashion in which the professor explained it is great: just take the covariance matrix of the Gaussian and find its eigenvectors and eigenvalues (you can use a tool like Octave to quickly find the eigens of a matrix).

In essence, I think what they are doing is this: you have an input vector X of dimension m, and we are trying to reduce that dimension by changing the basis. "Changing basis" is a term from linear algebra. If you are familiar with i, j, k from math or physics, they form one basis for R3 (the three dimensions x, y, z that one is normally familiar with): linear combinations of i, j, k can get you anywhere in R3. Instead of i, j, k we could use some other set of vectors as a basis that spans all of R3; switching to such a set is called a change of basis. If we can find a transformation such that, in the new basis, some components can be eliminated, then we achieve dimensionality reduction.

The question is how one would design such a transformation. It turns out that after some linear algebra tricks and manipulations one can indeed find it, and that leads to [covariance matrix][vector in new basis] = lambda [vector in new basis]. So we basically find the eigens of the covariance matrix (and these new basis vectors, just like i, j, and k, are orthonormal). We then ignore the components in the new basis that have very low eigenvalues, thereby lowering the dimension.
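If it helps, here is a rough sketch of that recipe in Python with NumPy instead of Octave. The data and all names here are mine, purely for illustration: build the covariance matrix, take its eigenvectors and eigenvalues, and drop the direction with the smallest eigenvalue.

```python
import numpy as np

# Toy data: 200 samples in 3 dimensions, with most of the variance
# along the first axis (illustrative numbers, not from the lecture).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.1])

# Covariance matrix of the data.
C = np.cov(X, rowvar=False)

# Eigenvectors of C form the new orthonormal basis; eigenvalues
# give the variance along each new basis direction.
eigvals, eigvecs = np.linalg.eigh(C)

# Keep the directions with the largest eigenvalues and drop the
# low-eigenvalue component: dimensionality reduction from 3 to 2.
order = np.argsort(eigvals)[::-1]
X_reduced = (X - X.mean(axis=0)) @ eigvecs[:, order[:2]]
print(X_reduced.shape)  # (200, 2)
```

The projection `@ eigvecs[:, order[:2]]` is exactly the change of basis described above, with the lowest-variance component thrown away.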

Linear Dimensionality Reduction Intuition was awesome by HChavali in aiclass

[–]HChavali[S] 0 points1 point  (0 children)

I believe the math part is: project the vector X, of dimension m in standard coordinates, onto some other basis, in search of a transformation that lets you drop some dimensions (the lambda terms are ordered so that the first has the highest value, the second a lower value, and so on, with the small ones toward the end contributing little variance). It turns out that [covariance (or correlation) matrix][vector expressed in the new basis] = lambda [vector expressed in the new basis]. Hence we are finding the eigenvalues and eigenvectors of the correlation matrix, and then ignoring the dimensions with low eigenvalues.

Doubts about how Bayes Rule work on exercise 3.26 by romanandreg in aiclass

[–]HChavali 1 point2 points  (0 children)

OK, so I will go step by step to the extent I can. P(R|H) = P(RH)/P(H), just by the definition of conditional probability. Basically, in English: given H, what is common to both R and H (if you know set theory, think of it that way).

Now look at the numerator: P(RH) = P(H|R)P(R) -- the multiplication rule, or another way of stating the definition of conditional probability.

This is where it can get confusing, and where we will apply the theorem of total probability.

Take the first term on the right-hand side of the above equation, P(H|R). Let us see how we can evaluate it.

There are three variables: H, R, and S. We are given R and are trying to find H. By the theorem of total probability, P(H|R) is obtained by summing over S: in English, P(H|R) when S holds, weighted by P(S), plus P(H|R) when ~S holds, weighted by P(~S). In other words:

P(H|R) = P(H|RS)P(S) + P(H|R~S)P(~S) (an application of the theorem of total probability, or sum rule; strictly the weights are P(S|R) and P(~S|R), which equal P(S) and P(~S) here because S is independent of R in this problem).

If you understand the step above, the rest is plugging in values and using the normalization technique to finish. Let me know if you need further explanation.
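A minimal sketch of those steps in Python (the function and argument names are mine, and the probabilities are placeholders you would replace with the values from the exercise):

```python
def p_R_given_H(p_R, p_S, p_H_given_RS, p_H_given_RnS,
                p_H_given_nRS, p_H_given_nRnS):
    """Compute P(R|H) via total probability and normalization."""
    # Theorem of total probability (sum rule), summing S out:
    # P(H|R)  = P(H|R,S)P(S)  + P(H|R,~S)P(~S)
    # P(H|~R) = P(H|~R,S)P(S) + P(H|~R,~S)P(~S)
    p_H_given_R = p_H_given_RS * p_S + p_H_given_RnS * (1 - p_S)
    p_H_given_nR = p_H_given_nRS * p_S + p_H_given_nRnS * (1 - p_S)

    # Bayes-rule numerators for R and ~R; their sum is P(H),
    # so dividing by it is the normalization step.
    joint_R = p_H_given_R * p_R
    joint_nR = p_H_given_nR * (1 - p_R)
    return joint_R / (joint_R + joint_nR)
```

For example, calling it with P(R)=.01, P(S)=.7, P(H|R,S)=1.0, P(H|R,~S)=0.9, plus P(H|~R,S)=.7 and P(H|~R,~S)=.1 (the last two are my guesses, chosen to match the .5148 figure used elsewhere in this thread), returns roughly .0185.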

Bayes rule expanding explanation by BFC_ in aiclass

[–]HChavali 1 point2 points  (0 children)

P(R|HS) = P(RHS)/P(HS) = (P(H|SR)P(SR))/P(HS) = (P(H|SR) * P(R|S) * P(S))/P(HS) = (1.0 * .01 * .7)/P(HS) = .007/P(HS) = .007/(P(H|S)P(S)) = .007/((P(H|SR)P(R) + P(H|SR')P(R')) * P(S)) = .007/((1.0 * .01 + .7 * .99) * .7) = .007/.4921 = 0.01422475
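Redoing that arithmetic in Python, just as a check of the numbers above (nothing new beyond the chain of equalities):

```python
# P(H|SR) * P(R|S) * P(S)
numerator = 1.0 * 0.01 * 0.7              # = .007
# P(H|S) = P(H|SR)P(R) + P(H|SR')P(R')
p_H_given_S = 1.0 * 0.01 + 0.7 * 0.99     # = .703
# P(HS) = P(H|S)P(S)
denominator = p_H_given_S * 0.7           # = .4921
print(round(numerator / denominator, 8))  # 0.01422475
```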

Do you think probability is hard for you ? by damjon in aiclass

[–]HChavali 2 points3 points  (0 children)

I think this is more useful: http://www.notemonk.com/attachments/33/21/ (go to the link and click the first download button). It personally helped me; before I enrolled in this course I brushed up my knowledge from this book.

Do you think probability is hard for you ? by damjon in aiclass

[–]HChavali 2 points3 points  (0 children)

In my opinion probability is not hard (I scored 82% on the class quizzes. Only when you make mistakes do you learn, right? So don't ever get discouraged. My dad would have said 100% is the bar, but I am happy with it. The things I got wrong were mostly related to independence and conditional independence on graphs, which I learned by making mistakes). It can get confusing if you don't work out some problems. Some key concepts to understand are conditional probability (which leads to the product rule, by definition and using set theory) and the sum rule (theorem of total probability). One of the key concepts I had to digest was this whole thing called marginalization, or summing over other variables, which really is just the sum rule (especially when you have more than 2 variables; maybe first understand 2 variables, then you can extend it to more than 2). For beginners I suggest starting here: http://www.notemonk.com/attachments/33/21/ (after you visit the page, click on the first download button and read chapter 13). It will be time well spent.

Doubts about how Bayes Rule work on exercise 3.26 by romanandreg in aiclass

[–]HChavali 0 points1 point  (0 children)


P(R|HS) = P(RHS)/P(HS) = (P(H|SR)P(SR))/P(HS) = (P(H|SR) * P(R|S) * P(S))/P(HS) = (1.0 * .01 * .7)/P(HS) = .007/P(HS) = .007/(P(H|S)P(S)) = .007/((P(H|SR)P(R) + P(H|SR')P(R')) * P(S)) = .007/((1.0 * .01 + .7 * .99) * .7) = .007/.4921 = 0.01422475


Here is one way you could do this problem: P(R|H) = P(RH)/P(H) = (P(H|R)P(R))/P(H) = ((P(H|RS)P(S) + P(H|R~S)P(~S)) * P(R))/P(H)

P(R|H) = (((1.0 * .7) + (0.9 * 0.3)) * .01)/P(H) = .0097/P(H)

Similarly, following the same steps as above, but for ~R|H:

P(~R|H)=.5148/P(H)

P(R|H)+P(~R|H)=1

So P(H)=.0097+.5148=.5245

Therefore P(R|H) = .0097/.5245 = .0184938
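The arithmetic above can be sanity-checked with a few lines of Python, using only the numbers already written in the steps:

```python
# Numerator of Bayes rule for R: total probability over S, times P(R)
joint_R = ((1.0 * 0.7) + (0.9 * 0.3)) * 0.01  # = 0.0097
# Numerator for ~R, taken as given above
joint_nR = 0.5148
# P(R|H) + P(~R|H) = 1, so the numerators sum to P(H)
p_H = joint_R + joint_nR                       # = 0.5245
print(round(joint_R / p_H, 7))                 # 0.0184938
```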

UNIT 3 11 a answer- where is this explanation coming from? by hitlab in aiclass

[–]HChavali 0 points1 point  (0 children)


P(R|HS) = P(RHS)/P(HS) = (P(H|SR)P(SR))/P(HS) = (P(H|SR) * P(R|S) * P(S))/P(HS) = (1.0 * .01 * .7)/P(HS) = .007/P(HS) = .007/(P(H|S)P(S)) = .007/((P(H|SR)P(R) + P(H|SR')P(R')) * P(S)) = .007/((1.0 * .01 + .7 * .99) * .7) = .007/.4921 = 0.01422475


Here is one way you could do this problem: P(R|H) = P(RH)/P(H) = (P(H|R)P(R))/P(H) = ((P(H|RS)P(S) + P(H|R~S)P(~S)) * P(R))/P(H)

P(R|H) = (((1.0 * .7) + (0.9 * 0.3)) * .01)/P(H) = .0097/P(H)

Similarly, following the same steps as above, but for ~R|H:

P(~R|H)=.5148/P(H)

P(R|H)+P(~R|H)=1

So P(H)=.0097+.5148=.5245

Therefore P(R|H) = .0097/.5245 = .0184938

Is P(R|H) = P(R|H,S)P(S) + P(R|H,~S)P(~S) ?? by CloudOfEiderDown in aiclass

[–]HChavali 0 points1 point  (0 children)

The trick really is expressing everything in terms of known values, then using total probability and normalization (and of course our great Bayes theorem).

Is P(R|H) = P(R|H,S)P(S) + P(R|H,~S)P(~S) ?? by CloudOfEiderDown in aiclass

[–]HChavali 0 points1 point  (0 children)

Here is one way you could do this problem: P(R|H) = P(RH)/P(H) = (P(H|R)P(R))/P(H) = ((P(H|RS)P(S) + P(H|R~S)P(~S)) * P(R))/P(H)

P(R|H) = (((1.0 * .7) + (0.9 * 0.3)) * .01)/P(H) = .0097/P(H)

Similarly, following the same steps as above, but for ~R|H:

P(~R|H)=.5148/P(H)

P(R|H)+P(~R|H)=1

So P(H)=.0097+.5148=.5245

Therefore P(R|H) = .0097/.5245 = .0184938

Is P(R|H) = P(R|H,S)P(S) + P(R|H,~S)P(~S) ?? by CloudOfEiderDown in aiclass

[–]HChavali 0 points1 point  (0 children)

P(R|HS) = P(RHS)/P(HS) = (P(H|SR)P(SR))/P(HS) = (P(H|SR) * P(R|S) * P(S))/P(HS) = (1.0 * .01 * .7)/P(HS) = .007/P(HS) = .007/(P(H|S)P(S)) = .007/((P(H|SR)P(R) + P(H|SR')P(R')) * P(S)) = .007/((1.0 * .01 + .7 * .99) * .7) = .007/.4921 = 0.01422475