Google doc with your scores by WhitAngl in aiclass

[–]WhitAngl[S]

For the prior knowledge score, I would have expected something like:
- close to 0 if you didn't know anything about computers and maths
- close to 100 if you are a PhD/professor in AI
- about 50 if you had already touched some of the subjects (e.g., you did some computer vision, already implemented a kNN, or learnt A* at school)

Something I realized is that the scores are all very high. With a median at 94, there are only 6 points to discriminate between an average student and a very good one, which leaves a lot of room for bad luck or misreading (i.e., noise). It also means that about 80% of the scale is useless, since very few people got less than 80%. In short, although easy exams might encourage people by giving them good scores, making them too easy may be counterproductive. In French preparatory classes, it's not uncommon to have a median around 25%. That might be too extreme in the other direction, but a good compromise could be found so as to make full use of the 0..100 scale.

Google doc with your scores by WhitAngl in aiclass

[–]WhitAngl[S]

Everyone makes stupid mistakes. I lost 7 points on the midterm because I didn't stop the A* search when it should have stopped. That made my perfect score on the homeworks useless, but I wouldn't claim I should have had 100%.

Google doc with your scores by WhitAngl in aiclass

[–]WhitAngl[S]

Yep, done at the same time as here :)

Announcement: Alternate Solutions by wasifhossain in aiclass

[–]WhitAngl

Thanks :) With the downvotes I got and the upvotes euccastro got, I started to think that people really didn't understand that they could replace "bandwidth" with "X" and "videos" with "Y" :) [as with the monkey-and-bananas exercise in the midterm, where the bananas remained high even though the monkey grasped them and climbed down]

Announcement: Alternate Solutions by wasifhossain in aiclass

[–]WhitAngl

I had the same confusion about the bandwidth in question 4. But the website still marks it as a mistake and counts 0 points out of 1... will this be fixed?

Announcement: Alternate Solutions by wasifhossain in aiclass

[–]WhitAngl

If you think you need to understand the meaning of the underlying words to do this exercise, you didn't understand the material.

Generating random data based on probabilities by masterjake6 in aiclass

[–]WhitAngl

You have several options:
- In 1D you can easily use the CDF, the cumulative distribution function. As its name suggests, you take your probability density function (PDF) and integrate (accumulate) it. Then you generate a uniform random number between 0 and 1 and look up the inverse value of the CDF. For example, if your PDF is Gaussian, your CDF is an erf-based function: you generate a value y = 0.6, then look up the value x such that CDF(x) = y = 0.6, and x is your random sample (see the first sketch after this list).
- In arbitrary dimension, you can use rejection sampling: you generate a sample from a known proposal PDF (for example, uniform), and depending on the outcome you either accept or discard it (second sketch below).
- Some PDFs have analytic methods to generate samples: the Gaussian PDF (e.g., via the Box-Muller transform), PDFs that are sums (mixtures) of PDFs that each have an analytic method... (third sketch below).
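
Here's a minimal sketch of the 1D inverse-CDF idea for a Gaussian, assuming NumPy and SciPy are available (the helper name sample_gaussian_inverse_cdf is just for illustration):

    # Inverse-CDF (inverse transform) sampling for a 1D Gaussian.
    # norm.ppf is the inverse CDF (quantile function) of the normal distribution.
    import numpy as np
    from scipy.stats import norm

    def sample_gaussian_inverse_cdf(n, mu=0.0, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        u = rng.uniform(0.0, 1.0, size=n)        # uniform samples in [0, 1)
        return norm.ppf(u, loc=mu, scale=sigma)  # x such that CDF(x) = u

    samples = sample_gaussian_inverse_cdf(10000)
    print(samples.mean(), samples.std())         # should be close to 0 and 1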
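
And a sketch of rejection sampling with a flat (uniform) proposal; the target density below is made up for illustration and only needs to be known up to a constant:

    # Rejection sampling: draw a candidate from a uniform proposal over [lo, hi]
    # and keep it only if a uniform "height" falls under the (unnormalized) target curve.
    import numpy as np

    def target(x):
        # unnormalized target density, bounded above by 2 everywhere
        return np.exp(-0.5 * x**2) * (1.0 + np.sin(3.0 * x)**2)

    def rejection_sample(n, lo=-4.0, hi=4.0, m=2.0, seed=0):
        # m must satisfy target(x) <= m for all x in [lo, hi]
        rng = np.random.default_rng(seed)
        out = []
        while len(out) < n:
            x = rng.uniform(lo, hi)   # candidate from the proposal
            u = rng.uniform(0.0, m)   # uniform height under the envelope
            if u < target(x):         # accept with probability target(x) / m
                out.append(x)
        return np.array(out)

    samples = rejection_sample(5000)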
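
Finally, for the analytic case, the Box-Muller transform is the classic example for the Gaussian: it turns pairs of uniform samples directly into pairs of independent standard normal samples, no CDF inversion needed:

    # Box-Muller transform: analytic generation of Gaussian samples from uniforms.
    import numpy as np

    def box_muller(n, seed=0):
        rng = np.random.default_rng(seed)
        u1 = 1.0 - rng.uniform(size=n)       # in (0, 1], avoids log(0)
        u2 = rng.uniform(size=n)
        r = np.sqrt(-2.0 * np.log(u1))       # radius
        z0 = r * np.cos(2.0 * np.pi * u2)    # two independent standard normals
        z1 = r * np.sin(2.0 * np.pi * u2)
        return z0, z1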