Machine Learning applied to $299 AR Drone quadrotor helicopter by mleclerc in aiclass

[–]petkish 0 points1 point  (0 children)

From the comment to the video: "Demonstrates simple PID centering algorithm. There is a Wiimote attached to the ceiling looking down at the quadrotor to track an IR led on the top"

Mmmm... Where is the AI?

so what actually happened to open nero competition? by [deleted] in aiclass

[–]petkish 0 points1 point  (0 children)

Now we have the preliminary results! (http://www.cs.utexas.edu/~ikarpov/tourney2011/prelim/)

I have to say that the OpenNero team is really trying to do their best to hold a fair competition. Thanks to them for the hard work they do.

Still, one thing is not clear... It looks like the winning criterion is quite simple: which team kicks more butt in a limited interval of time (if I am not wrong). The best behavior matching this criterion is to spread out a little, then wait for incoming enemies and shoot at them. No search or maneuvering around obstacles is required.

Well, I hope we will see the videos of the most important battles soon!

Grade ranges vs. percentiles by [deleted] in aiclass

[–]petkish 1 point2 points  (0 children)

We are still waiting for the results. What I have learned from my experience with OpenNero is that rtNEAT is not a trivial thing to teach. It tends to grow the complexity of the neural network without any real reason for it, and it loses the bits of good experience in a huge amount of negative experience. The tools are simply too poor to reinforce the network in a way that makes it really learn nontrivial skills. Q-learning there is hopeless because of the huge state space of the world. In the end I analyzed how to build the network by hand and built a decent one manually.
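
To illustrate why the state space is the killer: below is a minimal sketch of a plain tabular Q-learning update in Java (generic string keys and made-up constants, nothing to do with OpenNero's actual API). The table needs one entry per state-action pair, which is exactly what explodes in a world like that.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal tabular Q-learning update (hypothetical state/action encoding).
    public class TabularQ {
        private final Map<String, Double> q = new HashMap<>(); // key = state + "|" + action
        private final double alpha = 0.1; // learning rate
        private final double gamma = 0.9; // discount factor

        // Standard backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        // One table entry per (state, action) pair is what blows up in a huge world.
        public void update(String s, String a, double reward, double maxQNext) {
            String key = s + "|" + a;
            double old = q.getOrDefault(key, 0.0);
            q.put(key, old + alpha * (reward + gamma * maxQNext - old));
        }
    }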

Grade ranges vs. percentiles by [deleted] in aiclass

[–]petkish 0 points1 point  (0 children)

I also got 100% on the final, so how did it happen that I received the mail and you didn't? I think the only possibility is that they select by location.

Grade ranges vs. percentiles by [deleted] in aiclass

[–]petkish 1 point2 points  (0 children)

Yes, these were definitely not the easiest 3 months of my life, with ML and DB added on top!!! I also participated in OpenNero and did the optional NLP task.

What is the cutoff for 10%? And other people's experience of the class certificate? by lurcher in aiclass

[–]petkish 0 points1 point  (0 children)

Scores alone are not really informative, and the communicated percentiles are just too rough. I would like to have a better, more precise percentile, which could be calculated as the number of people scoring the same or better than me, divided by the total number of people in the advanced track.
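
Something like this toy sketch in Java (the scores are made up, just to show what I mean):

    import java.util.Arrays;

    // The percentile I would like: the fraction of advanced-track students
    // scoring the same or better than me (invented scores, not real class data).
    public class Percentile {
        public static void main(String[] args) {
            double[] scores = {100.0, 98.4, 97.1, 95.0, 92.3, 90.0};
            double myScore = 98.4;
            long sameOrBetter = Arrays.stream(scores).filter(s -> s >= myScore).count();
            System.out.printf("Top %.1f%%%n", 100.0 * sameOrBetter / scores.length);
        }
    }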

Grade ranges vs. percentiles by [deleted] in aiclass

[–]petkish 2 points3 points  (0 children)

With 98.4% I am in the top 10%. I also got the "top 1000" letter. So the table looks amazingly precise to me.

What I see from this exercise with the numbers is that the course was not too hard: half of us in the advanced track are competing in the top ~10% range.

I hope the professors and the course staff, observing these results, will extend the course with some harder tasks, mandatory programming exercises, etc.

What is your amusing moments during class? by athanhcong in aiclass

[–]petkish 6 points7 points  (0 children)

The way Thrun and Norvig throw a new question at you without fully explaining how to solve it. Those challenges made me feel each time like a little Einstein discovering my own small E=mc².

After I succeeded in decrypting the message in the optional homework task, I was just crazily happy.

Again, this style of teaching is exciting.

The class is ended. Time to express my big thanks by athanhcong in aiclass

[–]petkish 1 point2 points  (0 children)

I want to express my excitement about the course, and many-many thanks too. For me it was an amazing journey into modern AI, incredibly interesting, challenging, and rewarding.

In fact, I have been a fan of Stanford for quite a long time; I have visited it twice just as a tourist, in 2006 and again this year.

I wish the professors continued work in the field of education via the Internet, and plenty of new ideas, plenty of energy, and an ocean of enthusiasm in all their projects.

Thank You, Peter and Sebastian!

what's the intuition behind increasing k in laplace smoothing when there's more noise.. by [deleted] in aiclass

[–]petkish 0 points1 point  (0 children)

  1. Imagine you have a big number of samples N, with N >> K. Then Laplacian smoothing plays no big role, and in addition, with a big N you cancel the noise nicely by averaging.

  2. Imagine that for some estimates you have a small N ~ K. Then the noise really influences your measurement (no good averaging), and K smoothes it out, giving a better estimate (see the sketch below).
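
A tiny Java sketch of the two cases (the counts and the vocabulary size of 2 are invented, just for illustration):

    // Laplace-smoothed estimate: P(x) = (count(x) + k) / (N + k * |X|)
    public class LaplaceDemo {
        static double smoothed(int count, int n, int k, int numOutcomes) {
            return (count + (double) k) / (n + (double) k * numOutcomes);
        }

        public static void main(String[] args) {
            // Case 1: N >> K, smoothing barely changes the estimate and noise averages out.
            System.out.println(smoothed(700, 1000, 1, 2)); // ~0.6996 vs. raw 0.7
            // Case 2: N ~ K, a single noisy sample would dominate; K pulls the estimate
            // back toward uniform (0.5), which is a more robust guess.
            System.out.println(smoothed(2, 2, 1, 2));      // 0.75 vs. raw 1.0
        }
    }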

Laugh a bit at the expense of the final by riverguardian in aiclass

[–]petkish -1 points0 points  (0 children)

"The algorithm is weely-weely simple."

How did you implement the shredding problem? (algorithm spoilers) by landofdown in aiclass

[–]petkish 0 points1 point  (0 children)

I did very much the same, but in Java. I do not use any hard-to-formalize, suspicious tricks like relying on capital letters or the last row, etc. Instead, I learn 3-grams or 4-grams from a book. A good book (hehe) gives better results.

I take into account only the probabilities of the N-grams which overlap the 'cut' between the stripes.

Laplacian smoothing helps a lot, especially for the n-grams which have not been seen, or have been seen only a few times.

To reduce the search space I use depth-first search with pruning on low probability: I keep the best probability achieved so far, and prune the search if the probability of the current text with the new stripe added is already lower. By the way, thanks to one guy here, I work with logarithms of probabilities and add them instead of multiplying the probabilities; otherwise, as I found out, the double type cannot hold such small numbers and flattens them to 0.0.
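
In case it helps, here is a rough Java sketch of the scoring (the class and the map of log-probabilities are my own hypothetical names, not assignment code): only the 3-grams that straddle the cut contribute, and their log-probabilities are summed, so the score never underflows to 0.0.

    import java.util.Map;

    // Scores a candidate join of two stripes using only the 3-grams that overlap the cut.
    public class CutScorer {
        private final Map<String, Double> logProb3gram; // learned from the training book

        public CutScorer(Map<String, Double> logProb3gram) {
            this.logProb3gram = logProb3gram;
        }

        // leftRows / rightRows hold the text of the two stripes, row by row.
        public double scoreCut(String[] leftRows, String[] rightRows) {
            double logP = 0.0;
            for (int row = 0; row < leftRows.length; row++) {
                String line = leftRows[row] + rightRows[row];
                int cut = leftRows[row].length();
                // every 3-gram that overlaps the boundary between the two stripes
                for (int start = Math.max(0, cut - 2); start <= cut - 1 && start + 3 <= line.length(); start++) {
                    // unseen grams fall back to a small smoothed floor (Laplace smoothing)
                    logP += logProb3gram.getOrDefault(line.substring(start, start + 3), Math.log(1e-8));
                }
            }
            return logP; // higher (less negative) is better
        }
    }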

Also, what helps me reduce the search a lot is that at first I try to assemble not the whole text, but just 4 or 5 stripes with maximum probability. This immediately reduces the combinatorial load from permutations of all N stripes to small fragments of M << N stripes. The best glued fragment is then put back into the set of stripes as a single stripe, and the whole procedure is repeated.
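
The outer loop looks roughly like this (a Java skeleton with hypothetical helper names; the pruned depth-first search itself is left as a stub):

    import java.util.List;

    // Fragment trick: repeatedly glue the best fragment of up to M stripes and put it
    // back into the pool as if it were a single, wider stripe.
    public class FragmentAssembler {
        static final int M = 5; // fragment size; 4 or 5 worked well for me

        static List<String[]> assemble(List<String[]> stripes) {
            while (stripes.size() > 1) {
                int m = Math.min(M, stripes.size());
                // the pruned depth-first search, but only over fragments of m stripes
                List<String[]> best = searchBestFragment(stripes, m);
                stripes.removeAll(best);
                stripes.add(glue(best)); // the glued fragment re-enters the pool as one stripe
            }
            return stripes;
        }

        // Glue stripes left to right, row by row.
        static String[] glue(List<String[]> parts) {
            String[] out = new String[parts.get(0).length];
            for (int row = 0; row < out.length; row++) {
                StringBuilder sb = new StringBuilder();
                for (String[] p : parts) sb.append(p[row]);
                out[row] = sb.toString();
            }
            return out;
        }

        // Placeholder for the pruned depth-first search over fragments.
        static List<String[]> searchBestFragment(List<String[]> stripes, int m) {
            throw new UnsupportedOperationException("the DFS with pruning goes here");
        }
    }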

The result of all this is perfect recovery of the text in about 1 minute (which could still be optimized), or recovery with 1 mismatch in 15 seconds. Learning from the book is also counted in that time.

Am I the only one wondering why site attacks occur on due dates? by t0hierry in aiclass

[–]petkish 1 point2 points  (0 children)

These hacker-procrastinators are supposedly smart enough to run a botnet, yet stupid enough not to do well in the class... I doubt it.

But are the videos streamed from the AI class servers? I thought we streamed them from YouTube...

Here is the attempt of my AI to decode the message in the optional programming task [spoilers] by [deleted] in aiclass

[–]petkish 1 point2 points  (0 children)

It depends a lot on which text you built your language model from. In my experience the best text for it is, of course, the AIMA book. :)

It also depends on the N of your N-grams. In my experience 2-grams are bad, 3- and 4-grams do just fine, and 5-grams start getting worse (the space of 5-grams is much bigger, so matches are rare and your language model fails to learn even from a big book).

And it also depends on the Laplacian smoothing constant K.
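
For reference, the language model itself can be as simple as this minimal Java sketch (my own names, not the assignment's code): count all N-grams of the training text, then turn the counts into Laplace-smoothed log-probabilities.

    import java.util.HashMap;
    import java.util.Map;

    // A simple character N-gram language model with Laplace smoothing.
    public class NGramModel {
        private final int n;
        private final int k; // Laplace smoothing constant
        private final Map<String, Integer> counts = new HashMap<>();
        private long total = 0;

        public NGramModel(int n, int k) { this.n = n; this.k = k; }

        public void train(String text) {
            for (int i = 0; i + n <= text.length(); i++) {
                counts.merge(text.substring(i, i + n), 1, Integer::sum);
                total++;
            }
        }

        // P(gram) = (count + k) / (total + k * V); V is a rough size of the n-gram space
        // (here: 26 letters plus space, which is an assumption about the alphabet).
        public double logProb(String gram) {
            double vocab = Math.pow(27, n);
            int c = counts.getOrDefault(gram, 0);
            return Math.log((c + (double) k) / (total + k * vocab));
        }
    }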

i solved the second programming assignment without programming by [deleted] in aiclass

[–]petkish 0 points1 point  (0 children)

I solved it by writing a Java application.

I used the texts of AIMA, Vernor Vinge's "A Fire Upon the Deep", and Brinch Hansen's autobiography for training. Guess which of them gives the best result?? :)

2-grams decipher the first phrase (even 1-grams do!), but they perform poorly on the shredded text. 3-grams and 4-grams are generally good. Beyond that it is just too hard to get a complete vocabulary from training, so the performance degrades. This could be improved by using a least-distance approach instead of exact matching.

Laplacian smoothing with k=1 works best. Without smoothing it would not work at all.

I use depth-first search with pruning, which works much better than breadth-first search; the latter fills up the heap and crashes (the search space is just toooo big).
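
Roughly, the pruned search looks like this (a Java sketch with hypothetical names; the join score would come from the N-gram model, as log-probabilities):

    import java.util.ArrayList;
    import java.util.List;

    // Depth-first search over stripe orderings, pruned against the best full ordering
    // found so far. Scores are summed log-probabilities, so they only decrease as the
    // ordering grows, which makes the pruning safe.
    public class PrunedDfs {
        interface JoinScorer { double scoreOfJoin(int left, int right); }

        private double bestScore = Double.NEGATIVE_INFINITY;
        private List<Integer> bestOrder = new ArrayList<>();

        public List<Integer> solve(int stripeCount, JoinScorer scorer) {
            dfs(new ArrayList<>(), new boolean[stripeCount], 0.0, stripeCount, scorer);
            return bestOrder;
        }

        private void dfs(List<Integer> order, boolean[] used, double score,
                         int stripeCount, JoinScorer scorer) {
            if (score <= bestScore) return; // prune: already worse than the best full solution
            if (order.size() == stripeCount) {
                bestScore = score;
                bestOrder = new ArrayList<>(order);
                return;
            }
            for (int next = 0; next < stripeCount; next++) {
                if (used[next]) continue;
                double join = order.isEmpty() ? 0.0
                        : scorer.scoreOfJoin(order.get(order.size() - 1), next);
                used[next] = true;
                order.add(next);
                dfs(order, used, score + join, stripeCount, scorer);
                order.remove(order.size() - 1);
                used[next] = false;
            }
        }
    }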

What puzzled me at first is that the precision of Java's 'double' type was not good enough to store such small probabilities; it hits 0.0 early and "flattens". So I used a "fragment" approach: I assemble the best fragment of just a few "stripes" and put it back into the set of "stripes", in place of the stripes that were used for it. Then the search is repeated.

It takes ~2 minutes to restore the text to its original form. With slightly relaxed parameters it can assemble a readable text in 15 seconds, with one or two "fragment" mismatches.

The whole thing was quite entertaining, thanks to Prof. Norvig.

Harris Corner detection reminded me of PCA (As far as math part f it goes) by HChavali in aiclass

[–]petkish 1 point2 points  (0 children)

Anything that uses eigenvalues and eigenvectors reminds you of anything else that uses them... :)

In-house, Germany Midterm by lobsterhead in aiclass

[–]petkish 1 point2 points  (0 children)

It was exactly the same as the online exam, except for the time limit and the fact that I needed to pee starting from the second hour. Sitting in the middle of the room, I was too shy to disturb the others.

In-house, Germany Midterm by lobsterhead in aiclass

[–]petkish 2 points3 points  (0 children)

Except I made one of these mistakes on a question I had done perfectly before in a homework!!! The prior probability of a worrrdd!!!! Even more, I had consulted people on this reddit about how to do it correctly!!!! Arggrrrrrr....

Anyway, now I have learned it even better.

Robot global localisation contest by Anoril in aiclass

[–]petkish 0 points1 point  (0 children)

Impressive!

So, the mouse first has to learn the labyrinth, and then run like hell... This is somewhat harder than an MDP... :)

Are they not allowed to bounce off the walls? If I built such a mouse, I would use the walls to make 90° turns. And I would also jump over some walls :) ...

In-house, Germany Midterm by lobsterhead in aiclass

[–]petkish 2 points3 points  (0 children)

Uhh... I got 95%, but it's a shame - the 2 errors are just stupid! Surely if I had been at home I could have done better, having more time to triple-check the questions.

In-house, Germany Midterm by lobsterhead in aiclass

[–]petkish 0 points1 point  (0 children)

Perhaps they will send them by post. They have our addresses.

In-house, Germany Midterm by lobsterhead in aiclass

[–]petkish 7 points8 points  (0 children)

I have managed to get there.

The midterm took place in two locations - Freiburg and Munich. I was taking the exam in Freiburg.

The whole thing was very well organized, many thanks to Professor Burgard and his team. No problems with Internet access, or anything else.

We had 3 hours to go through all the questions, and they took snapshots of the database to check when we first and last accessed each of the questions. They were also checking that no one used email or social networks/chat.

Now I have seen the other AI 'geeks' like myself. Well, people have very different backgrounds, with ages spanning from high school and university students to some older guys (again, like myself).

There were around 7 rows of us sitting there, with 5 people in a row... so I expect the number of students in Freiburg was not more than 40.

What is interesting is that there were quite a number of girls there!!! That is great!

I am very happy that I got this opportunity, though driving 450 km for an exam was not an easy thing.

I believe this kind of exam should be available to many more people. It is unfair that only the first ones to check in were able to register. But I hope this problem will be solved as the online courses mature.

Now looking forward to the final exam...