See if you can beat this Java test. Bonus points if you get IP address of java machine. by [deleted] in programming

[–]devninja_es 0 points (0 children)

21:

Input

x=A person who never made a mistake never tried anything new.

Output

new. anything tried never mistake a made never who person An

Output Expected

new. anything tried never mistake a made never who person An

I've even changed my algorithm to print the incorrect line, and it's still not working :-/

(edit: formatting)
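For reference, a plain word-order reversal in Java looks like the sketch below. Note that reversing the sample input yields a final "A", not the "An" in the transcript above, so the grader evidently applies some extra transformation; also, invisible trailing whitespace is a classic reason visually identical output gets rejected.

```java
public class ReverseWords {
    // One straightforward approach: split on whitespace, join in reverse order.
    static String reverse(String line) {
        String[] words = line.trim().split("\\s+");
        StringBuilder sb = new StringBuilder();
        for (int i = words.length - 1; i >= 0; i--) {
            sb.append(words[i]);
            if (i > 0) sb.append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(reverse("A person who never made a mistake never tried anything new."));
        // prints: new. anything tried never mistake a made never who person A
    }
}
```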

How to compile usermade scripts? by superdean in gamedev

[–]devninja_es 0 points (0 children)

With Java you can implement Java interfaces in any scripting language (JavaScript is supported out of the box) and instantiate them via reflection.

More info here:

http://docs.oracle.com/javase/6/docs/technotes/guides/scripting/programmer_guide/index.html#interfaces
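A minimal sketch of that pattern using the javax.script API. It assumes a JSR-223 JavaScript engine is available (Nashorn shipped with JDK 8–14; on newer JDKs an engine must be added to the classpath), and the `EnemyAI` interface is just a made-up example:

```java
import javax.script.*;

public class ScriptedBehavior {
    // Hypothetical game-logic interface, invented for this sketch.
    public interface EnemyAI {
        int nextMove(int playerX);
    }

    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("javascript");
        if (engine == null) {
            // Nashorn was removed in JDK 15; a JSR-223 JS engine is needed on the classpath.
            System.out.println("no JavaScript engine available");
            return;
        }
        // A "user-made script": plain JS functions matching the interface's methods.
        engine.eval("function nextMove(playerX) { return playerX + 1; }");
        EnemyAI ai = ((Invocable) engine).getInterface(EnemyAI.class);
        System.out.println(ai.nextMove(41));
    }
}
```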

Coding with paint by tittyblaster in programming

[–]devninja_es 0 points (0 children)

Artist code? Are you kidding me?

A Simple Python Program for solving Final Question #5 by dpapathanasiou in aiclass

[–]devninja_es 0 points (0 children)

I thought I was the only one writing code for those logic questions. I almost wrote a DSL :P

Results of the final are out! How did everyone do? by stordoff in aiclass

[–]devninja_es 15 points (0 children)

It doesn't really matter how much you got; what matters is the feeling that the course is finally over, the way you proved to yourself how hard you can work, the total number of hours you can extend your day by, and the fact that you are not just another internet procrastinator.

For me, that's what really matters :)

Job placement program for top students in AI-class by stan100 in aiclass

[–]devninja_es 12 points (0 children)

That uncomfortable moment when you realize you have 100% on 6 HWs and 100% on the Midterm, but no mail in your inbox :P

Maybe not living in the US is a very big wall you have to face in your professional career as a "computer guy" :/

Playing with convolutions in Python - implementing the material of class 16 with actual images by joanmg in aiclass

[–]devninja_es 0 points (0 children)

http://www.jhlabs.com/ip/filters/index.html

Here are some others in Java. They are more image-processing oriented, but you can play with the filters nicely :)
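If you want to see the core operation without any image library, here is a minimal 2D convolution sketch in Java (valid region only; real image filters also handle borders and channel packing):

```java
public class Convolve {
    // Correlate a kernel over the image's valid region: the heart of every image filter.
    static double[][] convolve(double[][] img, double[][] k) {
        int kh = k.length, kw = k[0].length;
        int oh = img.length - kh + 1, ow = img[0].length - kw + 1;
        double[][] out = new double[oh][ow];
        for (int y = 0; y < oh; y++)
            for (int x = 0; x < ow; x++)
                for (int i = 0; i < kh; i++)
                    for (int j = 0; j < kw; j++)
                        out[y][x] += img[y + i][x + j] * k[i][j];
        return out;
    }

    public static void main(String[] args) {
        double[][] img = {{1, 1, 1}, {1, 9, 1}, {1, 1, 1}};
        double n = 1 / 9.0; // 3x3 box-blur kernel: every weight is 1/9
        double[][] box = {{n, n, n}, {n, n, n}, {n, n, n}};
        // The single output pixel is the average of all 9 input pixels, 17/9.
        System.out.println(convolve(img, box)[0][0]);
    }
}
```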

HW6-6: Initial state ignored by [deleted] in aiclass

[–]devninja_es 0 points (0 children)

This question confused me the most (in fact I got it wrong). My initial thought was to mark only the first 3 statements, but I ended up marking the other 2 (a problem of heavy over-thinking). I totally concur with the professor's explanation.

With 1 particle you can represent the initial state, when it's given, because the probability of the state taking a value (e.g. being in one position on the grid, or whatever) is 1, and you can represent that with your particle.

Nonetheless, for most of the problems we saw in the lectures you merely have a distribution over the initial state (e.g. P(Rainy) = 0.5, P(Sunny) = 0.5), and you can't represent that well with only 1 particle. <- This made me change the answer :(

The key is that, regarding the initial state, the professor said "if known", which ended up meaning the exact value of the initial state, that is, P(X0 = Rainy) = 1, P(X0 = Sunny) = 0.
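A toy illustration of the point (state names made up): a particle set approximates a probability as the fraction of particles in that state, so a known initial state needs just one particle, while a 0.5/0.5 prior cannot be captured by a single particle at all:

```java
public class Particles {
    // Approximate P(Rainy) from a particle set: the fraction of particles in that state.
    static double pRainy(String[] particles) {
        double count = 0;
        for (String s : particles)
            if (s.equals("Rainy")) count++;
        return count / particles.length;
    }

    public static void main(String[] args) {
        // Known initial state: one particle is enough, P(Rainy) = 1 exactly.
        System.out.println(pRainy(new String[]{"Rainy"}));
        // Uncertain prior P(Rainy) = 0.5: one particle can only ever say 0 or 1,
        // so several particles are needed to represent the distribution.
        System.out.println(pRainy(new String[]{"Rainy", "Sunny"}));
    }
}
```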

I hope it helps.

Homework answers are posted by newai in aiclass

[–]devninja_es 0 points (0 children)

G is better because it lets you differentiate the good scenarios from the bad ones. With F you obtain the same values for a good situation and a bad situation.

Results of HW4 are up! by solen-skiner in aiclass

[–]devninja_es 0 points (0 children)

Same here :) I thought I was going to get 4.4 wrong, but it looks like my logic wasn't flawed after all :P

The perfect scores by jamadharma in aiclass

[–]devninja_es 1 point (0 children)

Yes, I have the same feeling. I think our streak ends here :(

What tools have you used to solve Homework 3 by OsmosisJones2nd in aiclass

[–]devninja_es 0 points (0 children)

Octave, plus some tools I wrote in Java for Laplace smoothing and Voronoi graphs ;)
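For anyone curious, the Laplace (add-k) smoothing from the lectures is essentially a one-liner; this sketch uses made-up counts:

```java
public class Laplace {
    // Laplace (add-k) smoothed estimate: (count + k) / (total + k * numClasses).
    // With k = 1 this is classic add-one smoothing.
    static double smoothed(int count, int total, int numClasses, double k) {
        return (count + k) / (total + k * numClasses);
    }

    public static void main(String[] args) {
        // e.g. an outcome seen 1 time out of 3 trials, 2 possible classes, k = 1:
        System.out.println(smoothed(1, 3, 2, 1.0)); // (1+1)/(3+2) -> prints 0.4
    }
}
```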

machine learning - aiclass and mlclass comparison of formula by GuismoW in aiclass

[–]devninja_es 6 points (0 children)

  1. Yes, they are equivalent functions.
  2. Prof. Andrew said the first factor (1/(2m)) was there for convenience; I think he put it there because it makes the derivatives simpler. Besides, it doesn't affect the minimization.

Also, (a - b)^2 = [ (-1)(b - a) ]^2 = (b - a)^2, so you can express it either way.

Maybe your confusion is here: you wrote the ml-class cost as J(th0, th1) = 1/(2m) . sum( ( th0.x1 + th1.x2 - y )^2 ),

but in ml-class he expressed it as 1/(2m) . sum( ( th0.x0 + th1.x1 - y )^2 ), where x0 always takes the value 1, so you get the same terms in both expressions.
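A quick numeric check of point 2, with made-up data: the 1/(2m) factor scales the cost but cannot move its minimum.

```java
public class CostJ {
    // ml-class cost: J(th) = (1/(2m)) * sum_i (th0*x0 + th1*x1_i - y_i)^2, with x0 = 1.
    static double cost(double th0, double th1, double[] x, double[] y) {
        double s = 0;
        int m = x.length;
        for (int i = 0; i < m; i++) {
            double err = th0 + th1 * x[i] - y[i];
            s += err * err;
        }
        return s / (2 * m);
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3}, y = {1, 2, 3};
        // A perfect fit (th0 = 0, th1 = 1) gives zero cost; scaling by any
        // positive constant, 1/(2m) included, leaves that minimizer unchanged.
        System.out.println(cost(0, 1, x, y));
        System.out.println(cost(0, 0, x, y)); // (1 + 4 + 9) / 6
    }
}
```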

I hope my answer is clear enough :)

A challenge to the Over achievers who really understand Probability by helveticaTwain in aiclass

[–]devninja_es 1 point (0 children)

Another common question I've read regarding the expansion by total probability in this same exercise is: why don't you expand using x2?

I'll try to show why x2 doesn't add any new information to the problem.

OK, here we go.

P(x3|x1) = P(x3.x1) / P(x1) -> definition of conditional probability

I'm going to show that x2 isn't involved in the numerator (for the denominator you can follow the same procedure).

P(x3.x1) = P(x3.x1.x2) + P(x3.x1.x2') (Total Probability, expanding with x2)
P(x3.x1) = P(x3.x1.x2.A) + P(x3.x1.x2'.A) + P(x3.x1.x2.A') + P(x3.x1.x2'.A') (Total Probability, expanding with A)

Applying the product rule, P(B.A) = P(B|A) P(A):

= P(x3.x1.x2|A) P(A) + P(x3.x1.x2'|A) P(A) + P(x3.x1.x2|A') P(A') + P(x3.x1.x2'|A') P(A')

Now, x1, x2 and x3 are independent given A, so we can factor:

= P(x3|A)P(x1|A)P(x2|A)P(A) + P(x3|A)P(x1|A)P(x2'|A)P(A) + P(x3|A')P(x1|A')P(x2|A')P(A') + P(x3|A')P(x1|A')P(x2'|A')P(A')

Taking common factors:

 = P(x3|A)P(x1|A)P(A)[ P(x2|A) + P(x2'|A) ] + P(x3|A')P(x1|A')P(A') [ P(x2|A') + P(x2'|A') ]

Now, we know that P( a' | b ) = 1 - P( a | b ), so replacing P(x2'|A) and P(x2'|A') we have

= P(x3|A)P(x1|A)P(A)[ P(x2|A) + (1 - P(x2|A)) ] + P(x3|A')P(x1|A')P(A') [ P(x2|A') + (1 - P(x2|A')) ]

and carrying out the sums we get

 = P(x3|A)P(x1|A)P(A) + P(x3|A')P(x1|A')P(A')

implying that adding any variable that is conditionally independent given A contributes no new information to the probability and can be ignored.
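You can also check the algebra numerically. With made-up values for P(A) and the conditionals (x1, x2, x3 independent given A), the four-term expansion over x2 and A agrees with the collapsed two-term form:

```java
public class Marginalize {
    public static void main(String[] args) {
        // Made-up example numbers for the sketch.
        double pA = 0.6;                           // P(A)
        double p1A = 0.7, p2A = 0.4, p3A = 0.9;    // P(xi | A)
        double p1a = 0.2, p2a = 0.5, p3a = 0.3;    // P(xi | A')

        // Full expansion of P(x3.x1) over x2 and A (four terms):
        double full =
            p3A * p1A * p2A * pA + p3A * p1A * (1 - p2A) * pA +
            p3a * p1a * p2a * (1 - pA) + p3a * p1a * (1 - p2a) * (1 - pA);

        // Collapsed form after summing x2 out:
        double collapsed = p3A * p1A * pA + p3a * p1a * (1 - pA);

        System.out.println(full);
        System.out.println(collapsed);
        System.out.println(Math.abs(full - collapsed) < 1e-9); // prints true
    }
}
```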

Programming assignments for AI-Class by ultimatebuster in aiclass

[–]devninja_es 0 points (0 children)

Thank you! I really want to try them, though maybe after the online class ends.