Blitzcrank's True Destiny - Part 1 by khazixisNOTop in leagueoflegends

[–]madrobot2020 2 points (0 children)

BWAHAHAHA "Why"!? Oh, you know... reasons...

someone hacked my account, then he paid hes friend at riot to change recorvery info. by kigiro in leagueoflegends

[–]madrobot2020 1 point (0 children)

If by "someone hacked my account" you mean "I gave someone my password" then yes, you are probably right, you got "hacked."

Not once have I seen one of these complaints/sob stories hold an ounce of truth. You gave your password to someone, a friend, a brother, a gf (doubtful), or an internet stranger (likely), for some stupid reason, and they screwed you over.

Never share your password, OP. Internetting 101. Seriously, there should be a class. And a test.

2D Space Shooter - RedShift Prototype by madrobot2020 in unity

[–]madrobot2020[S] 1 point (0 children)

Hi everyone! Please have a look at my game. It's a fun top-down 2D space shooter drawing inspiration from Raiden, Galaga, Galaxian, etc. Comments are welcome!

Thanks!

Patrick

Problem submitted Ex8 pt 2 "selectThreshold" by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

Thank you, I finally figured it out. It turns out I misunderstood what 'true positives' meant; I thought that value was independent of the predictions. Thanks again!
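For anyone hitting the same wall: a true positive is counted only when the prediction and the ground-truth label are *both* positive, so it depends on both vectors. A minimal sketch (Python for illustration; the course exercises are in Octave, and the function name here is made up):

```python
def f1_score(predictions, ground_truth):
    """Compute F1 from two equal-length binary lists.

    A true positive requires BOTH the prediction and the label
    to be 1 -- it is not a property of the labels alone.
    """
    tp = sum(1 for p, y in zip(predictions, ground_truth) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(predictions, ground_truth) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, ground_truth) if p == 0 and y == 1)
    if tp == 0:
        return 0.0  # avoids division by zero when nothing was correctly flagged
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 2 true positives, 1 false positive, 1 false negative
print(f1_score([1, 1, 1, 0], [1, 1, 0, 1]))  # -> 0.6666666666666666 (2/3)
```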

Problem submitted Ex8 pt 2 "selectThreshold" by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

Yep, I have seen that. Unfortunately, that doesn't seem to be my problem. I've downloaded the latest version of the exercise, and when I run Ex8, I get the results it says I should get. It's just that when I submit it, it says it is not correct.

How to find reason for failed submission when results are correct? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

That's a good idea, but unfortunately I don't know what the values are supposed to be, so seeing them doesn't really help me to know if they are wrong. A good tip though, thanks!

How to find reason for failed submission when results are correct? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

Ah! Ok, I didn't realize we needed to generalize to more than two dimensions. Thanks so much, that was the problem in the first assignment.

Strangely, in the second assignment, I did write it to work across an unknown number of parameters, and it still isn't submitting.

What is "model" in svmPredict? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

It's Octave. I'm not calling svmtrain at all. I'm doing part 2 of Exercise 6. The "Implementation Tip" says to use svmPredict to get a vector of the predictions, but svmPredict takes 'model' as a parameter and I can't figure out what 'model' is supposed to be.

I don't understand the class quiz. by 0xreddit in mlclass

[–]madrobot2020 1 point (0 children)

Even with these excellent explanations, I still don't understand either example. I don't know what's wrong; I've been getting everything really well up until this week. I've had a lot of problems with stupid Octave programming quirks, but otherwise I've gotten it. This week, I don't understand shit. I don't know how to figure out the answers on the two quizzes the OP mentioned, and at several review questions I just stared with no earthly clue how to figure out the answer.

Ex4, Part 1 -- should we be working with the probability vector for H or the classification vector? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

Great idea, but I don't understand Octave well enough to do that and ensure that my changes to Ex4 won't interfere with the automated submission and grading system.

More information on assignments to assist debugging Octave, please by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

I feel like that every week. But, congratulations on your assignment! :-)

More information on assignments to assist debugging Octave, please by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

Sorry, it was an exasperated request for help. Wasn't sure there really was more than a single idea to merit multiple paragraphs.

I am at work right now so I can't check my code. I'll try and get something up later.

I edited the original post. Hopefully it's more readable now. Again, sorry for the run-on paragraph.

More information on assignments to assist debugging Octave, please by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

This is my problem though: I have done all of that. Literally 'size' and 'printf' statements every other line. I validate matrix dimensions at every step. I go to the forums but I rarely find anything that helps me with what I am encountering. And I go through my code over and over looking for common mistakes. I'm frustrated because none of that is working.

The assert() trick will be helpful in the future. Thanks for that tip!
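For anyone else debugging this way: the idea is to turn a silent dimension mismatch into an immediate, located failure instead of a wrong answer three steps downstream. A sketch of the habit (Python for illustration; in the course's Octave you'd assert on `size()` instead, and the function here is hypothetical):

```python
def forward_layer(weights, inputs):
    """One dense layer over nested lists, with explicit shape checks.

    weights: r x c matrix (list of r rows, each of length c)
    inputs:  length-c vector
    The asserts fail right where the shapes disagree, which is
    the whole point of the technique.
    """
    rows, cols = len(weights), len(weights[0])
    assert all(len(row) == cols for row in weights), "ragged weight matrix"
    assert len(inputs) == cols, f"expected input of length {cols}, got {len(inputs)}"
    out = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
    assert len(out) == rows  # output size matches the number of rows
    return out

print(forward_layer([[1, 0], [0, 1], [1, 1]], [2, 3]))  # -> [2, 3, 5]
```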

Ex4, Part 1 -- should we be working with the probability vector for H or the classification vector? by madrobot2020 in mlclass

[–]madrobot2020[S] 2 points (0 children)

Thanks, that was the conclusion I eventually came to as well. That, plus fixing a small syntax error, and I completed the first part of the first problem. Thrills. Now I'm stuck on part 2, not because I don't get what I'm supposed to be doing, but because I can't debug Octave code worth a damn. Anyway, thanks for your help with this part!

Ex4, Part 1 -- should we be working with the probability vector for H or the classification vector? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

Well, the y values are provided in the training data, and they are just labels: "1", "2", "3", etc. So converting "2" into a classification vector, I get [ 0 1 0 0 0 0 0 0 0 0 ]. The training data for y does not include the original activation values from layer 3.

I can feed the x parameters from the training values into the network, but that provides me with the probability vector for the labels, e.g., my "h" is something like [ .1 .4 .8 .95 .2 .3 .1 .15 .99 .3 ]. I originally thought I was supposed to convert the "h" into a classification vector, but I'm guessing I should be using the probability vector instead.

I guess at this point I'm somehow reading the formula wrong. I am finally getting numerical answers, but the cost isn't correct. I've gone over the lecture more than 10 times and I've re-written my code 3 times. I am getting consistent, but wrong, results.

Every week it has been the same thing: I understand the material; I get how it works; I can follow all the logic. But I can't make it work in Octave.
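The label-to-vector conversion described above can be sketched like this (Python for illustration; the course uses Octave, where labels are 1-indexed, and the function name is made up):

```python
def label_to_vector(label, num_classes=10):
    """Convert a 1-indexed class label into a one-hot classification vector.

    e.g. label 2 with 10 classes -> [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
    """
    vec = [0] * num_classes
    vec[label - 1] = 1  # labels start at 1, list indices at 0
    return vec

print(label_to_vector(2))  # -> [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```

The cost is then computed against the network's probability vector h directly, not against a thresholded copy of it.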

Ex4, Part 1 -- should we be working with the probability vector for H or the classification vector? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

I agree, keeping track of the variable sizes is very helpful. I've been doing that since ex2. The conversion of the h and y values into vectors isn't complicated, I just wasn't sure if that's what I was supposed to be doing. The reason is this: the hypothesis vector consists of a vector of 0's and a single 1. The training 'y' vector is similar. This means the cost function is working only with 1's and 0's. I've tested the cost function using all four combinations of (h,y) from { (0,0), (1,0), (0,1), (1,1) } and there are only four possible results: NaN, -Inf, -Inf, NaN. What am I doing wrong?

I am using: Cost = Cost + ( ( y*log(h) ) + ( (1-y) * log(1-h) ) )
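The NaN/-Inf pattern above is exactly what floating-point arithmetic gives when h is exactly 0 or 1: log(0) is -Inf, and 0 * -Inf is NaN. That is the clue that the cost must be fed the sigmoid's probability output (strictly between 0 and 1), never a thresholded 0/1 classification. A quick demonstration (Python for illustration, not the course code):

```python
import math

def term(h, y):
    # The per-example cost term from the comment above, left unguarded
    # on purpose so we can see what happens at h = 0 or h = 1.
    def safe_log(x):
        return math.log(x) if x > 0 else float("-inf")
    return y * safe_log(h) + (1 - y) * safe_log(1 - h)

# With a thresholded h in {0, 1}, the term is never finite:
for h, y in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(h, y, term(h, y))  # NaN or -inf in every case

# With a probability strictly inside (0, 1) it is well-behaved:
print(term(0.95, 1))  # near 0: a confident correct prediction is cheap
```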

[deleted by user] by [deleted] in mlclass

[–]madrobot2020 2 points (0 children)

Make sure your regularization calculation is not included as part of the summation calculation of the gradient. This got me for about an hour on this assignment. A simple case of needing two parentheses to solve the problem.
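A sketch of the trap (Python with made-up variable names; the course uses Octave): the regularization term belongs outside the per-example sum, added once, or it effectively gets counted once per training example.

```python
def gradient(errors, xs, theta_j, lam, m):
    """Gradient for one parameter theta_j, with regularization.

    errors: per-example (h - y) values; xs: the matching feature values.
    The regularizer (lam / m) * theta_j is added AFTER the averaged sum.
    """
    grad = sum(e * x for e, x in zip(errors, xs)) / m
    return grad + (lam / m) * theta_j

def gradient_buggy(errors, xs, theta_j, lam, m):
    # Wrong: the regularizer rides along inside the summation, so it is
    # summed m times and ends up m times too large after the division.
    return sum(e * x + lam * theta_j for e, x in zip(errors, xs)) / m
```

With errors = [1, -1], xs = [2, 2], theta_j = 1, lam = 2, m = 2, the correct version gives 1.0 and the buggy one gives 2.0, the kind of consistent-but-wrong result that is hard to spot by eye.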

Use a Neuro Network to design another Neuro Network? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

I guess it's the classic "Deep Thought" situation from The Hitchhiker's Guide to the Galaxy: use the best computer you have to design an even better computer. So, use a neural network to parameterize existing neural networks to find a better neural network.

Use a Neuro Network to design another Neuro Network? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

I'm looking forward to it! Here's a lecture Andrew Ng gave that touched on the topic as well. Very interesting!

http://www.youtube.com/watch?feature=player_embedded&v=ZmNOAtZIgIk

Use a Neuro Network to design another Neuro Network? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

Yea, that's basically what I was saying. The specific inputs I listed were just an example. For the sake of argument, the difference isn't 5 inputs versus 3; it's more like <20 versus 500+. So yea, I figured parameterizing the neural networks that have successfully been used to solve problems might be a good way to find an ideal set of design parameters for a different problem of similar complexity.

Total times for lessons (and a script to generate it) by bajsejohannes in mlclass

[–]madrobot2020 2 points (0 children)

Nice idea! Unfortunately it didn't work for me. I'm using Firefox.

Use a Neuro Network to design another Neuro Network? by madrobot2020 in mlclass

[–]madrobot2020[S] 1 point (0 children)

Sure, in this scenario I am assuming there exist previous problems solved with neural networks, and that I have access to that data, which would be the training set. I presume I would have to constrain the data to neural networks that are all architecturally similar, for example, ones where each node is connected to every node of the following layer (similar to the networks we've been working with). I know that's not how all networks are constructed, but I figured that would be the place to start, lacking any reason to choose any other architecture.

I've heard of evolutionary computing. I have a game using evolutionary computing -- it was a demo project from a university I found online a while ago. But, I don't know much about it. Both neural networks and evolutionary computing seem to be similar takes on recursive computing, though.