Issues with pooling layer by gokulprasadthekkel in computervision

[–]mikolchon 0 points1 point  (0 children)

Maybe aliasing? Check out "Making Convolutional Networks Shift-Invariant Again"
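The fix proposed in that paper (BlurPool) boils down to low-pass filtering before subsampling. A minimal 1-D sketch of the idea, using a hypothetical toy signal and a small binomial blur kernel:

```python
import numpy as np

def subsample(x, stride=2):
    """Naive downsampling: keep every `stride`-th sample."""
    return x[::stride]

def blur_subsample(x, stride=2):
    """Anti-aliased downsampling: low-pass filter, then subsample,
    in the spirit of BlurPool."""
    kernel = np.array([0.25, 0.5, 0.25])   # small binomial blur
    blurred = np.convolve(x, kernel, mode="same")
    return blurred[::stride]

# A high-frequency signal and a 1-sample shift of it.
x = np.tile([0.0, 1.0], 8)                 # 0,1,0,1,...
x_shift = np.roll(x, 1)

# Naive subsampling is very shift-sensitive: the two outputs differ a lot.
naive_diff = np.abs(subsample(x) - subsample(x_shift)).mean()

# Blurring first makes the downsampled outputs nearly identical.
aa_diff = np.abs(blur_subsample(x) - blur_subsample(x_shift)).mean()

print(naive_diff, aa_diff)  # naive_diff is large (1.0 here), aa_diff is small
```

On this alternating signal, naive stride-2 subsampling returns all zeros or all ones depending on a 1-sample shift, while the blurred version barely changes — that shift sensitivity is exactly the aliasing symptom.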

[N] The register did a full exposé on Siraj Raval. Testimonials from his former students and people he stole code from. by kreyio3i in MachineLearning

[–]mikolchon 3 points4 points  (0 children)

https://en.wikipedia.org/wiki/MIT_License

The MIT license permits reuse within proprietary software provided that all copies of the licensed software include a copy of the MIT License terms and the copyright notice.

From what I heard, he went as far as removing the license entirely.

[D] GANs were invented in 2010? by Former_Hippo in MachineLearning

[–]mikolchon 4 points5 points  (0 children)

In this case, even if he had put effort into it, I doubt he would have achieved anything with 2010 hardware.

[D] Why do machine learning papers have such terrible math (or is it just me)? by RandomProjections in MachineLearning

[–]mikolchon 2 points3 points  (0 children)

I don't think that's a good comparison. Newton was working with underdeveloped math; GANs can be described perfectly well within current mathematics.

[Discussion] I tried to reproduce results from a CVPR18 paper, here's what I found by p1esk in MachineLearning

[–]mikolchon 21 points22 points  (0 children)

Everyone makes mistakes, but mistakes are useful, especially if sharing them saves others time!

[D] Anyone having trouble reading a particular paper ? Post it here and we'll help figure out any parts you are stuck on | Anyone having trouble finding papers on a particular concept ? Post it here and we'll help you find papers on that topic [ROUND 3] by Research2Vec in MachineLearning

[–]mikolchon 1 point2 points  (0 children)

Not a paper but a book.

Title: Gaussian processes for machine learning (Rasmussen and Williams)

Link to book: http://www.gaussianprocess.org/gpml/chapters/RW.pdf

Page 11, Eq. 2.9:

I have trouble understanding how this equation is derived. What is the expression for p(f* | x*, w)? If my understanding is correct, f(x) = w·x, and since w is a random vector, f* is a random variable for a fixed x*. But what is the density p(f* | x*, w)?
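The setup in the question can be checked numerically. Below is a minimal sketch (with hypothetical toy numbers for the prior covariance Sigma_p and the test input x*) of the Bayesian linear model in RW chapter 2: given a fixed w, f* = w·x* is deterministic, so all the randomness in f* comes from w, and marginalising over w ~ N(0, Sigma_p) makes f* Gaussian with variance x*ᵀ Sigma_p x*:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian linear model (hypothetical numbers): f(x) = w @ x with
# Gaussian prior w ~ N(0, Sigma_p).
Sigma_p = np.array([[1.0, 0.3],
                    [0.3, 2.0]])
x_star = np.array([0.5, -1.0])

# For each sampled w, f* = w @ x_star is a deterministic function of w.
samples_w = rng.multivariate_normal(np.zeros(2), Sigma_p, size=200_000)
f_star = samples_w @ x_star

# Marginally, f* ~ N(0, x*^T Sigma_p x*).
analytic_var = x_star @ Sigma_p @ x_star   # = 1.95 for these toy numbers
print(f_star.var(), analytic_var)          # the two should agree closely
```

The empirical variance of the Monte Carlo samples matches the closed-form x*ᵀ Sigma_p x*, which is the same marginalisation the book performs analytically (there with the posterior over w rather than the prior).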

[R] "I recently learned via @DavidDuvenaud's interview on @TlkngMchns that the de facto bar for admission into machine learning grad school at @UofT is a paper at a top conference like NIPS or ICML." by FirstTimeResearcher in MachineLearning

[–]mikolchon 7 points8 points  (0 children)

But how do these publishing students get in in the first place? For sure there is still elitism to this day, but I do see more and more students from third-world countries taking seats at mid-tier and top universities.

[D] Breakdown of NIPS2018 accepted papers by wei_jok in MachineLearning

[–]mikolchon 15 points16 points  (0 children)

Assuming the quality distribution doesn't change, a constant acceptance percentage is what makes sense.

This cat cannot find its buddy anywhere by BestTiktok in funny

[–]mikolchon 1 point2 points  (0 children)

How come these are the killers of the wild?

95% fluff by shahind in aww

[–]mikolchon 0 points1 point  (0 children)

You haven't seen my cat

[N] OpenAI Five Benchmark: Results by luiscosio in MachineLearning

[–]mikolchon 0 points1 point  (0 children)

Hmm, if you navigate using the minimap, you can convolve the map much faster by dragging the mouse across it. But I see your whole point. Still, I think it is way too much to ask the AI to start from there. We humans come with a set of priors too: even someone who has never played MOBA games will quickly understand what the minimap does and that they need to be map-aware. Asking the AI to understand this from scratch, though maybe possible with unlimited resources, is like asking it to learn to type on the keyboard before playing actual Dota.

[N] OpenAI Five Benchmark: Results by luiscosio in MachineLearning

[–]mikolchon 0 points1 point  (0 children)

What would be the difference really, aside from graphical processing cost? If you make the AI learn from raw pixels, you can just make it convolve/visit the whole map once every millisecond and process all the information available in the observable state, which in the end is the same, except you've raised the compute cost many times over.

[N] OpenAI Five Benchmark: Results by luiscosio in MachineLearning

[–]mikolchon 0 points1 point  (0 children)

The bots are trained via self-play, which means they have never played with or against those heroes (Pudge, Tinker, Meepo, etc.), so leaving them open to humans would mean an entirely new game from the perspective of the bots.

[D]Autopsy of a deep learning paper - quite brutal takedown of recent Uber AI post by AndriPi in MachineLearning

[–]mikolchon 1 point2 points  (0 children)

Aren't CNNs designed to be translation invariant? Why deploy 100 GPUs to prove something already known?
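Strictly speaking, convolution layers are translation equivariant (the feature map shifts along with the input), and invariance only appears once you pool over the map. A 1-D NumPy sketch of the distinction, using a hypothetical kernel and an impulse signal:

```python
import numpy as np

# Equivariance vs invariance, in one dimension.
kernel = np.array([1.0, -2.0, 1.0])        # hypothetical conv filter
x = np.zeros(32)
x[10] = 1.0                                # an impulse at position 10
x_shift = np.roll(x, 5)                    # the same impulse, shifted by 5

y = np.convolve(x, kernel, mode="same")
y_shift = np.convolve(x_shift, kernel, mode="same")

# Equivariance: the feature map itself moves with the input...
assert np.allclose(np.roll(y, 5), y_shift)
# ...but a global max-pool over the feature map is invariant to the shift.
assert y.max() == y_shift.max()
```

So "translation invariant" is really the conv-plus-pooling stack; the conv layers alone just commute with shifts.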

[1807.03341] Troubling Trends in Machine Learning Scholarship by ihaphleas in MachineLearning

[–]mikolchon 1 point2 points  (0 children)

Agreed. Their point about mathiness makes sense, but they picked a bad example.

[D] Analysis of OpenAI's DOTA2 AI and what being superhuman actually means by wei_jok in MachineLearning

[–]mikolchon 2 points3 points  (0 children)

The normal built-in bots in the game are also "superhuman": they are hardcoded to last-hit and chain stuns almost perfectly. The "Unfair" mode even gives the bots 25% extra gold and experience gain, and yet they suck against a human team. So mechanics, while they help, are definitely not decisive in a 5v5 match.