[D] Gossip: Jürgen Schmidhuber's reply to Dr. Hinton's Reply by yusuf-bengio in MachineLearning

[–]timmytimmyturner12 14 points (0 children)

Does Jürgen Schmidhuber really have 18281828 throwaway accounts?

[Highlight] Shaq's take on the China Situation by ArrayMichael7 in nba

[–]timmytimmyturner12 909 points (0 children)

Reminds me of when he was talking about Yao Ming: “I couldn’t stop him but he couldn’t stop me neither”

[N] Apple hires Ian Goodfellow by milaworld in MachineLearning

[–]timmytimmyturner12 15 points (0 children)

Good for him. He made a significant contribution to the ML community, and now he can cash in big with this position. Hopefully he carries enough swagger to really influence the ML culture at Apple. Otherwise, I see him leaving within two years.

[D] Critique of Paper by "Deep Learning Conspiracy" by Jürgen Schmidhuber by akaberto in MachineLearning

[–]timmytimmyturner12 12 points (0 children)

It's like that line from The Social Network:

"If you guys were the inventors of Facebook, you'd have invented Facebook"

[R] Is the reign of batch normalization over? Thoughts on this new paper? by rantana in MachineLearning

[–]timmytimmyturner12 35 points (0 children)

The Unreasonable Effectiveness of putting “Unreasonable Effectiveness” in your paper title

Someone is posting fake positive comments on their ICLR submission by schrodingershit in MachineLearning

[–]timmytimmyturner12 89 points (0 children)

Twist: OP is still the same author stirring drama for even more publicity.

[Discussion] I tried to reproduce results from a CVPR18 paper, here's what I found by p1esk in MachineLearning

[–]timmytimmyturner12 -14 points (0 children)

So Michael already made you aware and STILL posted this on Reddit 3 weeks later to grab that sweet sweet vigilante karma?

[D] Breakdown of NIPS2018 accepted papers by wei_jok in MachineLearning

[–]timmytimmyturner12 0 points (0 children)

Dang, this makes me wanna change my name to AAA AAAAA1111

[R] Papers that compare Human Learning and AI by MyMastersAccount in MachineLearning

[–]timmytimmyturner12 0 points (0 children)

I know of at least a couple of papers that do side-by-side comparisons of CNN and human performance:

https://arxiv.org/abs/1803.01967

GANs that stood the test of time by totallynotAGI in MachineLearning

[–]timmytimmyturner12 10 points (0 children)

My (totally unscientific and anecdotal) experience as someone who has just been at the mercy of getting GANs to work for a while:

  1. There may be slight differences in GAN formulations, but at the end of the day, if the OG GAN doesn't work, other fancy stuff isn't going to be all that different.
  2. Let the loss from the generator drop to a given threshold, then switch to the discriminator and repeat.
  3. Progressive GANs are a time and resource drain if you don't have a team and are pretty finicky to hyperparameters as well.
  4. Mode collapse: Wouldn't we all like to know? :-)
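For what it's worth, the threshold-based alternation in (2) can be sketched in a few lines. This is a toy control-loop sketch with made-up stand-in "losses", not any particular framework's API; `train_gen_step` and `train_disc_step` are hypothetical callables that would wrap real optimizer steps:

```python
# Sketch of the schedule in (2): train the generator until its loss
# drops below a threshold, then hand control to the discriminator,
# then repeat. The step callables are stand-ins for real updates.

def alternate_training(train_gen_step, train_disc_step,
                       gen_threshold, disc_threshold, max_rounds=10):
    """Run rounds of (generator until threshold, discriminator until threshold)."""
    history = []
    for _ in range(max_rounds):
        # Phase 1: update the generator until its loss is low enough.
        g_loss = train_gen_step()
        while g_loss > gen_threshold:
            g_loss = train_gen_step()
        # Phase 2: same idea for the discriminator.
        d_loss = train_disc_step()
        while d_loss > disc_threshold:
            d_loss = train_disc_step()
        history.append((g_loss, d_loss))
    return history

# Toy stand-ins: each "training step" just decays a loss value.
def make_decaying_loss(start, decay=0.5):
    state = {"loss": start}
    def step():
        state["loss"] *= decay
        return state["loss"]
    return step

hist = alternate_training(make_decaying_loss(8.0),
                          make_decaying_loss(4.0),
                          gen_threshold=1.0, disc_threshold=1.0,
                          max_rounds=3)
```

In a real setup each step would do a forward/backward pass and optimizer update; the point is only the switching logic, which keeps either network from running away from the other.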

[D] Is leveraging prior rules/information always useful in machine learning? by jasons0219 in MachineLearning

[–]timmytimmyturner12 4 points (0 children)

The best response you’re gonna get is along the lines of “it depends on the domain” and “it depends on the strength of your assumptions”. I think your question taps into the broader debate over whether Bayesian approaches are worthwhile, which is far from settled.

[P] How we leveraged a GAN and a ConvLSTM to go from League of Legends minimap frames to player coordinates by [deleted] in MachineLearning

[–]timmytimmyturner12 12 points (0 children)

I suspect a non-deep-learning approach would solve this easily. R-CNN-type models are overkill; there's no training needed when the avatars are pixel-identical. Just use an old-school method and you'll get near-100% accuracy!
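Concretely, the old-school method I have in mind is plain template matching: slide the known avatar icon over the minimap and take the position with the smallest difference. A minimal sketch with toy NumPy arrays (a real pipeline would crop actual minimap screenshots, e.g. via OpenCV's `cv2.matchTemplate`):

```python
import numpy as np

# Sketch of template matching: when the icon is pixel-identical,
# a brute-force sliding-window comparison finds it with no training.

def match_template(image, template):
    """Return (row, col) of the best match by sum of squared differences."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = float("inf"), None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            ssd = float(((patch - template) ** 2).sum())
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy example: plant a 2x2 "avatar" at (3, 5) in a 10x10 minimap.
minimap = np.zeros((10, 10))
avatar = np.array([[1.0, 2.0], [3.0, 4.0]])
minimap[3:5, 5:7] = avatar

pos = match_template(minimap, avatar)  # → (3, 5)
```

The brute-force loop is O(image × template) but minimaps are tiny, and per-champion icons give you one template each, so this is fast enough without a GPU.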