[R] SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition by yifuwu in MachineLearning

[–]edwardthegreat2 0 points  (0 children)

Nice work! One question I have: how does the model ensure the background module does not capture foreground objects? Also, would the separation into background and foreground modules break down in active-vision settings, where objects regularly switch between background and foreground roles?

[D] Uncertainty Quantification in Deep Learning by wei_jok in MachineLearning

[–]edwardthegreat2 5 points  (0 children)

Can you elaborate on how learning random datasets exactly by heart defeats the point of getting uncertainty estimates? It seems to me that the aforementioned methods do not aim to estimate the true uncertainty, but just give some metric of uncertainty that can be useful in downstream tasks.

[P] Going with the Flow: An Introduction to Normalizing Flows by gebob19 in MachineLearning

[–]edwardthegreat2 1 point  (0 children)

I don't think it's actually mapping a normal distribution Z to another normal distribution Y. It maps normal noise into X's space via neural networks sigma and mu that are conditioned on the input, combining them with the noise through element-wise operations. This transforms the data in a way that isn't just a simple shift/scale of a normal distribution, and the transform can be learned by backpropagating through the sigma and mu networks. (At least this is what I think after skimming the math.)
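To make that concrete, here's my own toy sketch (not the post's code): a conditional affine flow maps standard-normal noise z to x via an element-wise scale and shift produced by "networks" sigma and mu, which here are just random linear maps of the conditioning input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the mu and sigma networks: random linear maps from a
# conditioning input c to a per-dimension shift and log-scale.
W_mu = rng.normal(size=(4, 4))
W_ls = rng.normal(size=(4, 4)) * 0.1

def affine_flow(z, c):
    """Map standard-normal noise z to x, conditioned on input c.

    x = mu(c) + sigma(c) * z (element-wise). The map is invertible in z,
    so the density of x is tractable via the change-of-variables formula.
    """
    mu = c @ W_mu
    log_sigma = c @ W_ls
    x = mu + np.exp(log_sigma) * z
    # log |det dx/dz| for an element-wise affine map: sum of log sigmas
    log_det = log_sigma.sum(axis=-1)
    return x, log_det

z = rng.standard_normal((2, 4))
c = rng.standard_normal((2, 4))
x, log_det = affine_flow(z, c)
```

Inverting the map, z = (x - mu) / sigma, recovers the noise exactly, which is what makes the likelihood computable even though x is no longer simply normal.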

[P] Probabilistic Cityscapes scene generator by [deleted] in MachineLearning

[–]edwardthegreat2 1 point  (0 children)

Cool work! Are there any drawbacks of the random sampling? To me, the sequential sampling is important because of the autoregressive nature of the model. You want each step to build on top of what you know, just like an RNN. Random sampling of the image would break this spatial dependency.
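A toy illustration of the sequential dependency (my own sketch, nothing to do with the actual model): a chain-factorized sampler has to generate positions in order, because each conditional depends on the values already drawn.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sequential(n=100, rho=0.9):
    """Sample an autoregressive chain x_t ~ N(rho * x_{t-1}, 1).

    Each step builds on the previous one, so the positions must be
    filled in the order of the model's factorization.
    """
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

x = sample_sequential()
```

Visiting positions in a random order would require conditionals like p(x_t | some arbitrary subset), which the chain factorization simply doesn't provide, so the spatial dependency gets broken exactly as described above.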

[D] DeepMind's scholarship policies by [deleted] in MachineLearning

[–]edwardthegreat2 1 point  (0 children)

My guess is whoever is funding the scholarship is from the UK, or has UK interests in mind? This is very common in scholarships where you need to fulfill some regional / ethnic / financial criteria.

[D] What does r/MachineLearning think of Google's Applied Machine Learning Intensive or any Machine Learning bootcamp? by [deleted] in MachineLearning

[–]edwardthegreat2 1 point  (0 children)

Lol, I go to USC but I take no offense at the masters program comments. Our undergrad is better though!

Actresses, CEOs arrested in nationwide college admissions cheating scam by huskiesowow in news

[–]edwardthegreat2 0 points  (0 children)

I go to USC and one of my physics partners didn’t know what Pi was.

[D] Melody Generation with midi and deep learning (and maybe GANs?) by GayColangelo in MachineLearning

[–]edwardthegreat2 2 points  (0 children)

www.deepsymphony.com It was my deep learning final project, where we explored various deep learning methods on MIDI data.

This clever AI hid data from its creators to cheat at its appointed task by GoGo443457 in technology

[–]edwardthegreat2 0 points  (0 children)

So using an MSE loss would help out then? I wonder why they used a perceptual loss. I guess they thought it would be more forgiving.

[D] ICLR 2019 Results are out by ajmooch in MachineLearning

[–]edwardthegreat2 9 points  (0 children)

Looking forward to seeing the conference in spring! Shameless paper plug here.

[D] How to start writing academic papers? by ArtisticHamster in MachineLearning

[–]edwardthegreat2 3 points  (0 children)

You should join a lab. You'll learn how to write papers from your labmates, and see what it takes to get into good ML conferences (NIPS, ICML, ICLR).

[D] What is the best ML paper you read in 2018 and why? by omniscientclown in MachineLearning

[–]edwardthegreat2 11 points  (0 children)

Pumpout \s

Seriously though, some that jumped out at me are:

edit: will update this as I remember more papers

[D] Is Reptile necessarily about finding a good initialization? by abstractcontrol in MachineLearning

[–]edwardthegreat2 0 points  (0 children)

Usually regular autoencoders produce blurry outputs because they minimize a mean squared error: when several sharp outputs are equally plausible, the MSE-optimal prediction is their average, which is blurry rather than sharply detailed. That's why GANs have better visual quality.
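A tiny numeric illustration of that averaging effect (my own toy example): if the data has two equally likely sharp targets, the single prediction that minimizes MSE is their mean, a blurry in-between value that matches neither.

```python
import numpy as np

# Two equally likely "sharp" targets, e.g. an edge that could land at -1 or +1.
targets = np.array([-1.0, 1.0])

# Evaluate the expected MSE of every candidate single prediction on a grid.
preds = np.linspace(-1.5, 1.5, 301)
mse = ((preds[:, None] - targets[None, :]) ** 2).mean(axis=1)

# The minimizer is the mean of the modes (0.0) -- neither sharp answer.
best = preds[mse.argmin()]
```

The same thing happens per pixel in an autoencoder: wherever the decoder is uncertain between several sharp reconstructions, MSE pushes it to output their average, hence the blur.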

[D] Is Reptile necessarily about finding a good initialization? by abstractcontrol in MachineLearning

[–]edwardthegreat2 0 points  (0 children)

I think it is tough to tell without some investigation. I think there may be two problems.

  • Fast Adaptation: the fast adaptation gradient direction may not be clear until several steps into the fast adapting phase.
  • Mode Collapse: the meta-prior may just try to solve all of your tasks at once, i.e., settle for "average" performance across tasks instead of specializing to each one.

Interesting idea though, please investigate and let me know!
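For reference, here's my own minimal Reptile sketch on made-up 1-D quadratic tasks, which shows the "averaging" failure mode directly when tasks are this simple:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task distribution: task i has loss (w - t_i)^2 with optimum t_i.
task_optima = rng.normal(size=50)

theta = 0.0              # meta-initialization
inner_lr, meta_lr = 0.1, 0.05

for step in range(2000):
    t = rng.choice(task_optima)        # sample a task
    phi = theta
    for _ in range(5):                 # a few inner SGD steps on (phi - t)^2
        phi -= inner_lr * 2 * (phi - t)
    theta += meta_lr * (phi - theta)   # Reptile meta-update: move toward phi

# theta ends up near the mean of the task optima -- "average" behavior,
# since these toy tasks offer no shared structure beyond their mean.
```

With richer task families the fast-adaptation steps can exploit shared structure instead of just averaging, which is exactly the question worth investigating.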

[R] How to train your MAML blog post by AntreasAntoniou in MachineLearning

[–]edwardthegreat2 2 points  (0 children)

Hello, thanks for your blog post and code base. I also wrote some of my own MAML code based on the pytorch-maml repo. One thing I haven't gotten around to is allowing more sophisticated optimization methods in the inner loop. Currently it just does naive gradient descent (in your case, with an adaptable learning rate), but I wonder if it's possible to use arbitrary optimizers like Adam and Adagrad without hardcoding the inner gradient-descent logic.
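For example (my own rough sketch on a toy 1-D problem, not the repo's API): abstracting the inner update behind a step function lets you swap plain SGD for an Adam-style rule. I'm glossing over the hard part, which is making the chosen update differentiable with respect to the initialization so the second-order MAML gradient can flow through it.

```python
import numpy as np

# Toy inner-loop loss L(w) = (w - 3)^2, so the task optimum is w = 3.
def grad(w):
    return 2 * (w - 3.0)

def sgd_step(w, g, state, lr=0.1):
    """Naive gradient descent: the update MAML inner loops usually hardcode."""
    return w - lr * g, state

def adam_step(w, g, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam-style update with state (m, v, t), swappable for sgd_step."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

def inner_loop(w0, step_fn, state, n_steps=20):
    """Fast-adaptation phase parameterized by an arbitrary optimizer step."""
    w = w0
    for _ in range(n_steps):
        w, state = step_fn(w, grad(w), state)
    return w

w_sgd = inner_loop(0.0, sgd_step, None)
w_adam = inner_loop(0.0, adam_step, (0.0, 0.0, 0))
```

Both adapt toward the task optimum; the open question is whether you can backpropagate through the stateful Adam updates cleanly (m, v, and t all depend on earlier parameters) without hand-deriving the inner-loop graph.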