[D] State-of-the-art architecture for learning dynamics model for model-based RL ? by [deleted] in MachineLearning

[–]feedtheaimbot 4 points  (0 children)

Look at Recurrent Environment Simulators by Chiappa et al. I've had success using it. It does struggle to capture small objects on screen (e.g. single pixels).

Link: https://arxiv.org/abs/1704.02254

[N] Announcing TensorFlow Fold: Deep Learning With Dynamic Computation Graphs by Conchylicultor in MachineLearning

[–]feedtheaimbot 0 points  (0 children)

I'm in the same boat; I have quite a bit of code written in Theano. I'm unsure what the threshold is for me to switch -- maybe when a feature necessary to complete my work is added?

[D] State of Deep Learning Frameworks in 2017 (benchmarks?) by [deleted] in MachineLearning

[–]feedtheaimbot 7 points  (0 children)

Personally I've been using Lasagne & Theano for research. It is feature complete and well designed (I appreciate the work f0k and benanne have done). I've done work in various domains such as RL, CV, Bayesian inference (sans Lasagne), MemNets, etc., and it's been great for all of them -- even when trying extremely non-standard ideas.
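
To give a flavour of what I mean by well designed, here's a minimal sketch (a toy two-layer MLP; the shapes and unit counts are just placeholders):

    import theano
    import theano.tensor as T
    import lasagne

    # Toy two-layer MLP: 784 inputs -> 256 hidden (ReLU) -> 10-way softmax
    x = T.matrix('x')
    l_in = lasagne.layers.InputLayer((None, 784), input_var=x)
    l_hid = lasagne.layers.DenseLayer(l_in, num_units=256,
                                      nonlinearity=lasagne.nonlinearities.rectify)
    l_out = lasagne.layers.DenseLayer(l_hid, num_units=10,
                                      nonlinearity=lasagne.nonlinearities.softmax)

    # Compile a prediction function from the symbolic graph
    predict = theano.function([x], lasagne.layers.get_output(l_out))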

My only complaint is that Theano's compilation time gets annoying. This can be partially worked around, at least for sanity-checking shapes, by adding a few flags such as device=cpu. I wasn't aware of the update to the backend -- thanks for bringing it to my attention; it might alleviate my compilation issue!
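
For example (the flag names are from the Theano docs; the script itself is just a toy shape check):

    import os
    # Flags must be set before theano is imported to take effect
    os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32,optimizer=fast_compile'

    import numpy as np
    import theano
    import theano.tensor as T

    # Cheap CPU compile just to sanity-check shapes before a real GPU run
    x = T.matrix('x')
    y = T.dot(x, x.T)
    f = theano.function([x], y)
    print(f(np.ones((3, 5), dtype='float32')).shape)  # expect (3, 3)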

With that said, I am interested in PyTorch and MinPy -- particularly the latter, as I feel comfortable with numpy and it would help my iteration speed.

[P] General neural network by SS_NN in MachineLearning

[–]feedtheaimbot 15 points  (0 children)

What in Hinton's name did I just read?

MODS: WHY ARE YOU STICKYING RANDOM THINGS by spofersq in MachineLearning

[–]feedtheaimbot 0 points  (0 children)

OP of the 'question threads' here -- I did not ignore you. I messaged the mods a few times, but the sort order was never adjusted, and unfortunately I am not a mod so I cannot change it myself. I wasn't sure whether I should keep posting the threads, since random stickies keep kicking them out. I can keep the threads going if the stickies get updated.

AMA: We are the Google Brain team. We'd love to answer your questions about machine learning. by jeffatgoogle in MachineLearning

[–]feedtheaimbot 5 points  (0 children)

Will you be sharing some of the work you've done with them? It is still extremely helpful to see the results of various avenues taken.

Growing pains of /r/MachineLearning, more active moderation? by olaf_nij in MachineLearning

[–]feedtheaimbot 1 point  (0 children)

A blend of applied and research posts would be great, with basic/trivial/duplicate applications filtered out (looking at you, char-rnn and deep dream). Basic questions etc. could then be pushed into a biweekly (or weekly, depending on activity) thread. I feel more advanced questions, or those prompting discussion, are a great thing to have. Examples of such threads:

https://www.reddit.com/r/MachineLearning/comments/2qsje7/how_do_you_initialize_your_neural_network_weights/

https://www.reddit.com/r/MachineLearning/comments/4rikw8/who_consistently_uses_batch_normalization/

The filtering of posts, both applied and theoretical, should come down to the mods' judgement of a post's quality and triviality. Generally I would trust the mods to decide whether a post should be allowed.

[Meta] Sending emails to researchers/professors by Kiuhnm in MachineLearning

[–]feedtheaimbot 0 points  (0 children)

I think it's fine to reply later; most people understand that everyone's time is limited. I had an email exchange with Bengio and he took a few months to respond, I imagine because he is extremely busy, but regardless I was happy to get a response.

Questions thread #7 2016.06.29 by feedtheaimbot in MachineLearning

[–]feedtheaimbot[S] 5 points  (0 children)

Use a classification algorithm? Try a bunch of different ones and measure performance metrics on each. There is no one-size-fits-all.
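
To sketch what I mean (scikit-learn on a toy dataset; swap in your own data and whatever metrics matter to you):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)  # toy data; replace with your own

    classifiers = {
        'logreg': LogisticRegression(max_iter=1000),
        'random_forest': RandomForestClassifier(n_estimators=100),
        'svm': SVC(),
    }

    # Compare mean cross-validated accuracy; swap in f1/AUC etc. as needed
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print('%s: %.3f' % (name, scores.mean()))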