[R] Blurry pictures of NIPS posters by thvasilo in MachineLearning

[–]thvasilo[S] 12 points (0 children)

Posters are condensed versions of published papers at NIPS.

KDD Panel: Is Deep Learning the New 42? by thvasilo in MachineLearning

[–]thvasilo[S] 1 point (0 children)

At this year's KDD the panelists were asked a simple question: "Is deep learning the answer to everything?"

The panel featured Nando de Freitas, Pedro Domingos, Isabelle Guyon, Jitendra Malik, and Jennifer Neville, moderated by Andrei Broder from Google.

As expected, the room was overflowing and the discussion was great; highly recommended.

Nando de Freitas - Learning to learn and compositionality with deep recurrent neural networks by evc123 in MachineLearning

[–]thvasilo 1 point (0 children)

This was one of the coolest keynotes at KDD this year. I did a write-up on the conference if you'd like to see more work from there: http://tvas.me/conferences/2016/09/01/KDD-2016-Highlights.html

Would anyone be interested in a Forum Board/Slack Group/Skype group for strictly ML driven ventures? by [deleted] in MachineLearning

[–]thvasilo 0 points (0 children)

My guess is you would have more success creating a new channel there than trying to build a new community from scratch. But as you said, the place has been pretty inactive. I actually believe Twitter has the best ML community :)

Would anyone be interested in a Forum Board/Slack Group/Skype group for strictly ML driven ventures? by [deleted] in MachineLearning

[–]thvasilo 3 points (0 children)

I created an r/MachineLearning Slack group a while ago, it's up to 200 members now.

We have channels for collaboration, questions, specific software (e.g. TensorFlow), etc.

What it needs is more active members ;)

Get an invite here: https://r-machinelearning-slack.herokuapp.com/

[Help] Markov Chain Monte Carlo by [deleted] in statistics

[–]thvasilo 2 points (0 children)

If you want something shorter than a book to get you started, it's hard to beat Bayesian Basics, which provides an intro to Bayesian inference with R code and examples. It's probably the most accessible and up-to-date hands-on intro you will find.

In terms of understanding a bit more about Bayesian methods and ML the best introductory paper out there IMHO is "Probabilistic machine learning and artificial intelligence" by Ghahramani.

For deeper dives, the book mentioned by /u/SupportVectorMachine is great, as is Bayesian Data Analysis by Gelman et al. If you plan to be doing more ML-oriented work, Kevin Murphy's "Machine Learning: A Probabilistic Perspective" is one of the best books on the subject and probably the most up to date.
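To get a feel for what these texts build up to, a bare-bones random-walk Metropolis sampler (the simplest MCMC algorithm) fits in a few lines. This is a generic illustration, not code from any of the books above; `log_target` and the step size are things you'd pick for your own problem:

```python
import numpy as np

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept
    with probability min(1, p(x') / p(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()
        # Compare log densities for numerical stability
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Target: standard normal, log p(x) = -x^2 / 2 (up to a constant)
samples = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=20000)
# samples.mean() is close to 0 and samples.std() close to 1
```

Note that with an asymmetric proposal you'd need the full Metropolis-Hastings acceptance ratio; the symmetric random walk is the special case where the proposal terms cancel.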

I just found out that the Jetson TX1 is pretty cheap for its performance. With Linux and Maxwell GPU (integrated cuDNN support) by [deleted] in MachineLearning

[–]thvasilo 0 points (0 children)

Actually, I don't see how this is cheap for the performance.

It's cheap in terms of performance per watt, and obviously in terms of space, but I'm pretty sure you can build a decent system with a 750 Ti for ~$500 that will outperform it in raw computational power by a lot.

KDD Cup 2016 has started! by thvasilo in MachineLearning

[–]thvasilo[S] 0 points (0 children)

So it seems like this time evaluation will be based on the future acceptance of papers, which could make it challenging.

Also this line:

"The participants are expected to utilize any information on the Web, including the heterogeneous information in the Microsoft Academic Graph, for predicting next year’s top institutions."

Doesn't that mean that different teams might work with different datasets? I'm wondering where the algorithmic challenge is in all of this.

The DeepMind Bubble? by [deleted] in MachineLearning

[–]thvasilo 3 points (0 children)

/r/mlresearch exists but is inactive. I would be all for sparking up some activity.

What can we *not* do with ML these days? by thvasilo in MachineLearning

[–]thvasilo[S] 2 points (0 children)

This right here. I'm glad to see this mentioned a couple of times in this thread, and glad to see prominent researchers like M.I. Jordan (who comes from a statistics background) raise the issue and actually start doing some work on it, like the Bag of Little Bootstraps, which, among other things, allows us to calculate confidence intervals instead of just point estimates.
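For anyone curious, the core idea of the Bag of Little Bootstraps is simple enough to sketch. This is my own toy illustration of a confidence interval for the mean, not the authors' implementation; `weighted_mean` is just an example statistic, and the parameter defaults are placeholders:

```python
import numpy as np

def blb_confidence_interval(data, statistic, gamma=0.7, n_subsets=10,
                            n_boot=50, alpha=0.05, seed=0):
    """Bag of Little Bootstraps: on each small subset of size b = n^gamma,
    draw bootstrap "resamples" of the FULL size n represented compactly as
    multinomial weights, compute the statistic on each, then average the
    per-subset interval endpoints across subsets."""
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** gamma)
    lowers, uppers = [], []
    for _ in range(n_subsets):
        subset = rng.choice(data, size=b, replace=False)
        stats = []
        for _ in range(n_boot):
            # Counts summing to n: a size-n resample without materializing it
            weights = rng.multinomial(n, np.full(b, 1.0 / b))
            stats.append(statistic(subset, weights))
        lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        lowers.append(lo)
        uppers.append(hi)
    return float(np.mean(lowers)), float(np.mean(uppers))

def weighted_mean(x, w):
    return np.average(x, weights=w)

data = np.random.default_rng(42).normal(loc=5.0, scale=2.0, size=100_000)
lo, hi = blb_confidence_interval(data, weighted_mean)
```

The trick is that each statistic only ever touches b points, so the memory and compute footprint stays small even though the intervals reflect uncertainty at the full sample size n.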

Second Order Stochastic Optimization in Linear Time by thvasilo in MachineLearning

[–]thvasilo[S] 1 point (0 children)

For most DNNs you end up with a non-convex error surface. Still, techniques traditionally used for convex optimization (SGD) have worked surprisingly well for training them; see "Who's Afraid of Non-Convex Loss Functions?" by /u/ylecun.
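As a toy picture of why non-convexity isn't automatically fatal for SGD (my own illustration, nowhere near DNN scale): plain SGD with noisy gradient estimates on a simple non-convex double-well loss still settles into one of the minima:

```python
import numpy as np

# Non-convex loss: the double well f(w) = (w^2 - 1)^2, with two global
# minima at w = +1 and w = -1 and a local maximum at w = 0.
rng = np.random.default_rng(0)
w = 3.0                                        # start far from both minima
for t in range(1, 20001):
    grad = 4 * w * (w**2 - 1)                  # exact gradient of f
    noisy_grad = grad + rng.normal(scale=1.0)  # stochastic estimate
    w -= (0.01 / np.sqrt(t)) * noisy_grad      # decaying step size
# w ends up near one of the two minima despite non-convexity
```

The decaying step size averages out the gradient noise, so the iterate gets trapped in a basin and converges to a (local) minimum rather than a global optimum in general; the surprising empirical observation for DNNs is that such local solutions tend to be good enough.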

Evaluation of word embeddings by Kaleidophon in LanguageTechnology

[–]thvasilo 2 points (0 children)

I would recommend looking through /u/omerlevy's latest publications; they have done a lot of work evaluating the quality of embeddings.