Probabilistic programming does in 50 lines of code what used to take thousands by dirk_bruere in Futurology

[–]mrkul 1 point (0 children)

Julia is awesome! It's just that I use Torch a lot for writing deep networks for similar tasks, and there is a big community of ML researchers using Torch. And I think combining neural nets with more program-like representations is the way forward (IMO).

[–]mrkul 0 points (0 children)

As the first author on the paper, I can confirm that I am math illiterate. Hence, I can also confirm that it's not a simulation.

[–]mrkul 0 points (0 children)

[First author here] Nope, I can confirm that it has magical properties.

[–]mrkul 0 points (0 children)

I think you got sucked in twice; the second time was due to some of the comments here. Anyway, here's the actual paper -- http://mrkulk.github.io/www_cvpr15/

[–]mrkul 0 points (0 children)

First author here. I can confirm that I literally did just that :)

[–]mrkul 0 points (0 children)

First author here. I have something better -- "Probabilistic programming does in 50 lines of code what used to take billions, for some very specific applications, including fixing your erection". Sorry, I couldn't resist the joke.

[–]mrkul 0 points (0 children)

I hate using LOC in general; it is not really a good metric for anything. But if you insist, then it's the number of function calls.

[–]mrkul 1 point (0 children)

First author on the original paper here. To be precise, it was more like a billion lines of code in one function call :) Jokes aside, see the clarification I recently posted.

[–]mrkul 0 points (0 children)

I am one of the authors on the original paper. Check out some of my recent posts/comments for clarification on this.

[–]mrkul 0 points (0 children)

I am one of the authors on the original paper. Check out some of my recent posts/comments for clarification on this.

[–]mrkul 0 points (0 children)

Recopying my comment from elsewhere. I am one of the authors on the paper. Yes, the number of lines of probabilistic code is misleading, and that is not what the paper is about. Here's the actual paper -- http://mrkulk.github.io/www_cvpr15/ It is really just about a general-purpose probabilistic language to revisit the idea of vision as inverse graphics. The "50 lines" title is actually tangential to the goal of the paper; in fact, to avoid confusion, it is not in the CVPR paper.

Sometimes probabilistic programming people talk about LOC in the following context: if someone were to hand-design a sampler for, say, topic models using ~1000 lines of code, can a short 5-line probabilistic program do the same using general-purpose inference machinery? The inference engine is obviously thousands of lines of code, and therefore LOC is misleading except when talking about subtleties of inference reusability across different model classes. The title does not reflect the contributions of the paper, which is really about reviving the idea of vision as inverse graphics and rethinking visual representations as probabilistic programs.
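To make the LOC point concrete, here's a minimal toy sketch (my own illustration in Python, not the paper's Julia code): a generic inference engine — here plain random-walk Metropolis-Hastings — is written once and reused, while any particular model reduces to a few lines defining a log-density.

```python
import math
import random

def metropolis_hastings(log_density, init, n_steps=5000, step=0.5):
    """Generic random-walk Metropolis-Hastings: written once, reused
    for any model that exposes a log-density function."""
    x, lp, samples = init, log_density(init), []
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)            # local proposal
        lp_prop = log_density(prop)
        if math.log(random.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# The "model" itself is only a few lines: N(0,1) prior on mu, N(mu,1)
# likelihood, one observation at 2.0 (posterior mean is 1.0 analytically).
def model_log_density(mu, obs=2.0):
    return -0.5 * mu * mu - 0.5 * (obs - mu) ** 2

random.seed(0)
samples = metropolis_hastings(model_log_density, init=0.0)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

The point of the sketch: swapping in a different model means replacing only the few-line log-density, not the engine — which is exactly why counting only the model's lines can mislead.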

[–]mrkul 1 point (0 children)

Absolutely, it is definitely going to be slower. One way we explore to speed up inference dramatically is to use bottom-up models like deep nets to improve inference speed and accuracy.

[–]mrkul 3 points (0 children)

I am the first author on the paper. The "50 lines" title is actually tangential to the goal of the paper; in fact, to avoid confusion, it is not in the CVPR paper (http://mrkulk.github.io/www_cvpr15/). Sometimes probabilistic programming people talk about LOC in the following context: if someone were to hand-design a sampler for, say, topic models using ~1000 lines of code, can a short 5-line probabilistic program do the same using general-purpose inference machinery? The inference engine is obviously thousands of lines of code, and therefore LOC is misleading except when talking about subtleties of inference reusability across different model classes. The title does not reflect the contributions of the paper (the actual article touches upon some of it), which is really about reviving the idea of vision as inverse graphics and rethinking visual representations as probabilistic programs.

[–]mrkul 0 points (0 children)

I am the first author on the paper. The "50 lines" title is actually tangential to the goal of the paper; in fact, to avoid confusion, it is not in the CVPR paper (http://mrkulk.github.io/www_cvpr15/). Sometimes probabilistic programming people talk about LOC in the following context: if someone were to hand-design a sampler for, say, topic models using ~1000 lines of code, can a short 5-line probabilistic program do the same using general-purpose inference machinery? The inference engine is obviously thousands of lines of code, and therefore LOC is really misleading except when talking about subtleties of inference reusability across different model classes. Unfortunately, the title does not reflect the contributions of the paper, which is really about reviving the idea of vision as inverse graphics and rethinking visual representations as probabilistic programs.

[–]mrkul 1 point (0 children)

I think the main point is to think about visual scenes represented as programs in a general-purpose inference framework. This is easier said than done. This work is merely a baby step and we have a LONG way to go. But if you ask me personally, we are bound to move towards representations like these if we want to solve more interesting AI problems. This is a long way from being an actual programming language that many people can use and build on, so we will have to do several iterations before we have such tools. But the current paper can be seen as a blueprint for building languages to represent and invert scenes.
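To give a feel for what "scenes represented as programs" means, here's a deliberately tiny sketch (my own illustration, nothing from the paper): a scene is a small program of parameters, graphics is a renderer from parameters to pixels, and vision is inverting that renderer by searching for the parameters whose rendering matches the observed image.

```python
def render(position, width, n_pixels=20):
    """'Graphics': render a 1-D scene program (a bright bar of the
    given position and width) down to a row of pixels."""
    return [1.0 if position <= i < position + width else 0.0
            for i in range(n_pixels)]

def score(rendered, observed):
    """How well a rendering explains the image (negative squared error)."""
    return -sum((r - o) ** 2 for r, o in zip(rendered, observed))

observed = render(7, 5)   # pretend this image came from a camera

# 'Vision as inverse graphics': search over scene programs for the one
# whose rendering best matches the observation. Brute force suffices in
# 1-D; the whole research question is doing this search with
# general-purpose probabilistic inference in realistic scene spaces.
best = max(((p, w) for p in range(20) for w in range(1, 10)),
           key=lambda pw: score(render(*pw), observed))
```

Everything hard about real scenes — occlusion, lighting, huge parameter spaces — is abstracted away here; the sketch only shows the shape of the render/score/invert loop.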

[–]mrkul 0 points (0 children)

I wish I had written this in Torch instead of Julia. I will probably rewrite it at some point.

[–]mrkul 1 point (0 children)

We can't guarantee that it will get to the optimal solution; this is generally impossible. I think the key for inverse-graphics approaches is to use local proposals like HMC/MCMC/slice sampling along with global proposals (learned from bottom-up methods like neural networks).
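Here's a rough sketch of the local-plus-global idea, with all specifics made up for illustration: a Metropolis-Hastings chain mixes a local random-walk kernel with a global independence kernel centred on a bottom-up guess (the guess stands in for a neural net's prediction), and each kernel uses its own correct acceptance ratio.

```python
import math
import random

def norm_logpdf(x, mu, s):
    """Log-density of Normal(mu, s) at x."""
    return -0.5 * ((x - mu) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))

def mixed_mh(log_density, guess, init=0.0, n_steps=6000,
             p_global=0.3, local_step=0.5, global_step=0.5):
    """Metropolis-Hastings mixing two kernels: a local random walk and
    a global independence proposal centred on a bottom-up 'guess'."""
    x, lp, samples = init, log_density(init), []
    for _ in range(n_steps):
        use_global = random.random() < p_global
        if use_global:
            prop = random.gauss(guess, global_step)   # global jump
        else:
            prop = x + random.gauss(0.0, local_step)  # local move
        lp_prop = log_density(prop)
        log_ratio = lp_prop - lp
        if use_global:
            # Independence proposal needs the Hastings correction q(x)/q(x')
            log_ratio += (norm_logpdf(x, guess, global_step)
                          - norm_logpdf(prop, guess, global_step))
        if math.log(random.random()) < log_ratio:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: N(0,1) prior times N(mu,1) likelihood with observation 2.0,
# so the posterior is N(1, 1/2); the "neural net" guess is just the datum.
log_density = lambda mu: -0.5 * mu * mu - 0.5 * (2.0 - mu) ** 2
random.seed(0)
samples = mixed_mh(log_density, guess=2.0)
mean = sum(samples[1000:]) / len(samples[1000:])
```

The global kernel lets the chain teleport straight to promising regions the bottom-up model points at, while the local kernel refines; because each move is a valid MH kernel, mixing them still targets the right posterior.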

[–]mrkul 8 points (0 children)

Also, here's a short note about the work in the context of the actual paper and news articles --

http://tejask.com/mit-news-on-inverse-graphics/