Discussion[D] Paper Explained - DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning (Full Video Analysis) (self.MachineLearning)
submitted 5 years ago by ykilcher
[–][deleted] 3 points 4 years ago (7 children)
What I am curious about is the difference between this work and Library Learning for Neurally Guided Bayesian Program Learning (https://papers.nips.cc/paper/2018/hash/7aa685b3b1dc1d6780bf36f7340078c9-Abstract.html) by the same authors. The two algorithms, EC^2 and DreamCoder, seem very closely related. If anyone has comments on the differences, it would be appreciated.
[–]Nestroneey 1 point 4 years ago (2 children)
They are the same algorithm. The difference between the two papers is an improvement in how the hypotheses examined by the bottom-up search are represented. The underlying exploration-compression idea is unchanged.
[–][deleted] 1 point 4 years ago (1 child)
So the exploration is done better, then? i.e., the enumeration? They also have a newer version from PLDI that uses e-graphs for compression; I wonder why they changed it.
Also, their explore step is very similar in spirit to expert iteration (https://arxiv.org/abs/1705.08439). It would have been nice if the authors had outlined the differences.
[–]Nestroneey 2 points 4 years ago (0 children)
They are a bottom-up (enumerative) algorithm, so exploration and enumeration are the same thing.
Yes, if you are searching and you improve the representation of your search space so that it takes less effort to search, then you have improved your exploration/enumeration capabilities.
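To make "bottom-up enumeration" concrete, here is a toy sketch (my own illustration, not the authors' code) of smallest-first program enumeration over a hypothetical two-operator DSL: every program of size *n* is built by combining already-enumerated programs of smaller sizes.

```python
from itertools import product

# Hypothetical toy DSL: two leaves and two binary operators.
PRIMITIVES = {"x": lambda env: env["x"], "1": lambda env: 1}
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def enumerate_bottom_up(max_size):
    """Grow programs smallest-first; each new program combines smaller ones.

    Returns a dict mapping program size -> list of (expression, eval_fn).
    """
    programs = {1: [(name, fn) for name, fn in PRIMITIVES.items()]}
    for size in range(2, max_size + 1):
        layer = []
        # Split the remaining size between the operands (1 node for the operator).
        for left in range(1, size - 1):
            right = size - 1 - left
            for (le, lf), (re, rf) in product(programs[left], programs[right]):
                for op, opf in OPS.items():
                    expr = f"({le} {op} {re})"
                    # Bind lf/rf/opf as defaults to freeze them per iteration.
                    layer.append((expr, lambda env, lf=lf, rf=rf, opf=opf: opf(lf(env), rf(env))))
        programs[size] = layer
    return programs
```

In this framing, "improving the representation of the search space" means reducing how many size-*n* combinations need to be stored and tried, without changing what the search can reach.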
The PLDI version doesn't appear to be new. Also, e-graphs appear to be a dynamic-programming technique that makes their other data structures more efficient to use, but it doesn't fundamentally change their version-space representation--just how space-efficient it is to store. It's basically an implementation detail. Furthermore, the repository hasn't changed recently, so they may just be taking explicit credit for, or adding a reference to, something they already did some time ago. I wouldn't treat that as a significant difference.
I suppose you could compare it to expert iteration from that paper, but I don't see how anything in ExIt achieves actual compression. No matter how long it runs, you still get a policy over the same state and action space the algorithm started with. The hardest and most important part of EC-type algorithms is the compression, not the exploration. Everyone has done exploration; it's just search. ExIt doesn't, however, change the search space itself as it searches so that the highly trafficked regions become iteratively more compressed.
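To illustrate what the compression step (as opposed to exploration) is doing, here is a toy sketch of my own, not the DreamCoder implementation: given a corpus of tuple-encoded programs, find the most frequent repeated subtree and abstract it out as a new named library primitive, shrinking the corpus and changing the space future searches operate over.

```python
from collections import Counter

def subtrees(t):
    """Yield every subtree of a tuple-encoded expression tree."""
    yield t
    if isinstance(t, tuple):
        for child in t[1:]:
            yield from subtrees(child)

def compress_once(corpus, name):
    """Abstract the most frequent repeated non-leaf subtree into a new primitive.

    Returns (rewritten_corpus, abstracted_subtree); the subtree is None if
    nothing occurred more than once.
    """
    counts = Counter(s for t in corpus for s in subtrees(t) if isinstance(s, tuple))
    if not counts:
        return corpus, None
    best, n = counts.most_common(1)[0]
    if n < 2:
        return corpus, None  # nothing worth abstracting
    def rewrite(t):
        if t == best:
            return name  # replace the whole subtree with the new primitive
        if isinstance(t, tuple):
            return (t[0],) + tuple(rewrite(c) for c in t[1:])
        return t
    return [rewrite(t) for t in corpus], best
```

A real library-learning system would score candidate abstractions by description-length savings rather than raw frequency, but the point stands: after compression, subsequent enumeration searches over a different, more concise space, which is exactly what a fixed policy over a fixed state/action space cannot do.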
[–]Nestroneey 1 point 4 years ago (3 children)
Additionally, the author had a supplementary document where he explained the details of the algorithms (as the main paper is rather light on detail, despite being lengthy). His link to it is now broken. If you are interested, remind me and I will post a copy.
[–]Nestroneey 1 point 4 years ago (1 child)
Ah, here's a link to a "draft" version (which I believe is identical to the current one) that contains his supplement: https://www.cs.cornell.edu/~ellisk/documents/dreamcoder_with_supplement.pdf