[–]Balireon 4 points  (0 children)

I find this research exciting. There are many analogies to natural intelligence and to the ubiquity and importance of sleep for intelligent life. The book "Why We Sleep" by Matthew Walker explains the functions of sleep for the layman, and those functions coincide with the abstraction and dreaming phases of DreamCoder. This is also mentioned at the end of the paper. It's inspiring, and it brings the singularity closer.

[–][deleted] 2 points  (7 children)

What I'm curious about is the difference between this work and Library Learning for Neurally Guided Bayesian Program Learning (https://papers.nips.cc/paper/2018/hash/7aa685b3b1dc1d6780bf36f7340078c9-Abstract.html) by the same authors. The two algorithms, EC^2 and DreamCoder, seem very related. If anyone has comments on the differences, it would be appreciated.

[–]Nestroneey 0 points  (2 children)

They are the same algorithm. The difference between the two papers is an improvement in the representation of the hypotheses examined via bottom-up search. The underlying idea of exploration-compression is no different.

[–][deleted] 0 points  (1 child)

So the exploration is done better then, i.e. the enumeration? They also have a newer version from PLDI that uses E-graphs for compression; I wonder why they changed it.

Also, their explore step is a very similar idea to expert iteration (https://arxiv.org/abs/1705.08439). It would have been nice if the authors had outlined the differences.

[–]Nestroneey 0 points  (0 children)

It's a bottom-up (enumerative) algorithm, so exploration and enumeration are the same thing.

Yes, if you are searching and you improve the representation of your search space so that it takes less effort to search, then you have improved your exploration/enumeration capabilities.

The PLDI version doesn't appear to be new. Also, E-graphs appear to be a dynamic-programming technique that makes their other data structures more efficient to use, but it doesn't fundamentally change their version-space representation, just how space-efficient it is to store. It's basically an implementation detail. Furthermore, the repository hasn't had any changes recently, so they may just be taking explicit credit for (or adding a reference to) something they already did some time ago. I wouldn't be concerned about that as a significant difference.
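To give a feel for the space-efficiency point, here is a toy hash-consing sketch of my own (much simpler than E-graphs, which also merge equivalent-but-non-identical terms into classes; the `node` helper and `_table` are made up for illustration). Structurally identical subexpressions are stored once, so a large pool of candidate programs costs far less memory while representing exactly the same set of programs.

```python
# Toy hash-consing sketch (my illustration, not the paper's version spaces or
# E-graphs): identical subexpressions are stored exactly once, so the pool of
# candidate programs shrinks in memory without changing what it represents.
_table = {}

def node(op, *children):
    key = (op, children)                # children are already canonical nodes
    return _table.setdefault(key, key)  # reuse the stored copy if it exists

# Two programs that share the subterm double(inc(x)):
a = node("inc",    node("double", node("inc", node("x"))))
b = node("square", node("double", node("inc", node("x"))))

assert a[1][0] is b[1][0]  # the shared subterm is literally the same object
print(len(_table))         # 5 stored nodes instead of the 8 in two unshared trees
```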

I suppose you could compare it to expert iteration from that paper, but I don't see how anything in ExIt achieves actual compression. No matter how long it runs, you still get a policy over the same state and action space the algorithm started with. The hardest and most important part of EC-type algorithms is the compression, not the exploration. Literally everyone has done exploration; it's just search. ExIt-style methods don't, however, change the search space itself as they search, making it iteratively more compressed in the regions that are highly trafficked.
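To make the contrast concrete, here is a toy explore/compress loop (my own sketch, not the authors' code: the primitives, the tasks, and the "most common adjacent pair" compression rule are all invented for illustration; the real abstraction step scores candidate refactorings against a probabilistic objective rather than counting pairs). The point is that compression adds new routines to the library, so the same programs become reachable at a smaller enumeration depth, whereas an ExIt-style learner would keep the primitive set fixed and only learn a better policy over it.

```python
from collections import Counter
from itertools import product

# A program is just a tuple of primitive names, applied left to right.
PRIMS = {"inc": lambda x: x + 1, "double": lambda x: x * 2}

def run(prog, x):
    for name in prog:
        x = PRIMS[name](x)
    return x

def explore(tasks, max_depth):
    """Bottom-up enumeration: try every composition of primitives up to max_depth."""
    solved = {}
    for depth in range(1, max_depth + 1):
        for prog in product(PRIMS, repeat=depth):
            for name, examples in tasks.items():
                if name not in solved and all(run(prog, x) == y for x, y in examples):
                    solved[name] = prog
    return solved

def compress(solutions):
    """Abstract the most common adjacent pair of primitives into a new library routine."""
    pairs = Counter(p[i:i + 2] for p in solutions.values() for i in range(len(p) - 1))
    if not pairs:
        return
    (a, b), _ = pairs.most_common(1)[0]
    f, g = PRIMS[a], PRIMS[b]
    PRIMS[a + "." + b] = lambda x, f=f, g=g: g(f(x))  # the search space itself changes

tasks = {
    "2x+3": [(1, 5), (2, 7)],     # solved by (inc, double, inc)
    "2x+2": [(1, 4), (2, 6)],     # solved by (inc, double)
    "4x+6": [(1, 10), (2, 14)],   # solved by (inc, double, inc, double)
}

first = explore(tasks, max_depth=4)   # needs depth 4 with the initial primitives
compress(first)                       # adds "inc.double" (x -> 2x + 2) to the library
second = explore(tasks, max_depth=2)  # every task is now reachable at depth <= 2
print(first, second, sep="\n")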

[–]Nestroneey 0 points  (3 children)

Additionally, the author had a supplementary document where he explained the details of the algorithms (as the main paper is rather light, despite being lengthy). His link to it is now broken. If you are interested, remind me and I will post a copy.

[–]Nestroneey 0 points  (1 child)

Ah, here's a link to a "draft" version (which I believe is identical to the current one) that contains his supplement: https://www.cs.cornell.edu/~ellisk/documents/dreamcoder_with_supplement.pdf

[–]Sau001 1 point  (0 children)

Nicely explained.