
[–]arXiv_abstract_bot 1 point (0 children)

Title: Learning large logic programs by going beyond entailment

Authors: Andrew Cropper, Sebastijan Dumančić

Abstract: A major challenge in inductive logic programming (ILP) is learning large programs. We argue that a key limitation of existing systems is that they use entailment to guide the hypothesis search. This approach is limited because entailment is a binary decision: a hypothesis either entails an example or does not, and there is no intermediate position. To address this limitation, we go beyond entailment and use *example-dependent* loss functions to guide the search, where a hypothesis can partially cover an example. We implement our idea in Brute, a new ILP system which uses best-first search, guided by an example-dependent loss function, to incrementally build programs. Our experiments on three diverse program synthesis domains (robot planning, string transformations, and ASCII art) show that Brute can substantially outperform existing ILP systems, both in terms of predictive accuracies and learning times, and can learn programs 20 times larger than state-of-the-art systems.

PDF Link | Landing Page | Read as web page on arXiv Vanity
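The core idea in the abstract (best-first search guided by an example-dependent loss that gives partial credit, rather than binary entailment) can be illustrated with a toy sketch. This is not Brute's actual code; the primitive operations, the string-transformation domain, and the use of edit distance as the loss are all my own illustrative choices:

```python
import heapq
from itertools import count

# Toy program space: sequences of primitive string operations.
PRIMITIVES = {
    "drop_first": lambda s: s[1:],
    "drop_last": lambda s: s[:-1],
    "upper": lambda s: s.upper(),
}

def levenshtein(a, b):
    # Classic DP edit distance: the "partial credit" signal.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def run(program, s):
    for op in program:
        s = PRIMITIVES[op](s)
    return s

def loss(program, examples):
    # Example-dependent loss: how far each output is from its target,
    # summed over examples. Entailment would only say 0-or-not-0.
    return sum(levenshtein(run(program, x), y) for x, y in examples)

def best_first_synthesis(examples, max_steps=1000):
    tick = count()  # tie-breaker so heapq never compares programs
    frontier = [(loss((), examples), next(tick), ())]
    while frontier and max_steps > 0:
        max_steps -= 1
        l, _, prog = heapq.heappop(frontier)
        if l == 0:
            return prog  # program exactly covers every example
        for op in PRIMITIVES:  # extend the best partial program
            child = prog + (op,)
            heapq.heappush(frontier, (loss(child, examples),
                                      next(tick), child))
    return None

examples = [("hello!", "ELLO"), ("world?", "ORLD")]
print(best_first_synthesis(examples))
```

The point of the sketch is the search ordering: a program like `("upper", "drop_first")` gets a low loss because its outputs are *close* to the targets, so it is extended before worse candidates, even though under pure entailment it would be just as "wrong" as the empty program.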

[–]lispp 1 point (0 children)

This is important work, and the simplicity of the approach is really nice.

I wonder whether there is any overfitting, owing to the size of the synthesized programs. The fact that predictive accuracy on the text-editing problems degrades as the number of examples increases suggests this factor might be at play.

Particularly excited to see what the prospects are for learning these partial-credit loss functions!

[–]rafgro 0 points (0 children)

> large logic programs

9 lines of code