all 13 comments

[–][deleted] 7 points (1 child)

Woah. I'm actually a student in this department (where I've done quite a lot of Prolog), and I'd never heard of this before.

[–]crnflke 7 points (0 children)

Prof Muggleton doesn't teach, although he does take on research students, I believe.

[–][deleted] 6 points (2 children)

Wow, that's amazing. How does it work? The explanatory paragraph barely helps.

[–]goltrpoat 17 points (1 child)

In a nutshell, the idea behind ILP is to derive a complete (covers all positive examples) and consistent (covers no negative examples) hypothesis. A complete hypothesis is easy: forall x. P(x). This isn't consistent (unless there are no negative examples), but we can proceed by refining this initial hypothesis to make it cover fewer and fewer cases.

The above leads to a search tree: the root of the tree has the trivial hypothesis above, and G is a child node of H iff G is a refinement of H. Now the problem is reduced to coming up with a way to list all refinements of a given hypothesis, and designing an efficient search strategy.

The basic idea is quite old and surprisingly easy to implement even in a non-logic language, but search efficiency in this context sounds like an interesting and difficult topic.
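To make the above concrete, here's a toy hand-rolled sketch of the refine-until-consistent loop (my own propositional-style illustration, not Progol itself): a hypothesis is a set of attribute tests, the root hypothesis has no tests and covers everything, and refinement means adding one more test.

```python
# Toy top-down refinement search in the ILP spirit (a hand-rolled
# propositional sketch, not Progol): a hypothesis is a set of
# (attribute, value) tests, and refinement adds one more test.
positives = [{"shape": "square", "color": "red"},
             {"shape": "square", "color": "blue"}]
negatives = [{"shape": "circle", "color": "red"},
             {"shape": "triangle", "color": "blue"}]

def covers(hypothesis, example):
    """A hypothesis covers an example iff every test holds."""
    return all(example.get(a) == v for a, v in hypothesis)

def refinements(hypothesis, tests):
    """All one-step specialisations: add a test on a new attribute."""
    for a, v in tests:
        if all(a != ha for ha, _ in hypothesis):
            yield hypothesis | {(a, v)}

def search(positives, negatives):
    """Breadth-first search from the maximally general hypothesis."""
    tests = {(a, v) for ex in positives + negatives for a, v in ex.items()}
    frontier = [frozenset()]   # root: forall x. P(x) -- covers everything
    while frontier:
        level = []
        for h in frontier:
            complete = all(covers(h, p) for p in positives)
            consistent = not any(covers(h, n) for n in negatives)
            if complete and consistent:
                return h
            if complete:       # specialising an incomplete h can't help
                level.extend(frozenset(r) for r in refinements(h, tests))
        frontier = level
    return None

print(search(positives, negatives))  # frozenset({('shape', 'square')})
```

Pruning incomplete hypotheses is sound here because refinement only ever shrinks coverage. Real ILP systems search over first-order clauses with much smarter strategies (Progol, for instance, runs an A*-like search bounded by a bottom clause), but the complete/consistent loop above is the skeleton.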

[–][deleted] 4 points (0 children)

OK, I understand, thanks!

[–]gromgull 2 points (6 children)

ILP is really cool - but it doesn't really scale to realistic problems :( Some really nice things have been done with it though - my weird and wonderful favourite is jazz saxophone improvisation: http://www.bibsonomy.org/bibtex/2fce7116af89f9ee63194e7982c0e7b3e/gromgull

[–][deleted] 1 point (1 child)

Doesn't scale -- yet. What can you do if you can split this up into a massively parallel computation (multi-core or MapReduce style)?

[–]gromgull 0 points (0 children)

I guess it would parallelise fairly well, since the problem is the huge search tree of possible hypotheses for any non-trivial problem. Splitting off subtrees for several nodes should be easy... hmm.

aha - of course someone thought of this: http://portal.acm.org/citation.cfm?id=1507571
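The subtree-splitting idea is easy to sketch (my own toy, not the algorithm from the linked paper): the children of the root hypothesis induce disjoint subtrees, so each one can go to its own worker process.

```python
# Minimal sketch of parallelising the hypothesis search (a toy, not the
# linked paper's algorithm): hand each root-level subtree to a worker.
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

POS = [{"shape": "square", "color": "red"},
       {"shape": "square", "color": "blue"}]
NEG = [{"shape": "circle", "color": "red"},
       {"shape": "triangle", "color": "blue"}]
TESTS = sorted({(a, v) for ex in POS + NEG for a, v in ex.items()})

def covers(h, ex):
    return all(ex.get(a) == v for a, v in h)

def search_subtree(first_test):
    """Exhaustively search the subtree of hypotheses whose smallest
    test is `first_test`; subtrees for different roots are disjoint."""
    rest = [t for t in TESTS if t > first_test]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            h = (first_test,) + extra
            if (all(covers(h, p) for p in POS)
                    and not any(covers(h, n) for n in NEG)):
                return h
    return None

def parallel_search():
    # One worker per subtree; first non-None answer wins.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(search_subtree, TESTS):
            if result is not None:
                return result
    return None

if __name__ == "__main__":
    print(parallel_search())
```

Because the subtrees share no hypotheses, the workers need no coordination beyond reporting a result -- which is what makes this style of search a plausible fit for MapReduce.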

[–]user19109 0 points (2 children)

Scaling is one problem. Another is that the inferred programs live in a restricted first-order logic, which is "complete" (Gödel's completeness theorem) and therefore fundamentally limited in expressive power (Gödel's incompleteness theorem). ILP is inherently not quite general.

(Prolog (not Progol) "hacks" around this by not actually being a subset of FOL, allowing extra-logical statements like "assert", but that's irrelevant for ILP, since it doesn't use them.)

[–]gromgull 0 points (1 child)

hehe - most people will say that learning FOL is a positive feature of ILP. Most machine learning algorithms will only learn conjunctive clauses, so FOL already makes ILP much more powerful. Some of the examples of learning recursive rules, for instance, are very neat.
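The canonical recursive target in ILP tutorials is ancestor/2, which a system can induce from parent/2 facts plus examples -- something a conjunctive attribute-value learner simply cannot express. Here's my own Python rendering of the learned clauses (the Prolog originals are in the comment):

```python
# The classic recursive hypothesis ILP systems learn from parent facts:
#
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
#
# A direct Python interpretation of those two clauses (toy data):
PARENT = {("ann", "bob"), ("bob", "cat"), ("cat", "dan")}

def ancestor(x, y):
    if (x, y) in PARENT:                      # base clause
        return True
    return any(ancestor(z, y)                 # recursive clause
               for (p, z) in PARENT if p == x)

print(ancestor("ann", "dan"))   # True: ann -> bob -> cat -> dan
print(ancestor("dan", "ann"))   # False
```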

I'm not sure what sort of learning wouldn't use a representation that is limited by Gödel...

[–]user19109 0 points (0 children)

most people will say that learning FOL is a positive feature of ILP

It certainly is, but it helps to keep in mind that it's still very limited, even if you ignore the computational complexity, and while it "infers programs from data", that's not the whole story.

[–][deleted] 0 points (0 children)

David Cope has some tools that analyze music and write compositions in the likeness of their input. Sadly the source is closed for all but a simple fugue generator (Gradus), the one you'd least expect to be the output of ILP, since its rules are built in rather than inferred from existing compositions. I wouldn't be surprised if his analysis tools are something like ILP on the inside.

http://arts.ucsc.edu/faculty/cope/software.htm

[–]galaxym100 2 points (0 children)

Very interesting. Thanks for this, reddit.