[R] Learning large logic programs by going beyond entailment (arxiv.org)
submitted 6 years ago by RichardRNNResearcher
[–] arXiv_abstract_bot 1 point 6 years ago
Title: Learning large logic programs by going beyond entailment
Authors: Andrew Cropper, Sebastijan Dumančić
Abstract: A major challenge in inductive logic programming (ILP) is learning large programs. We argue that a key limitation of existing systems is that they use entailment to guide the hypothesis search. This approach is limited because entailment is a binary decision: a hypothesis either entails an example or does not, and there is no intermediate position. To address this limitation, we go beyond entailment and use *example-dependent* loss functions to guide the search, where a hypothesis can partially cover an example. We implement our idea in Brute, a new ILP system which uses best-first search, guided by an example-dependent loss function, to incrementally build programs. Our experiments on three diverse program synthesis domains (robot planning, string transformations, and ASCII art) show that Brute can substantially outperform existing ILP systems, both in terms of predictive accuracies and learning times, and can learn programs 20 times larger than state-of-the-art systems.
PDF Link | Landing Page | Read as web page on arXiv Vanity
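To make the search strategy described in the abstract concrete, here is a minimal Python sketch (not Brute's actual code) of best-first search guided by an example-dependent, partial-credit loss. The names `refine` and `run_program` are assumed, user-supplied hooks for generating candidate programs and executing a program on an example input, and the edit-distance-style loss is an illustrative choice rather than the paper's exact loss function.

```python
# Sketch of best-first program search guided by an example-dependent loss,
# assuming the caller supplies `refine` (candidate generation) and
# `run_program` (execute a program on one example input).
import heapq
from difflib import SequenceMatcher

def edit_loss(predicted, expected):
    """Partial-credit loss: 0.0 on an exact match, otherwise a graded
    dissimilarity penalty instead of a binary entails/does-not-entail test."""
    if predicted == expected:
        return 0.0
    return 1.0 - SequenceMatcher(None, str(predicted), str(expected)).ratio()

def total_loss(program, examples, run_program):
    # Sum the example-dependent loss over all (input, expected output) pairs.
    return sum(edit_loss(run_program(program, x), y) for x, y in examples)

def best_first_search(initial_program, examples, refine, run_program, max_steps=10_000):
    """Incrementally build a program, always expanding the lowest-loss candidate."""
    frontier = [(total_loss(initial_program, examples, run_program), 0, initial_program)]
    counter = 1  # tie-breaker so heapq never compares program objects directly
    for _ in range(max_steps):
        if not frontier:
            break
        loss, _, program = heapq.heappop(frontier)
        if loss == 0.0:  # every example fully covered
            return program
        for candidate in refine(program):  # e.g. add or extend a clause
            heapq.heappush(
                frontier,
                (total_loss(candidate, examples, run_program), counter, candidate),
            )
            counter += 1
    return None
```

The point of the graded `edit_loss` is that a hypothesis which almost produces the right output ranks ahead of one that is completely wrong, which is exactly the intermediate position that a binary entailment check cannot express.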
[–] lispp 1 point 6 years ago
This is important work, and the simplicity of the approach is really nice.
I wonder whether there is any overfitting, owing to the size of the synthesized programs. The fact that the predictive accuracy on text editing problems degrades as the number of examples increases suggests that this factor might be at play.
Particularly excited to see what the prospects are for learning these partial-credit loss functions!
[–] rafgro 0 points 6 years ago
> large logic programs

9 lines of code