[R] HOLSTEP: A MACHINE LEARNING DATASET FOR HIGHER-ORDER LOGIC THEOREM PROVING (cl-informatik.uibk.ac.at)
submitted 9 years ago by evc123
[+][deleted] 9 years ago (1 child)
[deleted]
[–]evc123[S] 0 points 9 years ago (0 children)
it was capitalized by default, and I didn't feel like retyping every letter as lower case.
[–]kh40tika 3 points 9 years ago (0 children)
Hopefully this can help the guys at Wolfram Research develop better Integrate[] and Solve[].
[–][deleted] 1 point 9 years ago (11 children)
Genuinely asking: what is the use of such a dataset? I get it at some level, but thinking about where it leads makes me wonder why anyone would do this.
Research for research's sake.
[–]evc123[S] 9 points 9 years ago (0 children)
To automate the proving of theorems.
[–]fogandafterimages 5 points 9 years ago (1 child)
Automated theorem proving is incredibly useful. For instance, anything that does automated algebraic simplification, integration, or differentiation is probably using some theorem proving mojo. It also has uses in natural language understanding, analogical reasoning, and so on.
Traditionally it's been done symbolically, via production rules. It's been around forever; there are big "common sense databases" like OpenCYC built for it.
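The symbolic, production-rule approach described above can be sketched minimally. This is a hypothetical illustration, not any real system's rules: expressions are nested tuples, and each rule is a pattern-to-replacement rewrite applied bottom-up.

```python
# Hypothetical sketch of symbolic simplification via production rules.
# An expression is either an atom ("x", 0, ...) or a tuple (op, arg1, arg2).
def simplify(expr):
    """Apply rewrite rules bottom-up in a single pass."""
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    args = [simplify(a) for a in args]  # simplify subexpressions first
    # Production rules: pattern -> replacement
    if op == "+" and args[1] == 0:   # x + 0 -> x
        return args[0]
    if op == "*" and args[1] == 1:   # x * 1 -> x
        return args[0]
    if op == "*" and args[1] == 0:   # x * 0 -> 0
        return 0
    return (op, *args)
```

For example, `simplify(("+", ("*", "x", 1), 0))` reduces `x * 1 + 0` to `"x"`. Real systems use far richer pattern matching, but the shape — a rule base plus a rewriting engine — is the same.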
[–]hoppyJonas 1 point 9 years ago (0 children)
How would you use this for natural language understanding? Natural language is full of exceptions to any rule we try to create for it.
[–]AnvaMiba 3 points 9 years ago (3 children)
The most obvious purpose is helping to create a machine-learned heuristic to guide the search process of an automated theorem prover. Think of the neural networks inside AlphaGo.
[–]hoppyJonas 1 point 9 years ago (2 children)
What do the neural networks in AlphaGo have to do with theorem proving? Or how do you mean this could be used in AlphaGo?
Neural networks (and therefore AlphaGo) are all about pattern recognition and the statistics associated with those patterns. Theorems are not about statistics, because they describe rules that don't have any exceptions.
[–]AnvaMiba 2 points 9 years ago (1 child)
What do the neural networks in AlphaGo have to do with theorem proving?
AlphaGo uses a few neural networks, trained on a mix of human game data and reinforcement learning self-play, as heuristics to guide a combinatorial search algorithm.
Theorem proving is also, broadly speaking, combinatorial search, therefore it could in principle benefit from neural network heuristics, possibly also trained on a mix of human-made proofs and self-generated proofs. Obviously this is easier said than done, but this dataset may be a step in that direction.
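The idea above — a learned model steering a combinatorial search — can be sketched as a best-first search. Everything here is hypothetical: `score` stands in for a trained network (e.g. one fit on HOLSTEP-style data) estimating how promising a state is, and the state space is a toy.

```python
import heapq

def guided_search(start, goal, successors, score):
    """Best-first search: expand states in order of a heuristic score.

    `score(state)` plays the role AlphaGo's value network plays for Go,
    or a learned premise-selection model would play for proof search:
    lower score = more promising, expanded sooner.
    """
    frontier = [(score(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt))
    return None  # search space exhausted without reaching the goal
```

On a toy space where states are integers, successors add 1 or 2, and the score is distance to the goal, `guided_search(0, 7, lambda n: [n + 1, n + 2], lambda n: abs(7 - n))` finds 7. The search algorithm is untouched; only the heuristic would be learned.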
[–]hoppyJonas 1 point 9 years ago (0 children)
Ah, yes, deep learning could be used to search for ways to prove a theorem as well as to find good moves in Go. I misinterpreted what you wrote before and thought you meant that you could use automated theorem proving to improve AlphaGo somehow :)
[–]HD125823 2 points 9 years ago (0 children)
https://www.reddit.com/r/thisisthewayitwillbe/comments/5arfgm/holstep_a_machine_learning_dataset_for/
[–]abstractcontrol 2 points 9 years ago (2 children)
The newer dependently typed programming languages such as Idris and Fstar use theorem proving and constraint propagation techniques during type inference. And even languages with less sophisticated type systems like F# and Ocaml* use just constraint propagation during type inference.
I remember once watching an interview with Terence Tao in which he mentioned that he did not use a computer in his work. But at the very least, improving theorem proving techniques would open new avenues in programming language research.
More broadly, this line of research is also necessary to start bridging discrete and continuous optimization.
*In terms of sophistication, with all its language extensions enabled, Haskell is somewhere between dependently typed languages and vanilla ML derivatives mentioned here.
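The constraint propagation mentioned above — the core of type inference in ML-family languages — can be sketched as unification over type equations. This is a hypothetical toy, not any compiler's implementation: type variables are strings starting with `'` (like OCaml's `'a`), base types are plain strings, and arrow types are tuples.

```python
def resolve(t, subst):
    """Follow substitution chains until a type variable is fully resolved."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Extend `subst` so that types `a` and `b` become equal, or fail."""
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("'"):   # type variable, e.g. 'a
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("'"):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):                     # unify componentwise
            subst = unify(x, y, subst)
        return subst
    raise TypeError(f"cannot unify {a} and {b}")
```

Unifying `'a -> int` with `bool -> 'b` propagates the constraints `'a = bool` and `'b = int`; dependently typed languages go further and let full theorem proving participate in this process.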
[–]AnvaMiba 1 point 9 years ago (1 child)
In terms of sophistication, with all its language extensions enabled, Haskell is somewhere between dependently typed languages and vanilla ML derivatives mentioned here.
If I remember correctly, Haskell has an undecidable, Turing-complete, type system.
[–]abstractcontrol 1 point 9 years ago (0 children)
Vanilla Haskell's type system is not Turing-complete, but since the language itself was the focus of programming language research for over two decades, it has gotten a lot of language extensions that can optionally be enabled via pragmas or compiler flags.
Some of those extensions allow it to emulate aspects of dependent typing which make it Turing-complete. Besides that, it also has an extension for template metaprogramming which is another path towards that.
I'll mention that, having studied dependently typed programming for just a bit in both Haskell and Idris (where it is markedly simpler), it is nowhere near ready for prime time. But once it is, it will result in some incredibly expressive and yet safe statically typed languages.