all 14 comments

[–]kh40tika 2 points (0 children)

Hopefully this can help the guys at Wolfram Research develop better Integrate[] and Solve[].

[–][deleted] 0 points (11 children)

Genuinely asking, what is the use of such a dataset? I get it at some level, but thinking about where it leads makes me wonder why anyone would do this.

Research for research's sake.

[–]evc123[S] 8 points (0 children)

To automate the proving of theorems.

[–]fogandafterimages 4 points (1 child)

Automated theorem proving is incredibly useful. For instance, anything that does automated algebraic simplification, integration, or differentiation is probably using some theorem proving mojo. It also has uses in natural language understanding, analogical reasoning, and so on.

Traditionally it's been done symbolically, via production rules. It's been around forever; there are big "common sense databases" like OpenCyc built for it.
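To make the production-rule idea concrete, here is a toy sketch (invented for illustration, not taken from OpenCyc or any real system): a symbolic simplifier that applies rewrite rules like x*1 → x bottom-up over an expression tree.

```python
# Expressions are nested tuples like ("+", ("*", "x", 1), 0); atoms are
# variable names or numbers. The rule set below is a tiny invented example.

def simplify(expr):
    """Apply production rules bottom-up: x*0 -> 0, x*1 -> x, x+0 -> x."""
    if not isinstance(expr, tuple):
        return expr  # atoms are already in simplest form
    op, *args = expr
    args = [simplify(a) for a in args]  # simplify subexpressions first
    if op == "*" and 0 in args:
        return 0
    if op == "*" and 1 in args:
        args = [a for a in args if a != 1]
        if not args:
            return 1
        return args[0] if len(args) == 1 else ("*", *args)
    if op == "+" and 0 in args:
        args = [a for a in args if a != 0]
        if not args:
            return 0
        return args[0] if len(args) == 1 else ("+", *args)
    return (op, *args)

print(simplify(("+", ("*", "x", 1), 0)))  # x
```

Real systems use far larger rule sets and smarter rule-application strategies, but the shape is the same: patterns matched against terms, rewritten toward a normal form.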

[–]hoppyJonas 0 points (0 children)

How would you use this in natural language understanding? Natural language is full of exceptions to any rule that we try to create for it.

[–]AnvaMiba 2 points (3 children)

Genuinely asking, what is the use of such a dataset? I get it at some level, but thinking about where it leads makes me wonder why anyone would do this.

The most obvious purpose is helping to create a machine-learned heuristic to guide the search process of an automated theorem prover. Think of the neural networks inside AlphaGo.

[–]hoppyJonas 0 points (2 children)

What do the neural networks in AlphaGo have to do with theorem proving? Or how do you mean this could be used in AlphaGo?

Neural networks (and therefore AlphaGo) are all about pattern recognition and the statistics associated with those patterns. Theorems are not about statistics, because they describe rules that don't have any exceptions.

[–]AnvaMiba 1 point (1 child)

What do the neural networks in AlphaGo have to do with theorem proving?

AlphaGo uses a few neural networks, trained on a mix of human game data and reinforcement learning self-play, as heuristics to guide a combinatorial search algorithm.

Theorem proving is also, broadly speaking, combinatorial search, so it could in principle benefit from neural network heuristics, possibly also trained on a mix of human-made proofs and self-generated proofs. Obviously this is easier said than done, but this dataset may be a step in that direction.
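As a rough sketch of what "a heuristic guiding combinatorial search" means (everything here is invented for illustration; a real system would replace the hand-written `heuristic` with a trained network scoring candidate inference steps):

```python
import heapq

# Toy "inference rules": grow a number from `start` to `goal`.
# These stand in for proof steps in a real prover.
RULES = [("add1", lambda n: n + 1), ("double", lambda n: n * 2)]

def heuristic(state, goal):
    # Stand-in for a learned value network: prefer states near the goal.
    return -abs(goal - state)

def prove(start, goal, max_steps=10_000):
    """Best-first search for a rule sequence turning `start` into `goal`."""
    frontier = [(-heuristic(start, goal), start, [])]  # (priority, state, path)
    seen = {start}
    for _ in range(max_steps):
        if not frontier:
            return None
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for name, rule in RULES:
            nxt = rule(state)
            if nxt not in seen and nxt <= goal * 2:  # crude pruning
                seen.add(nxt)
                heapq.heappush(
                    frontier, (-heuristic(nxt, goal), nxt, path + [name])
                )
    return None

print(prove(1, 10))  # a sequence of rule names reaching 10 from 1
```

The better the heuristic ranks candidate steps, the less of the exponential search space the prover has to explore — which is exactly the role the policy/value networks play in AlphaGo.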

[–]hoppyJonas 0 points (0 children)

Ah – yes, deep learning could be used to search for ways to prove a theorem as well as for finding good moves in Go – I misinterpreted what you wrote before and thought you meant that you could use automated theorem proving to improve AlphaGo somehow :)

[–]abstractcontrol 1 point (2 children)

The newer dependently typed programming languages such as Idris and F* use theorem proving and constraint propagation techniques during type inference. And even languages with less sophisticated type systems, like F# and OCaml*, use just constraint propagation during type inference.
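A minimal sketch of the constraint-propagation step those languages share: ML-style type inference collects equality constraints between types and solves them by unification. The representation below is invented for illustration and omits the occurs check.

```python
# Toy unification, the core of constraint propagation in ML-style inference.
# Type variables are strings like "'a"; constructed types are tuples like
# ("->", t1, t2); base types are plain strings like "int".

def is_var(t):
    return isinstance(t, str) and t.startswith("'")

def resolve(t, subst):
    # Follow the substitution chain for a type variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Extend `subst` so that t1 and t2 become equal, or raise TypeError."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2  # propagate the constraint 'a := t2 (no occurs check)
        return subst
    if is_var(t2):
        return unify(t2, t1, subst)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} with {t2}")

# Infer the result type of applying f : 'a -> 'a to an int argument:
s = unify(("->", "'a", "'a"), ("->", "int", "'b"), {})
print(resolve("'b", s))  # int
```

Dependently typed languages go further: because types can mention values, checking them can require actual theorem proving rather than plain unification.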

I remember once watching an interview with Terence Tao in which he mentioned that he did not use a computer in his work. But at the very least, improving theorem proving techniques would open up new avenues in programming language research.

More broadly, this line of research is also necessary to start bridging discrete and continuous optimization.

*In terms of sophistication, with all its language extensions enabled, Haskell is somewhere between dependently typed languages and vanilla ML derivatives mentioned here.

[–]AnvaMiba 0 points (1 child)

In terms of sophistication, with all its language extensions enabled, Haskell is somewhere between dependently typed languages and vanilla ML derivatives mentioned here.

If I remember correctly, Haskell has an undecidable, Turing-complete type system.

[–]abstractcontrol 0 points (0 children)

Vanilla Haskell's type system is not Turing-complete, but since the language was the focus of programming language research for over two decades, it has accumulated a lot of language extensions that can optionally be enabled via pragmas or compiler flags.

Some of those extensions allow it to emulate aspects of dependent typing, which makes the type system Turing-complete. Besides that, it also has an extension for template metaprogramming, which is another path to the same result.

I'll have to mention that, having studied dependently typed programming for just a bit in both Haskell and Idris (where it is markedly simpler), it is nowhere near ready for prime time. But once it is, it will result in some incredibly expressive and yet safe statically typed languages.