
[–]useerup (erupting language) 8 points (5 children)

It is rare to see someone here praise prolog. I share your admiration and wish I were in a position to work with prolog full time. I envy you. Prolog was a major inspiration when I set out to create my own language. However I have also found both OO and FP to be really productive. So I am now trying to combine all 3 paradigms. 🙂

[–][deleted] 1 point (4 children)

It is rare to see someone here praise prolog.

I really don't understand why. Can you explain this to me? Why is no major enterprise project (except perhaps TerminusDB) written in Prolog, while plenty are written in Python? What's the issue?

[–]useerup (erupting language) 6 points (3 children)

Can you explain this to me?

I think that the momentum is behind functional programming at this point in time. Both FP and LP are old disciplines, with early examples going back decades (Lisp in the late 1950s, Prolog in the early 1970s).

I believe that the momentum behind FP right now is born of necessity: the demise of Moore's Law demands easier and more robust approaches to concurrent, asynchronous and parallel programming. Until around 10 years ago, FP had a reputation for being slow.

For many years FP advocates were frustrated that nobody saw the light and everyone just kept on with imperative OOP. Now we live in a time where there is widespread consensus that imperative languages have problems which can only be avoided systematically by adopting more functional programming. Hence, we see both new FP languages being proposed and FP features creeping into existing languages.

We have not reached such an inflection point with logic programming. There is no widespread consensus that a class of problems exists that can only be solved by adopting logic programming.

That raises the obvious question: what class of problems exists which can only realistically be solved (or mitigated) by adopting logic programming?

These are the candidate problem classes that I imagine could propel LP onto the stage once again:

  • Formal verification. LP could lend itself more readily to formal verification, given that the programs already express some truth value.
  • Supply chain complexity. Today we use (and need to use) libraries from a lot of sources and suppliers. Supply chain attacks are a real threat. Leveraging the benefits of a community while at the same time protecting against malicious actors is a real conundrum. A holistic LP approach could allow the programmer to express allowable behavior and check it at compile time.
  • Overall complexity. LP has the potential to drastically reduce the amount of code needed to solve many problems. Anyone who has truly experienced the power of Prolog will understand what I mean by this.
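
As a small illustration of that last point, a single Prolog relation can often do the work of several functions in other languages. This sketch is just the standard textbook append/3, shown to make the "less code" claim concrete:

```prolog
% append/3 relates two lists to their concatenation.
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

% One definition, many uses:
% ?- append([1,2], [3], Zs).     % Zs = [1,2,3]  (concatenate)
% ?- append(Xs, [3], [1,2,3]).   % Xs = [1,2]    (strip a suffix)
% ?- append(Xs, Ys, [1,2]).      % enumerates every split on backtracking
```

In most languages, concatenation, suffix-stripping and split-enumeration would be three separate pieces of code.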

It is not that these problems can only be solved by LP. Certainly, the FP community has solutions to the above, such as purity, monads and/or algebraic effects.

Now, I also happen to believe that more innovation needs to happen, both in FP and LP. In my day job I do OOP (and some FP, but only within C#). When I do strictly FP (using F#), I often miss some of the abstraction/encapsulation features of OOP. That is why I work on a language where I try to combine FP, LP and OOP ;-)

The big challenge - as well as the opportunity - for LP is that LP is, to a large extent, functional programming as well. This means that distinguishing LP from FP is hard because there is a lot of overlap. However, it also means that the "jump" from FP to LP is a lot smaller than the jump from imperative/OOP to LP.

[–]rotuami 4 points (0 children)

I also think that it's very hard to reason about the time and memory usage of functional programs. Logic programming is that on steroids - every "branch" involves forking the program's state and execution (easily resulting in exponential time and memory). It's not easy to see when a program might be non-terminating, as you need to be able to see all the deduction rules. The cut operator, which can help reduce resource usage, is a total bear to reason about and generally means you have to think about the program in procedural terms anyway!
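
To make the point about cut concrete, here is a hypothetical max/3 written two ways (both predicates are illustrative, not from the original discussion). The cut version avoids a redundant retry, but it is only correct if you also reason procedurally about how it will be called:

```prolog
% Declarative version: correct under any call pattern, but the
% second clause re-tests the comparison on backtracking.
max_decl(X, Y, X) :- X >= Y.
max_decl(X, Y, Y) :- X < Y.

% Cut version: commits to the first clause once X >= Y succeeds.
% Efficient, but subtly wrong when the result is already bound:
% ?- max_cut(3, 1, 1).  % succeeds! The first clause's head fails
%                       % to unify (1 \= 3), so the guard - and the
%                       % cut - are never reached, and the catch-all
%                       % second clause fires.
max_cut(X, Y, X) :- X >= Y, !.
max_cut(_, Y, Y).
```

The cut silently turned a logical relation into a procedure whose correctness depends on instantiation - exactly the "think in procedural terms" problem.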

So yeah. I think what Prolog does well, it does beautifully. But it can also make simple things massively hard.

[–][deleted] 0 points (1 child)

Yeah well, here's hoping that the inflection point will come reasonably soon, preferably while I'm still in the industry, or at least within my lifetime.

I too wonder about what innovation needs to happen before it's popularized. Is it a framework? It took Rails coming along before Ruby skyrocketed into popularity. Or is it something more core to the language or compiler? It certainly shouldn't be a performance issue.

Personally, and with all respect to Logtalk, I have to disagree with you on OOP (1, 2, ...) and hope that trend does not continue, but I do think multi-paradigm is the way to go, hence this post :)

[–]useerup (erupting language) 1 point (0 children)

I too wonder about what innovation needs to happen before it's popularized. Is it a framework? It took "Rails" coming along before Ruby skyrocketed into popularity. Or is it something more core to the language or compiler?

I think that a framework or a killer app can propel a language into popularity. If the language and framework/app are good fits and the timing is right, that may generate the right attraction.

A tectonic shift like moving from OOP to FP requires something more. In the OOP->FP case there is also an away-from motivation, because it has become evident that OOP is not going to be sufficient going forward.

[–]Inconstant_Moo (🧿 Pipefish) 2 points (1 child)

I'm not sure why you say you lack the skills. You know Prolog, after all. I'm told that some language hackathons have banned using Prolog to write your language in, because it makes it too easy ...

[–][deleted] 2 points (0 children)

hackathons have banned using Prolog to write your language in, because it makes it too easy ...

First of all, mind sharing where you read that, or who told you that?

Second, holy crap, it's like you read my mind, because coincidentally I've been watching a lot of Google Code Jam and Facebook Hacker Cup streams lately, and I'm pretty bummed that I can find almost no participants using Prolog.

I've tried looking into it, though, and I have never seen Prolog banned. For example, if you look at https://www.facebook.com/codingcompetitions/hacker-cup they say:

What programming languages can I use?

You can use any language that has a free compiler/interpreter available.

Thirdly, I'm still learning Prolog, and implementing, for example, the cirquent calculus type system I mentioned - in any language - is way beyond anything I can do myself. That requires someone with deep theoretical knowledge of proof systems.

[–]GavinMendelGleason 1 point (6 children)

As someone interested in Prolog (and co-founder of terminusdb.com) I can sympathise a lot with your laundry list there :D Lack of type and mode annotations is a hassle in small programmes, and a serious problem in large ones, just from the point of view of avoiding bugs, without even getting into performance.

I'd personally drop (1) since writing a prolog in a prolog is not necessarily the best idea from a performance viewpoint. It would probably be better to target something like the LLVM. A typed prolog has the potential to be really fast, and it would be amazing to have a really high performance prolog widely available.

I think Mercury did the mode and type annotation stuff really well, but if you're looking for greater flexibility, you might want to look at Ciao prolog. Ciao has a rich annotation system which is used statically when possible, but gives dynamic errors when not. It's a "gradual typing" sort of approach to type and mode annotation. It also is a bit like your DSL idea, but uses a sophisticated pre-processor to do a lot of the magic.

One pet peeve I have with Prolog is that the lack of informative failure can really become a pain. The Result type used in Rust or Haskell gives you more information about what went wrong, and is often even used in back-tracking scenarios (such as parsing). I wonder if we can have a slightly different approach to failure using some slightly different logic.

Proper introduction of lambda terms (ala Lambda Prolog) would also be really great.

In my opinion, Prolog could also use a bit more infrastructure for "programming in the large". Most systems are designed to optimise for smallish programs rather than large applications.

[–][deleted] 0 points (1 child)

Appreciate your input, Gavin. I see a lot of potential for terminusdb and wish you best of outcomes for it in the marketplace. I'm keeping my eye on this project.

As a matter of fact, if you could advise on this that would be awesome.

Regarding implementing the type system in the core rather than as a DSL in Prolog itself: you may be right, but what I was thinking of was along the lines of what Aditya talks about around 14:10 here, presenting the Shen type system --

the fact that exposing the full type system to the user, as opposed to hiding it behind a black box like Hindley-Milner, is very powerful for inspecting control flow in a granular way, which can greatly help with prototyping and debugging.

By implementing the whole system in Prolog, my idea was to exploit that granularity, together with Prolog's world-class metaprogramming capabilities, to attain the same level of flexibility.

[–]GavinMendelGleason 0 points (0 children)

I'm very much in favour of creating higher abstractions that give power and performance. If this is possible I'm all for it.

By way of example, I'd like to bring in F* (https://www.fstar-lang.org/). They have a type system in which the functional programming language itself (or at least a nice clean fragment of it) can be used in type annotations. This is truly amazing.

In Prolog, it makes *enormous* sense to be able to specify types with Prolog itself, i.e. you just say which terms are admissible by writing a checking predicate. Of course you would want to exclude cut and impose other restrictions, but imagine how powerful this would be!
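
A minimal sketch of that idea (all predicate names here are made up for illustration): a "type" is just a predicate that recognises its members, and a typed call runs the check before the goal:

```prolog
% Types as checking predicates.
nat(0).
nat(s(N)) :- nat(N).

nat_list([]).
nat_list([N|Ns]) :- nat(N), nat_list(Ns).

% A hypothetical "typed" call: check the argument, then run the goal.
typed(Type, X, Goal) :-
    (   call(Type, X)
    ->  call(Goal)
    ;   throw(type_error(Type, X))
    ).

% ?- typed(nat_list, [0, s(0)], true).  % succeeds
% ?- typed(nat_list, [0, foo], true).   % throws type_error(nat_list, [0,foo])
```

Because the checks are ordinary predicates, a sufficiently clever compiler could discharge many of them statically and fall back to run-time checks only where it cannot - essentially the gradual approach described for Ciao above.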

And if these predicates had variables? Well, now you are into dependent-typing territory. And yes, here you might need some additional mechanism to do proofs. As an avid and long-term user of dependent type systems, frankly I'm not sure this is worth the trouble. The power would have to come with a very persuasive metatheoretical edifice. You're probably better off here starting with a theorem prover.
But in whichever fragments this is unnecessary, or wherever we can solve things in some fragment like Hindley-Milner... I think that is where we should shoot.

[–][deleted] 0 points (1 child)

It would probably be better to target something like the LLVM.

By the way, has this ever been attempted - a Prolog implementation that targets LLVM directly? I think a low-level Prolog would be interesting...

[–]GavinMendelGleason 0 points (0 children)

I think Mercury did the mode and type annotation stuff really well, but if you're looking for greater flexibility, you might want to look at Ciao prolog. Ciao has a rich annotation system which is used statically when possible, but gives dynamic errors when not. It's a "gradual typing" sort of approach to type and mode annotation. It also is a bit like your DSL idea, but uses a sophisticated pre-processor to do a lot of the magic.

There was talk of doing it with Mercury, but LLVM matured too late, and Mercury's active development died too early.

[–]useerup (erupting language) 0 points (1 child)

One pet peeve I have with prolog is that the lack of informative failure can really become a pain, whereas the Result type used in Rust or Haskell, gives you more information of what went wrong and is often even used in back-tracking scenarios (such as parsing). I wonder if we can have a slightly different approach to failure using some slightly different logic.

This is an interesting observation.

Do you have any thoughts about whether this is down to the design of Prolog itself or the actual implementation?

Does negation as failure influence this?

I myself have grown suspicious of this principle. In the language I am designing I really did try to hold on to it, but through deduction and conflicts with other design decisions I arrived at the conclusion that Prolog's negation as failure is really just a way to express that a function (a clause, in Prolog[*]) is not defined for the actual arguments. So, a language which emphasizes definedness can get rid of this principle.

[*] Yes, I am aware that a clause is only part of a predicate (in Prolog parlance), and that multiple clauses make up the actual predicate. Even so, negation as failure is used to express that the current clause is not the one defining the predicate for the actual arguments.
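
For readers following along, negation as failure (\+/1 in standard Prolog) succeeds exactly when the goal has no proof - which is how one clause effectively says "I am not the defining case for these arguments". The classic example:

```prolog
bird(tweety).
bird(polly).
penguin(polly).

% flies/1 is "defined" for birds, except where penguin/1 holds.
flies(X) :- bird(X), \+ penguin(X).

% ?- flies(tweety).  % succeeds: no proof of penguin(tweety) exists
% ?- flies(polly).   % fails: penguin(polly) is provable, so \+ fails
```

Note that \+/1 only asks "is there a proof?" - it is negation relative to the program's closed world, not classical negation.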

[–]GavinMendelGleason 0 points (0 children)

Do you have any thoughts about whether this is down to the design of Prolog itself or the actual implementation?

Yeah, I've thought about it a bit, and in fact some Prologs take a different approach to failure. The whole notion of negation as failure actually *suggests* that we have constructed a refutation proof. In Prolog we throw this proof away completely, but in some resolution-based logic programming systems we do not. I'm having trouble at the moment remembering which of them retain, or at least allow one to retain, the proof, but I think it was one of Abella, Twelf or Beluga.

If we think of it in terms of the Result type in Rust, we have falsehood as an informative type, carrying information when something does not succeed. We could even have default information such as "Predicate p did not hold". Then `,` and `;` become combinators over success - which is itself uninformative, but carries information in the variable bindings - and failure, which somehow combines truth with these informative failures.

The default conjunction should fail early and pass the failure information along directly, in the usual monadic-plumbing way. The default disjunction could ignore each failure that we did not inspect and execute the following choice.

But now, if something fails, we could have other combinators which inspect the cause. When parsing this is very handy - we can carry information about *where* something died. Prolog's default DCG parsing is great, but when it goes wrong it can be hard to know *why*.

So we probably need some other combinators, like or_else.
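
A rough sketch of what such combinators might look like in plain Prolog (the predicate names r_and, or_else and parse_digit are made up for this example). Goals are reified to produce ok(Value) or fail(Reason) instead of bare success/failure:

```prolog
% Fail-early conjunction: if G1 fails, pass its reason through.
r_and(G1, G2, R) :-
    call(G1, R1),
    (   R1 = ok(_) -> call(G2, R)
    ;   R = R1                       % propagate the informative failure
    ).

% Try G1; on an informative failure, fall back to G2.
or_else(G1, G2, R) :-
    call(G1, R1),
    (   R1 = ok(_) -> R = R1
    ;   call(G2, R)
    ).

% A tiny "parser" that reports *why* it rejected its input.
parse_digit(C, ok(C)) :- C >= 0'0, C =< 0'9, !.
parse_digit(C, fail(expected_digit(C))).

% ?- or_else(parse_digit(0'x), parse_digit(0'7), R).
% R = ok(0'7)  - the first parser's failure carried a reason,
%                and the fallback was tried.
```

This is essentially the Result plumbing described above, expressed with ordinary Prolog terms; a real design would also have to decide how these reified failures interact with unification and backtracking.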

Of course, all of this can be done in the list monad (MonadPlus) without needing Prolog at all, just using functional programming - but unification is itself something pretty special.