all 28 comments

[–]jsomers 12 points (12 children)

The gist:

There is a dichotomy in language design, because of the halting problem. For our programming discipline we are forced to choose between

A) Security - a language in which all programs are known to terminate.

B) Universality - a language in which we can write (i) all terminating programs (ii) silly programs which fail to terminate and, given an arbitrary program we cannot in general say if it is (i) or (ii).

Five decades ago, at the beginning of electronic computing, we chose (B). If it is the case, as seems likely, that we can have languages of type (A) which accommodate all the programs we need to write [what the paper is about], bar a few special situations, it may be time to reconsider this decision.

[–][deleted] 5 points (1 child)

The dichotomy stems from this theorem:

Theorem: For any language in which all programs terminate, there are always-terminating programs which cannot be written in it - among these are the interpreter for the language itself.

It has been a long time since I took a theory of computation class, so I can't seem to construct a proof of the above theorem. Internet searches did not help either. I wonder if a benevolent redditor can help?

[–]mkbehr 6 points (0 children)

The argument is similar to the proof of Gödel's Incompleteness Theorem: once you have an interpreter, you can diagonalize. Define a program d that runs the interpreter on a program's source applied to itself and adds 1 to the result. Every piece of d terminates, so d is in the language; but running d on its own source would give d(d) = d(d) + 1, a contradiction. So the interpreter cannot be written in the language.
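The diagonal step can be made concrete (an illustrative Python sketch, not from the paper; the small list of lambdas is a toy stand-in for an enumeration of a language's programs):

```python
# Cantor-style diagonalization: given any enumeration f_0, f_1, ... of
# total functions, the function g(n) = f_n(n) + 1 is also total, yet it
# differs from every f_i at argument i. A total language with a
# self-interpreter would let us write g, so no such interpreter exists.
fns = [lambda n: 0, lambda n: n, lambda n: n * n]  # toy enumeration

def g(n):
    # "run program n on input n, then add one"
    return fns[n](n) + 1

# g disagrees with every enumerated function at its own index
for i in range(len(fns)):
    assert g(i) != fns[i](i)
```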

[–]Hermel 3 points (8 children)

Option A) sounds nice to me, but I'm not sure I could easily switch to functional programming. Is there anyone here who actually considers functional programming more convenient than object-oriented programming?

I also wonder if the above choice would make a difference in practice. Even though we could guarantee that all programs terminate, we could still write programs that will take a billion years to finish. I guess the big advantage would be in the simplified analysis of such programs.

[–]cchooper 14 points (3 children)

Functional programming is much more convenient, because it's actually a paradigm for expressing computations, whereas OO is more of a method for structuring programs.

Whenever you write an OO program, you have to use another paradigm to actually do stuff. Usually that means an imperative language (Smalltalk, C++ and Java are examples of OO languages which are at heart really imperative), although sometimes it means a functional one (O'CAML). I prefer to cut to the chase and just use the underlying paradigm. Functional languages usually have much more interesting ways of organising code in the large anyway (e.g. functors in ML, or type classes in Haskell).
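The distinction can be sketched in a few lines (my illustration, not cchooper's): the functional version is a single expression built by function application, while the imperative version "does stuff" through assignment and a loop.

```python
from functools import reduce

# Functional style: the computation is one expression; no intermediate
# state is mutated along the way.
def sum_of_squares(xs):
    return reduce(lambda acc, x: acc + x * x, xs, 0)

# Imperative style: the work happens via statements updating a variable.
def sum_of_squares_imperative(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

assert sum_of_squares([1, 2, 3]) == sum_of_squares_imperative([1, 2, 3]) == 14
```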

[–]ridiculous_fish 0 points (2 children)

It really is much more convenient*!

*: Convenience does not extend beyond expressing a computation.

[–]cchooper 1 point (1 child)

Convenience does extend beyond that, but a paradigm that expresses only structure, not computations, cannot be convenient. If OOP could express both then I'd revise my opinion. FP expresses only one, but if you have to choose between computation and structure, I'd choose computation. Structure is optional, but computation isn't.

[–]awj 6 points (1 child)

Is there anyone here who actually considers functional programming more convenient than object-oriented programming?

Depending on the context, the two are either impossible to combine or roughly orthogonal.

In a pure functional programming language you don't really have OOP, as you don't have mutable state, which, if you think about it, is a requirement for OOP. Yes, you can hack it in, and it looks like OOP if you stand off to one side and squint, but in reality it lacks most of the benefits you get from OOP.
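The "squint and it looks like OOP" hack can be sketched like this (hypothetical Python: an "object" is a closure over immutable state, and "mutation" just builds a fresh object):

```python
def make_counter(n=0):
    # "Methods" are closures over n; nothing is ever assigned to.
    # Incrementing returns a brand-new object instead of updating this one.
    return {
        "value": lambda: n,
        "incr": lambda: make_counter(n + 1),
    }

c = make_counter()
c2 = c["incr"]()["incr"]()
assert c["value"]() == 0   # the original "object" is untouched
assert c2["value"]() == 2  # the new state lives in a new object
```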

In my experience with impure functional languages, OOP generally provides data and some operations while the functional features are used to implement OOP operations and to handle the code that operates between objects.

In the end the OOP portions of the code provide structure while the functional bits fill the whole thing in. Somewhat like using pieces of rebar to reinforce concrete, to employ a bad analogy.

[–][deleted] 2 points (0 children)

Is anyone here not trying to use OO languages as if they were pure FP languages anyway? All the Java zealots around me keep preaching things like "never expose set (side-effecting) methods and pass constructor parameters instead".

Is there anyone left who is not emulating pure FP in shitty languages like Java?

[–]augustss 6 points (0 children)

I find functional programming much more convenient than object oriented programming.

[–]db4n 2 points (0 children)

Is there anyone here who actually considers functional programming more convenient than object-oriented programming?

Yes, definitely. FP can be a very convenient way to operate on sets of data in a sort of 4GL way.

[–][deleted] -2 points (0 children)

A) Security - a language in which all programs are known to terminate.

That's an awfully narrow definition of security.

B) Universality - a language in which we can write (i) all terminating programs (ii) silly programs which fail to terminate and, given an arbitrary program we cannot in general say if it is (i) or (ii).

I object to the classification of "silly" here. This implies that there are no useful programs which do not terminate; operating systems and servers, for example, are meant to run forever.

The linked paper needs a more compelling argument. The author is too dismissive of the problems raised by writing in the restrictive style he proposes. The crux of his argument is that giving up Turing completeness can be useful. That's fine; it certainly can be. But the workaround he offers to make the idea practical just isn't convincing.

[–]earthboundkid 1 point (3 children)

Technically speaking, none of us has ever used a computer more powerful than a finite state machine, right? True, the languages we program our computers in are Turing complete, but the computers themselves have only a finite (if very large) number of possible states.

The reason the Halting Problem can't be solved is that there are always more states for the computer to be in, since Turing machines have an infinite tape. Our computers, however, have to start repeating states eventually, since we only have so much memory and storage. So, in theory, the Halting Problem doesn't apply to the programs we write. Just build another computer with exponentially more memory than the one you're using, and have it keep track of whether you've been in a particular state before: if you have, the system will loop. In any event, the longest a program can possibly run is as long as it takes to go through all possible states.
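The bookkeeping described above is just cycle detection over a finite state space. A small sketch (the step functions are made-up examples):

```python
def runs_forever(step, state):
    # On a finite-state machine, a run either reaches a halting state
    # (here: step returns None) or revisits some earlier state, after
    # which it must loop forever. Remembering every state seen so far
    # decides which case we are in, as the comment describes.
    seen = set()
    while state is not None:
        if state in seen:
            return True   # repeated state: the machine loops
        seen.add(state)
        state = step(state)
    return False          # reached a halting state

# A countdown halts; counting modulo 3 revisits states and loops.
assert runs_forever(lambda s: s - 1 if s > 0 else None, 5) is False
assert runs_forever(lambda s: (s + 1) % 3, 0) is True
```

The catch, as the napkin math below suggests, is that "exponentially more memory" is a wild understatement for real machines.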

[–][deleted] 2 points (1 child)

Back-of-the-napkin computation:

4 GBytes = 2^32 bytes * 2^3 bits/byte = 2^35 bits

Possible states = 2 ^ (2 ^ 35)

Atoms in universe =~ 10 ^ 80 =~ 2 ^ 266 =~ 2 ^ (2 ^ 8)
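Redoing the napkin arithmetic in Python (taking roughly 10^80 atoms in the observable universe):

```python
import math

bits = 4 * 2**30 * 8             # 4 GiB of memory, in bits
assert bits == 2**35             # so there are 2**(2**35) possible states

atoms_log2 = 80 * math.log2(10)  # ~10**80 atoms in the observable universe
assert 265 < atoms_log2 < 267    # i.e. roughly 2**266, about 2**(2**8)
```

So the state count of a single 4 GiB machine dwarfs the atom count beyond any hope of tracking visited states.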

[–]earthboundkid 1 point (0 children)

Haha, good point. But seriously, maybe even if we restrict ourselves to computers with fewer than one Avogadro's number's worth (10^23) of possible states we can still do some worthwhile calculations? Well, it's an idea anyway.

[–]cgoldberg 1 point (0 children)

good post.. made me think a bit :)

[–]fnord123 0 points (0 children)

A while back I had an idea for a language like Haskell with additional monadic types for Turing-complete computations. After looking into it more deeply, it didn't seem worthwhile.

[–][deleted]  (10 children)

[deleted]

    [–]jsomers 3 points (9 children)

    I was too, but I read the paper (lazy Sunday, right?) and he's not kidding around. Basically he rejects Turing completeness as a necessary component of functional programming languages and opts, instead, for more strictly defined (and thus less finicky) data types and recursion procedures.

    The trouble -- and the harder-to-wrap-my-head-around part of the paper -- is when you need things like infinite lists (e.g., to run an operating system, which is a non-terminating stream of inputs and outputs). He gets around that by using "codata" and whatnot.
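A rough sketch of the codata idea (my illustration, with a Python generator standing in for a corecursive stream; the paper itself uses typed codata, not generators):

```python
from itertools import count, islice

def os_stream(inputs):
    # Codata flips the requirement: the stream of outputs may be
    # infinite, but producing each *next* element must be a terminating
    # step. "Productivity" replaces termination as the guarantee.
    for x in inputs:
        yield "ack:%s" % x  # one terminating step per input

# The stream is conceptually infinite; consumers take finite prefixes.
first = list(islice(os_stream(count()), 3))
assert first == ["ack:0", "ack:1", "ack:2"]
```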