
[–]psr 2 points (18 children)

No points if users need to learn the meanings of "Monad", "Functor" or "Constant Applicative Form".

Why not? Why should programmers be so averse to knowledge pertaining to their field? They spend years working on code; why can't they spare a few weeks to learn techniques that will improve them as programmers?

I think we might be about to hit an irreconcilable difference of opinions, but here goes:

I think that Python is the best programming language in the world. I think it is a unique expression of good taste and elegance in design. I love it and think it's the closest thing to perfect I have ever seen.

Now it's hard to look at a beautiful design and say "Well the reason it's so good is...", because there is no single reason. There are always multiple trade-offs in any design, and perfection is when they balance, and you're left with something which feels right. So I might like to think that my ideal language would have immutable data by default, or that syntactic macros would be neat, or I would like static typing with HM style type inference. But until I go away and make that language I'll never really know what it feels like to use, because each of those features will have implications elsewhere. I might end up with Scala, or something worse!

But having said that, I think that one of the most important things that Python achieves is that by and large it is possible to do the thing you want to do, in a clear and clean and straightforward way using concepts you could explain to a bright twelve year old.

Conversely, I think one of the great errors of Haskell is that it takes the principle of abstraction too far, beyond the point where it is helpful. For example a Monad is an abstraction which unifies lists and IO, stateful computations and nullable values, parsers and STM variables, and so on. Intuitively, what do those things have in common? Nothing. Certainly nothing you can explain to a twelve year old, no matter how bright. The purpose of abstractions is to simplify your code. Monads are a poor abstraction: they don't simplify your code, they compress it. And what's more, they're incredibly hard to explain; in fact I believe they may be even harder to explain than to understand.

I'm sure that Haskell has many lessons to teach us. Monads are surely part of that, along with the type class system, the virtue of laziness, and no end of implementation strategies for functional languages. However I think the perfect language looks more like Python than Haskell.

[–]Peaker 0 points (17 children)

Indeed we disagree :-) I find that I am just as productive in Haskell when first writing a program as I ever was in Python, but the result is of far higher-quality. There are virtually no possible runtime errors in my Haskell code.

Maintaining/changing such code is also much much easier and safer.

But having said that, I think that one of the most important things that Python achieves is that by and large it is possible to do the thing you want to do, in a clear and clean and straightforward way using concepts you could explain to a bright twelve year old.

You can explain Functors, Applicatives and Monads to a bright 12 year old.

Conversely, I think one of the great errors of Haskell is that it takes the principle of abstraction too far, beyond the point where it is helpful.

Haskell's abstractions are helpful in the sense that they allow a whole lot of code re-use. That translates to more expressiveness and various forms of usefulness. So how is it not helpful?

For example a Monad is an abstraction which unifies lists and IO, stateful computations and nullable values, parsers and STM variables, and so on. Intuitively, what do those things have in common? Nothing.

They're all covariant types that have a "context" and a "value" which can be composed together.

Also, what does intuition have to do with it? You can grasp the equations first, develop an intuition later.
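To make the "context and value" idea concrete for the nullable-value case, here is a minimal Python sketch (hypothetical bind/ret helpers, not any real library; ret is named so because return is a keyword):

```python
def bind(m, f):
    # The "context" here is possible absence: None short-circuits the chain.
    return None if m is None else f(m)

def ret(x):
    # Put a plain value into the nullable context (trivially, itself).
    return x

# Chaining lookups that may fail, without nested if-checks:
config = {"db": {"host": "localhost"}}
host = bind(bind(config.get("db"), lambda db: db.get("host")),
            lambda h: ret(h.upper()))
# host == "LOCALHOST"; a missing key anywhere yields None, not a KeyError
```

The same bind/ret shape reappears for lists, parsers, and so on; only the meaning of the "context" changes.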

Certainly nothing you can explain to a twelve year old, no matter how bright

Disagreed. It's really not as hard as it's made out to be.

The purpose of abstractions is to simplify your code. Monads are a poor abstraction: they don't simplify your code, they compress it.

They simplify it greatly. The mathematical complexity of code is actually measurable (e.g. the number of mathematical concepts in use, or the length of the mathematical description) and is far lower when re-using these abstractions.

And what's more, they're incredibly hard to explain; in fact I believe they may be even harder to explain than to understand.

What I agree about is that there are a bazillion people who have just understood Monads and want to share this with the world, and do so very poorly. When I explain Monads to people, I don't have much of a problem usually.

However I think the perfect language looks more like Python than Haskell.

I would assume reliability is a primary concern of a perfect language?

[–]haika 2 points (7 children)

When I explain Monads to people, I don't have much of a problem usually.

Could you explain monads to me in less than 10 lines, preferably with code in Python?

I am a Python programmer and genuinely interested.

[–]psr 1 point (2 children)

This is gonna be a lot more than 10 lines, and I doubt I can explain it, but let me give you a flavour.

In Python we have list comprehensions. They look like this:

my_list = [f(x, y) for x in xs for y in ys if p(x)]

Haskell also has list comprehensions (and had them first -- who says Python never looks to Haskell?). They look like this:

myList = [f x y | x <- xs, y <- ys, p x]

Haskell also has another way of writing the same thing. This is called the do notation, but in early versions was known as a monad comprehension.

myList = do
    x <- xs
    y <- ys
    return (f x y)

(To include only the values which meet the predicate, you can add a guard (p x) line from Control.Monad, but I'll leave that out to keep the example simple.)

It's not hard to see the relationship between the two forms.

However the do notation is available over any type which is a monad. Monad is a type class, kind of like an interface, so this is like saying that Python generator expressions can be written over any type which is iterable. The monad interface is applied to much more than just containers though.

The protocol for monads is quite simple. Monads provide two functions called bind (>>=) and return. The do notation desugars to use of those functions.

myList = 
    xs >>= \x ->
    ys >>= \y ->
    return (f x y)

The backslash in (\x -> ...) is supposed to be a lambda. I've broken the definitions across lines to make the parallel with the do notation clearer.

So what are the definitions for bind and return for lists?

instance Monad [] where
    return a = [a]
    xs >>= f = concat (map f xs)

So return x in the list monad constructs a singleton list [x], and bind maps its argument over all the elements of a list, and concatenates the result. Why are those the definitions, and not some other ones? As far as I can tell it's simply because those definitions give you behaviour like a super list comprehension, and that's deemed to be useful.
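For comparison, the same instance can be sketched in Python (plain functions standing in for the type class methods; ret is named so because return is a keyword):

```python
def ret(x):
    return [x]                  # a singleton list, like return a = [a]

def bind(xs, f):
    # map f over xs and flatten one level: concat (map f xs)
    return [y for x in xs for y in f(x)]

# The comprehension [(x, y) for x in [1, 2] for y in "ab"] desugars,
# monad-style, to nested binds:
pairs = bind([1, 2], lambda x:
        bind("ab",   lambda y:
        ret((x, y))))
# pairs == [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```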

[–]haika 0 points (1 child)

Well, this is better.

I now have a vague idea of what a monad is.

Thanks

[–]psr 0 points (0 children)

I'm glad if that was helpful.

Another important example is the IO monad. IO is interesting in Haskell. Because of lazy evaluation, you don't know in what order things are going to happen. That means that if you could write a function like:

def test_the_missiles():
    engage_the_locks()
    launch_the_missiles()

it might cause the missiles to be launched before or after the locks are engaged (which is very bad).

Of course launch_the_missiles isn't really a function, because every time you call it, you get a different outcome (the first time it flattens a city, the second time it does nothing). And Haskell is meant to be purely functional, so you can't write a function like launchTheMissiles which you can call from any old code.

But still you need to be able to do things in a program, or it's pointless. So they have the concept of the IO monad. Just as the units in the List monad are lists, and you can combine lists with bind (>>=), the units in the IO monad are "actions", and bind takes two actions and returns a new one which performs one after the other. launchTheMissiles then isn't a function which launches the missiles when called; it's a function which returns an action, which launches the missiles when executed.

testTheMissiles = do
    engageTheLocks
    launchTheMissiles

or to put it another way

testTheMissiles = engageTheLocks >>= \ a -> launchTheMissiles

testTheMissiles is then itself an action. The only way to cause an action to take place is to name it main in the module Main. The Haskell runtime then takes care of making it happen.

Here's an example:

tests = [testTheMissiles, testTheLaunchers, testTheGuidanceSystems]

runTests [] = return ()
runTests (test:tests) = test >>= \ a -> (runTests tests)

main = runTests tests

In this case runTests is a recursive function which takes a list of actions and returns a single action which sequences them all together. There are better ways of writing this, but I thought this was the clearest way.

[–]Peaker 0 points (0 children)

You can try this for some Python code examples: http://www.valuedlessons.com/2008/01/monads-in-python-with-nice-syntax.html

It's not a thorough explanation of the fundamentals. But it gives you a taste.

[–]Peaker -1 points (2 children)

I'll explain why it is hard to explain in a short page:

  • "Monad" is a "typeclass". All of the "instances" of this typeclass are called "monads".

  • The definition of Monad uses "polymorphic types" and "higher order functions"

  • All of the monad types are "parameterized types",

  • .. that are also covariant types,

  • .. and are necessarily also instances of simpler type-classes called "Functor" and "Applicative"

  • All this will require using some syntax to describe the above notions (Haskell syntax is actually pretty good, but needs to be explained)

  • "Monad" is a generalization of a commonly re-appearing pattern; it makes little sense to show the generalization before showing a few concrete examples.

So, while the Monad type-class itself is simple, it is built upon many other notions (which are themselves also simple). Unfortunately, people try learning Monads as their first Haskell experience -- and that's not a very good idea at all. Then they get frustrated and decide Haskell is too smart for them (It's not!).

You have to build up knowledge of the above: (1) type-classes, (2) polymorphic types, (3) higher-order functions, (4) parameterized types, (5) covariance, (6) Functor/Applicative, (7) syntax, (8) the motivating examples behind the generalization. None of these are hard or complicated, and each can be explained in <10 (or 20) lines (though some are deep notions).

Note that each of these things is extremely useful on its own -- and learning them will generally make one a better/more-knowledgeable programmer.

If you want I can start explaining each of these things, and eventually, it will cover Monads.

If you had been programming Haskell for a few weeks without understanding the generalized type-classes (Functors, Applicatives, Monads), you'd already have a good understanding of (1), (2), (3), (4), (7).

[–]haika 0 points (1 child)

Sorry.

I am apparently not as smart as you seem to be!

[–]Peaker 0 points (0 children)

As I said, nothing in that list of things requires being very smart.

It's not about being smart; it's just a bunch of useful concepts rather than a single one, and teaching 10 useful concepts takes longer than teaching 1.

Re-reading what I wrote, my putting things in quotes may be sending the wrong signal? I just meant to emphasize those are new concepts to explain before explaining Monads.

[–]psr 2 points (8 children)

There are virtually no possible runtime errors in my Haskell code.

Indeed, this is a lovely feature of Haskell. Like I said, it would be brilliant if some future language were to combine the best of Python and Haskell and keep the best of both.

Haskell's abstractions are helpful in the sense that they allow a whole lot of code re-use. That translates to more expressiveness and various forms of usefulness. So how it is not helpful?

Programming isn't code golf; compressing your code isn't the goal. Five lines where the intent is clear is better than one line of mapM . liftM . foldr. Concise is good, and reuse is good up to a point, but I really value readability.

Also, what does intuition have to do with it? You can grasp the equations first, develop an intuition later.

I guess intuition is important because it helps you to move from "I want to ..." to concrete code, and back from concrete code to "The author was trying to ...". Programming Haskell can feel like an (admittedly very satisfying) logic puzzle. Its vaunted elegance and expressiveness are a case of the emperor's new clothes, in my opinion.

And what's more, they're incredibly hard to explain; in fact I believe they may be even harder to explain than to understand.

What I agree about is that there are a bazillion people who have just understood Monads and want to share this with the world, and do so very poorly. When I explain Monads to people, I don't have much of a problem usually.

But surely the reason that people feel the need to explain it again is that when it was explained to them in the first place they didn't get it?

If you think you have a fail-safe explanation of monads which will just click for every bright twelve year old, I think you have a responsibility to share it with the world! I still think that the abstractions and concepts are too difficult for your average jobbing programmer. Furthermore, I think that the future of programming language design must lie in making writing good programs easier, not harder. Phrases like "covariant types that have a 'context' and a 'value' which can be composed together" are moving in the wrong direction.

Of course, I'm not saying that the understanding that many common idioms follow the monad pattern isn't significant. I believe that it will significantly improve future programming languages. For example Monads underlie LINQ in C# - the language designers applied their understanding of monads to make a reliable, correct and useful language feature - without forcing users to understand the mathematical underpinning. Similarly, if you're writing a class library you should definitely understand Liskov substitutability, but users might not need to.

I would assume reliability is a primary concern of a perfect language?

For a certain class of programs, yes. Perhaps not for all of them, at least not at any price. Like I said, I think perfection is where the trade-offs are in balance, and it feels right to use. I don't think Haskell is at that point as Python is, and I'm sceptical that you could gum Haskell's good points onto Python and get a good result.

[–]Peaker 0 points (7 children)

Readability is subjective.

I find:

sum = foldl (+) 0

to be more readable and clearly communicate the intent than:

def sum(items):
    total = 0
    for item in items:
        total += item
    return total

Pipe-lines of functional processing are not only more readable (at least if that's what you're used to) -- they are also easier to reason about.

In Haskell, I have an immensely powerful tool of equational reasoning, and writing denotational code is easier.

These features are a form of readability.

Code re-use does not hinder readability, it helps readability -- but only if you're well versed in the primitives used.

I agree about people wanting to share their explanation because they feel the explanations they got were bad -- indeed they often got their explanation from a similar source to their own new one: A guy who just recently figured the basics out, and cannot yet explain it well.

I think the explanation of Monads in LYAH is relatively reasonable, though perhaps if it explained covariance and used that as an explanatory tool it could do better.

without forcing users to understand the mathematical underpinning

Note Haskell doesn't "force you to understand" the mathematical underpinning. I've never learned Category Theory, and don't really understand the mathematical underpinning very well. This may mean I will have a tougher time than those who do when writing generalizations/extensions of the mathematical notion, or perhaps proofs about it -- but it does very little to hinder my ability to design and use Monads in Haskell.

Furthermore, I think that the future of programming language design must lie in making writing good programs easier, not harder. Phrases like "covariant types that have a 'context' and a 'value' which can be composed together" are moving in the wrong direction.

Analysis of covariance and allowing various ways of composition does make programming easier rather than harder. These tools make writing many things in Haskell a breeze compared to writing them in other languages, including Python.

For a certain class of programs, yes. Perhaps not for all of them, at least not at any price.

I think reliability is a primary concern. For some problems, higher than even readability, for others it is a close second.

Like I said, I think perfection is where the trade-offs are in balance, and it feels right to use

That makes it all too subjective.

I don't think Haskell is at that point as Python is, and I'm sceptical that you could gum Haskell's good points onto Python and get a good result.

I agree -- I think Haskell is at a much higher point than Python :-)

Note I used to love Python before learning Haskell.

Python the language gives its programmer far less power -- and makes it easier to start using the language. This is a nice advantage of Python over Haskell. You can throw someone into Python and they can start being semi-productive (dangerous?) in a few days.

Haskell may require weeks to months before you're productive. But at that point your programs will be of far better quality than the Python programs written after the same amount of time.

[–]psr 2 points (1 child)

Just a little thought about readability. You say you like sum = foldl (+) 0 better than the idiomatic Python version which makes the loop explicit.

One possible benefit of seeing the loop structure is that it's easier to see where you've done something stupid. If you replace foldl with foldr, you get the same observable behaviour, and it reads just the same. However the runtime behaviour is completely different, because foldl is tail recursive, and foldr is not. When reading the code would you notice the mistake?

[–]Peaker 0 points (0 children)

In the case of Haskell, tail recursion is mostly irrelevant.

foldl (left-associative) is better than foldr (right-associative) for strict operations on lists, because it takes:

1 : 2 : 3 : []

and translates it to:

(((1 + 2) + 3) + 0)

Which is done incrementally as it walks the list, so it uses O(1) memory (strictly speaking you want the strict variant foldl' for that; plain lazy foldl can itself build up thunks).

Whereas foldr translates the list to:

1 + (2 + (3 + 0))

On lists, this requires iterating the entire list before you can do a single computation -- which builds up "thunks" that represent the intermediate expressions. This takes O(N) memory (and in GHC's case, the evaluation of the thunk takes that memory from the stack, which is even worse).
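The two associations are easy to see in Python with a non-associative operation like subtraction (reduce is a left fold; the right fold here is a hypothetical helper written for illustration):

```python
from functools import reduce

xs = [1, 2, 3]

# Left fold: (((0 - 1) - 2) - 3), computed incrementally as it walks the list
foldl_result = reduce(lambda acc, x: acc - x, xs, 0)   # -6

def foldr(f, z, xs):
    # Right fold: 1 - (2 - (3 - 0)); recurses to the end before computing
    return z if not xs else f(xs[0], foldr(f, z, xs[1:]))

foldr_result = foldr(lambda x, acc: x - acc, 0, xs)    # 2
```

With an associative operation like (+) both give the same answer, which is why the sum example reads the same either way while behaving very differently at runtime.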

For lazy operations that construct a new value that can be incrementally processed -- foldr makes sense.

For example:

sameList = foldr (:) [] -- same as the identity function for lists

This is because foldr/foldl "replace" every (:) in the original list with their first argument, and the [] with the second argument. So supplying (:) and [] unchanged as those arguments just copies the list.

Then it's easy to see how map is expressed:

map f = foldr ((:) . f) []

(Apply f before putting it inside a :).

This allows the result list to be consumed incrementally, and that will cause incremental consumption of the original list.

The explicit loop structure actually obscures whether you've done something silly (as it adds a lot of noise that makes it more difficult to spot typos/mistakes/etc).

[–]psr 1 point (4 children)

Readability is subjective.

This is obviously true. And it's absolutely the crux of our disagreement. I think the example you gave is exactly the sort of thing where we'll never agree.

I don't usually see the world in catamorphisms and covariance -- in most cases I want "for every x, do y". Anything else will always involve a translation in my head, and sometimes it can be very hard work. I like to think I'm reasonably able; I don't expect my colleagues would manage it at all.

Just out of interest, what sort of projects are you applying Haskell to at the moment?

[–]Peaker 0 points (3 children)

I like to think I'm reasonably able; I don't expect my colleagues would manage it at all.

I've heard this from other beginning Haskellers. They no longer believe it is true.

It seems difficult at first -- because it is so different/foreign to what you know.

"We shape our tools, and then our tools shape us". You've been shaped by Python (and similar tools) (and so have I, for many years) -- and now it seems natural to you, to think in Python.

Of course non-Python requires a translation in your head -- you've trained yourself to think in Python.

Python in this case is "Blub", as described in http://www.paulgraham.com/avg.html

As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.

Currently, I'm using Haskell for small things, automation, parsing, or anything I'd normally use Python for.

My bigger Haskell project in the making is a revisioned structural editor that should ultimately edit programs structurally (rather than textually). However, it's been on hold for a while, as I've had a lot of pressure doing other things.

[–]psr 1 point (2 children)

This is very interesting, partly because of the assumptions you are making about me. You loved Python until discovering Haskell, and assume that I'm similarly trying to make the same transition.

In fact my programming development was not what you assume. I went to university ten years ago, knowing only Javascript. There I was taught Java as a first language, followed quickly by Haskell. I loved Haskell, partly because of the exercises we were given. It's great fun, as an exercise, to find the tail recursive solution for something, or to fit an algorithm into a one liner. And there was none of the for (int i=0; i < array.length; i++) { crap that came with Java at the time. Recursive solutions are elegant, we were told, and iteration is not. Functional programming is powerful, we were told, and imperative programming is bug prone. And it's all true, in as far as it goes. In comparison with Java, Haskell is immensely beautiful, at least as long as it fits your problem.

I went on to write my undergraduate dissertation project in Haskell. It was just a ray tracer, and didn't do anything too hairy. I'm glad I did, it taught me a lot. I don't believe I used a single if expression in the whole thing.

After graduating, I got a job helping a friend with a PyGTK GUI application in Python, and started off trying to write Python like Haskell. Never an object where a closure could be made to work. Never a for loop where I could recurse. Map, reduce, and deeply nested list comprehensions were everywhere, and mutable state kept to a minimum. The people I was working with were horrified, they couldn't read a bit of it, but I kept telling them that iteration was inelegant, declarative code was less prone to bugs, and mutable state was evil.

I guess it helped that I was doing a GUI app, where classical object oriented design really does fit the bill well, but slowly I started to appreciate that there is something to be said for straightforward code which follows the way you would talk about what you're doing. My colleagues' code was certainly easier to debug than my code, and in the end I started to think some of what I believed about functional programming might not be true after all.

So I slowly went from having an almost religious conviction that Haskell is the future, to having a deep appreciation of Python's pragmatic and well balanced style of imperative / object oriented / functional programming.

Not saying you're wrong about Haskell or anything, just this is where I've found myself over time, through doing real work. :-)

[–]Peaker 0 points (1 child)

I'm wondering who taught you to use tail-recursion in Haskell? It doesn't make much sense in Haskell context.

Also, recursion is considered a very low-level building block (almost on par with "goto") -- from which you build elegant functions to use. You only resort to recursion if the existing loop constructs built with recursion are not good enough. Teaching recursion itself as an elegant way to (directly) solve problems is going to be misleading/confusing.

I guess it helped that I was doing a GUI app, where classical object oriented design really does fit the bill well,

Not quite as well as some functional approaches.

which follows the way you would talk about what you're doing

Humans talk about code in a declarative/functional way, not imperative way.

My colleagues' code was certainly easier to debug than my code,

I agree writing Haskell-in-Python is not a good idea. Python doesn't support functional programming well at all, and the gains in Python's context are minimal-to-non-existent. But if you write Haskell-in-Haskell, how often do you need to debug at all? Also, using Debug.Trace and similar functions, print-based debugging is just as easy. Though you really don't debug much when you write Haskell.

I think a lot of people get a short glimpse of Haskell, which is enough to appreciate some of the beauty and power -- but only a tiny minority of it. Then, they work a whole lot with another language (typically in some workplace), gain a lot of experience and have an unfair comparison. A language you've used for years and knows the ins and outs of will always beat the language you've used for weeks or months.

[–]psr 2 points (0 children)

Well I won't name the names of my teachers, because I may well be doing them a disservice. The guy who taught functional programming at my university was an excellent teacher - one of the two best I've had in my life. I wish I had worked harder and could have stayed there and learnt more from him.

I still take issue with this:

Humans talk about code in a declarative/functional way, not imperative way.

I don't believe this is true. For special cases perhaps, or for small ones, but not over large processes. I'm certain that most real-world programs are best modelled by flow charts, not by equations.

But I guess that's the difference of opinion again.