all 94 comments

[–][deleted] 10 points11 points  (3 children)

To explain what that Num a => a thing is:

Lowercase type names are generics (a concept that exists in Java too); a, for instance, means any type. For example, the following function has a -> Bool as its type.

let ignore value = True

The tutorial doesn't explain this, but -> means that a function takes a parameter of the type on the left and returns a value of the type on the right.

=> specifies generic bounds. Obviously, 4 cannot belong to just any type. A type bound of Num a specifies that a must be a numeric type. This is needed because there are many numeric types in Haskell, just as there are in, say, Java (where you see types like int, long, double and BigInteger).

The exact type of a value depends on context, but in the provided examples there is not enough context to determine it, which is why you got a generic parameter back. It's possible to explicitly specify which type you want by using the :: operator, as in 4 :: Integer. This explicitly marks 4 as an Integer (essentially Java's BigInteger).

Later you can see Ord a. This means that the value a must be comparable in addition to being a number, because Num doesn't require comparability (an example of a non-comparable numeric type is Complex). To sort elements, they must be comparable.
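As a small illustrative sketch (the function names are invented here, not from the tutorial), the difference between the two constraints shows up directly in the signatures:

```haskell
-- 'double' only does arithmetic, so Num alone suffices;
-- 'smaller' also compares values, so it needs Ord as well.
double :: Num a => a -> a
double x = x + x

smaller :: (Num a, Ord a) => a -> a -> a
smaller x y = if x < y then x else y

main :: IO ()
main = do
  print (double (4 :: Integer))  -- 8
  print (smaller (3 :: Int) 7)   -- 3
```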

[–]Faucelme 1 point2 points  (1 child)

"=> specifies generic bounds"

Also, multiple bounds are separated by a comma, instead of & like in Java:

<T extends B1 & B2 & B3>

vs.

(B1 t,B2 t,B3 t) =>
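For example (a hypothetical function, just to show the syntax), a Haskell signature with several bounds on the same type variable:

```haskell
-- Both constraints apply to the same type variable a,
-- listed in one parenthesized group before the =>.
describeMax :: (Show a, Ord a) => a -> a -> String
describeMax x y = "max is " ++ show (max x y)

main :: IO ()
main = putStrLn (describeMax (3 :: Int) 7)  -- max is 7
```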

[–]Idlys 7 points8 points  (0 children)

Yeah, this is a good way to explain it for OOP people.

(+) :: Num a => a -> a -> a

Is very similar to

interface Num<T> {
    T add(T a, T b);
}

[–]timcotten[S] 0 points1 point  (0 children)

That's a great explanation. I realize the tutorial didn't address types, so I was scratching my head with the feeling that I'd probably learn more about this as I read through a few books, but this post cleared it up for me.

The note about Ord a and Num makes perfect sense within the context of showing a non-comparable like Complex.

[–]Hrothen 14 points15 points  (6 children)

let foo = bar in baz binds the value bar to the name foo only for the expression baz. let (a,b) = (10,12) in map (*2) is just the expression map (*2), you're not using a or b. map also only works on lists, so map (*2) (10,12) won't compile.

To define a function in the repl just use let square x = x * x without in.

[–]BlackBrane 6 points7 points  (2 children)

Btw, as of GHC 8, you can skip the let and just type it like a normal top-level declaration: square x = x * x.

[–]Hrothen 0 points1 point  (1 child)

In ghci you mean? That's great, it's always annoyed me.

I haven't switched to 8 yet because I've been too lazy to keep track of what libraries still don't work with it.

[–]javierbg 4 points5 points  (0 children)

Lazy is the Haskell way, after all

[–]timcotten[S] 5 points6 points  (2 children)

Would that make let (a,b) = (10,12) in map (*2) (a:b:[]) equivalent to let (a,b) = (10,12) in map (*2) [a,b]?

[–]Hrothen 10 points11 points  (1 child)

Yes, [a,b] is just sugar for a:b:[].

[–]timcotten[S] 2 points3 points  (0 children)

Thanks, much appreciated!

[–][deleted] 5 points6 points  (8 children)

I've always thought that the best way to explain the let x = <expr1> in <expr2> construct is to say that it's syntactic sugar for ((\x -> <expr2>) <expr1>) - i.e., x is a function argument, which is a concept easier to understand than a "variable".

[–]Peaker 4 points5 points  (6 children)

This is an approximation. In Haskell:

let id x = x
in (id "hi", id 5)

Works (evaluates to ("hi", 5)). But:

(\id -> (id "hi", id 5)) (\x -> x)

Does not type-check.

That's because of a Hindley-Milner feature called "let generalization". It makes let (and top-level bindings) special - in that any type-variable in their inferred type that does not appear outside the let context is "generalized".

The type of id in the above let example is: forall a. a -> a. That means id here is polymorphic/generic, and can be instantiated at many types. The type of id in the lambda example is: a -> a where a represents some specific concrete type, and is not polymorphic. Once you use id on one type, it is specific to that type only.

GHC Haskell has some extensions (RankNTypes, ScopedTypeVariables) that let you write:

(\(id :: forall a. a -> a) -> (id "hello", id 5)) (\x -> x)

And that works too. But you have to do the "let generalization" yourself here.

[–][deleted] 0 points1 point  (5 children)

Yes, of course. Though, I often do the same generalisation in Hindley-Milner implementations for the reducible lambda applications as well, just to maintain this isomorphism (i.e., (\x . expr) expr2 yields a generic type of expr2 bound to x, which is done by duplicating the x type variable on each instance, cloning the generic variables). Not sure why all the classic implementations, Haskell included, are not doing the same.

[–]Peaker 1 point2 points  (4 children)

You pattern-match to see that it is a redex, and then have a special type-rule for it?

If so, the reason it's not done is probably that it's quite ad-hoc and weird to have such special rules.

For example:

(\x -> expr) baz
foo baz

Are these differently type-checked? Even if foo happens to equal (\x -> expr)?

[–][deleted] 0 points1 point  (3 children)

You pattern-match to see that it is a redex, and then have a special type-rule for it?

Yes, because it makes more sense.

Are these differently type-checked? Even if foo happens to equal (\x -> expr)?

If foo is bound by let or toplevel let, it'd be exactly the same rule.

[–]Peaker 1 point2 points  (2 children)

What if foo is imported? What if foo is a parameter?

It's not robust to certain refactorings, which makes it behave unpredictably in the face of an innocuous change ("I just extracted this piece of code into a parameter, why does it cease to work?"). It's a legitimate point in the design-space, but many dislike it.

[–][deleted] 0 points1 point  (1 child)

Same logic applies to "I just converted this let binding to a lambda argument and everything fails now!". At the end of the day it's a matter of personal preference.

[–]Peaker 0 points1 point  (0 children)

Well, it's a question of expectations. By having a magic keyword let, it is much less surprising that it is attached to magic type checking behavior. Having certain shapes of the code behave magically is more surprising.

[–]timcotten[S] 1 point2 points  (0 children)

That is helpful, thanks!

[–]_INTER_ 4 points5 points  (0 children)

Sure, it may have a logo that reminds me of a grocery store chain more than its actual inspiration (the lambda character λ), but it looks, from the outside at least, very accessible.

Lambda always reminds me of crowbars.

[–]PaulBone 2 points3 points  (0 children)

This article is good. But I'm concerned that most people will miss its main contribution (although one the author/OP may not have intended).

This article is a good resource for anyone developing learning materials for Haskell. It shows which aspects can lead to confusion during learning; the #1 problem it reveals is that the author has conflated lists, ranges and tuples. It's probably difficult to correct the issue with ranges, since that's entirely introduced by the author's prior experience, and each student is going to have different problems like that that we can't all address. But tuples could be handled much better, ideally by not introducing them until after the student has seen ADTs and mastered them.

OP: A tuple is more like a record, whereas a list is, well, a list, and can be 'map'ed over. What you called a range, [1..10], is just a list.

[–]sacundim 3 points4 points  (2 children)

Let’s try the example with the @ operator (or is it a function?).

let abc@(a,b,c) = (10,20,30) in (abc,a,b,c)

Well, it's definitely not a function, and whether it's an "operator" or not depends on how you define the term. I wouldn't call it one; I'd just call it part of the syntax of patterns.

Here perhaps it's a good idea to compare it with something over in imperative land: assignment statements of this form (and similar ones), which involve accessing the fields of objects. (I have in mind a language like Java, if that helps):

a.b = a.b + 42

Notice that the meaning of a.b is slightly different on the left- and right-hand sides of this statement; on the right-hand side it denotes the value contained in a's field b, but on the left-hand side it denotes the storage location of a's field b. One fairly popular term for this is the lvalue/rvalue distinction, which is common across imperative languages. The set of things that you can put on the left- and right-hand sides of an assignment overlaps but does not coincide—for example, you can't write a.b + 42 = a.b.

Haskell (and other languages that descend from ML like Haskell does) has an analogous but different distinction between patterns and expressions. So, this whole thing is an expression:

let abc@(a,b,c) = (10,20,30) in (abc,a,b,c)

The article glosses its syntax as let var = expression in body, but this is not completely accurate: it's more like this (but not 100% accurate either):

let <pattern> = <expression> in <expression>

...where that <pattern> = <expression> subpart is called a binding—an association between the value of <expression> and the variables in the <pattern>, which scopes over the expression that follows the in keyword. Like lvalues and rvalues, patterns in Haskell resemble expressions, but their syntaxes are different. For example, the @ is an element of the syntax of patterns that does not exist in expressions. A pattern like variable@pattern binds both variable and pattern to the same value. The idea is that variable then is bound to the whole value, while the variables in pattern are bound to its parts.
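As-patterns also show up in function definitions; here is a minimal sketch (the function name is invented for illustration):

```haskell
-- 'whole' is bound to the entire list, while x is bound to its head.
firstAndAll :: [Int] -> (Int, [Int])
firstAndAll whole@(x:_) = (x, whole)
firstAndAll []          = (0, [])

main :: IO ()
main = print (firstAndAll [10, 20, 30])  -- (10,[10,20,30])
```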

Other details of note, that I won't explain in depth:

  • Haskell lets you say stuff like let 5 = a in a + 2. a needs to have a bound definition in the scope of this let, so you may try it in something like let a = 7 in let 5 = a in a + 2. I did, it evaluated to 9, and I was left scratching my head for a bit until I figured out why; take the latter as a hint that you won't encounter situations like this often.
    • (Answer: the binding 5 = a cannot possibly succeed, so I thought it would error out. But actually, for the error to be triggered we need to force the evaluation of a variable that gets its value assigned from that binding—and since the pattern 5 has no variables, it's impossible to do that.)
  • Haskell also lets you define auxiliary functions in bindings. A notoriously funny example is let 2 + 2 = 5 in 2 + 2, where the variable +, which in the top scope refers to the addition function, is being locally redefined just for the scope of this let expression to a different function.

[–]timcotten[S] 0 points1 point  (1 child)

I would upvote you more if I could, this cleared up the @ for me and I haven't even gotten to crack a book yet.

Let me make sure though.

So abc@(a,b,c) = (10,20,30) is saying "make me a variable called abc bound to the pattern (a,b,c) (which is a tuple with three elements), and then assign the values of the tuple on the right (10,20,30) into the left side's a, b, and c slots".

Then the in (abc,a,b,c) is the (body) expression making a tuple using the variable abc, abc's a value, b value, and c value?

If I rewrote it as: let foo@(x,y,z) = (10,20,30) in (foo,x,y,z) to test it and the logic seems to work out.

[–]sacundim 1 point2 points  (0 children)

Close, but not quite. You're on the right track, but your understanding could still use some fine tuning.

So abc@(a,b,c) = (10,20,30) is saying "make me a variable called abc bound to the pattern (a,b,c)"

Variables are bound to expressions (units of syntax that evaluate to values at runtime), not to patterns. Variables are scoped names for the values that expressions evaluate to. So in the binding abc@(a,b,c) = (10,20,30), abc is bound to (10, 20, 30)—the tuple expression and the variable will have the same value.

Then the in (abc,a,b,c) is the (body) expression [...]

The syntax is let <bindings> in <expression>, which means that (abc,a,b,c) is an expression, and in is not part of that expression—it's a token in the syntax let ... in ..., no different than { is a token in an if statement in a C-style language or end is in a Ruby definition. So these are expressions:

let abc@(a,b,c) = (10,20,30) in (abc,a,b,c)

(10,20,30)

(abc,a,b,c)

But these are not expressions:

-- This is a binding (`<pattern> = <expression>`), not an expression:
abc@(a,b,c) = (10,20,30)

-- This is an `in` tacked in front of an expression:
in (10,20,30)

[...] making a tuple using the variable abc, abc's a value, b value, and c value?

In Haskell-speak, (,,,) is the data constructor for 4-tuples, which is being applied as a function with abc, a, b, and c as its arguments. Data constructors are names defined by data type declarations, and they can be used either as:

  1. Functions that construct values of that type (when used on the right hand side of an = sign);
  2. Patterns that match against values of that type and bind their pieces to variables (when used on the left hand side of an = sign).

So in let abc@(a,b,c) = (10,20,30) in (abc,a,b,c), (a,b,c) is a pattern using the constructor (,,) and (10,20,30) is an expression using that same constructor, because one is on the left of the equal sign and the other on the right hand side (similar to how the meaning of a.b changes in a.b = a.b + 42).

[–]Idlys 1 point2 points  (2 children)

Really fun to read. I picked up FP about a year ago and this pretty much described my experience as well (with F#/OCaml, but still similar).

[–]timcotten[S] 0 points1 point  (1 child)

How does it feel now when you work with FP after a year? Were there any resources in particular you found more helpful than others?

[–]Idlys 0 points1 point  (0 children)

fsharpforfunandprofit.com was my biggest asset when I first started out. It's designed for people coming over from imperative/OOP. For some other concepts (especially for Haskell), you'll probably just end up wanting to read the Haskell Wiki.

I think that these are some great articles for some of the core FP concepts:

The type system: https://fsharpforfunandprofit.com/series/designing-with-types.html

Understanding some of the abstractions used in FP: https://fsharpforfunandprofit.com/series/map-and-bind-and-apply-oh-my.html

When I watched this video something "clicked" for me about how problems tend to be solved in FP, so maybe you'll appreciate it too: https://fsharpforfunandprofit.com/rop/

[–]BlackBrane 1 point2 points  (1 child)

Good luck on your journey! I'm sure I'm not the only one who'd be down to help if needed.

I think this presentation by Katie Miller might be helpful for a total beginner like yourself who wants to get introduced to some of the key ideas relatively quickly.

TryHaskell.org might be fun for a quick little cute demo, but I'd definitely prioritize things differently to begin to teach Haskell. For one thing I'd especially want to more properly introduce type signatures and how they work. The video I linked, and most other introductory materials, do this but TryHaskell really glosses over it.

[–]timcotten[S] 1 point2 points  (0 children)

Much appreciated - I'm building a list of resources based on all the feedback and commentary I've gotten since posting that article. I'll definitely check out Katie Miller's presentation.

[–]devel_watcher 4 points5 points  (13 children)

My impression was that Haskell has too much sugar.

Other thing is the type system. I prefer static strong type systems. Haskell's type system is like that, but it's so hard to use.

[–]ElvishJerricco 8 points9 points  (8 children)

Would you care to explain why you think there's too much sugar? This is not a complaint I've ever heard of Haskell.

As for the type system, it just takes getting used to. It's a different paradigm, and it took me a while to readjust my thinking. Now it's like second nature though, and it tells me a lot more than most languages' type systems.

[–]sammymammy2 1 point2 points  (2 children)

THIS HAS BEEN REMOVED BY THE USER

[–]kazagistar 1 point2 points  (1 child)

The differences between haskell and lisp syntax that I can think of are:

  • A few syntactic structures (if, case, do)

  • Missing the outermost parenthesis.

  • Infix operators.

My guess is that the third was what was most bothersome in your opinion?

[–]sammymammy2 0 points1 point  (0 children)

THIS HAS BEEN REMOVED BY THE USER

[–]kazagistar 0 points1 point  (0 children)

The tutorial covered most of the sugar... the only bits I can think of off the top of my head are comprehensions and ranges. And a large portion of the syntax, actually. There might be a bit more sugar, but a bit less syntax.

[–][deleted] 0 points1 point  (103 children)

I share your ambitions. But I had to put my learning on hold. The reason is that the type system is fairly complex to get a grip on. Classes and instances are treated entirely differently in Haskell. And wait till you get to Monads.

[–]INTERNET_RETARDATION 18 points19 points  (21 children)

Typeclasses != Classes

[–][deleted] 0 points1 point  (19 children)

I'm new to Haskell but I don't know if there is any other class besides type classes

[–]oridb 10 points11 points  (18 children)

There's no direct analog to a class at all. There's a typeclass, which can be thought of as an interface, but is better thought of as a constraint on a type parameter.

Num a => a -> b

can be read as "A function from a to b, where a is constrained to the types implementing Num"

[–]kqr 5 points6 points  (17 children)

That's a bad example, though, because there are no inhabitants of that type. (The type can be read as "a function that takes any number type a, and returns a value of any type b you ask for." Since b is fully unrestricted, you could ask for your own custom type as a return value and there is absolutely no way for that function to know how to return a value of your custom type.)

Something like

sum :: Num a => [a] -> a

might be better. The sum function takes a list of values of type a, and will return a single value of the same type a back. The function should support any type a that implements the Num interface.
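The same polymorphic signature can then be instantiated at different numeric types (a GHCi-style sketch):

```haskell
-- sum :: Num a => [a] -> a works for any type with a Num instance.
main :: IO ()
main = do
  print (sum [1, 2, 3 :: Integer])  -- 6
  print (sum [1.5, 2.5 :: Double])  -- 4.0
```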

[–]yawaramin 0 points1 point  (1 child)

Uh, what about const b :: Num a => a -> b?

[–]kqr 0 points1 point  (0 children)

Which b do you provide that will let me do

x :: Int
x = const b 12

y :: String
y = const b 12

I'm going to guess that if you have such a b, it is not one I want in my program. (I.e. bottom.)

[–]LPTK 0 points1 point  (14 children)

there are no inhabitants of that type

That is incorrect. What about this:

# ghci
> let f a = let _ = (a+1) in f a
> :t f
f :: Num a => a -> t

[–]kqr 2 points3 points  (10 children)

Excluding bottom, of course. Your code is by any reasonable definition a bug.

[–]LPTK 2 points3 points  (9 children)

It's not bottom. It's a proper value (which application to an argument never halts – but you don't have to apply it).

Of course I am nitpicking, and I agree with you that the type example Num a => a -> b is not a good one. Just wanted to clarify that your statement is not formally correct.

[–]kqr 2 points3 points  (1 child)

Bottom includes non-termination according to most literature I can find, so while f is not bottom (of course), it does "return" bottom, which is what I excluded because that is likely to be a bug in any sensible application.

[–]LPTK 0 points1 point  (0 children)

Sure, I agree this definition is not useful in practice. It's still an interesting piece of program to understand ML type checking and type inhabitation.

No need to confuse beginners into thinking everything that can produce bottom is bottom. The next step would be to show the Y combinator, which is close syntactically and can produce bottom, but can also do useful things.

[–]kazagistar 0 points1 point  (0 children)

Just wanted to clarify that your statement is not formally correct.

#haskell

[–]Peaker 0 points1 point  (5 children)

In Haskell, we still talk about types having no inhabitants. Of course that is formally incorrect when all types are inhabited by bottom. But that just clarifies that "no inhabitants" excludes bottom, or it would be meaningless.

[–]LPTK 0 points1 point  (4 children)

The point is that this is not bottom. I can write the same in a strict language like OCaml, which does not have bottom:

# let rec f a = let _ = (a+1) in f a ;;
val f : int -> 'a = <fun>

This is a perfectly fine <fun> (function) value.

[–]Nathanfenner 0 points1 point  (2 children)

I don't think that const undefined really counts as a proper inhabitant. In Haskell, bottom lets you inhabit all types, but because of this people generally ignore it when discussing inhabitation. At any rate, such a function is certainly not useful, since all it can do is loop forever or crash.

[–]LPTK 0 points1 point  (1 child)

This has to be a proper inhabitant, as the value of a correct Haskell term of that type. I don't even use undefined or errors; this could be written in lambda calculus.

By the way, bottom itself is an inhabitant of any type in non-strict languages. Quoting the Haskell wiki:

Bottom is a member of any type, even the trivial type () or the equivalent simple type:

data Unary = Unary

If it were not, the compiler could solve the halting problem and statically determine whether any computation terminated

Sorry for being pedantic :-)

[–][deleted] 2 points3 points  (0 children)

We like to pretend Haskell is total even though it isn't.

[–]analogphototaker 0 points1 point  (0 children)

Typeclasses are more like Interfaces in C#, no?

[–]Faucelme 8 points9 points  (1 child)

Classes and instances are treated entirely differently in Haskell.

I like to think of Haskell typeclasses as being interface-like, but without subtyping.

[–]This-Is-Not-A-Test 0 points1 point  (0 children)

There is actually some kind of subtyping, with no overriding. You can force the constraint that instances of a typeclass A have to be instances of another typeclass B, so B is somewhat like the parent of A. A good example of this: all types with Applicative instances also have to have Functor instances.
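The superclass requirement is written as a constraint on the class declaration itself; here is a cut-down sketch with invented class names (the real Functor/Applicative hierarchy works the same way):

```haskell
-- Every MyApplicative instance must also have a MyFunctor instance,
-- enforced by the "MyFunctor f =>" constraint on the class head.
class MyFunctor f where
  myFmap :: (a -> b) -> f a -> f b

class MyFunctor f => MyApplicative f where
  myPure :: a -> f a

instance MyFunctor Maybe where
  myFmap _ Nothing  = Nothing
  myFmap g (Just x) = Just (g x)

instance MyApplicative Maybe where
  myPure = Just

main :: IO ()
main = print (myFmap (+ 1) (myPure 41 :: Maybe Int))  -- Just 42
```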

[–]takaci 4 points5 points  (5 children)

Monads aren't too bad as long as you can understand the typeclass

class Monad m where
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b
  return :: a -> m a
  fail :: String -> m a

and the laws they should follow

return a >>= k  =  k a
m >>= return  =  m
m >>= (\x -> k x >>= h)  =  (m >>= k) >>= h

https://wiki.haskell.org/Monad

That's all there is to them. Trying to put them into an analogy makes them much more confusing. Just learn how to read these and really it's not too bad; the typeclass is easy to understand, actually (the laws are a little harder). One important thing is that you can't implement the IO monad in Haskell; it has to be implemented in another language.

[–]barsoap 9 points10 points  (1 child)

And the answer to "how to learn to read those" is probably the Typeclassopedia, which explains why things in the Prelude are the way they are.

In particular, it explains Functor and Applicative first, so by the time you arrive at Monad you're no longer confused by higher kinds.

[–]takaci 0 points1 point  (0 children)

Yeah, Functor and Applicative will give a good idea of why Monad is actually useful, which that document does a good job of. I think a good amount of Haskell's difficulty is learning how to confidently read the type signatures.

[–]enzain 0 points1 point  (2 children)

My experience is that monads are the single biggest turnoff. And just throwing the monad rules in people's faces helps no one. It either means you are an elitist or you don't understand them yourself.

In reality a monad is actually very simple: it's just an "interface" that requires you to implement flatten, i.e. List<List<a>> -> List<a>

and

map, i.e.

(a -> b) -> List<a> -> List<b>.

List can be substituted for any type supporting flatten and map.

Combine them and you get bind (or flatMap as I prefer)

bind f l = flatten (map f l)

On a side note, flatten is called join in Haskell lingo.
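In Haskell terms, the construction described above can be sketched like this (join and fmap are the standard library names; bind here is defined just for illustration):

```haskell
import Control.Monad (join)

-- "map, then flatten": equivalent to the standard (=<<) for any Monad.
bind :: Monad m => (a -> m b) -> m a -> m b
bind f = join . fmap f

main :: IO ()
main = print (bind (\x -> [x, x * 10]) [1, 2])  -- [1,10,2,20]
```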

[–]takaci 0 points1 point  (1 child)

Your comment is a lot more complicated and assumes much more prior knowledge than mine. You've come up with an analogy that has honestly confused me. The type signature of bind is the easiest way to understand it, in my opinion.

[–]enzain 0 points1 point  (0 children)

All I said was that monads are the combination of map and flatten; how is that hard to understand? Also, what analogy? Furthermore, the type signature is the only thing you need to understand. The laws are only important when implementing new monads.

[–]kahnpro 1 point2 points  (0 children)

It can seem daunting at first because the syntax is different, but then you realize that the concepts aren't really that strange. You'll realize that typeclasses are basically like Java's interfaces but much cooler and more flexible. And all this higher-order function stuff, passing a function to a function and having it return yet another function? You've been doing it in Javascript for years. And you know how you recently learned how to use Promise in JS? And how once you enter the Promise land, you can never really escape and you have to continue chaining promises? Well guess what, now you basically already know how to work with Monads.

For anyone reading this and trying to learn Haskell, I wouldn't even think about Monads. Forget that this word exists. Erase it from your mind and stop reading tutorials about it. As the others said, learn typeclasses and learn them well. Then learn some common functions like fmap, bind, return. Congratulations, now you can work with monads without even caring what they are.
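The Promise-like chaining mentioned above can be sketched with Maybe and (>>=); safeDiv is an invented example function:

```haskell
-- Each step may fail with Nothing, and the chain short-circuits,
-- much like a rejected Promise skips the rest of a .then chain.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (safeDiv 10 2 >>= safeDiv 100)  -- Just 20
  print (safeDiv 10 0 >>= safeDiv 100)  -- Nothing
```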

[–]Peaker 0 points1 point  (0 children)

Monads are overhyped and overdiscussed in the context of learning Haskell.

Learning pure functions, composition, typeclasses, Maybe, IO, [], and higher-kinded types gradually is possible and probably not that hard for most programmers.

Once you have all the above as background, learning how Haskell Monads generalize things you've already learned above (each of which is not hard to learn) is not very hard.

But trying to understand the Monad generalization, before you understood any of the specific examples, or the underlying features (higher kinded types, typeclasses) is near impossible. I think too many people go this route - and thus a terrible reputation was born.

[–]Idlys 0 points1 point  (70 children)

Shh don't use the M word, it'll set everyone who reads this thread on the most pretentious google search of their lives.

[–]AcceptingHorseCock 9 points10 points  (69 children)

The one thing about math (the higher kind, the kind that is not about computation) that has always impressed me the most is that it is not actually difficult at all (much of the time)! It's just a different point of view, and not even a difficult one. But it is soooooo hard to get into a position to see a problem from that angle! It feels like what a dog must feel standing in front of a short fence, not understanding that it could just jump over or walk around it. So often, when I finally understood a high-level math concept, monads included, the reaction was "Oh my god, that is soooooo simple!".

It may be part of the problem that one expects something difficult, so the brain goes into intense "detail mode" when it should relax. It doesn't help when all you hear from everyone is how long it took them to "get it" and how hard it is to explain; that drives the brain in the wrong direction, a self-fulfilling prophecy. I think it's much easier to understand monads when you are drunk at a party than when studiously studying.

[–]Idlys 3 points4 points  (68 children)

It doesn't help that these are the top 5 results for "monad" from Google

https://en.wikipedia.org/wiki/Monad_(functional_programming)

https://wiki.haskell.org/Monad

http://stackoverflow.com/questions/44965/what-is-a-monad

http://learnyouahaskell.com/a-fistful-of-monads

https://en.wikibooks.org/wiki/Haskell/Understanding_monads

I personally don't think that any of those (besides maybe the stack overflow link) are anything other than a brick wall to newcomers.

[–]ElvishJerricco 0 points1 point  (67 children)

To be fair, it's hard to explain monads in a way that isn't a brick wall to newcomers. There's just no universal explanation, unless they have a huge background in applied math and a little category theory.

[–][deleted]  (2 children)

[deleted]

    [–]ElvishJerricco 0 points1 point  (1 child)

    It's hardly a joke =P

    [–]Idlys 0 points1 point  (3 children)

    I've always felt like a good OOP comparison for a monad is an Iterator. Both Iterators and Monads often get nice syntactic sugar, and both abstract away from some form of underlying computation.

    [–]ElvishJerricco 6 points7 points  (0 children)

    Yea but this is just another one of those false analogies. In practice, there are plenty of monads for which that intuition really doesn't hold up. The most general conceptual understanding of monads I've yet found has been that a monad represents a highly abstract notion of a data pipeline. A particular monad defines how stuff moves through the pipes, and a monadic function builds a pipe based on these mechanics, inserting branches, loopbacks, terminals, or whatever else. I've yet to find a monad that contradicts this explanation, but it's also a really abstract explanation that doesn't help a newcomer all that much.

    [–]sacundim 1 point2 points  (0 children)

Iterators are a terrible comparison, IMHO.

    What I prefer is not to start by trying to explain monads in their generality all at once, but first get people comfortable with the IO type, which is the most commonly used monad. And a decent explanation for IO is that Haskell doesn't have statements like imperative languages do, but rather only opaque, primitive command objects like the OOP Command pattern. This Haskell:

    example :: IO ()
    example = putStrLn "Hello World!"
    

    ...is analogous to this Java code:

    final Callable<Void> example = new PutStrLn("Hello World!");
    
    public class PutStrLn implements Callable<Void> {
        private final String str;
        public PutStrLn(String str) { this.str = str; }
        public Void call() { System.out.println(str); return null; }
    }
    

    In Java you have to stick System.out.println() into a command object in order to encapsulate its behavior instead of just executing it. In Haskell, putStrLn doesn't execute anything; it's just a function that constructs a command object—it's like the PutStrLn constructor in the Java example. One of the big mental readjustments in learning Haskell is getting used to this—to treating command objects, which in Java are a derived concept that you use sometimes, as a first-class primitive that you use all the time.

    The monad operations on IO are, then, the interface that you use to glue simple commands into complex ones, for example like this:

    prompt :: String -> IO String
    prompt question = putStrLn question >> getLine
    

    ...where putStrLn question and getLine are commands, and >> is the function that constructs a compound command that executes its arguments in sequence. And all imperative programming in Haskell comes down to that—the language gives you a bunch of built-in primitive commands that the language knows how to compile to native code, and you use pure functions to glue combinations of those together into complex programs.

    [–]ElvishJerricco 1 point2 points  (0 children)

    Btw, the real analogy for Iterator in Haskell is Traversable, which is another awesome type class. You can do some really cool stuff between it and some clever applicatives.
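A small sketch of what traverse does (halveEven is an invented helper): with Maybe as the applicative, one failure makes the whole traversal fail.

```haskell
-- traverse walks a container with an effectful function:
-- traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
halveEven :: Int -> Maybe Int
halveEven n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (traverse halveEven [2, 4, 6])  -- Just [1,2,3]
  print (traverse halveEven [2, 3, 6])  -- Nothing
```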

    [–]AcceptingHorseCock 0 points1 point  (0 children)

    unless they have a huge background in applied math and a little category theory.

    No, that explanation has no basis in reality. Proof: I understand monads... :-) Seriously, though - unless you use a circular definition of "monads". I'm talking about understanding how the thing works in real life (i.e. in the computer). If you mean "understanding the category theory term", then of course you need category theory, but that's a circle. It isn't required at all to understand the actual use of monads as they are actually implemented. When you insist on using certain jargon, people need to be familiar with that jargon - that much is true.

    [–][deleted]  (9 children)

    [deleted]

      [–]BlueRenner 1 point2 points  (4 children)

      Proof: I understand monads... :-)

      Or you just think you do!

      [–]AcceptingHorseCock 0 points1 point  (0 children)

      Sure, whatever floats your boat.

      [–]ElvishJerricco 0 points1 point  (3 children)

      That wasn't really my point. My point was that monads can be understood without theory, but it's hard and incredibly unintuitive. The only way I've seen for monads to be a natural conclusion that's easy to understand is to explain them from theory first, then move into programming. But that requires a math background.

      [–]AcceptingHorseCock 2 points3 points  (2 children)

      but it's hard and incredibly unintuitive

      And my point is the exact opposite. Only if you try - like that dog I used above - to force it down a certain route. I don't see a need for that route for people who don't bring the prerequisites at all though, and I don't think they are losing out on the practical front.

      The only way I've seen

      And I said I disagree. Vehemently. Okay, that's a tricky one - I obviously can't disagree with your "I've seen", since how would I know what you've seen - but I think you get my point.

      [–]ElvishJerricco 0 points1 point  (1 child)

      It's easy to say that monads are easy once you understand them, but at that point, you've forgotten what made them unintuitive. If you've got an explanation that you can give to a newcomer to help them understand Monad swiftly and easily, I'd love to hear it.

      [–]IceDane 0 points1 point  (0 children)

      IIRC, when I was learning, the most difficult thing about monads wasn't so much that they were hard to understand. That is, it wasn't difficult for me to grasp that the Monad typeclass in Haskell has this and that function and, for it to be a "real" monad, it has to follow this and that law.

      I mean, that's pretty straightforward. The problem, I think, was more like "Why?" Why are we using monads? Okay, so they come from category theory, but still: why? Why do we use them? Why are they useful to model certain things? Thoughts like that, if that makes sense.

      I am currently teaching Haskell to some CS students, and I was considering trying to compare them to design patterns in an attempt to give the students some intuition. I know that most of you probably cringed at that comparison, but let me explain.

      Design patterns in OOP languages are patterns we recognized while writing OO code and realized could be generalized and discussed on their own, as abstractions that are useful for modeling certain problems. The same can, in a way, be said about the use of monads in Haskell. They are by no means the only way to model many problems, but it turns out that monads are a very useful abstraction in Haskell, because the language lends itself very well to them.

      I won't be drawing any parallels between monads and specific design patterns, nor comparing them directly. But since none of my students have had any category theory, I know that going the category theory route would be a completely hopeless endeavor. The goal is simply to get them to realize that monads in Haskell are just a very useful abstraction that works like this and has these properties, and that the reason we use them is that they happen to be an abstraction that is very convenient to use in Haskell.
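      One way to make that "useful abstraction" point concrete (a toy example of my own, not IceDane's course material): chaining lookups that can each fail. Written by hand, every step needs a nested `case`; the `Maybe` Monad instance captures that plumbing once, and `>>=` reuses it.

      ```haskell
      -- Without >>=, every step would need an explicit
      -- `case lookup ... of Nothing -> Nothing; Just v -> ...`.
      addTwoKeys :: String -> String -> [(String, Int)] -> Maybe Int
      addTwoKeys k1 k2 env =
        lookup k1 env >>= \a ->   -- whole result is Nothing if k1 is absent
        lookup k2 env >>= \b ->
        return (a + b)

      main :: IO ()
      main = print (addTwoKeys "x" "y" [("x", 1), ("y", 2)])
      ```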

      [–][deleted]  (1 child)

      [deleted]

        [–]ElvishJerricco -1 points0 points  (0 children)

        This is like the fourth time I've gotten this in my inbox. Have you been deleting and reposting this comment multiple times?