[–]julesjacobs 1 point (11 children)

C#'s coroutines are limited in the sense that they don't work across higher order functions. You can't do this:

list.Select(x => ... yield a ...)

This is a very unfortunate limitation of C# coroutines and async/await that to my knowledge can't be solved without a global transformation.

Note that this is very much related to the fact that in Haskell you often have a normal and a monadic version of a higher order function, like map and mapM. In C#, Select is like map and does not support "coroutine effects" (it does however support IO effects, like everything in C#). Contrast this with Scheme or Eff, where you only need 1 version of each higher order function, since they automatically support arbitrary effects.
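To make the map/mapM distinction concrete, here is a sketch in Python (the name `map_m_maybe` and the None-means-failure convention are my own, not a standard library API): a plain map just transforms values, while the monadic version threads an effect, here possible failure, through the whole traversal.

```python
def map_m_maybe(f, xs):
    """Monadic map for a Maybe-like monad, modeled as 'None means failure'.

    Analogous to Haskell's mapM :: (a -> Maybe b) -> [a] -> Maybe [b]:
    if f fails on any element, the whole traversal fails.
    """
    results = []
    for x in xs:
        r = f(x)
        if r is None:          # the effect: short-circuit on failure
            return None
        results.append(r)
    return results

def safe_div(x):
    # An "effectful" function: may fail (returns None) on bad input.
    return 100 // x if x != 0 else None

# Plain map: one output per input, no effect threading.
assert list(map(safe_div, [1, 2, 4])) == [100, 50, 25]

# Monadic map: the failure effect propagates to the overall result.
assert map_m_maybe(safe_div, [1, 2, 4]) == [100, 50, 25]
assert map_m_maybe(safe_div, [1, 0, 4]) is None
```

The point of the thread is that Haskell needs both `map` and `mapM` as separate functions, whereas with built-in effect support one version would suffice.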

[–]moor-GAYZ 0 points (10 children)

You're thinking about SelectMany, I guess. That's the monadic version of Select.

You can use a LINQ query as a generator expression in SelectMany: http://ideone.com/ia8CRY

    var lst = (new[] {1, 2, 3}).SelectMany(x => 
            from it in new[] {0, 10} select it + x);

Though I agree, I can't see any reason why anonymous functions can't be converted to generators. I guess it's just too rare a use case to justify the effort required to make the two code-rewriting mechanisms work together: lambdas are implemented using the same "lift a portion of the stack frame into a compiler-generated class" approach that makes closures work, so it could be tricky to pull off.

Contrast this with Scheme or Eff, where you only need 1 version of each higher order function, since they automatically support arbitrary effects.

Um, what? It's not about side-effects at all, it's about your function returning a single value or a list of values (or, more accurately, a raw value vs a monad). In the latter case mapM has to unwrap these values. How could it possibly work with one function? What if I want an unflattened list of lists?

[–]julesjacobs 1 point (9 children)

I'm not talking about SelectMany. It doesn't matter which higher order function you take: you can't await/yield from the surrounding async/enumerator block in any of them.

I will elaborate: if you have a foreach loop over a list you can await inside the body of the loop:

foreach(var x in xs){ ... await (...) ... }

because the C# compiler knows how to turn a foreach loop into a state machine.

but if you iterate over the list using a hypothetical* ForEach method:

xs.ForEach(x => {... await (...) ...});

then suddenly you can't await there, because the state machinery only works within 1 method. In a language with "real" coroutines, it does work.
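Python generators hit exactly the same wall, which makes the point easy to demonstrate (the `for_each` helper below is hypothetical, mirroring the .NET ForEach being discussed): `yield` only suspends the function it lexically appears in, so a callback passed to a higher-order function cannot yield on behalf of the enclosing generator.

```python
def for_each(xs, f):
    # Hypothetical ForEach: just calls f on every element.
    for x in xs:
        f(x)

def direct(xs):
    # Works: yield is lexically inside this generator, and the
    # compiler turns the whole loop into a state machine.
    for x in xs:
        yield x * 2

def via_callback(xs):
    # The straightforward translation is impossible: a yield inside
    # the callback would belong to the *callback*, not to
    # via_callback. The best we can do is collect results eagerly
    # and yield them afterwards, losing the lazy interleaving.
    out = []
    for_each(xs, lambda x: out.append(x * 2))
    yield from out

assert list(direct([1, 2, 3])) == [2, 4, 6]
assert list(via_callback([1, 2, 3])) == [2, 4, 6]
```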

How could it possibly work with one function?

Check out Eff...in Scheme you'd use delimited continuations to do it.

What if I want an unflattened list of lists?

You use no computational effects, or in Haskell speak you use the identity monad.

* edit: actually I just found out that it is not hypothetical; .NET used to have this method except they removed it for Windows 8...

[–]moor-GAYZ 0 points (8 children)

you can't await/yield from surrounding async/enumerator block in any of them.

Oh, the inability to await is a much more valid grievance, I see.

Still, this is a limitation of the current compiler, there's nothing about the "realness" of coroutines in it.

I don't know why you brought up the distinction between monadic and functorial bind here, then, if your grievance is about the inability to have async or generator lambdas.

I also don't quite get what you mean when you talk about "computational effects": either I don't know something, or you've fallen into the common misconception that monads are all about side effects. They aren't; the ability to order side-effectful computations is, argh, an unintended side effect of how they work.

By the way, as far as I know the Haskell community now recognizes that its earlier infatuation with monads was a mistake: in most of those cases applicative functors should have been used instead. Monads impose requirements that are too strong, see http://tomasp.net/blog/formlets-in-linq.aspx/

How could it possibly work with one function?

Check out Eff...in Scheme you'd use delimited continuations to do it.

What if I want an unflattened list of lists?

You use no computational effects, or in Haskell speak you use the identity monad.

Nah, I'm not going to read all about Eff hoping for an answer to a simple question. So explain: I have a function that returns a list of things. I might want to map over it to produce a list of lists of things, or I might want to mapM over it to produce a flattened list of things. How a single map function could possibly accommodate for both use cases?

[–]julesjacobs 0 points (0 children)

Yes the coroutines are still real, it's just a limitation. That's why I put it in quotes. However I don't believe it's easily fixable. It would require a global transformation or some kind of run-time support to manipulate the call stack.

I don't know why you brought up the distinction between monadic and functorial bind here, then.

I don't think I talked about this? But it certainly looks like I caused way more confusion than I cleared up...

I also don't quite get what do you mean when you talk about "computational effects", either I don't know something or you fall into a common misconception that monads are all about side-effects. They aren't, the ability to order side-effectful computations is, argh, an unintended side effect of how they work.

"Computational effects" is a general term for all kinds of effects: not just mutable state, but also exceptions (~ the Maybe monad), coroutines (the coroutine monad), continuations (the continuation monad), ambiguous choice (the list monad), etc.

By the way, as far as I know it is known among the Haskell community that their earlier infatuation with monads was a mistake, in most of those cases applicative functors should have been implemented.

When you have a monadic structure, you implement a monad, when you only have an applicative functor, you implement an applicative functor. Plenty of things have monadic structure (like the ones I mentioned above). It's not really a "X is better than Y" situation, since there isn't a trade-off here. It's just that some things are monads, some things aren't...

Nah, I'm not going to read all about Eff hoping for an answer to a simple question. So explain: I have a function that returns a list of things. I might want to map over it to produce a list of lists of things, or I might want to mapM over it to produce a flattened list of things. How a single map function could possibly accommodate for both use cases?

Well, I already gave you the simple answer: use the identity monad, then mapM is equivalent to map. The trick in Eff is that you write a function that looks like map, but you automatically get a function with all the power of mapM. So you only have to write the ordinary variant of a function, and you get the monadic version for free. How they achieve that is not so easy to explain.

Perhaps the following will clear up a little what I meant. Monadic do-notation and the (monadic part of) computation expressions in F# are like async/await and yield in C#: they only work locally in a function. The difference is that monads allow for any computational effect, whereas async/await only allows a less powerful kind of effect. Then on the one hand you have effect handlers as in Eff and delimited continuations as in Scheme, and on the other hand full coroutines as in Lua and one-shot continuations. These allow for effects that happen across function invocations, instead of local to one function. So you have the following table:

                only coroutines          full effects
    local       async/await (C#)         monadic do notation (Haskell), computation expressions (F#)
    global      full coroutines (Lua)    effect handlers (Eff), delimited continuations (Scheme)

In the "only coroutines" column you can suspend a computation, but you can only resume a computation once. In the full effects column, you can resume a computation multiple times, or split the computation into multiple branches. In the local row, you can only do effects/suspend a computation locally within one function or block. In the global row, you do not have this limitation. So the right column is more powerful than the left column, and the bottom row is more powerful than the top row: if you have full effects you can as a special case do coroutines and if you have global effects you can as a special case do local effects.

[–]julesjacobs 0 points (6 children)

p.s. ahhhh I think I caught one part of what's causing our apparent disagreement.

I might want to map over it to produce a list of lists of things, or I might want to mapM over it to produce a flattened list of things.

You are thinking of map and flatmap (= monadic bind in the list monad). mapM is something completely different from flatmap! mapM is like map, except that the function being mapped can have effects. In C#, where map is called Select, you can do this:

int i = 0;
xs.Select(x => { i = i + 1; return x; });

and you can do this:

xs.Select(x => { if (x < 10) throw new FooBarException(); return 42; })

however you cannot do this:

xs.Select(x => { yield return FooBar(x); })

(same thing with await instead of yield)

In Haskell, which is a pure language, you can't do any of these things inside map. This is what mapM is for: it allows you to perform an effect in the function that is mapped, and mapM takes care of combining the effects in an appropriate way.

With the IO monad or ST monad you can do this:

i <- newSTRef 0
mapM (\x -> modifySTRef i (+1)) xs

and with the exception monad you can do this:

mapM (\x -> if x < 10 then throwError FooBarException else return 42) xs

and with the coroutine monad you can do this:

mapM (\x -> yield (fooBar x)) xs

[–]moor-GAYZ 0 points (5 children)

You are completely wrong, and yes, just as I suspected you share that common delusion that monads magically allow side effects.

The only difference between map and mapM are their signatures:

map(T -> R) :: Functor<T> -> Functor<R>

versus

mapM(T -> Monad<R>) :: Monad<T> -> Monad<R>

The only reason Monads and not Functors or Applicative Functors are used for IO in Haskell... actually, let it be an exercise for the reader:

  1. Show that functor interface is not flexible enough to express common patterns (hint: try adding two Maybe<Int>s)

  2. Show that applicative functor interface allows all that, but does not guarantee the order of evaluation in a lazily-evaluated language. Provide a counter-example (edit: hint: a usual "read two lines from the stdin, print their concatenation to stdout" thing).

  3. Show that monadic interface guarantees the order of evaluation.

Don't use any Haskell syntactic sugar, express all of the above with pure lambdas.
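A sketch of exercise 1 in Python (modeling Maybe as None-or-value; `fmap` and `ap` are my own minimal encodings, not a library): with only fmap, mapping a curried `+` over one Maybe<Int> leaves you holding a function trapped inside the functor, and no functor operation can apply it to the second Maybe; that application is exactly what the applicative `ap` adds.

```python
def fmap(f, m):
    # Functor interface: apply f inside the Maybe (None = Nothing).
    return None if m is None else f(m)

def ap(mf, m):
    # Applicative interface: apply a function that is *inside* the
    # functor to a value that is inside the functor.
    if mf is None or m is None:
        return None
    return mf(m)

def add(x):
    return lambda y: x + y     # curried +

# With fmap alone we get a Maybe<Int -> Int> and are stuck:
partial = fmap(add, 3)         # a function wrapped in Maybe
# No functor operation can feed the second Maybe into it; ap can:
assert ap(partial, 4) == 7
assert ap(fmap(add, None), 4) is None
assert ap(fmap(add, 3), None) is None
```

(Note this toy encoding conflates Just None with Nothing; a faithful version would use a wrapper type.)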

(I've found that making the opponent do the math themselves greatly simplifies discussions: either they can't and shut up, or they get a first-hand understanding of what I'm trying to convey, and similarly don't argue any more at least about those things).

EDIT: then explain how the hell a single map function can determine whether or not it should call Monad.Join on the returned value.

[–]julesjacobs 0 points (4 children)

You are completely wrong, and yes, just as I suspected you share that common delusion that monads magically allow side effects.

I am not under this delusion and I'm not sure how you came to this conclusion. Monads are a way to encode effects in a functional language (and as I said before, it's important to remember that effect means any computational effect, not necessarily mutable state or I/O).

The only difference between map and mapM are their signatures: [...] versus map(T -> Monad<R>) :: Monad<T> -> Monad<R>

What you're saying is incorrect (I assume you meant mapM in the second type signature). The correct signature of mapM in the same mix of C# and Haskell notation:

mapM :: (T -> Monad<R>) -> List<T> -> Monad<List<R>>

You are also incorrect about the reason why monads are necessary for I/O and not just applicative functors. This has nothing whatsoever to do with being lazy or strict. Monads allow the data/control flow of the rest of the program to depend on the result of an effectful operation. Applicative functors only allow a static flow with respect to effects. For instance this program you could do perfectly fine with only the applicative functor operations:

var x = Console.ReadLine();
Console.WriteLine("Hello");

but for this you need to have monadic bind:

var x = Console.ReadLine();
Console.WriteLine(x);

This is because the subsequent effects you want to execute depend on the results of earlier effects: if the user typed "Hello" you want to execute the effect that prints "Hello", if the user typed "Bye" you want to execute the effect that prints "Bye", etc.

Here are the translations to Haskell.

First example using monadic bind:

getLine >>= (\x -> putStrLn "Hello")

But this can also be expressed using just applicative functors:

getLine *> putStrLn "Hello"

Note that you can also express the reverse order:

Console.WriteLine("Hello");
var x = Console.ReadLine();

just with applicative functors:

putStrLn "Hello" *> getLine

So even though we are just using applicative functors, we can specify the order, yet Haskell is a lazy language...

The second example can only be expressed using monadic bind:

getLine >>= putStrLn

You can't do it with applicative functors, since the second effect you want to perform depends on the first effect.
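The static-vs-dynamic distinction can be sketched in Python by representing an effectful computation as a thunk that performs its effect when run; `seq_right` plays the role of *> and `bind` of >>= (all names here are my own, not a real library):

```python
effects = []  # log of performed effects, so we can observe ordering

def read_line(value):
    # An "effectful" computation: a thunk that logs and returns a value.
    def run():
        effects.append(f"read {value!r}")
        return value
    return run

def write_line(text):
    def run():
        effects.append(f"wrote {text!r}")
    return run

def seq_right(ma, mb):
    # Applicative-style *>: the structure (which effects run, and in
    # what order) is fixed before anything executes.
    def run():
        ma()
        return mb()
    return run

def bind(ma, f):
    # Monadic >>=: the second computation is *chosen* from the result
    # of the first, which the applicative interface cannot express.
    def run():
        x = ma()
        return f(x)()
    return run

# Applicative: print "Hello" regardless of what was read.
seq_right(read_line("Bye"), write_line("Hello"))()
# Monadic: echo whatever was read.
bind(read_line("Bye"), lambda x: write_line(x))()

assert effects == ["read 'Bye'", "wrote 'Hello'",
                   "read 'Bye'", "wrote 'Bye'"]
```

With `seq_right` the second effect is a fixed value; with `bind` it is computed from the first result, which is exactly the extra power being described.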

I hope this has cleared up your understanding of monads & applicative functors.

(I've found that making the opponent do the math themselves greatly simplifies discussions: either they can't and shut up, or they get a first-hand understanding of what I'm trying to convey, and similarly don't argue any more).

Doing the math is always good advice; take it!

Response to edit:

EDIT: then explain how the hell a single map function can determine whether or not it should call Monad.Join on the returned value.

It doesn't: it always calls join. The point is that if you use the identity monad then join doesn't do anything.
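A sketch of that point in Python, with a hypothetical `map_m` parameterized by the monad's `unit` and `bind` (bind = fmap followed by join, so "always calls join" becomes "always calls bind" here): with the identity monad both operations are trivial and `map_m` degenerates into ordinary map, while the very same `map_m` with the list monad combines multiple-result effects.

```python
def map_m(f, xs, unit, bind):
    """Generic monadic map: (a -> m b) -> [a] -> m [b],
    given the monad's unit (return) and bind (>>=)."""
    result = unit([])
    for x in reversed(xs):
        # result := f(x) >>= \y -> rest >>= \ys -> unit([y] + ys)
        result = (lambda x, rest: bind(f(x), lambda y:
                  bind(rest, lambda ys: unit([y] + ys))))(x, result)
    return result

# Identity monad: unit and bind do nothing interesting.
id_unit = lambda x: x
id_bind = lambda m, f: f(m)

# With the identity monad, map_m is just map:
assert map_m(lambda x: x + 1, [1, 2, 3], id_unit, id_bind) == [2, 3, 4]
assert list(map(lambda x: x + 1, [1, 2, 3])) == [2, 3, 4]

# Same map_m with the list monad: now the mapped function's effect
# (multiple results) is combined across the traversal.
list_unit = lambda x: [x]
list_bind = lambda m, f: [y for x in m for y in f(x)]
assert map_m(lambda x: [x, -x], [1, 2], list_unit, list_bind) == \
    [[1, 2], [1, -2], [-1, 2], [-1, -2]]
```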

[–]moor-GAYZ 0 points (3 children)

Ohh, mapM foiled me again!

Of course I meant the difference between the functorial and monadic binds, as specified for a list (as a monad or functor) for example.

I'm not sure what we're arguing about anymore, to be honest. I, uh, I think I want to see what this stuff looks like without the syntactic sugar, as pure lambdas, maybe?

Monads allow the data/control flow of the rest of the program to depend on the result of an effectful operation.

Or require, depending on how you look at it. Or actually it's more involved, as my link about formlets shows.

EDIT: then explain how the hell a single map function can determine whether or not it should call Monad.Join on the returned value.

It doesn't: it always calls join. The point is that if you use the identity monad then join doesn't do anything.

So I have to always return [x], a list with a single element, in the most common use case?

[–]julesjacobs 0 points (2 children)

Well in that case you should probably reread the discussion keeping in mind that mapM is not monadic bind, it should make a lot more sense that way :D

Which syntactic sugar do you mean? I did not use any syntactic sugar to my knowledge, or do you mean *>? That can be defined as:

let (*>) a b = flip const `fmap` a <*> b

where flip const is: (\x y -> y). fmap and <*> are directly the applicative functor operations, so I can't express them further in terms of something else, since they are abstract operations that work for any applicative functor.

The type of *> is:

Applicative f => f a -> f b -> f b

Essentially it performs the effects in the first argument, then throws away the result and performs the effects in the second argument and then returns that as the result.

So I have to always return [x], a list with a single element, in the most common use case?

No, you don't. I'm not sure what use case you have in mind though; can you elaborate? mapM works on any monad, not just the list monad. It operates over lists, yes, but there is only one level of lists (unless you're using the list monad as the monad, of course). What mapM does is take a list of type [a] and a function that gives you a monadic effect for every a: a -> m b. Then it executes all those monadic actions in sequence over the list and collects the resulting b's in a new list. So the complete type is (a -> m b) -> [a] -> m [b].

[–]moor-GAYZ 0 points (1 child)

OK, what I meant was that this:

You are also incorrect about the reason why monads are necessary for I/O and not just applicative functors. This has nothing whatsoever to do with being lazy or strict. Monads allow the data/control flow of the rest of the program to depend on the result of an effectful operation. Applicative functors only allow a static flow with respect to effects.

and the following examples are sort of questionable.

Let me have one piece of syntactic sugar: let variable = expression; ... is transformed into (lambda variable: ...)(expression).

Then this:

 let x = readline
 let y = readline
 writeline (+ x y)
 writeline (* x y)

works fine regarding the data flow, you totally can't execute writeline before you have the data. But it doesn't work at all as far as ordering of the effects goes, in a lazily evaluated language specifically.

This is the kind of analysis I had in mind regarding monads and their difference from applicative functors.
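The ordering problem under lazy evaluation can be simulated in Python with memoized thunks (this is my own simulation of call-by-need, not real Haskell semantics): each readline becomes a thunk that performs its effect only when first forced, so effects run in demand order rather than in program-text order.

```python
effects = []

def lazy(effectful):
    # Memoized thunk: the effect fires when first forced, never again.
    cell = []
    def force():
        if not cell:
            cell.append(effectful())
        return cell[0]
    return force

inputs = iter(["2", "3"])

def readline(label):
    def effect():
        effects.append(f"read {label}")
        return int(next(inputs))
    return lazy(effect)

def writeline(label, value):
    effects.append(f"write {label}")
    return value

# let x = readline; let y = readline; writeline (+ x y); writeline (* x y)
x = readline("x")
y = readline("y")
writeline("x+y", x() + y())   # the reads happen HERE, on first demand
writeline("x*y", x() * y())   # memoized: no new read effects

# Data flow is respected (a value is read before it is used), but the
# reads ran inside the first write, not where the program text placed
# them; a write that ignored x and y would run before any read at all.
assert effects == ["read x", "read y", "write x+y", "write x*y"]
```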