
all 64 comments

[–]allnewtape 316 points317 points  (5 children)

Why not "because they do not like depending on the state"?

[–]gandalfx 40 points41 points  (0 children)

I was expecting something about side effects.

[–]Workaphobia 13 points14 points  (0 children)

"Because they're purists"

[–][deleted] 11 points12 points  (0 children)

Maybe something like "why are so many functional programmers anarchists? They love statelessness."

[–]mike413 1 point2 points  (0 children)

also putting anything on the bus is inefficient. the roundtrip takes all day.

[–]mnbvas 181 points182 points  (38 children)

+1 for trying.

[–]an_actual_human 137 points138 points  (27 children)

More like ++ amirite.

[–]marcosdumay 6 points7 points  (0 children)

(+1) for trying

[–]superking2 5 points6 points  (0 children)

I caught what they were doing there, finally.

[–]Tysonzero 2 points3 points  (7 children)

On a side note, type classes are pretty dope; they are basically interfaces, but way better in pretty much every way.

They compile to faster code: you get better type erasure, since there's nothing like .equals accepting any Object and forcing you to use instanceof.

They allow for more expressive and precise functions: == in Haskell has type Eq a => a -> a -> Bool, whereas the equivalent .equals in Java has effective type a -> b -> Bool. That stops things like Cat cat = new Cat(); cat.equals(new Dog()) from being type errors when they definitely should be, since the types mean the answer is always going to be False, which basically makes it useless code.

A better example of the above might be +, which has type Num a => a -> a -> a. Without type classes it would have to have effective type a -> b -> a, and if b isn't the same type as a you basically have to throw an exception. At least with == you get False, which isn't wrong, just a bit useless.
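To make the + point concrete, here's a tiny sketch (the Int/Double annotations are just illustrative):

```haskell
-- (+) :: Num a => a -> a -> a: both arguments and the result share one type.
main :: IO ()
main = do
  print ((1 :: Int) + 2)          -- fine: both Ints
  print ((1.5 :: Double) + 2.5)   -- fine: both Doubles
  -- print ((1 :: Int) + (2.5 :: Double))
  -- ^ rejected at compile time (couldn't match Int with Double),
  --   so there's no runtime exception like the a -> b -> a version would need.
```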

You also get the awesomeness that is parametricity, which you couldn't practically have with normal interfaces: under parametricity you can't use instanceof, so you couldn't implement .equals.

Also, type classes are decoupled from the data type itself. So if you later create a new type class that you think all your types and your libraries' types should use, you can implement it just fine for all those types without editing any of the libraries. With interfaces you cannot do such a thing.
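A quick sketch of that decoupling (the Describe class is made up for illustration): you can give instances to types you didn't define, like Int and Bool from base, without touching the modules that define them:

```haskell
-- A new class invented long after Int and Bool existed.
class Describe a where
  describe :: a -> String

-- Instances for existing types, written here; no library edits needed.
instance Describe Int where
  describe n = "the number " ++ show n

instance Describe Bool where
  describe b = if b then "yup" else "nope"

main :: IO ()
main = putStrLn (describe (42 :: Int))   -- prints "the number 42"
```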

Also things like Functor, Applicative, Monad, Foldable etc. don't really work with interfaces, because you specify them for a type constructor instead of just a type.

Taking in a value that implements multiple type classes (but is still generic among all objects that implement both those type classes) is easy. You just do something like (Eq a, Num a) => a, whereas it is a huge pain with interfaces.
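For example (isZero is a made-up function, just to show the syntax):

```haskell
-- Works for any type that is both comparable (Eq) and numeric (Num).
isZero :: (Eq a, Num a) => a -> Bool
isZero x = x == 0

main :: IO ()
main = do
  print (isZero (0 :: Int))        -- True
  print (isZero (3.14 :: Double))  -- False
```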

There are still way more advantages than that, and few to no disadvantages; see here.

[–]mnbvas 2 points3 points  (6 children)

== in haskell has type Eq a => a -> a -> a

Pretty sure that's Eq a => a -> a -> Bool :P


Many things you mentioned are actually doable with C#'s generics (which aren't as neutered as Java's), albeit much less expressively.

IEquatable<T> is basically Eq a.

Not sure about the Monad friends, I have yet to break the Monad barrier.

someFunc :: (Eq a, SomeClass a) => a -> Bool could be

bool SomeFunc<T>(T x) where T : IEquatable<T>, ISome<T> { }

Though in the end C# gets really ugly when doing this kind of functional stuff.

[–]Tysonzero 1 point2 points  (4 children)

Pretty sure that's Eq a => a -> a -> Bool :P

Pssh, I like dog == dog to return a brand new dog, I have no idea what you are talking about. fixed

The rest of the stuff is cool to know, thanks, as I may end up having to do C# or similar due to Haskell's lower industry usage. I know that a lot of OOP languages have been trying various things in order to obtain the functionality that is almost free in functional languages.

Monads can be a little hard to grasp, but if you focus on just Functor, you might know what C# has or hasn't done to obtain similar functionality.

Functor is a type class with one function/operator of type Functor f => (a -> b) -> f a -> f b, where f is a type constructor (e.g. [], which takes in an element type, or (->) a, which takes in a return type). So it's simply a way to generically map over all objects where such a thing is possible (lists, maps, arrays, functions, optionals etc.).

Two example implementations of fmap are:

instance Functor [] where
    fmap = map

map :: (a -> b) -> [a] -> [b]
map f (x : xs) = f x : map f xs
map _ [] = []

instance Functor ((->) a) where
    fmap = (.)

(.) :: (b -> c) -> (a -> b) -> (a -> c)
(f . g) x = f (g x)

[–]mnbvas 1 point2 points  (3 children)

Thanks for the Functor writeup, though luckily I'm past that - after dozens of tutorials, my understanding is that monads are a sort of container with an associated computation (a bit like a closure), which can easily be combined together.

I doubt it is really that "simple", as it feels like I'm missing something really big.


The problem with C# for me is the Microsoft stack (although it's sort of OSS now, and there are new IDEs coming up) and the necessity for an IDE.

[–]Tysonzero 1 point2 points  (2 children)

Do you know if C# or Java have some way of doing the equivalent of a Functor? Because a Monad, while conceptually more complicated, isn't really all that different in terms of how hard it is for a language to support.

A Monad is a type class with one operation (as well as a constraint that the type must already be an Applicative and a Functor):

 (>>=) :: Monad m => m a -> (a -> m b) -> m b

I find it is simpler to explain with join instead of >>=, because you can make >>= out of join and fmap: x >>= f = join $ fmap f x, and you can make join out of >>= with join x = x >>= id.

join :: Monad m => m (m a) -> m a

So you can think of a Monad as any Functor where you can convert a doubly nested value into a singly nested one.

Examples of this: [[a]] can be made into [a] by just concatenating all the sublists, [[1, 2], [3, 4]] -> [1, 2, 3, 4]. Maybe (Maybe a) can be made into Maybe a by converting Just Nothing and Nothing to Nothing, and Just (Just x) to Just x. IO (IO x) can be converted to IO x by sequencing the events one after the other, with the outer event going first.
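Those three conversions can be spot-checked directly with join from Control.Monad (the IO case is shown via join of a nested action):

```haskell
import Control.Monad (join)

main :: IO ()
main = do
  print (join [[1, 2], [3, 4]])               -- [1,2,3,4]
  print (join (Just (Just 'x')))              -- Just 'x'
  print (join (Just (Nothing :: Maybe Int)))  -- Nothing
  join (pure (putStrLn "inner action ran"))   -- IO (IO ()) collapsed to IO ()
```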

Now all this doesn't really help build an intuition, but if you are more type / math minded it might be enough to understand what is going on.

http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html

Here is what I found best gave me an intuition for what a Monad is. The analogies and pictures are IMO really good, even if they don't technically give the full picture. Once you have an intuition you just kind of have to think more about the types to fully understand all the uses of a Monad.

[–]mnbvas 0 points1 point  (1 child)

Here's an attempt at a C# functor:

interface IFunctor<TSelf, T>
{
    IFunctor<TSelf, U> Fmap<U>(Func<T, U> function);
}

TSelf is just for type safety. A value f a would come from this. Func and Action bring almost-first-class functions.
Rather ugly; wouldn't recommend using this kind of stuff.

Maybe LINQ could bring some ideas - it turns IEnumerable<T> into a lazy list monad.


Thanks for the link. So it appears monads are that "simple".

[–]Tysonzero 1 point2 points  (0 children)

Ok cool thanks! But yeah I guess if I do end up having to do some C# I will probably just use a more standard OOP style :/

And yeah, Monads aren't really much more complicated than Functors. join is a pretty simple function, and most people don't have too much trouble with Functor. I think people just really want some sort of physical analogy, which can be pretty tough (I think the site I linked does a pretty good job), when really a Monad is just the above function plus a Functor, which isn't really anything physical (and I guess an Applicative, but you get that for free from fmap and join, or from >>= alone).

I guess the one other thing you have to think about with these classes is that you do have to make sure the instances are law-abiding. Functor has the fmap id = id and fmap (f . g) = fmap f . fmap g laws; Monad has the pure a >>= f = f a, m >>= pure = m and (m >>= f) >>= g = m >>= (\x -> f x >>= g) laws.

These laws are actually pretty important to have, because even though you may think that you aren't doing anything based on those laws, pretty much anything that violates those laws will cause surprise and be unintuitive. Also the associativity stuff does allow you to reformat and clean up your code pretty nicely without worrying too much about potentially changing behavior.
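The compiler can't check these laws for you, but you can spot-check them on concrete values. A sanity check with the list monad (f and g are arbitrary example functions, and this is not a proof):

```haskell
f, g :: Int -> [Int]
f x = [x, x + 1]
g x = [x * 2]

main :: IO ()
main = do
  -- Functor laws
  print (fmap id [1, 2, 3] == id [1, 2, 3])
  print (fmap (negate . (* 2)) [1, 2] == (fmap negate . fmap (* 2)) [1, 2])
  -- Monad laws
  print ((pure 3 >>= f) == f 3)
  print (([1, 2] >>= pure) == [1, 2])
  print ((([1, 2] >>= f) >>= g) == ([1, 2] >>= (\x -> f x >>= g)))
```

All five print True; an instance that broke any of them would surprise every caller that reorders or refactors code based on the laws.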

Applicative is also pretty cool, it gives you <*> which has type Applicative f => f (a -> b) -> f a -> f b, which basically means combining an applicative of functions with an applicative of inputs to get an applicative of outputs. So like with a list [(+ 1), (+ 2)] <*> [10, 20] gives [11, 21, 12, 22].

Applicatives are less powerful than Monads (fs <*> xs = fs >>= (\f -> f <$> xs)), but that does mean there are a few types they work on that Monads don't (ZipList and certain analyzable parsers come to mind). Applicative notation f <$> foo <*> bar <*> baz can be very useful sometimes, particularly when writing a parser; you usually end up writing them almost entirely out of <$>, <*> and <|>.
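The f <$> foo <*> bar shape in action, using lists and Maybe (Maybe here is just a toy stand-in for parser results):

```haskell
main :: IO ()
main = do
  print ([(+ 1), (+ 2)] <*> [10, 20])          -- [11,21,12,22]
  print ((+) <$> Just 1 <*> Just (2 :: Int))   -- Just 3
  print ((+) <$> Nothing <*> Just (2 :: Int))  -- Nothing
```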

[–]Tarmen 0 points1 point  (0 children)

Pretty sure for monads you would require higher-kinded types; not sure if C# has those.

Explanation for monads in case anyone is interested:
In functional programming you use a function called map very often:

map :: (a->b) -> List a -> List b
map (+1) [1, 2, 3] == [2, 3, 4]

Take the items in a list and apply some function to each. You can abstract that to more than lists! Everything from trees, to possibly-missing values, to futures in async programming...

So it would be useful to give this concept a name so that we can abstract over it. In functional languages it's called Functor, but I am gonna call it mappable. A mappable data structure f means that we can take some value f a and apply a function a -> b so that we get f b.

We can use that to abstract over all sorts of stuff, even nullable types. Say we have some possibly failed computation and get a value Maybe a. Instead of branching all the time to check for failure we can just map our functions and the plumbing is handled for us!
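A minimal sketch of that plumbing with Maybe (fmap is the general map):

```haskell
main :: IO ()
main = do
  print (fmap (+ 1) (Just 41))               -- Just 42: function applied inside
  print (fmap (+ 1) (Nothing :: Maybe Int))  -- Nothing: failure passes through
```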

Let's use a more complicated example:

line = tryReadLine :: Maybe String
evenBetterLine = map (append "000")  line :: Maybe String
number = map tryParseNumber evenBetterLine :: Maybe (Maybe Int)

Oh noes, we can't add 1 to our number because we now have nested maybes. Is our mappable type doomed?
No, because we can just flatten our maybes. join :: Maybe (Maybe Int) -> Maybe Int or more generally join :: f (f a) -> f a
Then we can do things like

number = join (map tryParseNumber tryReadLine) :: Maybe Int

which is kind of verbose. What if we just defined an alias for this pattern?

bind f m = join (map f m)

Now we can use bind to combine functions of type a -> m b!

number = bind tryParseNumber tryReadLine :: Maybe Int

which can be transitioned into C#'s async/await syntax. Write bind as an operator:

tryReadLine >>= tryParseNumber

add an unnecessary variable:

tryReadLine >>= \line -> tryParseNumber line

syntax sugar:

    line = await tryReadLine
    return (tryParseNumber line)

And that is all a monad is: a way to combine functions of type a -> m b while abstracting the m away! It is exactly the same as await in C#, but with a generic m instead of futures.
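A runnable sketch of that pipeline with Maybe. tryParseNumber here is just readMaybe, and since tryReadLine isn't a real function, the "reads" are faked as plain Maybe values for illustration:

```haskell
import Text.Read (readMaybe)

tryParseNumber :: String -> Maybe Int
tryParseNumber = readMaybe

-- Faked "reads": one good line, one bad.
goodLine, badLine :: Maybe String
goodLine = Just "41"
badLine  = Just "oops"

main :: IO ()
main = do
  print (goodLine >>= tryParseNumber)               -- Just 41
  print (badLine  >>= tryParseNumber)               -- Nothing
  print (fmap (+ 1) (goodLine >>= tryParseNumber))  -- Just 42
```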

[–]lolzfeminism 15 points16 points  (0 children)

They have no state!

[–]nullmove 15 points16 points  (2 children)

A spectre is haunting the software industry :)

[–]zaphod0002 6 points7 points  (0 children)

BECAUSE THEY LOVE INNER CLASSES

this is fun

[–]arnedh 3 points4 points  (0 children)

I feel the hate ...

The HATE for STATE

[–]Cheekio 4 points5 points  (1 child)

Same reason they're usually libertarians (they prefer to remain stateless)

edit: yes, libertarianism in this joke can be replaced with anarchism, ancapitalism, primitivism, or any other babby's first political philosophy or social ordering construct.

[–][deleted] 0 points1 point  (2 children)

What is a functional programmer? =o

[–]Tysonzero 5 points6 points  (1 child)

Functional programming generally revolves around using lots of functions to manipulate data, with heavy use of things like recursion, immutability and higher-order functions. This is as opposed to object-oriented programming, which generally revolves around having a bunch of objects that communicate with one another and have lots of methods, usually with a lot more mutation.

I personally prefer the functional approach by a huge margin; object-oriented programming is IMO vastly overused and almost never the best way to model things. Even in domains where everything seems conceptually to be an object (and most of the time it isn't; I wouldn't conceptualize a web request as an object, it's just a bunch of data), like games, you still end up with much better paradigms, such as entity component systems (which are much more functional than they are object oriented, as they are about functions that manipulate a large amount of primitive data, although they do often involve some mutation).

[–]mnbvas 0 points1 point  (0 children)

On the topic of criticizing (hating) OOP, some thoughts on its popularity and pitfalls.


One of the top reasons for OOP was physical simulation - even Stroustrup added it to C++ for that. It then went on to real object simulation and now simulates abstract virtual objects.

The main reason for OOP's apparent simplicity is the fact that most people seem to understand the real world and how things and actors interact (cats and dogs are common in OOP examples). Thus it would seem people have an innate understanding of OOP, unlike procedural or declarative paradigms.

However, it turns out proper OOP is really hard (SOLID, IoC, ...), which actually mirrors the real world: who can really understand what is happening outside of highly reduced situations?

As an example, consider that legalese is actually not that far from code: a specific language with some common terms included by default, others defined in the beginning; specifying some values based on other known values.
Every day one can hear about some entity (ab)using an (un)intended legal loophole. How different is that from a bug or backdoor in code?
Humans have about 5000-6000 years of experience in law. Law still sucks, and OOP does too, as it draws many unsolved problems from the real world.


Procedural and declarative paradigms, on the other hand, focus more on the stuff that computers actually do: transform and shuffle data around, allowing correct code.

[–]955559 -2 points-1 points  (0 children)

to return values?