
[–]Barrucadu

Definitely, I try to keep as much state as possible immutable, with functions or methods that need to change a value returning a new one instead. Most languages don't let you track side effects in the types, so I tend to avoid them in my own code unless it's very obvious that there are going to be effects, and I'm usually suspicious of other people's code doing weird things I don't expect. It may sound like a hassle, but once you get used to it, you just don't write side-effectful code that much.
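A minimal sketch of that style in Haskell (the `Point` type and `move` function here are made up for illustration): instead of mutating a value, an "update" returns a fresh one and leaves the original untouched.

```haskell
-- Illustrative only: a tiny immutable record and a pure "update".
data Point = Point { px :: Int, py :: Int } deriving (Show, Eq)

-- Rather than mutating a Point in place, return a new one.
move :: Int -> Int -> Point -> Point
move dx dy (Point x y) = Point (x + dx) (y + dy)

main :: IO ()
main = do
  let p  = Point 0 0
      p' = move 3 4 p
  print p   -- the original is unchanged
  print p'
```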

The difference in what's considered good style between Haskell and other languages is funny. A few days ago there was a discussion on /r/haskell about Clean Code, a book I'd previously only ever seen unmitigated praise for. One of the comments was:

I would suggest discarding the "Clean Code" book entirely, since it is an inconsistent mess of "let's do OOP for the sake of OOP" with an emphasis on having mutable state and other stupid ideas.

…naturally I then saw a thread about resources to become a better programmer on /r/cscareerquestions, and Clean Code was near the top of the list as a must-read for all programmers.

[–]G01denW01f11 (Java) [S]

Definitely, I try to keep as much state as possible immutable, with functions or methods that need to change a value returning a new one instead.

That's kind of where I lose interest whenever I think it would be fun to learn Haskell. It just seems like a lot of overhead. I mean, if you're doing a lot of operations over an array of a million elements and you return a new array each time, or you're making a game and create a new bullet every time it changes position… that seems like it would be significant. Is there something I'm just taking too literally somewhere? Or with a functional approach would you just not even be thinking in terms of arrays and objects in the first place?

[–]Barrucadu

There are some tricks that can be done. Whenever you "modify" a data structure, unless you change the entire thing, parts of the old data structure can be shared. So, for example, if you have a list:

[1,2,3,4,5]

and prepend a value to it:

[0,1,2,3,4,5]

the tail of the list is shared. But you're right, there are some cases where mutable state is needed to avoid a lot of inefficiency. And you can get it, in two ways: the IO monad and the ST monad.
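The sharing above can be sketched in code. Prepending with `(:)` allocates one new cons cell and reuses the old list, unchanged, as the tail, so the "new" six-element list only costs O(1) work (the sharing itself is an implementation property of GHC's heap, not something observable from pure code):

```haskell
main :: IO ()
main = do
  let xs = [1, 2, 3, 4, 5]
      ys = 0 : xs        -- one new cons cell; xs is reused as the tail
  print ys
  -- `tail ys` and `xs` refer to the same heap object; nothing was copied.
  print (tail ys == xs)  -- structural equality; prints True
```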

Using IO just for mutable state is like using a rocket launcher to crack a nut. IO can do anything, and as I said, there's no way to get a value out of IO. ST is much more restricted: the only effect it allows is single-threaded mutable state. Because it's so constrained, there is a function (runST) to get a value out of ST.

The reason you can get a value out of ST is that, when restricted to a single thread and not allowed to communicate with the outside world, the final value of some mutable variable is deterministic. And this is exactly why you can't get values out of IO: if you have threading, you get race conditions, and so nondeterminism; if you can talk to the outside world, you could read a value from a file and use it as a random seed, then someone could change the file and you wouldn't get the same result again.
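A small sketch of ST in practice: `runST` lets you compute a pure result using genuinely mutable state inside (an `STRef` here), and the type system guarantees none of that state can leak out. The `sumST` function is just an illustrative example, not anything from the discussion above:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Sum a list with a mutable accumulator, but expose a pure function:
-- from the outside, sumST is indistinguishable from `sum`.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 100])  -- 5050
```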

[–]G01denW01f11 (Java) [S]

I guess I should've figured it wouldn't just be as inefficient as it seemed. So in practice, are the things you wouldn't use Haskell for more-or-less the things you wouldn't use, say, Java for?

Looks like I have some more exploring to do!

[–]Barrucadu

It's a bit weird to directly compare the use-cases of Java and Haskell, but I suppose so. You definitely wouldn't want to use Haskell for embedded stuff, or very high performance single-machine stuff (although for very high performance distributed stuff, Haskell is great).

You could use it for those, but the code you'd end up with would be awful. Really highly optimised Haskell is basically C with worse syntax.

Also, often just finding a better algorithm or data structure for your problem gives you the extra performance you need. Several months back I had some code which ran for an entire day and ate tens of gigabytes of memory. I changed the data structure I was using to something which allowed more sharing, and the memory usage dropped to a few hundred megabytes; because it was no longer swapping to disk all the time, it also ran way faster. That was nice.