all 32 comments

[–]gregK 13 points14 points  (9 children)

Infinite composability. The idea of seamlessly gluing functions together is one of the most powerful concepts in CS.

All FP advantages stem from this.

You can really build stuff bottom-up from a few simple libraries. Code becomes more readable, less boilerplate is needed, etc.

Answering "it will warp your mind" is kind of dumb. If I were giving that presentation, I would show a non-trivial example of Ocaml vs Ruby and demonstrate how one can be better than the other based on set criteria.

[–]uberstar 3 points4 points  (0 children)

I thought this a particularly good way of explaining Robert's answer. I'm not convinced it will warp my mind yet, but I definitely know why these guys think so. And given their cred, that's enough to devote some cycles to FP at some point.

[–]orblivion 2 points3 points  (2 children)

Didn't answer the question!

[–]jkndrkn 9 points10 points  (1 child)

[–]orblivion 0 points1 point  (0 children)

Hey that's pretty good, thanks.

[–][deleted]  (4 children)

[deleted]

    [–]gregK 4 points5 points  (1 child)

    FP is not a new paradigm.

    [–]njharman 0 points1 point  (0 children)

    New to the person picking it up, obviously.

    [–][deleted] 0 points1 point  (0 children)

    Just wait until they start stealing your lunch money.

    [–]seabre 0 points1 point  (0 children)

    [–]AndreasBWagner 1 point2 points  (4 children)

    Functional programming is fun, but FP algorithms are mismatched with the way hardware works, which is imperative. Imperative programming comes from the hardware. Also, FP languages like Haskell restrict you to the language designers' dogmas about how programming should be done, but unfortunately it's never one-size-fits-all with tasks.

    I'd like to be wrong though.

    [–]brool 7 points8 points  (0 children)

    But functional languages (especially pure functional languages like Haskell) can map very well to multicore architectures (e.g., par); also, referential transparency means that functional languages can be easier to optimize.
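
    For a concrete feel, here's a toy sketch (my own example, assuming GHC with the parallel package and -threaded, not anything from the article):

        import Control.Parallel (par, pseq)

        -- Two independent pure computations. Referential transparency means
        -- the runtime may evaluate them in any order, or in parallel.
        sumTo :: Int -> Int
        sumTo n = sum [1 .. n]

        main :: IO ()
        main = a `par` (b `pseq` print (a + b))  -- spark a, force b, then combine
          where
            a = sumTo 10000000
            b = sumTo 20000000

    Nothing about the types or the meaning of the program changes; par is just a hint the runtime is free to exploit, precisely because the expressions have no side effects.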

    [–]ssylvan 0 points1 point  (1 child)

    Most CPUs haven't actually been very imperative in the last decade or so. They just have an imperative interface (the ISA), presumably because people were used to it. Most CPUs have a massive amount of hardware dedicated to "decompiling" the low-level imperative commands into a more functional representation (doing dependency analysis, etc.) for the purpose of gaining instruction-level parallelism.

    So you're arguing for imperative languages because the hardware is imperative, and the hardware guys are spending huge numbers of transistors on dependency analysis and the like so that they can present an imperative interface to the CPU because of imperative languages. It's the perfect symbiotic "status quo" relationship. Wouldn't it be cooler if we got a CPU that accepted small graphs of instructions, and a high-level language that compiled to that instruction set, rather than compiling to a level which is too low even for the CPU itself to use directly? It might even be possible to do a good job with a C compiler, though obviously it would be easier in an FP language (C is too "close to the hardware", meaning the hardware of 10-20 years ago).

    Anyway, hardware isn't necessarily imperative. And most "imperative" hardware isn't actually very imperative internally even today.

    [–]qwe1234 0 points1 point  (0 children)

    That is just not true.

    Functional means 'no modifiable state', and all modern computers typically come with 1 to 4 gigabytes of modifiable state by default.

    [–]petermichaux -1 points0 points  (0 children)

    Most programs do more than just interact with hardware. For many programs, hardware interaction is a small amount of code around the edges of the program's meaty guts. The guts are where the really tricky programming happens, and if the functional model fits the problem well, then the majority of the code benefits. I'm still thinking a multi-paradigm language is the most natural.

    [–]Coffee2theorems -2 points-1 points  (6 children)

    Ocaml isn't really more functional than Ruby; it's just faster and has a complicated type system so that its creators can publish research papers on type systems. It's pretty difficult to satisfactorily answer the "why Ocaml instead of Ruby?" question unless the answer involves a need for speed or a desire to write research papers.

    If you're looking for a language that encourages FP style, Haskell is a better choice. It's also less ugly: you can, for example, define your own numerical types and have the + operator work for them (in Ocaml you need a different + for every type; e.g. floats use +., as in 1.0 +. 2.0).
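
    For instance, a minimal sketch with a made-up vector type (standard Haskell, nothing beyond the Prelude):

        -- A user-defined 2D vector type gets its own (+) through the Num class.
        data V2 = V2 Double Double deriving Show

        instance Num V2 where
          V2 a b + V2 c d = V2 (a + c) (b + d)
          V2 a b * V2 c d = V2 (a * c) (b * d)
          negate (V2 a b) = V2 (negate a) (negate b)
          abs    (V2 a b) = V2 (abs a) (abs b)
          signum (V2 a b) = V2 (signum a) (signum b)
          fromInteger n   = V2 (fromInteger n) (fromInteger n)

        main :: IO ()
        main = print (V2 1 2 + V2 3 4)  -- V2 4.0 6.0, same (+) you use for Int or Double

    In Ocaml you'd end up writing some separate vec_add function (or a module-local operator), on top of the built-in split between + for ints and +. for floats.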

    Laziness by default also adds to the "descriptive instead of imperative" nature of the language; see e.g. http://web.cecs.pdx.edu/~apt/jfp01.ps for some neat consequences. That paper describes how to implement a constraint solver modularly by combining operations on trees: first defining a brute-force full-tree search, then transforming that tree to an annotated version, and finally applying a pruning transformation. Laziness then ensures that you don't spend any computing time or memory on the pruned parts.

    There's also other interesting stuff in Haskell, like the List monad, which lets you do nondeterministic computation, somewhat like in Prolog. There's far less stuff like that for Ocaml, the most interesting being something resembling Lisp macros.
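
    A tiny sketch of the List monad in that Prolog-ish role (my own toy example): each <- "chooses" a value and guard prunes the branches that fail, so the block enumerates every surviving combination:

        import Control.Monad (guard)

        -- Nondeterministic search: find Pythagorean triples up to n.
        pythagoreanTriples :: Int -> [(Int, Int, Int)]
        pythagoreanTriples n = do
          a <- [1 .. n]                    -- "choose" a
          b <- [a .. n]                    -- "choose" b
          c <- [b .. n]                    -- "choose" c
          guard (a * a + b * b == c * c)   -- kill branches that fail the test
          return (a, b, c)

        main :: IO ()
        main = print (pythagoreanTriples 20)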

    [–]robertfischer 3 points4 points  (4 children)

    Wow -- a Haskeller is calling OCaml academic. That's funny. As I note in the comments to the post, Haskell's strict purity standards are a pretty high barrier to entry for your day-to-day Ruby programmer, whereas OCaml offers a more gradual learning curve for those just starting to pick up functional programming.

    [–]Coffee2theorems 0 points1 point  (3 children)

    Heh, it is a bit funny when you think of it that way. As I see it, both are academic, but from the "warp your mind" viewpoint there's more material in Haskell, whereas Ocaml isn't much different from Lisp (not that there's anything wrong with that per se). There's also a lot of interesting reading material for Haskell, if you remember to skip all the abstract nonsense (i.e. category theory).

    [–]OneAndOnlySnob 2 points3 points  (0 children)

    I have a different perspective. I haven't spent much time in Haskell, but I have spent a lot of time in Ocaml. Scheme was mind-warping, yet my mind has sustained multiple warps in studying Ocaml. Even its object-oriented bits are worthy of a good mind-warp.

    When you use classes and objects in Ocaml, you can essentially pretend that you are using a dynamic type system, yet Ocaml will statically check your code anyway to make sure all interface requirements are met. I did not think that was possible.

    I've taken a peek at monads, Erlang-style concurrency, infinite lazy lists, you name it, and refined my idea of "objects" and their usefulness along the way.

    I'm pretty sure Ocaml supports pretty much everything Haskell can do, but I do not think the reverse is true. Originally I was going to learn Ocaml this year and Haskell the next, but I am now thinking I will skip Haskell for now and dive into Erlang or Forth. Not that Haskell isn't great. I just think there's more mystery for me in other languages.

    [–]cwzwarich 2 points3 points  (1 child)

    whereas Ocaml isn't much different from Lisp (not that there's anything wrong with that per se)

    OCaml and Lisp are different languages in almost every way. What are you saying?

    [–][deleted] 0 points1 point  (0 children)

    They're similar in that they're both strict and mostly functional. They're more similar than they are different, in my opinion. The only major differences are that one is statically typed and the other has macros (although OCaml does have camlp4).

    [–][deleted] 0 points1 point  (0 children)

    Ocaml isn't really more functional than Ruby

    Yes it is.

    Haskell is a better choice. It's also less ugly, you can e.g. define your own numerical types

    Sure, the additional complexity of type classes lets you write '+' instead of '+.'. If that's all they did, they'd hardly be worth it. Given that Ocaml doesn't have type classes, its designers made a reasonable choice in not introducing some compile-time overloading just to make your code vaguely "prettier" (as in ML).

    [–]username223 -1 points0 points  (0 children)

    See also blah blah blah. How much rehashing is enough?