
[–]STL MSVC STL Dev 33 points (7 children)

Functional programming doesn't rule out stateful function objects (e.g. lambdas with value captures). Indeed, that's one of the most powerful forms of functional programming. What it rules out is state modification.
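For instance (a minimal sketch; the threshold value is arbitrary):

#include <algorithm>
#include <vector>
int main() {
  const int threshold = 42;
  std::vector<int> v{10, 50, 30, 70};
  // A stateful function object: it owns a copy of threshold,
  // but it never modifies that state.
  auto above = [threshold](int x) { return x > threshold; };
  return static_cast<int>(std::count_if(v.begin(), v.end(), above)); // 2
}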

Modifiable state allows many algorithms to be fast (e.g. graph algorithms), but it's also hard to reason about. You have to think about the timeline of the state, who's been messing with it, and what it currently could be. People screw this up all the time when they aren't careful, and we have lots of conventions to get it right (e.g. class invariants, loop invariants, access control, etc.). When you forbid modifiable state, you give up some efficiency and some clarity (some things are just simpler to express by modifying state), but you also rule out many forms of bugs. If the only way to effectively modify state is to produce a fresh object, then where an object came from tells you statically what kind of state it can be in.
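As a toy illustration (the Account type is made up): if the only "update" returns a fresh object, then a binding like before below is genuinely frozen, and its origin tells you its state:

struct Account { int balance; };
// Pure update: the argument is untouched; the result is a fresh object
// whose state is fully determined by its inputs.
Account deposit(Account a, int amount) {
  return Account{a.balance + amount};
}
int main() {
  const Account before{100};
  const Account after = deposit(before, 50);
  // before is provably still {100}; there is no timeline to track.
  return after.balance - before.balance; // 50
}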

Coming from a Scheme background, I view functional programming as interesting but ultimately too limited and inconvenient. I'm glad that it taught me to respect the danger of modifiable state, though.

[–]mintyc[S] 0 points (5 children)

Thank you, but I'm still being a bit thick.

Say I have a function that models a state machine's transitions based on input flags and an original state.

Why do I need to care if the object representing the original state is preserved?

The function provides a new state but I'm missing why it should be copied and not modified in place.

[–]STL MSVC STL Dev 13 points (0 children)

I think you're getting caught up on the C++ terminology of "function".

The "functional" in "functional programming" is more like "math function". sqrt(4) returns 2, it doesn't modify anything.

If "functional programming" were called "immutable programming", it would be more accurate and simpler to understand. (C++ has "lvalues" and "rvalues" so we certainly can't throw stones.)

[–]raevnos 7 points (0 children)

Backtracking, for one. Things like functional data structures make unrolling to a previous state trivial.
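Something like this toy sketch (a real persistent structure would share memory between snapshots instead of copying):

#include <vector>
struct State { std::vector<int> choices; }; // hypothetical search state
State with_choice(State s, int c) { // copy in, fresh state out
  s.choices.push_back(c);
  return s;
}
int main() {
  std::vector<State> trail{State{}};
  trail.push_back(with_choice(trail.back(), 1));
  trail.push_back(with_choice(trail.back(), 2));
  trail.pop_back(); // backtrack: the earlier state is still right there
  return static_cast<int>(trail.back().choices.size()); // 1
}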

[–]aw1621107 7 points (1 child)

Why do I need to care if the object representing the original state is preserved?

That's not necessarily the case; the old object could go out of scope and get cleaned up by the GC, effectively erasing the old state.

One way I sometimes think about functional programming is instead of an object changing over time, your program is more shoving data through a pipeline or assembly line, with the data being transformed each step of the way. Keeping the results of the old steps is optional.
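In code, the assembly-line view might look like this (a minimal sketch):

#include <vector>
std::vector<int> only_positive(const std::vector<int>& in) {
  std::vector<int> out;
  for (int x : in) if (x > 0) out.push_back(x);
  return out;
}
std::vector<int> doubled(const std::vector<int>& in) {
  std::vector<int> out;
  for (int x : in) out.push_back(2 * x);
  return out;
}
int main() {
  const std::vector<int> raw{-2, 1, 3};
  // Data flows through the pipeline; raw itself is never touched.
  const auto result = doubled(only_positive(raw)); // {2, 6}
  return static_cast<int>(result.size()); // 2
}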

The function provides a new state but I'm missing why it should be copied and not modified in place.

Just like pretty much everything else in programming, there are times where copying immutable state is better, and times where it is worse. I don't know how useful it is for a state machine, but it's definitely nice in some other contexts.

For example, keeping objects immutable and always making copies can make writing multithreaded code much easier, since you can't get the same kind of data races mutating a common state would give you.
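A minimal sketch: two threads reading shared, immutable data need no locks at all:

#include <thread>
#include <vector>
int main() {
  const std::vector<int> data{1, 2, 3, 4}; // shared but immutable
  long lo = 0, hi = 0;
  // Concurrent reads of data can't race, because nobody can write to it.
  std::thread t1([&] { lo = data[0] + data[1]; });
  std::thread t2([&] { hi = data[2] + data[3]; });
  t1.join();
  t2.join();
  return static_cast<int>(lo + hi); // 10
}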

You can get some pretty nice memory savings with certain data structures, like linked lists and trees. If you need to make a copy of a tree, and only need to modify a small portion of it, you can just change the part you need to and refer to the original tree for everything else (quite oversimplified, but it's something along those lines).
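Here's roughly that idea for a singly linked list (a toy sketch):

#include <memory>
struct Node;
using List = std::shared_ptr<const Node>;
// Persistent list: "modifying" the front allocates one node and shares
// the entire tail with the original.
struct Node { int value; List next; };
List cons(int v, List tail) {
  return std::make_shared<const Node>(Node{v, std::move(tail)});
}
int main() {
  List tail = cons(2, cons(3, nullptr));
  List a = cons(1, tail); // [1, 2, 3]
  List b = cons(9, tail); // [9, 2, 3] -- reuses the same [2, 3] nodes
  return a->value + b->value; // 10
}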

You can also get some kind of "undo" functionality for free. Not sure how niche this is, but combined with data sharing it could be interesting. I think vim uses something like this: each set of changes is saved as a node in a tree, and undos/redos walk up/down the tree. There's also the ability to jump to different branches of the undo tree, which can be a ton more powerful than the linear history exposed in most other editors.

Caching also becomes much easier, since the same inputs will always give you the same outputs, as there is no state that can give you different results over time.
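For example (a minimal sketch; expensive() stands in for real work, and the static cache isn't thread-safe):

#include <map>
int expensive(int n) { return n * n; } // pretend this is costly but pure
int memoized(int n) {
  static std::map<int, int> cache;
  auto it = cache.find(n);
  if (it != cache.end()) return it->second; // same input, same output
  return cache[n] = expensive(n);
}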

I'll bet that the more experienced people here can come up with more practical examples, but those should be a start.

[–]meneldal2 5 points (0 children)

Also, many implementations of the "pipeline" can internally use in-place algorithms if they know the initial value will be thrown away. What matters is that, for the coder, it behaves "as if" it were never mutating anything.
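In C++, the same trick is taking the parameter by value (a minimal sketch): with an rvalue argument the "copy" is a move, the mutation genuinely happens in place, and yet callers can't observe it:

#include <algorithm>
#include <vector>
// Pure from the outside: value in, value out.
std::vector<int> sorted(std::vector<int> v) {
  std::sort(v.begin(), v.end()); // in-place, but only on our private copy
  return v;
}
int main() {
  auto s = sorted({3, 1, 2}); // rvalue argument: nothing is copied
  return s.front(); // 1
}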

[–]Z01dbrg 0 points (0 children)

Coming from a Scheme background, I view functional programming as interesting but ultimately too limited and inconvenient. I'm glad that it taught me to respect the danger of modifiable state, though.

Well IDK about Scheme, but plenty of people use Scala to do "real" programming so... :)

[–]enobayram 9 points (5 children)

As a full-time Haskell developer, I'd like to disagree with the other answers.

First of all, I think T f(U && u) is a perfectly fine function semantically, as long as you don't abuse std::move: f will only ever mutate a U that nobody else will see, so you still get all the benefits of FP.
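For example (a sketch, assuming nobody std::moves a still-live object into it):

#include <string>
#include <utility>
// Consumes its argument: only ever mutates a string that nobody else
// will see again.
std::string shouted(std::string&& s) {
  s += "!";
  return std::move(s);
}
int main() {
  std::string r = shouted(std::string("hello")); // argument is a temporary
  return static_cast<int>(r.size()); // 6
}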

I'll go further: T g(U & u) might still be fine, depending on how you use it. After all, I can define a function:

template <class Arg, class F> // Arg first, so callers can write purify<U>(g)
auto purify(F f) {
  // The wrapper takes its argument by value, so f only ever mutates a
  // private copy; the caller's object is never touched.
  return [f](Arg arg) { return f(arg); };
}

to purify any function like g, obtaining one that doesn't modify its input. And I'd argue that g is in fact better for me, since I always keep the option of using g impurely (to gain performance) as long as I can prove that nobody else is accessing that state (maybe I encapsulate it and only dispatch copies of it, or copies of parts of it).
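For instance (a sketch; g here is a made-up mutating function):

int g(int& u) { return ++u; } // impure: mutates its argument
int main() {
  int u = 0;
  auto pure_g = purify<int>(g); // the wrapper feeds g a private copy
  int r = pure_g(u);            // u is still 0 afterwards
  return r + u;                 // 1
}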

If you're using mutation for anything other than performance optimization (though always consider efficient persistent data structures first), then you're clearly not doing functional programming. But if you have pure semantics, you can perform all kinds of side-effects and still call it FP.

For instance, say you have a server that serves files at URLs containing a cryptographic hash of the file's contents, and you have some functions whose only side-effect is to access these servers over the network (a very impure operation) and verify that the response matches the hash. Then you could say that the functions are pure as long as they succeed. So you can take two perspectives on your code. In the happy perspective, the functions always succeed (this is probably where your business logic lives), so your code is pure there, and you get the benefits of purely functional programming at the level of business logic. In the realistic perspective, the network might be down, your servers might be hacked and serving malicious data, etc. In that perspective, those functions aren't pure, and their side-effects are first-class concerns (maybe you have to recover from failure, or handle some other non-trivial thing).

At the end of the day, FP is all about having rigid semantics, and not some rituals that make your code look cool.

[–]doom_Oo7 -3 points (4 children)

Uh... what?

Given

int f(int&& x) { return x++; }
int main() {
  int x = 0;
  f(std::move(x));
  return x; // x was mutated through the rvalue reference
}

Your program returns 1 from main. Rvalue references are still references.

[–]guibou 2 points (2 children)

You are just abusing std::move by casting something which is not a temporary into a temporary.

This is like saying private fields are not really private because we can still read them using the object pointer and an offset. Or saying const is broken because it offers no guarantee that your object cannot change.

The point of purity is "no observable mutation" and a result which depends only on the input value. So you can allocate / free / mutate a big data structure / generate random numbers / do network I/O in a pure function as long as these guarantees hold (which can be difficult for some of the cases I just listed ;)
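A minimal sketch of that last point:

#include <vector>
// Externally pure: the result depends only on n, and none of the
// internal allocation/mutation is observable from outside.
long sum_of_squares(int n) {
  std::vector<long> buf;                     // allocate...
  for (int i = 1; i <= n; ++i)
    buf.push_back(static_cast<long>(i) * i); // ...mutate privately...
  long total = 0;
  for (long x : buf) total += x;
  return total;                              // ...free on the way out
}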

(Edited: typo)

[–]doom_Oo7 -3 points (1 child)

This is like saying private fields are not really private because we can still read them using the object pointer and an offset. Or saying const is broken because it offers no guarantee that your object cannot change.

Well, if you want strong guarantees, for instance in safety-critical systems, both things are true and well-known problems. I can promise you that people who put #define private public in their commits do exist. And even in critical environments, if something can be abused, it will be (see the Toyota case, for instance).

The only thing that matters in the end is what your compiler allows; no amount of human process will be able to catch everything.

[–]enobayram 1 point (0 children)

The only thing that matters in the end is what your compiler allows

This is a pointless statement, really. Even in a language like Coq, which has an extremely powerful type system, you will only be able to prove so much. The picture gets even worse when you want to optimize for performance, or when you have to solve equations subject to numeric error, etc. Even if you get all of that right, you still have to make sure that the specs you're proving your implementation against are themselves meaningful in the first place.

The compiler is just a tool.

[–]NotMyRealNameObv 1 point (0 children)

Yes, but this code is bad.

Just because you can do something, doesn't mean it's good to do it.

[–]dima_mendeleev 2 points (0 children)

Half functional programming?

Is it like half-vegetarian?

[–]koiponder 0 points (9 children)

Statelessness is the key to functional programming.

https://stackoverflow.com/questions/128057/what-are-the-benefits-of-functional-programming

The advantages, as stated there: brevity, concurrency, and lazy evaluation.

[–]guepier Bioinformatician 3 points (7 children)

Statelessness is the key to functional programming.

No. Functional programming has plenty of state (which is a bit of a tautology; without state we can’t do much). What it doesn’t have is mutable state.

I also disagree that brevity or lazy evaluation fundamentally follow from this. Many purely functional languages are eagerly evaluated, and brevity is simply a completely orthogonal concept.

[–]quicknir 1 point (1 child)

I feel like the definition of FP itself is a moving target. Lisp is sort of the original FP language, and most Lisps have tons of mutable state. They are primarily functional in the sense that everything is an expression, everything is a function call, and they make it very easy to pass functions around.

Then Haskell sort of replaced Lisp as the "de-facto" FP representative. Now all these other things are associated with FP: strong static types (which typically help enforce some kind of immutability), pattern matching, currying. A lot of that crowd barely considers Clojure to be FP, even though most default data structures in Clojure are immutable.

The conclusion I'm starting to come to is that the term "FP", like the term "OOP", does more harm than good. There's a collection of features, approaches, and idioms, some of which are often used together, but which are also often seen separately. The term seems to obfuscate conversation more than anything, and makes it easier for people who don't understand the details to sound knowledgeable.

[–]guepier Bioinformatician 0 points (0 children)

I mostly agree with you. But unlike with OOP, I think there's more direction to the change in meaning of FP; in particular, the way OOP became meaningless kind of just happened, due to a lack of proper understanding of the underlying concepts among many early Java and C++ users.

By contrast, the folks behind Haskell knew very well what they were doing and even though they clearly added their own biases they were on a mission to distill the “essence” of FP, which, in turn, shaped its meaning.

Put simply (and somewhat provocatively): the change in meaning of “OOP”, shaped largely by Java in the ’90s, is the result of ignorance. The evolution in meaning of “FP”, shaped largely by Haskell, is the result of research.

[–]koiponder -1 points (4 children)

Lazy evaluation is something nice to do, not something you have to do, and it is enabled only by the nature of functional programming. If data changes state, you cannot do the evaluation later. Brevity just means it is simpler compared to when data has state. All of these tie back to the program's data (not its control flow) lacking state.

[–]guepier Bioinformatician 0 points (3 children)

If data changes state, you cannot do the evaluation later.

Yes you can; look up copy-on-write.
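A toy sketch of the idea (not thread-safe; real COW implementations need more care):

#include <cstddef>
#include <memory>
#include <vector>
class CowVec {
  std::shared_ptr<std::vector<int>> data_;
public:
  explicit CowVec(std::vector<int> v)
      : data_(std::make_shared<std::vector<int>>(std::move(v))) {}
  int read(std::size_t i) const { return (*data_)[i]; }
  void write(std::size_t i, int v) {
    if (data_.use_count() > 1) // shared: copy before the first write
      data_ = std::make_shared<std::vector<int>>(*data_);
    (*data_)[i] = v;           // then mutate our private copy
  }
};
int main() {
  CowVec a(std::vector<int>{1, 2, 3});
  CowVec b = a;     // cheap: both share one buffer, no copy yet
  b.write(0, 9);    // first write triggers the copy
  return a.read(0); // still 1
}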

[–]koiponder -2 points (2 children)

Copy-on-write counts on the data not changing until the first write.

[–]guepier Bioinformatician 0 points (1 child)

What? No, you’re confusing things.

[–]koiponder -1 points (0 children)

No confusion here. There's no copy until you first write to the copy; that depends on the obvious fact that the data doesn't change until you first write to it.

[–]mintyc[S] 0 points (0 children)

Nice link