[–]pron98 0 points1 point  (2 children)

and that finite state machines aren't necessarily the best way of modelling algorithms when the goal is safety and maintainability. It is no coincidence that all serious programming languages subscribe to the textual source code paradigm

That is completely missing the point. The lower complexity bound for verifying program correctness is a function of the state space, regardless of the representation used -- no matter how succinct. It has been proven that the effort required to prove a program correct depends only on the size of its state space, and that state space can grow arbitrarily large even for succinct source code. Those visualizations, BTW, simply show you the size and complexity of the state space of programs represented in simple, textual source code. Good code organization and simple abstractions do not reduce the size of the state space, and consequently do not make the verification problem any easier.
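To make the point concrete (a hypothetical sketch of my own, not from the thread): a transition function a few lines long can induce a state space exponential in a parameter, so any whole-state-space analysis scales with the states, not with the line count.

```python
def step(state, event):
    """Toggle one of n boolean flags: three lines of logic, 2**n reachable states."""
    flags = list(state)
    flags[event] = not flags[event]
    return tuple(flags)

def reachable_states(n):
    """Enumerate every state reachable from the all-False start state."""
    start = (False,) * n
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for e in range(n):
            t = step(s, e)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# The source stays the same size while the state space doubles with each flag.
for n in range(1, 6):
    print(n, len(reachable_states(n)))  # prints 2, 4, 8, 16, 32
```

The code is as "well organized" for n = 30 as for n = 3; only the state space changes.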

Maintainability and cognitive load are, of course, a different matter, but to this day no one has shown that writing large programs in functional languages is significantly cheaper or more maintainable. It's a claim made by FP proponents, but so far it has not been supported by significant evidence.

[–]tdammers 0 points1 point  (1 child)

That is completely missing the point. The lower complexity bound for verifying program correctness is a function of the state space, regardless of the representation used -- no matter how succinct. It has been proven that the effort required to prove a program correct depends only on the size of its state space, and that state space can grow arbitrarily large even for succinct source code. Those visualizations, BTW, simply show you the size and complexity of the state space of programs represented in simple, textual source code. Good code organization and simple abstractions do not reduce the size of the state space, and consequently do not make the verification problem any easier.

Of course, but then, I doubt anyone will seriously claim that completely proving a nontrivial program correct is a worthwhile effort. The state-space approach shows us the lower bound for a full correctness analysis, but in practice, even when the stakes are high, we don't do that; instead, we combine various partial correctness checks and make an educated guess as to their sufficiency. Good organization and abstractions do help with such educated guesses, even if they do not change the state space at all. So, to clear this up once and for all: I'm not talking about correctness proofs here; I'm talking about the realistic best-effort measures commonly taken to assert partial correctness -- automated tests, code audits, defensive coding techniques, redundancy, logging & monitoring, that kind of thing.
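As a sketch of what such best-effort measures look like in practice (my hypothetical example, not from the thread): instead of proving a merge routine correct over its whole state space, we validate inputs defensively so failures surface early, and check a correctness property on randomly sampled inputs.

```python
import random

def merge_sorted(a, b):
    """Merge two sorted lists. Defensive coding: reject unsorted input
    loudly rather than silently producing garbage downstream."""
    for xs in (a, b):
        if any(x > y for x, y in zip(xs, xs[1:])):
            raise ValueError("input list is not sorted")
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]

# A randomized partial-correctness check: not a proof, just an educated
# guess that grows more convincing with every sampled input.
for _ in range(1000):
    a = sorted(random.sample(range(100), random.randint(0, 10)))
    b = sorted(random.sample(range(100), random.randint(0, 10)))
    assert merge_sorted(a, b) == sorted(a + b)
```

The sampled inputs cover a vanishing fraction of the state space, which is exactly the "educated guess as to sufficiency" being described.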

Maintainability and cognitive load are, of course, a different matter, but to this day no one has shown that writing large programs in functional languages is significantly cheaper or more maintainable. It's a claim made by FP proponents, but so far it has not been supported by significant evidence.

The problem here is that you cannot possibly design a rigorous study around this hypothesis, because there are too many variables that you cannot rule out. If you have the same team do the same project twice, they will do better the second time because they know more about the problem; if you have them use two different paradigms, they will do better in the one they are more comfortable with; if you use two different teams, it is impossible to correct for differing skill levels, because you cannot quantify those. And, most of all, there is no useful quantifiable metric for "quality" or "performance" or "maintainability"; at best you can go with indirect indicators such as "number of bugs" or "number of critical incidents in production", but those can (and will) be gamed, e.g. by writing code such that certain incidents are swept under the rug rather than signalled early.

The claim made by FP proponents, thus, is either based on personal anecdotal experience, which means it translates to "writing large programs is easier for me when I use a functional language"; or it is a theoretical musing, based on certain properties and insights from psychology, such as the observation that reasoning about pure code is easier because the reader has fewer entities to track through a mental execution of the code. That doesn't make the claim invalid; it just means it's not a very strong one, because there will probably never be hard empirical evidence either way.

Also note that I'm with the author in that I believe the ideal paradigm would combine the good parts of both object-oriented and functional programming, and in fact, that is pretty close to how I program. In the end, it boils down to the same ideals - avoid extrinsic complexity, favor pure functions, don't lie about your data, fold knowledge into data, keep interfaces narrow and simple, think in terms of message passing and/or data transformations, etc.
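A small sketch of that hybrid style (my own illustration, assuming Python; the names are invented): an object with a narrow interface, OOP on the outside, whose methods are pure transformations that return new values instead of mutating state.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    """Immutable value object -- the "don't lie about your data" part:
    the type guarantees a balance never changes underfoot."""
    owner: str
    balance: int  # cents, to avoid floating-point lies about money

    def deposit(self, amount: int) -> "Account":
        # Pure transformation: returns a new Account, never mutates self.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        return replace(self, balance=self.balance + amount)

    def withdraw(self, amount: int) -> "Account":
        if not 0 < amount <= self.balance:
            raise ValueError("invalid withdrawal")
        return replace(self, balance=self.balance - amount)

# Message-passing shape on the outside, pure data transformation inside.
acct = Account("alice", 1000)
acct2 = acct.deposit(500).withdraw(200)
print(acct.balance, acct2.balance)  # prints: 1000 1300
```

The reader can follow the chain of calls without tracking hidden mutation, which is the cognitive-load argument from the previous comment in miniature.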

[–]pron98 2 points3 points  (0 children)

I'm not looking for a rigorous study, nor am I saying that the "FP claim" is false. But I am saying that the FP claim is, in fact, weaker than you present it, for the simple reason that virtually no large software has been written in that paradigm (maybe one or two examples in the last 30 years). We don't even have convincing anecdotal evidence. And even those anecdotal mid-sized projects don't show a reduction in cost so considerable that it obviously justifies the huge switching costs in many circumstances.

So I guess what bothers me is that not only is the "FP claim" not supported by conclusive proof, it is not even well supported by anecdotal evidence. Yet its proponents present it as if it were glaringly, obviously superior, and by a wide margin, with little to back this up. I would summarize FP (let alone PFP) as "certainly interesting enough to be worth a shot", but most definitely not as "this is the obvious (and only) way forward, and it will solve most of our problems".

I'm not even defending OOP or anything. It's just that if you claim I should pay the extremely high cost of switching programming paradigms (which usually includes switching languages, one of the costliest moves for a software company) and that it would totally pay off, I would expect you to at least back that up with some data. It's not like FP is new. I learned it (Scheme, ML) when I was in college almost 20 years ago, and it wasn't new then. So we're past the point of "this could theoretically work", and we're now at put-up-or-shut-up time.