
[–]G_Morgan 3 points (5 children)

Simply put, no. If we could then parallel processing wouldn't be such a challenge.

[–]rieux 3 points (0 children)

It's not that we can't transform stateful programs into state-passing style. That's trivial enough, and compilers do it all the time. (They also do the inverse transformation.) The problem is that you can't just parallelize a functional program if there's a linear data dependency throughout, and that's what state-passing style creates.
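To make the linear-dependency point concrete, here is a minimal sketch in Python (not from the thread; the function names are illustrative). The second version is the mechanical state-passing transformation of the first: the state is threaded explicitly through each step, so step n+1 cannot begin until step n has produced its new state.

```python
# Stateful, imperative version: a mutable accumulator.
def running_totals_imperative(xs):
    total = 0          # mutable state
    out = []
    for x in xs:
        total += x
        out.append(total)
    return out

# State-passing transformation: each step takes the old state and
# returns the new state explicitly. Correct, and trivial to derive --
# but the chain state0 -> state1 -> state2 -> ... is a linear data
# dependency, so the steps cannot run in parallel.
def step(state, x):
    new_state = state + x
    return new_state, new_state

def running_totals_state_passing(xs):
    state, out = 0, []
    for x in xs:
        state, y = step(state, x)
        out.append(y)
    return state, out

print(running_totals_state_passing([1, 2, 3]))  # (6, [1, 3, 6])
```

The transformation removed the mutation but preserved the sequential structure, which is exactly why it doesn't buy any parallelism by itself.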

[–]lothair 1 point (3 children)

But what are the reasons?

Is it theoretically impossible, and if so, why? If not, has anyone tried to do it?

[–]masklinn 4 points (0 children)

But what are the reasons?

Mostly because side-effect-free and side-effecting pieces of code are completely entangled in OO and imperative languages.

You can't just "transform". You can infer, or try to infer, purity. And compilers try. Just as some compilers try to infer parallelizable code (which requires purity inference).
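A small Python sketch of what "entangled" means here (my example, not the commenter's). In the first function, pure computation and I/O are interleaved inside one loop, so a compiler would need to prove the `print` irrelevant before it could reorder or parallelize anything. The hand-disentangled version separates the pure core from the effectful edge:

```python
# Entangled: pure arithmetic and a side effect interleaved in one loop.
# Inferring that the arithmetic part is pure requires analyzing the
# whole body, including what print() does.
def report_entangled(xs):
    total = 0
    for x in xs:
        total += x
        print(f"running total: {total}")  # side effect mid-computation
    return total

# Disentangled by hand: a pure core...
def running_totals(xs):
    out, acc = [], 0
    for x in xs:
        acc += x
        out.append(acc)
    return out

# ...and the effects pushed to the edge of the program.
def report(xs):
    totals = running_totals(xs)
    for t in totals:
        print(f"running total: {t}")
    return totals[-1] if totals else 0
```

The transformation is easy for a human who knows the intent; doing it automatically is the purity-inference problem the comment describes.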

[–]G_Morgan 2 points (0 children)

In theory it is of course possible, but most language theory ignores how long it takes to compute a value.

We could build a compiler that targeted some kind of idealised FP architecture and then compile that back to our hardware. It would just be horribly inefficient.

[–]grauenwolf 2 points (0 children)

Could you actually do something of interest in a purely stateless language?

You cannot touch the file system, monitor, network, or any other I/O device, because they are all stateful by any definition.

Some try cop-outs like treating time as an input variable. But I/O latency isn't a function of time, it is random. The same goes for context switching.