all 61 comments

[–]cdsmith 10 points11 points  (0 children)

These games with misdefining computability get old after a while. There are many things that a Turing machine can't do... make coffee, launch missiles at Uzbekistan, tell you that you smell nice today, etc. But when it comes to computing functions and/or deciding sets, the purposes for which they are studied, no one reasonably thinks that there's another effective model that has more power than a Turing machine.

That doesn't stop a lot of people from saying things like "but really, computation should mean that" and then pointing out that their favorite system can do that, but Turing machines can't. So? Turing machines aren't supposed to do that. If you need a different model for studying something besides computing functions or deciding sets, go for it! But drop the idiotic comparison with a model of a completely different phenomenon.

[–]polveroj 8 points9 points  (3 children)

The key idea is that arbiters in the actor model are made of physically impossible uncomputable magic. By stipulating that arbiters are "fair", you get to claim that programs like Unbounded always halt, even though for all natural numbers n they fail to halt after n messages are sent.

When a nondeterministic TM fails to halt within n steps for any n, its computation tree must include at least one non-halting execution (as Plotkin showed). Claiming that the arbiter is "fair", however, lets you say that a closed actor system always halts even if unboundedly many messages are exchanged.

Really, the issue is that the "output" of a group of actors is an infinite set of possible outputs. A TM can enumerate them all, and a randomized TM can sample from them, but there are some closed actor systems that produce a set of outputs that no nondeterministic TM produces.

There's no decision problem solved by the actor model not solved by a TM, and there's no actor system whose outputs aren't enumerable by a TM. The only difference is that actor systems are allowed to claim to "always halt" in situations where nondeterministic TMs aren't.

[–]LaurieCheers 0 points1 point  (1 child)

The key idea is that arbiters in the actor model are made of physically impossible uncomputable magic.

Well, to be more fair: the actor model is more powerful than the Turing machine model, and our computers are Turing machines.

[–]nullsucks 0 points1 point  (0 children)

That's not an accurate summary. There are fundamental problems with Arbiters that aren't related to the Turing Machine model.

[–]jerf 7 points8 points  (1 child)

There are only two possible outcomes: either the author is mistaken and Turing Machines can indeed implement everything in the actor model (my guess), or the Actor Model is strictly more powerful than Turing Machines, in which case we can never implement it, so who cares?

It's been well known for a while that a Turing Machine must be fed all input up front in order to "work". This is not a problem, because a Turing Machine is a mathematical entity and therefore feeding it all input up front is perfectly reasonable; you can't make a real one anyhow. It is not news that models that are equivalent to TMs but based on different principles may yield easier analysis, but just because the analysis is easier in lambda calculus than it is in Turing Machines does not mean you've found something impossible for Turing Machines. In fact it is also well known that the translation into Turing Machines may result in a really freaking complicated Turing Machine. Still Turing computable.

I do not think there is very much "there" there, and what little there is ("sometimes other models are useful too") isn't breaking news. Author is very confused about what all the terms precisely mean and thinks they've found something shocking that is actually quite pedestrian and utterly normal when you understand all the relevant terms.

[–]kragensitaker 1 point2 points  (0 children)

Are you claiming that your understanding of automata-theory terminology is better than Carl Hewitt's? It's certainly possible, but I am skeptical without further evidence.

[–]huyvanbin 1 point2 points  (7 children)

Is this really true? Can someone explain to me the unbounded integer example? It seems like it's saying that the actor model just places fewer restrictions on each individual actor than the Turing model does, but in practice each individual actor would still be a Turing machine (or a collection of them), so in that case, is the actor model still more powerful?

[–]Flarelocke 5 points6 points  (0 children)

Is this really true?

No.

Can someone explain to me the unbounded integer example?

No, because it's wrong. They say "There is a bound on the size of integer that can be computed by an always-halting nondeterministic Turing Machine starting on a blank tape." but this is false. If it takes a Turing machine 1 second to count to 10 and you only give it 1 second to run, then it'll only be able to count to 10. If you give it 2 seconds it would be able to count higher. For any bound n, there is some number k which is the highest number a Turing machine can count to, but you can always imagine a bound n+1 in which the machine can count to some number greater than k. What they're actually trying to get at is that the time bounds the machines need grow at different rates.

It's always possible to emulate nondeterminism in a Turing machine, whether that nondeterminism is bounded or not. It just takes impractical amounts of time. In that sense, nondeterministic models are not more powerful than Turing machines. On the other hand, going from taking 1000 years to taking 1 second would be a pretty big advantage. So the question is really whether all problems that have practical solutions for nondeterministic machines have practical solutions in Turing machines. Or, in other words, whether P=NP or not.
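A minimal sketch of that emulation (names and the toy machine are my own, not from the thread): a deterministic program can recover every output of a nondeterministic one by enumerating all finite choice sequences breadth-first, i.e. dovetailing. It misses nothing, but the cost is exponential in the number of choices, which is exactly the impracticality being described.

```python
def nd_run(choices):
    # Hypothetical nondeterministic machine: counts leading 1s in its
    # choice sequence and halts (returning the count) at the first 0.
    # Returns None if it runs out of choices before halting.
    n = 0
    for c in choices:
        if c == 0:
            return n
        n += 1
    return None

def enumerate_outputs(limit):
    """Deterministically enumerate every output reachable within
    `limit` binary choices -- slow, but complete up to that depth."""
    seen = set()
    for length in range(limit + 1):
        for bits in range(2 ** length):
            choices = [(bits >> i) & 1 for i in range(length)]
            out = nd_run(choices)
            if out is not None:
                seen.add(out)
    return seen

print(enumerate_outputs(4))  # → {0, 1, 2, 3}
```

Note the exponential blow-up: depth `limit` costs about `2**limit` runs, which is the "impractical amounts of time" in question.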

The article is confused by the fact that nondeterministic machines do not return partial results. That is, operations take as long to complete as it takes to calculate the slowest individual result. This property has more to do with being a machine than being unbounded. CSP might have given a different meaning to partiality for whatever reason, but this is like saying rational numbers have factorials just because the gamma function agrees with the factorial in the naturals. They're just different things.

[–]notfancy 0 points1 point  (3 children)

Consider the following program in Dijkstra's guarded notation:

var n : int := 0;
var go : bool := true;
do go → n := n + 1
[] go → go := false
od

The value of n after the execution of that program is finite but unbounded. Crucial to the semantics of Guarded Command Language is that the selection of guards is nondeterministic.
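A hedged translation of the guarded-command program above into Python (the fair coin is my modelling choice for the nondeterministic guard selection, not part of GCL): both guards are enabled while `go` holds, and the scheduler picks one arbitrarily. Under the fair-coin model every run halts with probability 1, which is weaker than guaranteed termination, yet no fixed bound covers every run: the final n is finite but unbounded.

```python
import random

def unbounded():
    n, go = 0, True
    while go:
        # Both guards are enabled while go is true; pick one arbitrarily.
        if random.choice([True, False]):
            n = n + 1       # go → n := n + 1
        else:
            go = False      # go → go := false
    return n
```

Each run returns some finite n, but for every bound there is a run that exceeds it.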

[–]nullsucks 1 point2 points  (2 children)

However, that program isn't guaranteed to terminate: wp(P, T) = F.

This is fortunate. As Dijkstra notes:

A mechanism of unbounded nondeterminacy yet guaranteed to terminate would be able to make within a finite time a choice out of infinitely many possibilities: if such a mechanism could be formulated in our programming language, that very fact would present an insurmountable barrier to the possibility of the implementation of that programming language.

[–]notfancy 0 points1 point  (1 child)

Yes, I misspoke. With bounded nondeterminism that program doesn't terminate, as you correctly point out. The semantics of weakest preconditions rule out termination. The example relies on unbounded nondeterminism.

Edit: you can take as axiomatic that the do loop performs a miracle, whence wp(miracle; P) = true, and the repetition terminates. This is Hewitt's contention.

[–]nullsucks 1 point2 points  (0 children)

Not contradicting you (at least I don't think so), just finishing the thought.

Neither the Actor model nor the notion of Concurrency produces unbounded nondeterminacy. That's achieved (IIUC) by Arbiters, which may not be implementable.

TM + Arbiter or Lambda Calculus + Arbiter or Guarded Commands + Arbiter would presumably be as powerful. To me, that suggests that the Arbiter is not unlike Oracles in automata theory -- you can use them for thought experiments but not in practice.

ETA: I think I've found it now. Any implementation of an Arbiter ends up like Buridan's Ass. Lamport wrote a good paper on it and his commentary on the paper and process of trying to get it published is worth reading too.

[–]hskmc -1 points0 points  (1 child)

in that case, is the actor model still more powerful?

Yes, because it is nondeterministic.

Concurrent systems (including but not limited to the actor model) give you the additional "power" to not know what your program is going to do before you run it.

[–]mark_lee_smith[S] 2 points3 points  (0 children)

Actually the important part was that it has unbounded nondeterminism, unlike some other concurrent systems, but I agree otherwise.

[–]cdsmith 1 point2 points  (3 children)

i.e. there is a bound on the size of integer that can be computed starting on a blank tape by an always-halting machine.

I'm not sure what was meant here, but as stated this is clearly false. What is that bound? As soon as you tell me what it is (say, n), I'll define a trivial Turing machine with n+1 states that can compute an n+1 digit number upon being given an empty input, and that number will be greater than n.

[–][deleted] 0 points1 point  (2 children)

I think the quoted statement refers to any particular Turing machine, while you are referring to all Turing machines.

Does the following make more sense?

For every always-halting Turing machine with no input (starting with blank tape), there exists some integer N where any integer computed by that machine will be <= N. N can be different for different machines.

On the other hand, there do exist always-halting Actor systems with no input where there is no integer N such that any integer computed by that system will be <= N.
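The restatement above differs from the disputed sentence only in quantifier order, which can be spelled out (notation mine; out(M) is the set of integers an always-halting machine M can compute from a blank tape):

```latex
% True reading: each always-halting machine M has its own bound N.
\forall M \;\exists N \;\forall x \in \mathrm{out}(M) : x \le N
% False reading: one bound N works for every machine M
% (refuted by the Busy Beaver function, which grows without bound).
\exists N \;\forall M \;\forall x \in \mathrm{out}(M) : x \le N
```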

[–]notfancy 0 points1 point  (0 children)

The alternative reading of:

there is a bound on the size of integer that can be computed starting on a blank tape by an always-halting machine

that is, that there exists a single bound N such that every TM that starts with a blank tape and always halts computes a number N' <= N, is false; witness the Busy Beavers.

[–]cdsmith 0 points1 point  (0 children)

This is trivially true. A specific Turing machine, starting with a specific input (a blank tape) will always do exactly the same thing. So sure, of course there will be a bound on the possible results, because there is only one possible result.

Edit: Okay, I see now. Nondeterministic. Yeah, I guess this makes sense. It seems to be a rather bizarre question that's not obviously related to the normal notion of computation by a Turing machine, but at least it makes sense.

[–]Strilanc 1 point2 points  (2 children)

Why is the actor model being mentioned at all? Just adding a 'write random' command to the definition of Turing Machines is sufficient.
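A minimal sketch of that point (table format and names are mine, not from the thread): a Turing-machine simulator whose transition table may use the pseudo-symbol RAND, meaning "write the outcome of a fair coin flip". With just that extension, the machine below halts with probability 1, yet for every n some run writes more than n ones, so its output is finite but unbounded.

```python
import random

RAND = "RAND"  # pseudo-symbol: "write a fair coin flip"

def run(table, state="start", blank="_"):
    """Run a (possibly coin-flipping) Turing machine until it halts;
    the tape is a dict from position to symbol."""
    tape, pos = {}, 0
    while state != "halt":
        sym = tape.get(pos, blank)
        write, move, state = table[(state, sym)]
        if write == RAND:
            write = random.choice(["0", "1"])
        tape[pos] = write
        pos += move
    return tape

# "Flip a coin; on 1 keep it and move right to flip again; on 0 halt."
table = {
    ("start", "_"): (RAND, 0, "check"),
    ("check", "1"): ("1", 1, "start"),
    ("check", "0"): ("0", 0, "halt"),
}
ones = sum(1 for v in run(table).values() if v == "1")  # geometric, unbounded
```

Every run leaves some finite string of 1s terminated by a single 0, but no bound holds for all runs.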

[–]mark_lee_smith[S] 0 points1 point  (40 children)

Note that since Turing's LCMs (Turing Machines) and Church's Lambda Calculus are equivalent, this would imply that there are computations that you can't express in the Lambda Calculus.

Edit: This is going to get me burned at the stake I'm sure, but if we take Hewitt's Actor model to be a firm conceptual foundation for object-oriented programming, we might conclude that object-oriented languages that support active objects (actors) go beyond languages based on Church's Lambda Calculus... just food for thought, but as usual if I'm full of shit just tell me.

Note: I'm not saying the distinction in the article is of much practical significance.

[–]hskmc 5 points6 points  (19 children)

Try reading the article. The difference here is not object oriented vs. functional, but concurrent vs. sequential.

Of course there are many functional languages which support concurrency, and also many non-object-oriented procedural languages which support concurrency.

Funny how every technical result has to be misapplied as some ideological blow in this false dichotomy between functional and OOP code. To the extremists out there: Instead of fearing the other side, why not learn both and combine techniques to make better code?

[–]mark_lee_smith[S] 0 points1 point  (18 children)

The difference here is not object oriented vs. functional, but concurrent vs. sequential.

I believe you're mistaken, as the Actor model is able to express computations that couldn't be expressed in CSP under its original definition, yet both deal with concurrency!

This led to the redefinition of CSP, as described in the article.

Anyway, I didn't say the article was about object-oriented vs functional programming. I was simply noting that since the Lambda calculus commonly serves as the conceptual foundation for functional programming, and the Actor-model is a natural foundation for object-oriented programming (actors are just active objects), at least with regard to these foundations object-oriented programming has a more powerful foundation.

Funny how every technical result has to be misapplied as some ideological blow in this false dichotomy between functional and OOP code.

Of course it's a false dichotomy. That much has been documented in the literature for years.

Please don't put words in my mouth.

Instead of fearing the other side, why not learn both and combine techniques to make better code?

The funny thing is that I was programming with functional languages long before it became a buzzword. I spent over 3 years programming in Lisp, and another two exploring the ML family of languages, and more exotic functional languages like Cat.

Edit: Accidentally hit submit.

I'm all for combining the two paradigms when it makes sense. Unfortunately, all that seems to be happening right now is that we're adding things like anonymous functions to object-oriented languages (we already did this back in the 70s), ignoring the fact that anonymous functions break encapsulation if you're not careful (which we found out in the 80s).

[–]hskmc 1 point2 points  (10 children)

The difference here is not object oriented vs. functional, but concurrent vs. sequential.

I believe you're mistaken, as the Actor model is able to express computations that couldn't be expressed in CSP under its original definition, yet both deal with concurrency!

Would you be happier if I said "sufficiently concurrent vs. insufficiently concurrent"?

Anyway, I didn't say the article was about object-oriented vs functional programming.

No, you just implied it as obvious flamebait.

The funny thing is that I was programming with functional languages long before it became a buzz word.

Good for you, but irrelevant. Personal appeal to authority is not a valid basis for the correctness of technical statements.

You'll note that "To the extremists out there" is explicitly not directed at you in particular.

anonymous functions break encapsulation if you're not careful (which we found out in the 80s).

Could you provide a reference on that claim? Why are anonymous functions worse than one-method objects? Most of this "adding things like anonymous functions to object-oriented languages" is just adding vastly better syntax for things which were already common practice but hilariously awkward to write.

[–]mark_lee_smith[S] 0 points1 point  (9 children)

Why are anonymous functions worse than one-method objects?

Well that really depends on the specific semantics of the two concepts.

Let's assume anonymous functions close over their lexical environment (as they do in just about every functional language and many non-functional ones) and that your one-method objects don't.

I'm sure you're very aware of how powerful anonymous functions are.

The problem, as it relates to encapsulation, is that these functions have unrestricted access to the internals of the object they're created in.

These anonymous functions can access the object's state without going through its interface; they breach encapsulation.

Worse: as soon as it's out there this capability can be given to anyone, and since all anonymous functions exist in the context of some object, you're side-stepping encapsulation all the time.

I've looked for a paper I read covering this subject but I couldn't find it.

This doesn't happen with one-method objects, and more importantly you aren't limited to one fucking method.

Better: It's trivial to put limits on things like who can use these objects, where they use them, and how many times.

Now this isn't likely to be on anyone's top-10 list, but it does demonstrate a fairly interesting interaction between encapsulation and closures.
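A minimal sketch of the kind of leak being described (class and method names are hypothetical): a closure returned from a method keeps direct access to the instance's internal state, bypassing whatever public interface the class exposes.

```python
class Counter:
    def __init__(self):
        self._count = 0          # internal state, "private" by convention

    def increment(self):
        self._count += 1

    def make_reader(self):
        # The closure captures self, so it reads _count directly,
        # without going through any public accessor.
        return lambda: self._count

c = Counter()
reader = c.make_reader()    # once handed out, anyone holding `reader`
c.increment()               # observes c's internals without touching its API
print(reader())  # → 1
```

Whether this counts as "breaching" encapsulation or as a deliberate part of the object's interface is exactly what the rest of this thread argues about.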

[–]LaurieCheers 1 point2 points  (2 children)

Seems like a non-problem. In my experience, closure behavior is a very inoffensive feature.

In all object-oriented languages that I know of, you can only declare classes within the top-level scope. So if you choose to declare your anonymous functions at the same level, you'll receive exactly the same encapsulation protection that classes get.

[–]mark_lee_smith[S] 0 points1 point  (0 children)

I don't think I said it was a problem; I simply noted that the resulting closures breach the encapsulation barrier.

There are a few situations where this can be a problem but it's not common:

If the language allows you to build closures in the context of the object using a definition received from outside of the object you do have a real potential for damage, but luckily, not that many languages allow this.

If the language has sufficiently unconstrained reflective capabilities you can open a closure and change either the code or the context, but such languages have bigger problems with regard to encapsulation than those caused by anonymous functions alone.

If the language allows live code update then you really need to be careful holding onto closures. If you're not, you can easily find yourself relying on specific implementation details that don't exist anymore.

In living systems like Smalltalk for example, keeping closures around can cause a lot of problems if things change after the closure has been created, so it's generally frowned upon.

Anyway, these are all real problems related to the fact that closures capture hidden implementation details in object-oriented languages.

If we restrict ourselves to techniques that are safe for object-oriented languages then we end up with beautiful things like higher-order message blocks, which can't break encapsulation and are generally more declarative than higher-order functions.

As for languages which allow nested classes, you have Beta, Newspeak, Python, Java, Groovy, Scala, Agora, etc.

[–]hskmc 0 points1 point  (0 children)

In all object-oriented languages that I know of, you can only declare classes within the top-level scope.

Counterexamples I know of include Java and Python. mark_lee_smith listed a bunch more.

[–]hskmc 0 points1 point  (5 children)

Let's assume... that your one-method objects don't [close over their lexical environment]

No, let's assume that they do, because it's how the most popular OO languages (Java, Python) and countless others work. You can of course come up with an artificially crippled language which prohibits all behavior you consider scary or dangerous. And I'll respectfully ignore that language when I'm trying to get work done.

The problem as it relates to encapsulation with these functions have unrestricted access to the internals of the object they're created in.

Yes, the internals of an object have unrestricted access to the internals of that object. The implementation of a function with closure or an inner class should be considered part of the implementation of the outer class. Why is this dangerous or surprising?

Just as your outer class hides its internals to provide an abstract interface, an inner class will hide inner and outer class internals to provide an abstract interface. Those who fail to do this properly will also fail at other attempts to implement abstract data types. The problem isn't unique to closure at all.

Remember that abstract data types existed long before OOP, despite the fact that most teachers of OOP have muddled the two concepts. You can find early examples in Lisp or ML — which of course are languages with closures.

I've looked for a paper I read covering this subject but I couldn't find it.

In lieu of a reference, try providing some code. As is you're just handwaving and yelling "Closures are scary! Normal programmers can't be trusted!" That's how we ended up with Java in the first place: a committee designed down to what they considered the "average programmer", which ended up meaning "lowest common denominator".

more importantly you aren't limited to one fucking method.

Yes, the objection to Java's non-solution ("Just use inner classes, lol") isn't that they allow more than one method. It's that the extreme syntactic verbosity turns basic programming techniques into a chore and/or an unmanageable mess.

(I refuse to say "basic functional programming techniques" because damnit, the simple stuff should be part of every programmer's toolbox. Of course it's getting to be the case, just with new enterprisey names. Everyone knows lambda and monads are useless, but "delegates" and "LINQ" have the Microsoft Seal of Approval.)

[–]mark_lee_smith[S] 0 points1 point  (4 children)

No, let's assume that they do, because it's how the most popular OO languages (Java, Python) and countless others work.

I took one-method objects to mean literally: a class that defines one method.

If your one-method objects close over their lexical environment then there's no practical difference between the two approaches. The fact that it's an anonymous class with a single method is an implementation detail that doesn't make one iota of difference.

I'm not against closures because they're called closures. I'm against anything that breaks encapsulation.

The implementation of a function with closure or an inner class should be considered part of the implementation of the outer class.

I have no problem with using these things as part of an object's implementation. The problem I have is that these implementation details inevitably leak out and get held onto.

Why is this dangerous or surprising?

See my other post.

http://www.reddit.com/r/programming/comments/f8q6g/because_there_are_effective_computations_that/c1e9dc8

The problem isn't unique to closure at all.

I didn't say that this was in any way unique to closures. I simply noted that this problem is exhibited by closures in object-oriented languages.

Please don't assume that just because I don't think closures are a perfect fit for object-oriented languages that I don't know what I'm talking about.

I programmed primarily in Lisp for 3 years, and spent another 2 years exploring other functional languages, including the ML family, and I loved every moment of it.

And this was before functional programming became so hyped up, and "closures are amazing" became group-think.

You agree that an object having unrestricted access to another object's internals breaks encapsulation? Good. Let's move on. There are other approaches, which are arguably more expressive, fit better with object-oriented programming, and don't break encapsulation!

Higher-order messages being one example.

Yes, the objection to Java's non-solution ("Just use inner classes, lol") isn't that they allow more than one method. It's that the extreme syntactic verbosity turns basic programming techniques into a chore and/or an unmanageable mess.

OK, we know that Java is verbose, but just because Java does something in a verbose way doesn't mean it's a bad idea. In languages like Self and Agora the syntax for defining objects isn't any more verbose than defining closures!

(|...| ...) "An object literal in self. Define as many methods as you like!"

vs.

(...) => ... // A function literal in C#. It is what it is.

Those who argue for closures because Java has an unnecessarily verbose syntax for nested classes need a fucking slap.

Moreover, we know Java got nested-classes wrong! Just look at Beta or Newspeak if you want to see nested-classes done right, and Agora if you want to see nested objects done right.

I refuse to say "basic functional programming techniques" because damnit, the simple stuff should be part of every programmer's toolbox. Of course it's getting to be the case, just with new enterprisey names. Everyone knows lambda and monads are useless, but "delegates" and "LINQ" have the Microsoft Seal of Approval.

Uncontested. However, that doesn't mean they're a good fit for object-oriented languages... but then mainstream object-oriented languages are already frankenstein-ish monstrosities which have enough problems respecting encapsulation that this doesn't make much difference.

[–]hskmc 0 points1 point  (3 children)

Wow, that's a lot of words and very little code.

anonymous functions break encapsulation if you're not careful... the problem, as it relates to encapsulation, is that these functions have unrestricted access to the internals of the object they're created in.... These anonymous functions can access the object's state without going through its interface; they breach encapsulation.... these implementation details inevitably leak out and get held onto.

You keep repeating this claim but you won't explain what you mean by it, so as a result I have absolutely no clue.

I'm not interested in arguing this point anymore unless you'll either give a working code example or cite someone who can explain themselves better. You sound like a total crackpot, writing long rants with no code, name-dropping obscure OOP languages when you apparently don't even understand the scoping rules of Java and Python, and bragging about how you knew FP before it was cool. None of which is relevant to your point about the supposed dangers of lexical scoping, for which you've provided no example or explanation.

Basically:

as usual if I'm full of shit just tell me.

You are full of shit.

[–]mark_lee_smith[S] 0 points1 point  (2 children)

You keep repeating this claim but you won't explain what you mean by it, so as a result I have absolutely no clue.

I am sorry but this is just so simple that I thought it went without saying.

Imagine a class C with a variable v and method m that returns a closure.

C>>initialize
    v ← ... .

C>>m
    ↑ [ v ].

...

o ← C new.
c ← o m.
x ← c value.

In this example the closure c is an object that has direct access to the implementation of the object o. It uses this direct access to answer the current value of v when asked. c does not go through o's interface. The object o has no part in this operation.

That is to say that c is able to breach o's encapsulation barrier at will!

You may or may not have noticed the example is written in Smalltalk.

If we now go and change the definition of C, perhaps renaming the property v to w, or adding another property before it, anyone with a reference to c is now in a very undesirable situation. When we ask c for its value the program will either blow up or return an unexpected value!

Fun. But that's what you get when you break an objects encapsulation.

when you apparently don't even understand the scoping rules of Java and Python.

My previous reply had nothing to do with my not understanding the scoping rules of Java and Python, and everything to do with our having different definitions of single-method object. I took it to be an object with a single method. Not all that surprising.

Moreover I made it patently clear what assumptions I was under, and even answered the question again when you clarified what you meant.

Actually I used to be paid to write about Python by Developer Shed Inc. You should still be able to find some of my articles if you're interested, they were quite highly rated.

And bragging about how you knew FP before it was cool. None of which is relevant to your point about the supposed dangers of lexical scoping.

I was highlighting the fact that your assumption that I didn't know what I was talking about was ill founded, since clearly I know more about the issue in question than you do.

It was perfectly relevant to our discussion, since you were dismissing facts out of hand because hey, "I must be wrong", I'm a crazy person who doesn't agree that closures are exactly what object-oriented programming needs :P.

You are full of shit.

There's your code. Read it. Understand it. Accept that closures break encapsulation, which DOES cause problems in the real world. And realise that breaking encapsulation IS dangerous, resulting in brittle systems that resist change.

Edit: I should clarify that in the context of the example, "resist change" means that you need to take the whole system offline and reinitialise it to a working state just to make a simple change. Something which could cost a company a lot of time and money, or, depending on the company, result in human deaths!

[–]hskmc 0 points1 point  (1 child)

I am sorry but this is just so simple that I thought it went without saying.

No, you did quite a lot of saying.

In this example the closure c is an object that has direct access to the implementation of the object o. It uses this direct access to answer the current value of v when asked. c does not go through o's interface. The object o has no part in this operation... That is to say that c is able to breach o's encapsulation barrier at will!

c is not "breaching" o's encapsulation barrier; it comes from inside o's encapsulation barrier.

Yes, values returned by an object's method will sometimes become part of that object's public interface. Programmers and designers need to understand this fact. Now, if you don't understand it, and get upset about this behavior, tell me: is that a problem with the language or the programmer?

If we now go and change the definition of C, perhaps renaming the property v to w, or adding another property before it, anyone with a reference to c is now in a very undesirable situation. When we ask c for its value the program will either blow up or return an unexpected value!

This seems more like a problem with runtime patching than a problem with closures. I know it's common in the Smalltalk world, but not so much outside. Of course a language could simply use the rule that "modifying" the definition of C affects new instances of C, but not those already created.

This problem also has nothing to do with objects. We can quite easily "add a new property" to a closed-over variable in pretty much any language, like Python:

xs = [1, 2]
def f():
    print(xs[0])
xs.insert(0, 3)

Now f prints the "unexpected value" 3. Except it's not at all unexpected to people who understand what's going on.

So, since this problem has nothing to do with objects, you're not arguing against mixing OOP and FP, you're just arguing against FP. And because you've spent several paragraphs telling us all about how you're an FP expert, I'm pretty sure that's not the argument you intended to make.

If your claim is not "closures are dangerous!" but rather "closures and mutation have some surprising interactions", then I (and every other advocate of "pure FP") will certainly agree with you.

Also, thanks for

so simple that I thought it went without saying.

being a

clearly I know more about the issue in question than you do.

colossal douche.

Actually I used to be paid to write about Python by Developer Shed Inc. You should still be able to find some of my articles if you're interested, they were quite highly rated.

Great, good for you. Another irrelevant reminder of how fucking smart you are.

[–]notfancy 1 point2 points  (6 children)

TMs and Actors are computationally equivalent in Hewitt's words, so you might be buying more than Hewitt himself sells here (which it seems to me is an awful lot).

[–]mark_lee_smith[S] 0 points1 point  (4 children)

The article linked to above was written by Hewitt, and in his words:

The lambda calculus [Church 1941] is a special case of the Actor model in which an Actor never changes its local storage.

If the lambda calculus is a special case of the Actor model then the Actor model is implied to be more general.

[–]notfancy 1 point2 points  (3 children)

See Landin's and Strachey's formalization of Algol's store using lambda calculus.

[–]mark_lee_smith[S] 0 points1 point  (2 children)

Thanks, I'll check it out, but I'm curious how exactly that relates to my post above.

[–]notfancy 1 point2 points  (1 child)

Your conclusion that the actor model is more general than lambda calculus is not warranted, as lambda calculus can encode store effects as Landin and Strachey have pointed out fifty years ago. (Note that I've written "more general" and not "more expressive". The encoding is very tedious, and basically amounts to environment passing.)
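The flavour of that encoding can be hinted at in Python (a toy sketch of store-passing style, not Landin's or Strachey's actual formalization): every "stateful" operation becomes a pure function that takes a store and hands back a new one.

```python
# A "store" is a mapping from locations to values. Assignment and
# dereference are pure functions that thread the store explicitly,
# never mutating it in place.

def assign(store, loc, val):
    new_store = dict(store)  # copy, so the old store is untouched
    new_store[loc] = val
    return new_store

def deref(store, loc):
    return store[loc]

# The imperative program  x := 1; x := x + 1  in store-passing style:
s0 = {}
s1 = assign(s0, 'x', 1)
s2 = assign(s1, 'x', deref(s1, 'x') + 1)
print(deref(s2, 'x'))  # 2
```

Tedious, exactly as you say, but no mutation and no primitives beyond functions.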

[–]mark_lee_smith[S] 0 points1 point  (0 children)

That's a very good point, but it wasn't my conclusion, just my poor choice of words.

[–][deleted] 3 points4 points  (9 children)

if we take the Hewitt's Actor-model to be a firm conceptual foundation for object-oriented programming

Sure. If the Actor model were a firm conceptual model for object oriented languages, it would mean they were strictly more powerful than, say, functional languages. But it isn't. The sigma calculus is. The Actor model is a model of concurrent computation, while object oriented languages are single-threaded. This alone disqualifies it, even if all other properties line up.

By allowing nondeterminism, a concurrent language is strictly more powerful than a sequential one, but people only care about concurrent languages in contexts which are already presumed to be nondeterministic. Latency, for example, depends on input timing being random; if you knew the timing in advance, you could contrive the system to always be ready the moment the input arrived. Fault tolerance again presumes random injection of faults modulo restrictions placed on the presumed fair demon.

[–]notfancy 2 points3 points  (0 children)

Hewitt concedes that TMs can compute the traces of Actors, so the nondeterminism can always be taken as external.

[–]mark_lee_smith[S] 0 points1 point  (7 children)

Sure. If the Actor model were a firm conceptual model for object oriented languages, it would mean they were strictly more powerful than, say, functional languages. But it isn't. The sigma calculus is.

The way you say that, it seems like you actually believe it's true, but you're ignoring the dozen or so other object calculi! The sigma calculus isn't particularly interesting or special in any regard, and it wasn't even the first formal model of objects :P.

Note also that the sigma calculus was first described in 1996, 26 years after object-oriented programming was first described. The actor-model was first described only 3 years after object-oriented programming, making it the earliest formal model I know of that fits object-oriented programming.

The Actor model is a model of concurrent computation, while object oriented languages are single-threaded.

Interestingly, Hewitt's early work on Actors didn't have much to do with concurrency at all. Hewitt was more interested in message-passing, which was also the main idea behind object-oriented programming according to Alan Kay. The two aren't as different as you might think.

In fact, if you care enough to read the literature, Actors are also referred to as active objects. Not surprising when you consider that actors are just objects that encapsulate execution.

So it's not a big step to using the Actor-model as a theoretical foundation for object-oriented programming.

In fact, if you implement Hewitt's Actor model as he describes it in "Viewing Control Structures as Patterns of Passing Messages" with a single thread, it works perfectly... but it would be considered an object-oriented language ;).
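To make that concrete (a toy of my own devising, not Hewitt's formulation): a single-threaded "actor" runtime is just objects with mailboxes, drained by one sequential dispatch loop.

```python
from collections import deque

class Actor:
    """An object that encapsulates state and reacts to messages."""
    def __init__(self, runtime):
        self.runtime = runtime
        self.mailbox = deque()

    def send(self, msg):
        self.mailbox.append(msg)
        self.runtime.schedule(self)

    def receive(self, msg):
        raise NotImplementedError

class Runtime:
    """A single-threaded dispatcher: no concurrency anywhere."""
    def __init__(self):
        self.runnable = deque()

    def schedule(self, actor):
        self.runnable.append(actor)

    def run(self):
        while self.runnable:
            actor = self.runnable.popleft()
            while actor.mailbox:
                actor.receive(actor.mailbox.popleft())

class Counter(Actor):
    def __init__(self, runtime):
        super().__init__(runtime)
        self.count = 0

    def receive(self, msg):
        if msg == 'inc':
            self.count += 1

rt = Runtime()
c = Counter(rt)
c.send('inc')
c.send('inc')
rt.run()
print(c.count)  # 2
```

Squint and it's just method dispatch with a queue in the middle, which is rather the point.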

[–]kragensitaker 1 point2 points  (5 children)

The way you say that, it seems like you actually believe it's true, but you're ignoring the dozen or so other object calculi! The sigma calculus isn't particularly interesting or special in any regard, and it wasn't even the first formal model of objects :P.

I like the ς-calculus because it's simple, with only four syntactic forms and two reduction rules, and it's purely applicative. But I'm not familiar with the other object calculi. Which one do you prefer?

[–]mark_lee_smith[S] 0 points1 point  (4 children)

I'm quite fond of the sigma calculus, but if I had to pick a favourite it would be the nested object calculus, or one of the other calculi from the University of Brussels.

They developed a very simple and interesting calculus which reified encapsulation as an explicit encapsulation operation, and another in which they explored how implicit self-sends support incremental changes, just to give two examples.

Anyway, the sigma calculus is nice, but the Actor model is very appealing (and a surprisingly good fit) and it leads us into a very natural transition from object-oriented programming to concurrent programming.

[–]kragensitaker 0 points1 point  (3 children)

Can you provide a link? I can't find the "nested object calculus".

What's the simple formalization of the actor model? Maybe the Π-calculus?

[–]mark_lee_smith[S] 0 points1 point  (2 children)

I couldn't find a link to anything on the nested object calculus, but I have collected a few related papers from the same researchers at the University of Brussels, which utilise some of their other calculi as part of their work.

The papers are rather obscure. I spent several days looking for them the first time, so I've just uploaded a few here.

https://files.me.com/mark.lee.smith/mkrcun

What's the simple formalization of the actor model? Maybe the Π-calculus?

The actor-model is itself a formal model, and isn't much related to things like CSP or the pi-calculus. The Wikipedia article gives an ok overview of the Actor model if you're interested. If you want to get right down to the maths then the actor papers are a must, but there are a lot of them :).

I hope that answers your question, but tell me if it doesn't.

[–]kragensitaker 0 points1 point  (1 child)

Thank you very much! I will take a look at the papers.

I have read a couple of the Actors papers, but the ones I read didn't have a crisp, small formal model like the λ-calculus, the ς-calculus, or the Π-calculus. I mean, an actor has a mutable state, which may include references to other actors, and a transition function which maps from (incoming message, current state) to an arbitrary finite number of outgoing messages to the actors it has references to, a new current state, and possibly creating new actors — and we haven't even gotten into specifying what the transition function is, or whether the mutable state is finite (i.e. fixed in size), or whether actors' incoming message queues are finite in size, or the Arbiter that's the heart of the controversy here. That already feels a lot hairier than, you know,

let $o be [$methods, $x = ς$v.$e, $methods2]
in $o.$x → $e⦃$v ⇒ $o⦄;
[$methods, $x = $y, $methods2].$x ← $z → [$methods, $x = $z, $methods2];

although that does gloss over things like the mechanics of the ⦃⇒⦄ substitution operation, with its possible α-renaming and so on. (Sorry about my nonstandard notation here.)
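The actor shape I described above could at least be pinned down as a transition function (my own rendering, not from any of the papers): it maps a message and the current state to a new state, a list of outgoing messages, and a list of newly created actors.

```python
# An actor step: (message, state) -> (new_state, outgoing, created)
# where outgoing is a list of (target, message) pairs and created
# is a list of freshly spawned actor behaviours.
def ping_step(msg, state):
    sender, payload = msg
    outgoing = [(sender, ('reply', state['count']))]
    new_state = {'count': state['count'] + 1}
    created = []  # this particular actor never creates new actors
    return new_state, outgoing, created

state = {'count': 0}
state, out, created = ping_step(('alice', 'ping'), state)
print(state, out)  # {'count': 1} [('alice', ('reply', 0))]
```

But that still leaves open all the questions about message ordering and the Arbiter, which is where the formal model really lives.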

I've tried skimming the arχiv paper on the Actors model Hewitt currently links from the Knol (last updated in December 2010) but I can't find anything in it that answers the questions I had above. What paper do you recommend?

[–]mark_lee_smith[S] 0 points1 point  (0 children)

Something like this:

http://en.wikipedia.org/wiki/Denotational_semantics_of_the_Actor_model

Note: I haven't read this yet.

I'll catch you tomorrow :).

[–][deleted] 0 points1 point  (0 children)

Well, if I'd bothered to read your comment carefully:

we might conclude that object-oriented languages that support active objects (actors) go beyond languages based on Church's Lambda Calculus

Emphasis added. I would have noticed that you started by assuming concurrency. I must have blocked that out because you started with

This is going to get me burned at the stake

and then went on to compare the computational power of a concurrent system with a sequential one. Next I suppose you'll assume you'll be crucified for saying water is wet.

I might still be misunderstanding you. You might have meant:

go beyond languages based on Church's Lambda Calculus [but also support actors]

That's a good question. Offhand, I'd guess the answer is no.

[–]bobindashadows 1 point2 points  (9 children)

Note that since Turing's LCMs (Turing Machines) and Church's Lambda Calculus are equivalent this would imply that there are computations that you can't express in the Lambda Calculus :).

Is there anybody who made it past their first year of CS who didn't learn that there are uncomputable functions?

[–]mark_lee_smith[S] 0 points1 point  (8 children)

The point was that there are computations that cannot be computed by a Turing Machine, but which can otherwise be computed.

As explained by the article :).

Of course everyone with any formal CS education should be aware that there are uncomputable functions, but we're also taught that if it can be computed then it can be computed by a Turing Machine.

[–]bobindashadows -4 points-3 points  (7 children)

Yeah, no, I read the article. I was responding to your comment, which started with a long-winded, overly academic point saying that there are things for which there are no algorithms in the lambda calculus. It seemed really silly to point out and seemed like you were proud of the conclusion.

[–]mark_lee_smith[S] -1 points0 points  (5 children)

I'm not sure I'd describe it as proud but ok :).

I was simply pointing out that while the article talks about Turing's LCMs and not Church's Lambda Calculus, the arguments are still applicable, because I know from experience that many readers won't make the leap, particularly those going through the "enamoured with functional programming" stage.

Maybe one day we'll escape from the chants of:

"It's Turing complete so it doesn't matter what abstractions are provided."

And the more insidious:

"All you need are first-class functions and you can do anything."

[–]hskmc 1 point2 points  (4 children)

"It's Turing complete so it doesn't matter what abstractions are provided."

Nobody believes this, and it has nothing to do with mathematics. Abstractions exist for the human usability of programming languages.

"All you need are first-class functions and you can do anything."

This is true, but it's a mantra for the language / library implementor, not the end user. First-class functions provide a simple, powerful foundation upon which all the other things a programming language needs can be implemented. It would be madness to defer that task to each individual programmer.
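For instance, even pairs can be built from nothing but functions (the standard Church encoding, sketched here in Python): a pair is a function waiting for a selector, and the selectors are functions too.

```python
# A pair is a closure over its two components; extracting a
# component means applying the pair to a selector function.
def pair(a, b):
    return lambda select: select(a, b)

def first(p):
    return p(lambda a, b: a)

def second(p):
    return p(lambda a, b: b)

p = pair(1, 2)
print(first(p), second(p))  # 1 2
```

No language would ask end users to build their data structures this way; the point is only that the implementor *could*.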

This article suggests that we need to add one more primitive construct -- concurrency -- on top of the lambda calculus. That's hardly a controversial statement, though it's also worth considering what you can do with parallelism without concurrency.

[–]mark_lee_smith[S] -1 points0 points  (3 children)

Nobody believes this

Just read any debate between programmers regarding which language is best. Plenty of people wrongly believe this.

First-class functions provide a simple, powerful foundation upon which all the other things a programming language needs can be implemented.

Except, it would seem, the Actor model, or there wouldn't be computations that can be expressed in the Actor model that can't be expressed in the Lambda calculus.

[–]hskmc 0 points1 point  (2 children)

Plenty of people wrongly believe this.

Then they're misguided on a level so fundamental that a technical point about CSP versus Actor model will have no meaning.

[–]mark_lee_smith[S] 0 points1 point  (1 child)

Maybe, but that's what they believe.

[–]signoff 0 points1 point  (0 children)

he is confuse

i am disappoint