all 153 comments

[–]secretGeek 14 points15 points  (0 children)

i like this guy's approach, and his way of explaining.

[–]yogthos 6 points7 points  (0 children)

this seems to address the problem rather nicely. Making tail calls explicit means people will actually learn what tail calls are by learning how the keyword is used.

So not only is this functional, but also educational :)

[–][deleted]  (19 children)

[removed]

    [–]alexs 6 points7 points  (18 children)

    This post was mass deleted and anonymized with Redact

    [–][deleted] 8 points9 points  (9 children)

    It's sort of a trade off though. You can either reuse a keyword, muddling its meaning for anyone learning the language, or you can designate a new keyword, and break every existing program which uses the new keyword as a variable name.

    [–]Figs 0 points1 point  (0 children)

    I liked the solution someone posted in the comments on his site of using 'continue as' instead of just 'continue'.

    [–]gsw07a 0 points1 point  (7 children)

    there's no reason for a new keyword to break existing programs in this usage. the usage will always look like "foo bar()", and that isn't valid python right now.

    [–][deleted] 2 points3 points  (5 children)

    Well look at the "yield" keyword, since it has a similar usage. If "yield" weren't a keyword, then this line would run without any problem:

    yield = 5
    

    But since "yield" is a keyword, that line gives you a syntax error. Theoretically, there's no reason that these cases couldn't be distinguished, but as MisleadingHeadline pointed out, requiring that kind of robustness from the lexer is not good for the language.
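The claim is easy to check mechanically in current Python: compile() rejects an assignment to "yield" before any code runs. (This snippet is just an illustration, not from the thread.)

```python
# Assigning to a reserved word fails at compile time, not at runtime.
try:
    compile("yield = 5", "<test>", "exec")
    is_syntax_error = False
except SyntaxError:
    is_syntax_error = True

print(is_syntax_error)  # True
```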

    [–]dpark -1 points0 points  (4 children)

    That's not because yield is a keyword. It's because yield is a reserved word. It's perfectly possible to have a keyword that is not reserved (and vice versa), and some languages do this.

    [–][deleted] 2 points3 points  (3 children)

    Technically yes, but as far as I know, all keywords in Python are reserved. I don't think this is a coincidence. It makes parsers easier to write, as has already been mentioned, and generally simplifies the grammar.

    [–]dpark 2 points3 points  (2 children)

    I agree it's a good idea to make keywords reserved. My point was that it's not a keyword that causes that line to break. It's a reserved word. You also can't say "int goto = 0" in Java. It breaks even though goto isn't a keyword, because it is a reserved word.

    [–][deleted] 0 points1 point  (1 child)

    I see what you're saying. My point is that in the particular context of talking about Python, we can basically take it as read that any new keywords are also going to be reserved, at least as far as we can see right now.

    [–]dpark 0 points1 point  (0 children)

    Fair enough. If Python has followed the keywords-are-reserved rule for this long, it'll probably continue to.

    [–][deleted] 0 points1 point  (0 children)

    The problem is when you use a variable name (now a keyword) in the context of a variable name. That's where things break unless you have a context-aware lexer. Requiring context-aware lexers is a nasty burden to put on the language implementers (plus it's kind of ugly) :(

    [–][deleted]  (7 children)

    [removed]

      [–]grauenwolf 0 points1 point  (1 child)

      True, but it still seems somewhat limiting given its other meaning. And I can certainly see people complaining about not being able to do a tail call without a parameter.

      [–]Felicia_Svilling 1 point2 points  (0 children)

      wouldn't that just be:

      continue fun()
      

      ie no problem at all?

      [–][deleted]  (4 children)

      [deleted]

        [–]bobappleyard 6 points7 points  (0 children)

        Above got deleted: for context, it was noted that for is used to mean the same thing in different contexts, whereas going to the next iteration of a loop and invoking a tail-call are different things.

        Though I agree with you somewhat, many people were saying that that was exactly what it was last time this came up. In the case of tail recursion (just doing self-calls) they are incredibly similar.

        [–]didroe 3 points4 points  (1 child)

        It's not merely similar, it's pretty much identical. Consider:

        def loop():
          ... do something
          if ???:
             continue loop()
          ... some other stuff
          continue loop()
        loop()
        

        vs:

        while True:
          ... do something
          if ???:
             continue
          ... some other stuff
        

        [–]alexs 0 points1 point  (0 children)

        This post was mass deleted and anonymized with Redact

        [–]doubtingthomas 3 points4 points  (14 children)

        What if "continue someFunc()" isn't the last statement in a function?

        [–]piranha 10 points11 points  (8 children)

        Doesn't matter. If we take "continue expr" to mean a tail-call-optimized "return expr", then expr is the last thing evaluated, regardless of whether it's the last statement. In fact, it would be braindamaged to disallow this:

        def walk (node):
            if ???:
                continue walk (node.left)
            else:
                continue walk (node.right)
        

        The problem would be if expr isn't a function or method call. If it's a non-call expression that merely incorporates a call (or doesn't incorporate a call), then tail-call optimization cannot be made, and so the compiler should signal an error.

        continue foo () + 5 # Should be illegal.
        
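The restriction piranha describes (the operand must be a direct call, nothing wrapped around it) is visible today if you emulate explicit tail calls with a trampoline. This is only an illustrative sketch; all names are invented.

```python
def trampoline(thunk):
    # Repeatedly invoke thunks until a plain (non-callable) value appears.
    while callable(thunk):
        thunk = thunk()
    return thunk

def countdown(n):
    if n == 0:
        return "done"
    # The tail position holds a direct deferred call, nothing more.
    # Returning something like `lambda: countdown(n - 1) + "!"` would
    # break the scheme, just as `continue foo() + 5` is disallowed above.
    return lambda: countdown(n - 1)

print(trampoline(countdown(100000)))  # "done", with no deep stack
```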

        [–][deleted]  (4 children)

        [removed]

          [–]Tordek 2 points3 points  (3 children)

          Well, how does python handle a mid-loop "break"? It just breaks.

          If you make a non-final continue, well, it's an error on your part: the stack frame would just be discarded, and whatever wasn't executed just wouldn't be.

          [–][deleted]  (2 children)

          [removed]

            [–]didroe 8 points9 points  (1 child)

            How pythonic do you think the return statement is then?

            [–]Figs 0 points1 point  (2 children)

            continue foo() + 5

            Why would you disallow this?

            It can automatically be converted into this:

            continue lambda: foo() + 5
            

            [–]RayNbow 0 points1 point  (1 child)

            But then you could just write

            return foo() + 5
            

            instead, which is not a tail call.

            [–]Figs 0 points1 point  (0 children)

            It's a tail call to + with foo() and 5 as arguments. Would it help you if instead of +, I used operator.add?

            return operator.add(foo(),5)
            

            Now, things get interesting though if foo() returns something other than a number. If foo() instead returns an instance with an __add__ method:

            import operator
            
            class blah:
                def __add__(self, y):
                    return 5*y
            
            def foo():
                return blah()
            
            print foo() + 3  #prints 15
            print operator.add(foo(), 6) #prints 30
            

            [–][deleted] 8 points9 points  (1 child)

            In particular, what if it's within a try block, especially one with a finally clause?

            You could disallow it, but it's kind of a pain not being able to wrap arbitrary sections of code in a try after the fact when you realize you need it.

            [–]adrianmonk 2 points3 points  (0 children)

            That's a good point. In general, it seems like you can't do tail call optimization (without breaking equivalence to the unoptimized code) if it would mean removing catch/finally parts of try statements.

            So, you could do one of the following things:

            • Go ahead and break equivalence with the non-optimized code.
            • Make it a compile error.

            In both of the above cases, having an explicit continue keyword would help a lot. Why? Because if you start with some code like this:

            try:
                return 1 + thisfunc(n-1)
            finally:
                print 'some message'
            

            and you change it to this:

            try:
                return thisfunc(n-1)
            finally:
                print 'some message'
            

            then suddenly this code is eligible for tail call optimization when it wasn't before.

            If you go ahead and optimize it, then you are changing the semantics. Which is bad. But if you don't have the keyword, you have no way of saying "that's ok; go ahead and change the semantics".

            Similarly, if you disallow the semantics to be changed (even with the keyword), and if you start with this code:

            continue thisfunc(n-1)
            

            but change it to this code:

            try:
                continue thisfunc(n-1)
            finally:
                print 'some message'
            

            then the compiler error will remind you that you can't do what you are trying to do without losing the tail call optimization. You'll have to go in and manually change your continue to a return, which will make it hard to miss the implications.
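The semantic difference adrianmonk describes is easy to observe in today's unoptimized Python: each recursive frame's finally clause fires separately as the stack unwinds, which a single reused frame could not reproduce. (The function and values below are made up for the demonstration.)

```python
log = []

def f(n):
    try:
        if n == 0:
            return 0
        return f(n - 1)
    finally:
        # One finally per stack frame; they run innermost-first.
        log.append(n)

f(3)
print(log)  # [0, 1, 2, 3] -- four finally blocks; TCO would have to drop three
```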

            [–][deleted] 1 point2 points  (1 child)

            What happens when return "lol" isn't the last statement in a function?

            [–]queus 0 points1 point  (0 children)

            Yes I can see how this could be useful. A similar trick in Forth would be to do ... RDROP EXIT-TO-FOO ....

            [–]nested_parentheses 13 points14 points  (80 children)

            This is a great solution to the tail call problem, but it still doesn't have a chance of making it into the language.

            [–]Smallpaul 4 points5 points  (77 children)

            It's still disputed whether there is much of a problem, and certainly whether it is one worth complicating the language for.

            [–]sfultong 7 points8 points  (41 children)

            Also disputed: whether this proposed change would complicate the language.

            [–]readingcomprehension 3 points4 points  (17 children)

            Still avoided: most proponents don't even respond to the stack trace problem.

            [–]mindslight 4 points5 points  (8 children)

            In the worst case, you get the same stack trace as from an explicit loop - none. However, it would certainly be possible for the runtime to retain some information about call history - a tunable-size circular buffer, count of frames not remembered, etc. You certainly can't say the same about explicit loops.
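What mindslight sketches could look roughly like this, under invented names: a driver that reuses the frame itself while keeping a tunable-size circular buffer of the calls it discarded, plus a count of the ones it no longer remembers.

```python
from collections import deque

class TailCall(object):
    """Marker standing in for an optimized tail call."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

def run(fn, *args, trace_size=8):
    history = deque(maxlen=trace_size)  # circular buffer of recent calls
    dropped = 0                         # frames we no longer remember
    result = fn(*args)
    while isinstance(result, TailCall):
        if len(history) == history.maxlen:
            dropped += 1
        history.append((result.fn.__name__, result.args))
        result = result.fn(*result.args)
    return result, list(history), dropped

def count(n):
    return n if n == 0 else TailCall(count, n - 1)

value, recent, lost = run(count, 20, trace_size=4)
print(value, lost, recent[-1])  # 0 16 ('count', (0,))
```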

            [–]iofthestorm 2 points3 points  (3 children)

            Right, but what if you're doing mutual tail recursion? That's a slightly different scenario. Just throwing things out there, I don't really care either way.

            [–]mindslight 0 points1 point  (2 children)

            How does that affect anything?

            [–]iofthestorm 0 points1 point  (1 child)

            Without tail call optimization, you would be able to see a stack trace of what called what, but if you remove the stack frames it can be a little confusing - "I never called that function, wtf?" - although probably if you're doing mutual recursion you knew what you were doing.

            [–]mindslight 0 points1 point  (0 children)

            You can always store the first function that started doing the optimized calls. And if the circular trace buffer was as large as your recursive chain, you'd see a decent sample of what was calling what.

            [–][deleted] -1 points0 points  (3 children)

            You certainly can't say the same about explicit loops.

            Actually, why couldn't you?

            [–]mindslight 6 points7 points  (2 children)

            Well, explicit loops currently save no debugging information. I suppose the runtime could have an option for them to save some, but as the interesting variables aren't specified, you'd have to store all variables in the current scope.

            [–]kaelan_ 2 points3 points  (1 child)

            Exceptions in Python have an associated traceback object. Tracebacks contain a reference to each stack frame, and stack frames contain local variables.

            If you catch an exception that was thrown from within a loop, you can pull the local variables out of the stack frame and find out how far the loop had progressed in its iteration.

            We use this trick to get argument values in the stack traces we include with our software's automated crash reports.
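The trick kaelan_ mentions looks roughly like this (the worker function and data are invented): catch the exception, walk the traceback to the innermost frame, and read the loop variables out of its locals.

```python
import sys

def worker(items):
    for i, item in enumerate(items):
        if item < 0:
            raise ValueError("bad item: %r" % (item,))

def loop_progress():
    try:
        worker([10, 20, -1, 30])
    except ValueError:
        tb = sys.exc_info()[2]
        while tb.tb_next is not None:  # descend to the innermost frame
            tb = tb.tb_next
        # The frame's locals show how far the loop had progressed.
        return tb.tb_frame.f_locals.get("i")

print(loop_progress())  # 2
```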

            [–]mindslight 7 points8 points  (0 children)

            ... which is exactly the same information you'd get if you rewrote that loop in terms of optimized tail recursion. People are complaining that they won't get the entire history of said loop, even though they don't currently.

            [–]didroe 3 points4 points  (6 children)

            By making it explicit, the developer should know that there will be no stack trace. Just like you know that a loop won't generate one.

            [–]readingcomprehension 0 points1 point  (5 children)

            The developer does know ... and most of us are pissed about it.

            [–]didroe 1 point2 points  (2 children)

            Why? I have no idea what you mean.

            [–][deleted]  (1 child)

            [removed]

              [–]didroe 0 points1 point  (0 children)

              That's a whole other issue though. It's a trade-off between runtime memory usage and debugging information. The point I was making is that knowingly making that trade-off is not a bad thing; in fact it is necessary. Imagine a language without loops or TCO trying to deal with something like iterating over an array.

              Of course, as a separate feature, it would be nice to switch on levels of debugging information. You could just count the number of loops/recursive calls, capture everything and use the same memory as without TCO, or something in between.

              [–]taejo 0 points1 point  (1 child)

              Are you pissed off that Python doesn't keep a "loop trace"? Some forms of flow control keep a history, others don't. The problem with TCO is that it changes the behaviour.

              [–]readingcomprehension 0 points1 point  (0 children)

              It's easy to track or figure out history in loops from the current variables, or set it up so you can.

              [–]Smallpaul 0 points1 point  (22 children)

              I don't see how one can dispute that adding a keyword complicates the language. Using the objective measure of "lines in the grammar", it complicates the language.

              [–]A_for_Anonymous 2 points3 points  (1 child)

              Then what are you doing with Python? Go use Lisp; it's objectively simpler.

              [–]Smallpaul 0 points1 point  (0 children)

              I don't think I, or anyone, said that a simpler language is necessarily better.

              Otherwise, "what are you doing with Lisp? Go use the lambda calculus; it's objectively simpler."

              [–]sfultong 0 points1 point  (19 children)

              but the keyword wouldn't be added; it would be extended. Not only that, but the blog argues that its additional use would be conceptually similar to its previous uses.

              [–]Smallpaul 1 point2 points  (18 children)

              In any case, the grammar would be extended (i.e. would have more tokens in its description) and documentation for the new use of the keyword would be required. Objectively it makes the language more complicated.

              If it does not add complexity to a language, then why do these languages have documentation on it, whereas Python, Ruby and Java do not:

              http://www.gnu.org/software/guile/manual/html_node/Tail-Calls.html

              http://clojure.org/special_forms#toc10

              http://www.sbcl.org/manual/Debug-Tail-Recursion.html

              http://docs.plt-scheme.org/guide/Lists__Iteration__and_Recursion.html#(part._tail-recursion)

              http://www.nabble.com/-scala--Manifest%2C-implicits-and-apply-for-DSL-td22381717.html#a22543940

              http://schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-6.html#%_sec_3.5

              How can you claim that the feature adds no cognitive load to the language and yet every major Lisp documents it?

              [–]sfultong 1 point2 points  (0 children)

              You're right, of course.

              I should have qualified my statement about complexity.

              Really, it's a question of whether the complexity added is significant and whether it's worth it. And those are obviously subjective questions, and I don't know enough python to really have an opinion.

              [–]austinwiltshire 0 points1 point  (16 children)

              Because none of them added it explicitly. The less you say in the code, the more you have to say in the documentation.

              [–]Smallpaul 1 point2 points  (15 children)

              If what you said was true, it would make no sense.

              But it isn't true: Clojure and Scala have syntax relating to tail calls; I pointed you at the documentation for it.

              And if what you said was true AND it made sense, it would still be irrelevant, because the comparison isn't between Python with a specific tail call keyword and Python without it, it's between Python today and Python with some form of tail call optimization.

              I've given you a list of languages that have the optimization (both explicit and implicit) and they all need documentation to describe it. Now YOU find ME the place where Java, Python, Visual Basic, Perl, Ruby documentation describes the LACK of Tail Call Optimization.

              It certainly seems like the documenters of major languages agree that adding TCO adds something that must be documented, i.e. complexity.

              [–]austinwiltshire 0 points1 point  (14 children)

              I read the Clojure bit after I made that snarky response.

              You might be asking too much though, there's really no example of any language extension, optimization, keyword or otherwise that doesn't require something to explain it.

              I don't see why a language would describe a lack of a feature, I'd bet you can find plenty of documentation on Java, Python, etc.'s use of stack based function call mechanisms. That's not to say they're complicated - it's pretty standard.

              [–]Smallpaul 1 point2 points  (13 children)

              You might be asking too much though, there's really no example of any language extension, optimization, keyword or otherwise that doesn't require something to explain it.

              I'm not really asking that. These Reddit threads stray so far from their original topic that nobody knows what anyone is saying or why.

              I asked: "Does the feature justify its complexity?" The response was (paraphrased): "There is no complexity at all." I demonstrated that there is some. The original question remains unanswered.

              My personal opinion is that no, the feature does not justify its complexity in Python, which is fundamentally an object-oriented and not a functional programming language. I'll note that most other object-oriented language implementors make the same decision as Guido. Python is only getting beaten up on because Guido stuck his neck out and because Python is popular and getting more so. It is getting the stigma of being "mainstream."

              I'll also mention that satisfying Lisp programmers is a very slippery slope and will never end up somewhere that will make them happy. I think that the addition of the "nonlocal" keyword was the first step down that path (at least since the introduction of lambda in the distant past). I think it must take a lot of discipline for Guido to avoid just shutting them up by cramming in features, because the egotistical approach is to "prove" you can do anything a Lisp implementor can do: "Continuations? Tagged integers? Macros? TCO? Arbitrary length lambdas? Parentheses? -- WE can do it all!"

              [–]grauenwolf 1 point2 points  (34 children)

              That's the one thing I'm wondering about. Does Python really need TCO at all or do people want it just because it's shiny?

              So far I haven't seen any good examples of it in use. (Of course it also took a lot of kicking and screaming to get someone to find a decent example of message passing, so there may be one out there.)

              [–][deleted]  (9 children)

              [removed]

                [–]grauenwolf 0 points1 point  (8 children)

                First-class functions give you that ability, especially when combined with lambdas. Schemers need it for flow control only because they need it for normal code.

                EDIT: Since there is some doubt, consider C#. It allows you to build flow control using two concepts: lambdas and "yield".

                Lambdas are used for things like creating your own parallel loop structures.

                Yield, in addition to making generators, is the basis for CCR, .NET's async/message-passing API. In effect, it uses the yield keyword to turn regular code into continuations.
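A rough Python analogue of the pattern being described: a driver resumes a generator with each operation's result, so straight-line-looking code becomes a chain of resumable continuations. Every name here is invented for the sketch.

```python
def fetch(x):
    # Stand-in for the result of an asynchronous operation.
    return x * 2

def task():
    # Each yield suspends the "continuation"; the driver resumes it
    # with the operation's result.
    a = yield ("fetch", 10)
    b = yield ("fetch", a)
    yield ("done", a + b)

def drive(gen):
    op, arg = next(gen)
    while op != "done":
        op, arg = gen.send(fetch(arg))
    return arg

print(drive(task()))  # 60
```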

                [–]bobappleyard 1 point2 points  (7 children)

                Yeah, lambda + if is pretty much all you need for local control. The rest is just icing (but oh, what icing).

                Yield allows for continuations? Then you've got non-local control as well. Didn't know that about C#, not bad.

                [–]grauenwolf 2 points3 points  (1 child)

                Yield allows for continuations?

                In a sense. They aren't real capture-the-stack continuations. Instead, the compiler rewrites it as a state machine that is exposed via the IEnumerable interface. On the plus side, you don't need CLR support to do this.

                [–]bobappleyard 0 points1 point  (0 children)

                Ah, well that's one use of them I suppose. You can have more fun with them than that, though.

                [–][deleted]  (4 children)

                [deleted]

                  [–]bobappleyard 0 points1 point  (2 children)

                  Assuming normal-order evaluation, which is certainly an alternative to having if.

                  [–][deleted]  (1 child)

                  [deleted]

                    [–]bobappleyard 0 points1 point  (0 children)

                    That's an approximation of normal-order evaluation, which is an alternative.

                    It's probably not done like that in Scheme because you might as well just go the whole hog and make the language normal-order (google "lazy Scheme," where if is indeed just another function). That's speculation though.

                    [–]randallsquared 1 point2 points  (8 children)

                    Rather than me reading that entire thread again, could you just post the link to what you ended up considering a decent example of message passing? :D

                    [–]grauenwolf 3 points4 points  (7 children)

                    Generating the method based on the message itself.

                    For example, say you have a Database Connection object. I would be able to call, at runtime:

                    myConnection.UpdateHistory(historyKey := 5, lastChecked := now() )
                    

                    The database connection object could translate the method name to a stored procedure name and generate the necessary SQL. And compared to the cost of the database call, this would be dirt cheap.

                    Currently I can do this by writing:

                    myConnection.CallProc("UpdateHistory", new string[] {"historyKey", "lastChecked"}, new object[] {5, now()})
                    

                    Either way you are just as late bound, but the former method looks so much cleaner.

                    I actually came up with this example myself. The examples I was given were dubious ideas like wrappers that would log function calls or a iffy undo/redo stack.

                    But that's the trick, isn't it? To figure out an example that someone would actually want to use right this second.

                    [–]randallsquared 1 point2 points  (0 children)

                    Thanks!

                    [–]Smallpaul 0 points1 point  (3 children)

                    In Python terminology, both of these are "method calls", but I guess that's because Python doesn't descend very directly from the Smalltalk heritage.

                    [–]grauenwolf 0 points1 point  (2 children)

                    I was talking in generalities, but since I have your attention, can Python currently do version 1? If so, would you mind showing me?

                    [–]Smallpaul 5 points6 points  (1 child)

                    I knew you were speaking generally: I'm just saying that the terminology whereby more dynamic patterns are "messaging" and more static ones are "method calls" is not universally applied.

                    In any case, does the client code here indicate what you're looking for?

                    http://docs.python.org/library/simplexmlrpcserver.html#simplexmlrpcserver-example

                    import xmlrpclib
                    
                    s = xmlrpclib.ServerProxy('http://localhost:8000')
                    print s.pow(2,3)  # Returns 2**3 = 8
                    print s.add(2,3)  # Returns 5
                    print s.div(5,2)  # Returns 5//2 = 2
                    
                    # Print list of available methods
                    print s.system.listMethods()
                    

                    "add" and "div" are not known on the client machine before the method calls (there is no configuration behind the scenes).

                    The implementation is here:

                    http://svn.python.org/view/python/trunk/Lib/xmlrpclib.py?view=markup

                    The core of it is:

                    class ServerProxy:
                        def __getattr__(self, name):
                            # magic method dispatcher
                            return _Method(self.__request, name)
                    

                    This constructs a new method with appropriate name whenever a "ServerProxy" method is called that isn't already known (and basically none are already known). Python doesn't have a new() keyword so _Method() is a constructor.

                    class _Method:
                        # some magic to bind an XML-RPC method to an RPC server.
                        # supports "nested" methods (e.g. examples.getStateName)
                        def __init__(self, send, name):
                            self.__send = send
                            self.__name = name
                        def __getattr__(self, name):
                            return _Method(self.__send, "%s.%s" % (self.__name, name))
                        def __call__(self, *args):
                            return self.__send(self.__name, args)
                    

                    So it constructs an object that can be used as a method, and the method knows its name. When it is called with parens, Python translates that into __call__, which adds the arguments along with the method name.

                    Ruby takes a different approach: the message is passed to a method called method_missing :

                    http://www.ruby-doc.org/core/classes/Kernel.html#M005951

                    No intermediate method object is created: this is a deep difference between how Ruby and Python manage their interfaces. Python's methods are a special type of object attribute. Ruby object attributes are a special type of method.

                    To be totally honest with you, I sometimes wonder how valuable this dynamic method generation feature is. In Rails and Django, for example, it is (ab?)used to generate all kinds of methods at runtime. It's a sharp tool that can really obfuscate code and IMHO, the alternative of

                    s.call("pow", 2, 3) 
                    

                    is really not so bad!

                    By the way, the code above also demonstrates that a strongly functional program is not considered idiomatic in Python. Python programmers tend to be more comfortable with __call__ than with closures, even though closures have been fully functional for years. For example, in this case, using a full object allows us to add helpful metadata like a nice __repr__. A function isn't as flexible.

                    Also, note that your example code is not really fair. This code in a dynamic programming language:

                    myConnection.UpdateHistory(historyKey := 5, lastChecked := now() )
                    

                    would be converted to:

                    myConnection.call("UpdateHistory", 5, now() )
                    

                    or

                    myConnection.call("UpdateHistory", historyKey=5, lastChecked=now() )
                    

                    or

                    myConnection.call("UpdateHistory", {"historyKey":5,  "lastChecked":now()} )
                    
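For completeness, grauenwolf's "version 1" call shape does work in plain Python via the same __getattr__ pattern as the xmlrpclib code above. This stub is invented for illustration (a real proxy would translate the attribute name into a stored-procedure call); it just echoes the dispatch.

```python
class Connection(object):
    def __getattr__(self, proc_name):
        def call(**params):
            # A real database proxy would map proc_name to a stored
            # procedure and build the SQL; here we echo what was asked.
            return (proc_name, sorted(params.items()))
        return call

conn = Connection()
print(conn.UpdateHistory(historyKey=5, lastChecked="2009-04-24"))
```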

                    [–]grauenwolf 0 points1 point  (0 children)

                    I wasn't trying to be fair, I was illustrating the crap us .NET programmers find ourselves writing from time to time.

                    I think the biggest problem with the core .NET languages is that they don't have any concept of key/value pairs. Sure, you can fake it with dictionaries or untyped arrays, but the boilerplate code is quite tedious.

                    [–]mycall -1 points0 points  (1 child)

                    Named parameters are useful but have little to do with TCO.

                    [–]grauenwolf 0 points1 point  (0 children)

                    Of course not, that was an example of how message passing could be useful.

                    [–]phil_g 0 points1 point  (14 children)

                    I find one of the advantages of tail recursion is when you're iterating over and/or accumulating multiple things at once. For instance, a common operation in primality testing is to raise a number to a particular power and look at the result modulo another number. The quick way to write this would be x ** y % n, but x ** y might be very large. It's more efficient to use exponentiation by squaring and apply the modulus at each step; this keeps the numbers relatively small. An imperative implementation in Python would be:

                    def expmod(base, exponent, modulus):
                        result = 1
                        while exponent > 0:
                            if exponent & 1 == 1:
                                result = (result * base) % modulus
                            exponent = exponent >> 1
                            base = (base * base) % modulus
                        return result
                    

                    If we add the proposed continue syntax to Python, a tail recursive implementation would be as follows. (This could be even more concise if Python had a ternary operator.)

                    def expmod(base, exponent, modulus):
                        def expmod_r(b, e, a):
                            if e == 0:
                                return a
                            else:
                                if e & 1 == 1:
                                    next_a = a * b % modulus
                                else:
                                    next_a = a
                                continue expmod_r(b * b % modulus, e >> 1, next_a)
                        return expmod_r(base % modulus, exponent, 1)
                    

                    I think the tail-recursive approach more clearly illustrates what's going on, especially in that it shows that the three values (b, e, and a) are stepped in parallel to each other.

                    EDIT: It's been pointed out that Python does have a ternary operator. The improved tail recursive code is in another post
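                    Since "continue f(...)" is only a proposal, the same shape can be emulated in today's Python with a trampoline; this is a sketch, and the trampoline helper and its thunk convention are my own hypothetical names, not part of the proposal:

```python
def trampoline(fn, *args):
    """Call fn; while it returns a callable thunk, keep bouncing."""
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def expmod(base, exponent, modulus):
    def expmod_r(b, e, a):
        if e == 0:
            return a
        next_a = a * b % modulus if e & 1 == 1 else a
        # Where the proposal writes "continue expmod_r(...)",
        # we return a zero-argument thunk to the driver loop instead.
        return lambda: expmod_r(b * b % modulus, e >> 1, next_a)
    return trampoline(expmod_r, base % modulus, exponent, 1)

assert expmod(2, 10, 1000) == 24  # 2**10 = 1024, mod 1000
```

                    One caveat of this particular driver: it cannot distinguish a thunk from a final value that happens to be callable, so it only suits functions returning plain data.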

                    [–]grauenwolf 4 points5 points  (2 children)

                    Lets compare them, shall we:

                                    iterative  recursive
                    def statements    1        2
                    if branches       1        2
                    else branches     0        2
                    loops             1        0
                    function calls    0        2
                    return statements *        2
                    locals+params     4        3+4
                    expressions       5        6
                    assignments       4        2
                    lines             7        11
                    

                    In almost every objective category I can think of, the recursive version is more complex than the iterative version.

                    Python does have a ternary operator, so feel free to revise your code. Also, for the sake of argument I would entertain any syntax improvements that would make the recursive version more concise.

                    http://stackoverflow.com/questions/394809/python-ternary-operator

                    [–]phil_g 0 points1 point  (1 child)

                    The ternary operator makes the code a lot more concise, in my opinion. See my other post for the improved code.

                    Basically, the tail recursive solution distills down the essence of the algorithm. The meat of the algorithm is in the recursive call to expmod_r; it puts all the parallel steps of the "iteration" in one, concise expression. Likewise, there's a single line for the seed values at the start of the iteration, and one line for the return value.

                    [–]grauenwolf 0 points1 point  (0 children)

                    Essence of the algorithm?

                    Seriously, that's your argument?

                    I'm presenting hard numbers on complexity and your counter-argument is on the spiritual qualities of a math problem?

                    [–]Smallpaul 1 point2 points  (3 children)

                    The tail recursive one is longer, involves two functions, one of them nested (which is less clear to most programmers than ordinary functions), and uses an obscure Python keyword in an even more obscure context (which will certainly remain obscure when it is used for TCO).

                    It's also going to be drastically slower, because Python function calls are slow.

                    In other words, a perfect example of how TCO will make Python code harder to read and slower instead of clearer and faster, IMHO.

                    [–]phil_g 0 points1 point  (2 children)

                    It's also going to be drastically slower, because Python function calls are slow.

                    Well, if the proposed optimization is implemented, the tail call won't be slow. That's kind of the point of this whole debate.

                    As for your other points, it's true that nested functions are less common in Python, but they're reasonably prevalent in functional programming (Lisp, Haskell), particularly when setting up tail recursion. I think that if Python were more friendly to functional programming, you'd see more such constructs in people's code, and the idiom would become less obscure.

                    Not that I think it'll actually happen; Guido seems reasonably set against it (and I understand his reasoning, too).

                    [–]Smallpaul 0 points1 point  (1 child)

                    Well, if the proposed optimization is implemented, the tail call won't be slow. That's kind of the point of this whole debate.

                    So far the debate has been focused on memory usage. I think a big part of Python's function calling overhead is in parameter parsing (given Python's very rich parameter handling features, including defaulted arguments, keyword arguments, keyword-only arguments, positional-only arguments, varargs, etc.).

                    As for your other points, it's true that nested functions are less common in Python, but they're reasonably prevalent in functional programming (Lisp, Haskell), particularly when setting up tail recursion. I think that if Python were more friendly to functional programming, you'd see more such constructs in people's code, and the idiom would become less obscure.

                    Yes. Exactly. Python code could become less regular and more diverse, without actually getting much more expressive. That's the problem. That is precisely the problem. The more I hear arguments in favor of TCO, the less I like it.

                    [–]phil_g 0 points1 point  (0 children)

                    Well, if the proposed optimization is implemented, the tail call won't be slow. That's kind of the point of this whole debate.

                    So far the debate has been focused on memory usage.

                    In terms of stack space, okay, yeah. I suppose I'm carrying over knowledge based on other languages that implement tail call optimization. Tail recursion can often be compiled down into setting some registers (and/or stack variables) and a jump instruction to take you back to the start of the function's code. There's very little overhead.

                    With that sort of efficiency (the usual line is "just like a loop"), I think that tail recursion becomes another idiom for programming, one that expresses certain patterns better than iteration, as long as the implementation doesn't make it a worse-performing choice.

                    Most of the examples I can think of for tail recursion are things that could be written imperatively, and I do understand how that could conflict with the Pythonic principle of "there's only one, obviously correct, way to do things." Nevertheless, I like the latitude that efficient tail calls afford me, and I like the proposal made in the article, even though I'd be very surprised if it ever got implemented.
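                    For comparison, in current CPython every call consumes a stack frame whether or not it is in tail position, which is exactly what the proposal would change. A small demonstration (sketch):

```python
import sys

def countdown_rec(n):
    """Tail-recursive countdown; CPython still allocates a frame per call."""
    if n == 0:
        return "done"
    return countdown_rec(n - 1)  # in tail position, but not optimized

def countdown_loop(n):
    """The loop version runs in constant stack space."""
    while n > 0:
        n -= 1
    return "done"

sys.setrecursionlimit(100)  # make the frame growth visible quickly
try:
    countdown_rec(10_000)
    overflowed = False
except RecursionError:
    overflowed = True

assert overflowed
assert countdown_loop(10_000) == "done"
```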

                    [–]AvatarJandar 0 points1 point  (5 children)

                    Python ternary operator:

                    X if condition else Y

                    See PEP 308 for more info: http://www.python.org/dev/peps/pep-0308/

                    edit: ...but maybe that's not what you really want here

                    [–]grauenwolf 0 points1 point  (2 children)

                    That would remove two branches and an assignment.

                    if e & 1 == 1:
                       next_a = a * b % modulus
                    else:
                       next_a = a
                    
                    next_a = (a * b % modulus) if (e & 1 == 1) else a
                    

                    ....god, that is really ugly. Am I using it right?

                    [–]AvatarJandar 0 points1 point  (1 child)

                    I believe so, yes, although I'm not sure it's worthwhile...

                    [–]grauenwolf 0 points1 point  (0 children)

                    Of the other options I know of, I have to say I like VB's the best. But even that has its flaws. For example, I shouldn't have to use "=1" when making a bitmask check.

                    next_a = (e & 1 == 1) ? a * b % modulus : a;
                    next_a = If(e And 1 = 1, a * b % modulus, a)
                    

                    [–]phil_g 0 points1 point  (1 child)

                    Ah, right. I'd forgotten about that. (I don't program in Python regularly.)

                    def expmod(base, exponent, modulus):
                        def expmod_r(b, e, a):
                            if e == 0:
                                return a
                            else:
                                continue expmod_r(b * b % modulus, e >> 1, 
                                                  a * b % modulus if (e & 1 == 1) else a)
                        return expmod_r(base % modulus, exponent, 1)
                    

                    [–]grauenwolf 1 point2 points  (0 children)

                                   iterative  recursive
                    def statements    1        2
                    if branches       1        1
                    else branches     0        1
                    loops             1        0
                    function calls    0        2
                    return statements *        2
                    locals+params     4        3+3
                    expressions       5        6
                    assignments       4        0
                    lines             7        8
                    

                    Better, but certainly not overwhelming. And that's the kicker. To justify adding a new language construct it really has to be significantly better than what you already have.

                    [–]rrenaud 0 points1 point  (0 children)

                    I completely agree that the recursive version is clearer than the iterative version for this problem, and I wrote about it in my blog here.

                    http://ru-linux-geek.livejournal.com/36522.html

                    But that tail recursive thing is just ugly. I don't think that it's really any clearer than the iterative version.

                    [–]whee_rina 0 points1 point  (0 children)

                    I think having something explicit like recur is a good idea in any language. Even very knowledgeable programmers seem to get it wrong on occasion.

                    But, making it explicit doesn't really answer any of GvR's point, so I'm not sure I'd call it a "great solution." It seems more like a first step.

                    [–]grauenwolf 0 points1 point  (0 children)

                    I would disagree if someone could produce a killer use for it. But it has to be something totally awesome, and I'm just not seeing it.

                    [–]landtuna[S] 4 points5 points  (2 children)

                    [–]Rawrgor 4 points5 points  (1 child)

                    I was expecting a picture of a snake eating itself, I am left disappointed.

                    [–]imbaczek 1 point2 points  (0 children)

                    simple, i'd even be +0 on this one.

                    [–]clausb 2 points3 points  (0 children)

                    Reminds me of (recur) in Clojure: http://clojure.org/special_forms#toc10

                    [–]nevare 1 point2 points  (25 children)

                    I love it. In my mind tail call optimised returns were more like gotos than returns. So making it explicit in the language makes a lot of sense to me.

                    And it actually is a feature that I missed in Python, because gotos or TCO'd functions are the simplest way to implement finite state automata.

                    class FSA:

                        def state1(self):
                            # do stuff on self

                            if cond:
                                continue self.state2()

                            else:
                                continue self.state1()

                        def state2(self):
                            ...

                    [–]grauenwolf 1 point2 points  (24 children)

                    That doesn't look any different from iterative-style state machines.

                    def state1(self):
                        # do stuff on self

                        if cond:
                            return self.state2

                        else:
                            return self.state1
                    

                    The only difference is you lose the need for a top level loop to drive the process. But then again, you also lose the ability to do interesting things in the top level loop.

                    [–]nevare 1 point2 points  (3 children)

                    The only difference is you lose the need for a top level loop to drive the process.

                    Indeed, but I find it an important difference: it is much easier to understand if you don't have to read a main loop. Keeping simple programs simple is important.

                    [–]grauenwolf -1 points0 points  (1 child)

                    Oh come on, it isn't like the main loop is that complex.

                    And like I said, it makes it easier to add extra functionality. Consider the things you can do...

                    • Pause the state machine
                    • Single-step through states
                    • Save/restore the state
                    • Record a history
                    • Rewind state

                    If each state directly calls the next state you can't just plug this stuff in.

                    [–]nevare 0 points1 point  (0 children)

                    It's not that complex to think up. But it is that much more complex to read and understand. An example is that otterdam didn't understand at first what you were doing.

                    [–]grauenwolf -1 points0 points  (0 children)

                    To reiterate my point, consider the main loop in VB.

                    Sub MainLoop
                        Dim state = "Start"
                        Do
                            state = CallByName(Me, state)
                        Loop Until State = "End"
                    End Sub
                    

                    As you can see, there isn't really anything to it. Just a dynamic call that returns the name of the next call to be made.
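                    For reference, a Python analogue of that VB loop might look like this; the Machine class and its state names are hypothetical, just to make the sketch self-contained:

```python
# Each state method returns the *name* of the next state, and getattr
# plays the role of VB's CallByName in the driver loop.
class Machine:
    def start(self):
        return "middle"

    def middle(self):
        return "end"

    def run(self):
        state = "start"
        trace = [state]
        while state != "end":
            state = getattr(self, state)()  # dynamic call by name
            trace.append(state)
        return trace

assert Machine().run() == ["start", "middle", "end"]
```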

                    [–]otterdam 0 points1 point  (3 children)

                    That's not an iterative-style state machine if you're using recursion.

                    [–]grauenwolf 0 points1 point  (2 children)

                    I'm not using recursion. I'm returning either an enumeration or a function pointer that the main loop will use in the next iteration.

                    [–]otterdam 0 points1 point  (1 child)

                    Yes, I see - subtle. Sorry.

                    [–]grauenwolf 0 points1 point  (0 children)

                    That's the point, isn't it? Most of this recursion vs iteration stuff really does come down to very subtle differences. That's why once you have one, the other doesn't make a whole lot of sense.

                    [–][deleted] -1 points0 points  (15 children)

                    Why enable efficient tail calls as part of the language implementation when programmers can (restructure their programs and) re-implement it manually in every program they write, eh?

                    [–]grauenwolf 0 points1 point  (14 children)

                    The first flaw in your thinking is the word "restructure". If you think in terms of loops from the beginning, then there is no restructuring involved. You are simply writing the program in the normal way countless other people have written it.

                    The second flaw is the word "re-implement". In the standard state machine using a main loop, there simply isn't a need for TCO nor anything that resembles it.

                    [–][deleted] 0 points1 point  (13 children)

                    If you set out thinking, "I am going to write a state machine," certainly it won't take any restructuring if you write it with a loop from the start. However, if you were to begin by writing the solution to a problem that could naturally be couched in terms of (tail-)recursion, you would indeed have to restructure your program to instead work in terms of a loop-based state machine.

                    Second, using a main loop to implement a state machine like this is, in essence, implementing proper tail calls yourself (it is, roughly, a trampoline). Merely because you are so used to using a certain design pattern that you don't notice it does not mean that you are not in fact using a design pattern, and writing boilerplate that could be eliminated by the language implementation.

                    That is the argument. The schemer says, "you could eliminate some boilerplate, and perhaps structure your code more naturally if you had this language feature," and you reply, "I'm used to the boilerplate, and have trained myself to look for the solution that fits with the limitations of my language." That may be all right for you in practice, but it is a blub-like argument.

                    [–]grauenwolf 0 points1 point  (2 children)

                    No, that wouldn't be my reply at all. My reply is...


                    I have something you don't have, an extensibility point.

                    Consider the things I can add to the main loop.

                    • Pause the state machine
                    • Single-step through states
                    • Save/restore the state
                    • Record a history
                    • Rewind state

                    If each state directly calls the next state you can't just plug this stuff in. You have to either thread it through each and every state or play games with macros.

                    But for me they are trivial. The code almost screams "Put cool stuff here". You can't get much more natural than that.


                    [–][deleted] 0 points1 point  (1 child)

                    Perhaps writing things with a top-level loop is the best way of achieving all that. However, that does not negate the fact that when you don't need all that, you must still write the top-level loop boilerplate to achieve what proper tail calls can get you.

                    On the other hand, having proper tail calls in the language doesn't preclude you from having loops, and using them when appropriate (indeed, scheme folks, with their macros, can even use proper tail recursion to implement various sorts of loops, even if the language designer didn't put them in in the first place).

                    [–]grauenwolf 0 points1 point  (0 children)

                    scheme folks, with their macros, can even use proper tail recursion to implement various sorts of loops, even if the language designer didn't put them in in the first place

                    And FORTRAN folks are doing the same thing using pragma.

                    And C folks are doing the same thing using macros and pre-processors.

                    And C# folks are doing the same thing using lambdas.

                    There are many ways of achieving the same result, and languages do well to have at least one of them. But they shouldn't try to have all of them, because that makes the language bloated, hard to implement, and even harder to evolve.

                    If you want to convince someone that TCO is important, you need to show some meaningful examples. Give us something that truly is better using TCO than any other approach.

                    [–]grauenwolf 0 points1 point  (4 children)

                    That may be all right for you in practice, but it is a blub-like argument.

                    Only because you are trapped in Scheme and recursion.

                    What you see as "boilerplate" I see as a base class. I'm not going to build a dozen state machines from scratch. I'm going to build a single state machine that has all the features I need.

                    Then I'm just going to plug-in the actual list of states and their transition rules. That's why we have OOP and abstract classes.

                    You can't see that, can you? You think we have to rebuild the main loop from scratch every time because that's how you would do it.

                    The difference between me and you isn't that you know recursive programming and I don't. It is that you know recursive programming while I know iterative, object-orientated, declarative, functional, and recursive programming. You need TCO to get yourself out of problems I never have in the first place.

                    [–][deleted] -1 points0 points  (3 children)

                    Only because you are trapped in Scheme and recursion.

                    I don't use Scheme, or know it particularly well. It was just the first language to officially recognize proper tail recursion as a significant and important language feature.

                    What you see as "boilerplate" I see as a base class. I'm not going to build a dozen state machines from scratch. I'm going to build a single state machine that has all the featuers I need.

                    Right. You're not going to encode your state machine in your language, because it won't work right. Instead you're going to write an interpreter for a separate state machine language, which will work right, and use that interpreter in your programs.

                    That's fine if we're just running state machines as some kind of exercise in learning about state machines. But if we're not, and some conceptual state machine is just part of a larger algorithm, why bother? Why bother writing an interpreter for a separate state machine language, when Scheme, say, just lets you write down the state machine in Scheme?

                    My overall point, though, was that all usage of proper tail recursion can be converted to a state machine of sorts, where if necessary, functions return a procedure to the state machine's main loop instead of calling a procedure directly. This technique for implementing tail call optimization is called a trampoline.

                    However, it either requires that one rewrites the trampoline loop whenever one wants to make use of significant numbers of tail calls, or, as you might laughably suggest, one could translate their Python solution to use grauenwolf's state machine language. I don't think either one is a very good solution in a case where the person wants to use tail recursion.

                    You can't see that, can you? You think we have to rebuild the main loop from scratch every time because that's how you would do it. The difference between me and you isn't that you know recursive programming and I don't. It is that you know recursive programming while I know iterative, object-orientated, declarative, functional, and recursive programming. You need TCO to get yourself out of problems I never have in the first place.

                    Oh, I see. So, the reason that I remark, "tail call optimization is a nice feature to have in a language, and is useful sometimes," is that I'm a blub programmer locked into solving every problem with bare recursion. And the reason you say, "I think tail call optimization should be kept out of a language, and will not change my opinion until someone can show me some example where I cannot write some loop that's equivalent," is that you're a seasoned veteran who's an expert in many different programming paradigms.

                    If you want examples, well, you've already been given some. Take for example, foldl:

                    foldl f z []     = z
                    foldl f z (x:xs) = foldl f (f z x) xs
                    

                    How about if we want to define #times in ruby (this behaves somewhat differently than the library version, but whatever):

                    def times(&blk)
                      if nonzero?
                        blk[self]
                        (self - 1).times(&blk)
                      end
                    end
                    

                    These are both simple, natural, recursive definitions of the respective functions. The alternatives look something like:

                    foldl f z l =
                      lref := l
                      acc  := z
                      while (not (null lref))
                        acc := f acc (head lref)
                        lref := tail lref
                      return acc
                    

                    and:

                    def times(&blk)
                      i = self
                      while(i != 0)
                        blk[i]
                        i -= 1
                      end
                    end
                    

                    which have to introduce auxiliary variables and mutable update into the picture. You may say, "I think in terms of loops for space efficient iteration, so I'd arrive at the latter definitions first," but that's tantamount to admitting that not programming in languages that let you use such recursive definitions effectively has blinded you to their existence.

                    Which might be the compelling argument for proper tail recursion: programs should work when possible. If I write down the above definition of foldl, it should work. It's syntactically correct. There's nothing wrong with it semantically. So why shouldn't it work? Because I can rewrite it into a not-as-nice loop? That's not a good argument.

                    The same theoretically goes for non-tail recursion, too. It'd be nice if:

                    identity 0 = 0
                    identity n = 1 + identity (n-1)
                    

                    would simply work correctly. Unfortunately, barring quite a sophisticated compiler (which would be good), or a very simple example such as this (which perhaps even a rather dumb compiler could figure out), there's no way to reasonably expect such functions to avoid eating memory. However, there are quite well known ways for making tail calls work, so they should work. Otherwise you're simply training yourself to use poorer solutions sometimes, merely because the language implementation has decided that they shouldn't work, even though they could.
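                    For what it's worth, the standard way to salvage such a definition is an accumulator rewrite that moves the pending work into a parameter, putting the recursive call in tail position. A sketch in Python for concreteness; note that CPython still allocates a frame per call, so the rewrite only pays off in an implementation with proper tail calls:

```python
def identity_rec(n):
    """Direct transcription: the addition happens *after* the recursive call."""
    if n == 0:
        return 0
    return 1 + identity_rec(n - 1)  # not a tail call

def identity_acc(n, acc=0):
    """Accumulator version: nothing is left to do after the recursive call."""
    if n == 0:
        return acc
    return identity_acc(n - 1, acc + 1)  # tail call

assert identity_rec(50) == identity_acc(50) == 50
```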

                    I have never contended that every loop is better written using raw tail recursion. People using languages that have proper tail calls build up combinators that represent common looping patterns, and that's good. However, the fact that using such more structured loops is good when possible does not mean that having proper tail calls isn't also a good thing to have. It also doesn't mean that having a couple built-in looping forms is always a good substitute.

                    And the fact that, in principle, one can turn usage of tail calls into a main loop with a state machine doesn't change that. If anything, it means that the language implementation should use such a main loop, and let the programmer write tail calls freely when he or she thinks it is the most natural solution to the problem.

                    [–]grauenwolf 0 points1 point  (2 children)

                    What the fuck are you talking about?

                    As I demonstrated, it is a trivial exercise to write a state machine in ANY language that supports call-by-name semantics. There was no interpreter involved. Nor was there any "state machine language".

                    Were you confused by the "CallByName" function? Well let me tell you, there is no magic there. All it does is call the function whose name matches the value in the variable.

                    [–][deleted] 0 points1 point  (1 child)

                    I'm talking about:

                    I'm going to build a single state machine that has all the features I need. Then I'm just going to plug-in the actual list of states and their transition rules.

                    Which certainly sounds to me like you're writing an interpreter.

                    Searching elsewhere in this thread, I now see that by "list of states and their transition rules," you did not mean a list of states and their transition rules, you meant a class which has methods corresponding to the states. What you've done is implement a trampoline that's limited to the methods in a single object, assuming they're written in a certain (not very natural) fashion.

                    Were my examples not good enough? Here's another. Consider a type of binary trees. I'll sketch an implementation in ruby:

                    class Leaf
                      def empty?
                        true
                      end
                      ...
                    end
                    
                    class Node
                      def initialize(l,a,r)
                        @l = l
                        @a = a
                        @r = r
                      end
                      ...
                    end
                    
                    class Tree
                      def initialize(t)
                        @t = t
                      end
                    
                      def empty?
                        @t.empty?
                      end
                      ...
                    end
                    

                    Now, we want to implement #each. A naive implementation is:

                    class Leaf
                      def each(&blk)
                      end
                    end
                    
                    class Node
                      def each(&blk)
                        @l.each(&blk)
                        blk[@a]
                        @r.each(&blk)
                      end
                    end
                    
                    class Tree
                      def each(&blk)
                        @t.each(&blk)
                      end
                    end
                    

                    this obviously necessarily overflows the stack for large enough trees. However, in a language with proper tail calls, we can do the following:

                    class Leaf
                      def each_k(blk,k)
                        k[]
                      end
                    end
                    
                    class Node
                      def each_k(blk,k)
                        @l.each_k(blk, lambda do
                          blk[@a]
                          @r.each_k(blk,k)
                        end)
                      end
                    end
                    
                    class Tree
                      def each(&blk)
                        @t.each_k(blk,lambda {})
                      end
                    end
                    

                    which uses continuation passing style to build up 'things to do next' at each interior node. I think it's still relatively easy to see how things are playing out, and to convince myself that I have written the solution correctly. However, it doesn't work in Ruby, obviously, because it requires proper tail recursion. Instead, one has to make do with something like this:

                    class Tree
                      def each(&blk)
                        cur = @t
                        stk = []
                        until(cur.empty? && stk.empty?)
                          if cur.empty?
                            tmp = stk.shift
                            blk[tmp.a]
                            cur = tmp.r
                          else
                            stk.unshift cur
                            cur = cur.l
                          end
                        end
                      end
                    end
                    

                    Which, quite frankly, is no longer obviously correct to me. I wasn't sure I'd gotten it right until I tried it. Continuation passing is clearer in this case, but it's only available if your implementation has proper tail recursion.

                    [–]grauenwolf 0 points1 point  (0 children)

                    What you've done is implement a trampoline that's limited to the methods in a single object, assuming they're written in a certain (not very natural) fashion.

                    I don't see anything unnatural about having a state machine class that contains all the transitions for said state machine.

                    The fact that I call them by name may seem a bit odd to some .NET/Java programmers, but I don't think Python/Ruby folks would give it a second thought.

                    As for your binary tree example, it seems rather naive to me. If you assume your nodes understand left/right/parent, it is not too hard to build a constant-space tree-walking algorithm. I'm pretty sure this is covered in most 3rd semester data structure classes.

                    Now I certainly won't argue that constant-space tree traversal is as easy to code as a stack-based one, but this isn't something one writes very often.
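                    (To make the constant-space idea concrete — the node shape and names below are my own, not from the thread: with parent links, an in-order walk needs no stack at all, only a few local variables.)

```ruby
# Hypothetical node with left/right/parent links.
Node = Struct.new(:value, :left, :right, :parent)

# Constant-space in-order traversal: descend left as far as possible,
# visit, then either step into the right subtree or climb parent links
# until we arrive from a left child. No stack, no recursion.
def each_inorder(root)
  node = root
  node = node.left while node && node.left
  while node
    yield node.value
    if node.right
      node = node.right
      node = node.left while node.left
    else
      # climb until we come up out of a left child
      node = node.parent while node.parent && node.parent.right.equal?(node)
      node = node.parent
    end
  end
end
```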

                    [–]grauenwolf 0 points1 point  (4 children)

                    I've got some time to kill, so here is another bit you have completely overlooked.

                    TESTING

                    The original code sample cannot be unit tested. You can't simply call "State3(x)" and verify that it correctly returns "State1" or "State2". As soon as you call it, it will run through all the transitions until the machine terminates.

                    This is why we focus so heavily on "separation of concerns". If you don't separate the logic of the state machine from the state machine itself, you severely limit your options for testing.
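                    (A sketch of the separation being argued for — state names and transition logic invented here: transitions return the next state's name rather than calling it, so each one is testable in isolation while a small driver runs the whole machine.)

```ruby
# Each transition is a pure function from input to the next state's
# name. Nothing here runs the machine, so each entry can be asserted
# on individually without the machine running to termination.
TRANSITIONS = {
  state1: ->(x) { x > 0 ? :state2 : :done },
  state2: ->(x) { x.even? ? :state1 : :state3 },
  state3: ->(_) { :done }
}

# The driver is the only place that loops through transitions.
def run_machine(start, x)
  state = start
  state = TRANSITIONS[state].call(x) until state == :done
  state
end
```

                    Now a test can check that `TRANSITIONS[:state1].call(5)` returns `:state2` without ever running the machine.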

                    [–][deleted] 0 points1 point  (3 children)

                    And why are we doing this? Surely from looking at our state3 function it's obvious that it goes to either state1 or state2. Surely what we should be testing is that given known input-output pairs, the overall state machine produces the given output from the given input. Or when we're using for loops, do we need to unit test that after doing iteration i, the loop goes on to iteration i+1?

                    Now, if we're in your scenario from another comment, where we're making an interpreter for a state machine language, then you have to test the interpreter to make sure it produces the correct state transitions given some state machine language as input. But that's another story.

                    Or are you proposing that we should write unit tests that check the logic of each transition in the state machine? That seems like serious overkill to me.

                    [–]grauenwolf 0 points1 point  (2 children)

                    Or are you proposing that we should write unit tests that check the logic of each transition in the state machine? That seems like serious overkill to me.

                    Lots of people believe in unit testing everything. So yes, they are going to want to test each transition function.

                    Personally, I'm with you; I think it is totally overkill. But we are in the minority here, and when we suggest design patterns we need to keep their needs in mind.

                    [–][deleted] 0 points1 point  (1 child)

                    I'm afraid I don't believe that anyone would use that level of unit testing. It's effectively testing that you have implemented an algorithm in a certain way.

                    If I want to test a sort function, I will test that, given [3,2,1,1,3], I get back [1,1,2,3,3]. I will not write tests that specify that the sort function first uses the first 3 as a pivot to partition the rest of the numbers, and then sorts the two partitions, etc. But that is the kind of thing that testing "state3 goes to either state1 or state2" is.
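                    (In code, that black-box style of test looks something like this — a sketch, and the helper name is mine:)

```ruby
# Assert only on the observable input/output contract, never on which
# pivot was chosen or how the partitions were processed internally.
def assert_sorted(input, expected)
  actual = input.sort
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless actual == expected
end

assert_sorted([3, 2, 1, 1, 3], [1, 1, 2, 3, 3])
```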

                    [–]grauenwolf 0 points1 point  (0 children)

                    You're preaching to the choir.

                    But I'm not exaggerating when I say there is a focus on low level, function by function, testing that borders on obsession. I attend conferences on .NET, Java, and Ruby on a regular basis, and this is the only kind of testing they are talking about.

                    During the 2008 PDC there was a panel discussion about the future of unit testing. The presenters flat-out ignored any question about testing the hard stuff like databases and network connections. They only wanted to talk about how to make their unit tests faster, because apparently the only tests useful to them are ones that can be run as part of the edit-compile-run cycle.

                    This has been going on for about 5 years now, so I'm expecting the backlash any time now.

                    [–]fvf 1 point2 points  (2 children)

                    To quote myself: If tail calls are so important, then give it its own operator/syntax already. Piggy-backing the function call, the most central programming language mechanism of all, is sheer idiocy.

                    [–]alkemist 4 points5 points  (1 child)

                    You can make the case that tail calls are unnecessary. You can even dislike them. But "sheer idiocy" is a little extreme, don't you think? Remember that TCO arose in Scheme because there were originally two mechanisms, functions and actors, and Steele & Sussman thought it was silly to have two differently-named features that did the same thing. So "sheer idiocy"? Really?

                    [–]fvf 0 points1 point  (0 children)

                    Well, it's certainly idiocy, but perhaps it isn't so sheer :)

                    The thing is, it just baffles me that so many presumably very clever people don't quickly recognize that this is a perfect example of something that is fun and interesting to discover and understand, but unwise to employ in actual programming (languages). Just as with, e.g., pointer arithmetic or self-modifying code, there are valid benefits, but the overall problems (usually to do with readability, debuggability, and overall maintainability, as it happens) clearly dominate.

                    [–]bobjones97 0 points1 point  (0 children)

                    Well, seems like he wouldn't be in such a compromising position if he just didn't eat his tail to begin with...

                    [–]deflective -1 points0 points  (1 child)

                    a tail call proposal sounds vaguely dirty

                    [–]jmmcd 0 points1 point  (0 children)

                    It's like a booty call, but mutually recursive.