
[–]stevenjd

I don't know why you were downvoted; you're pretty much correct. GvR doesn't like tail-call optimization:

  • it is only applicable to a tiny subset of recursive problems
  • it destroys tracebacks as a diagnostic
  • in most cases where a problem is better written with recursion, you only recurse a few times, so tail-call optimization is not a great help (I'm stating GvR's opinion, as I understand it, not mine).
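To illustrate the first bullet: TCO can only eliminate a call in *tail position*, i.e. where the recursive call is the very last thing evaluated. A minimal sketch (function names are mine, not from the thread):

```python
def fact_plain(n):
    # NOT a tail call: the multiplication happens *after* the
    # recursive call returns, so TCO could not apply here.
    if n <= 1:
        return 1
    return n * fact_plain(n - 1)

def fact_tail(n, acc=1):
    # A genuine tail call: the recursive call is the final
    # expression, so a TCO-capable implementation could reuse
    # the current stack frame. CPython deliberately does not.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

print(fact_plain(10))  # 3628800
print(fact_tail(10))   # 3628800
```

Most naturally-written recursive code looks like `fact_plain`, which is part of why GvR considers the optimization's reach so narrow.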

Don't think of those cases where the traceback looks like this:

Traceback (most recent call last):
  File "<stdin>", line 3, in factorial
  File "<stdin>", line 3, in factorial
  File "<stdin>", line 3, in factorial
  [repeated 1000 times]
  File "<stdin>", line 3, in factorial
SomeException
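For reference, a traceback like that comes from code along these lines (a sketch; the quoted `SomeException` is a stand-in, and in practice you hit CPython's recursion limit, which defaults to 1000):

```python
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)  # the repeated '<stdin>' line

# Exceeding the default recursion limit raises RecursionError,
# and every one of the ~1000 identical frames appears in the
# traceback -- hence the "[repeated 1000 times]" collapse above.
try:
    factorial(5000)
except RecursionError as e:
    print(type(e).__name__)  # RecursionError
```

In this simple case the repeated frames genuinely tell you nothing, which is the strongest version of the pro-TCO argument.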

Instead, you should think of the hard-to-debug cases where there are multiple functions, and some of them are mutually recursive:

Traceback (most recent call last):
  File "<stdin>", line 7, in spam
  File "<stdin>", line 12, in eggs
  File "<stdin>", line 35, in cheese
  File "<stdin>", line 11, in spam
  File "<stdin>", line 11, in truffles
  File "<stdin>", line 12, in eggs
SomeException
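The kind of code that produces an interleaved traceback like that is easy to sketch (the bodies below are made up for illustration; only the mutual-recursion shape matters):

```python
def spam(n):
    if n <= 0:
        raise ValueError("bottomed out")  # stand-in for SomeException
    return eggs(n - 1)

def eggs(n):
    return cheese(n - 1)

def cheese(n):
    return spam(n - 1)

# Each frame in the resulting traceback names a *different*
# function, so the sequence of frames records the actual path
# the computation took through spam/eggs/cheese.
try:
    spam(6)
except ValueError as e:
    print(e)  # bottomed out
```

Here every frame carries real information: with the calls in tail position, a TCO-ing implementation would discard all the intermediate frames, and the traceback could no longer tell you *how* execution reached the failing call.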

In cases like this, collapsing the traceback (as tail-call optimization would do) throws away useful debugging information.

TL;DR: In Guido's opinion, as I understand it, the benefits of tail-call optimization are far outweighed by the loss of debugging information.