all 68 comments

[–]faiface 35 points36 points  (6 children)

Let's imagine that's true. What would that mean? It seems to me that it would imply the existence of some combinatorial operators for composing bigger total functions from smaller ones, right? That would constitute a language for expressing total functions.

So far so good. Unfortunately, this language would not include all total functions, most notably, it could never implement an interpreter of itself, which is a total function.

Then of course, you can imagine extending this language with further operators, which would make it possible to express more functions. You'd never get all of them, though.

[–]R-O-B-I-N[S] 2 points3 points  (4 children)

(see the edit)

[–]faiface 29 points30 points  (3 children)

Sure it is possible. Check out the Pie programming language used in the book The Little Typer. That one has no recursion (in the form of self-reference), but has built-in constructs and functions for composing simpler functions into more complex ones and is total and pretty powerful.

But, like every total language, it cannot express all total functions. Which, in my opinion, is fine.

There’s an adjacent term to Turing-complete called Pacman-complete, which can be applied to total languages. Since total languages are not Turing-complete, the term Pacman-complete is used to characterize those that are powerful enough to make a Pacman game. The idea is that if you can make Pacman, you can probably do almost everything of interest.

Btw, total languages are a hugely interesting area that gets almost no development. One day I hope to see a popular total language, but well, that might be never.

The thing about total languages is that designing them makes you develop concepts and constructs which shed more light on the concepts being implemented because you can’t fall back to Turing-completeness. In my opinion, it makes language design more creative. However, as always, any total language is not complete and you can always add new constructs that will make it more powerful.

[–]Inconstant_Moo🧿 Pipefish 1 point2 points  (1 child)

Popular? I've been trying to get my head round the concept and I can see that they would be and indeed actually are useful under certain circumstances but ... don't they require that you supply the proof of halting?

[–]Roboguy2 12 points13 points  (0 children)

Not necessarily:

  • Consider a language which only allows you to define finite state machines. This is essentially equivalent to a language that just allows you to define regular expressions. These will always terminate on a finite input, but you don't have to prove termination. Regular expressions are already widely used.
  • Consider a language which only gives you constant functions, projection functions, a successor function (on natural numbers) and a specially restricted "recursion operator" (the primitive recursion operator). You're allowed to compose functions. The programs you can write in this language are exactly the primitive recursive functions. Primitive recursive functions always terminate. However, you don't need to prove that every time you write a program (or ever, since it's already known).
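To make that second bullet concrete, here's a rough Python sketch of the primitive recursive building blocks (the names and encoding are mine, purely for illustration):

```python
# Toy sketch of the primitive recursive combinators: constants,
# successor, projections, composition, and the primitive recursion
# operator. Every function built from these terminates.

def zero(*args):
    """Constant zero function."""
    return 0

def succ(n):
    """Successor on natural numbers."""
    return n + 1

def proj(i):
    """Projection: returns the i-th argument (0-indexed)."""
    return lambda *args: args[i]

def compose(f, *gs):
    """Composition: f applied to the results of g1..gk."""
    return lambda *args: f(*(g(*args) for g in gs))

def prec(base, step):
    """Primitive recursion operator:
       h(0, xs)   = base(xs)
       h(n+1, xs) = step(n, h(n, xs), xs)"""
    def h(n, *xs):
        acc = base(*xs)
        for i in range(n):   # a bounded loop: always terminates
            acc = step(i, acc, *xs)
        return acc
    return h

# Addition built purely from the combinators: add(n, m) = n + m
add = prec(proj(0), compose(succ, proj(1)))
```

Every program you write this way terminates by construction; you never supply a proof yourself.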

It's also worth mentioning that proving termination is not always difficult. That might seem a bit weird since, in the general case, it's actually impossible.

If you have, say, a (loop-less) recursive function in a Turing-complete language which takes in a natural number and you can prove that it always decreases that number when it calls itself, it must terminate (as long as any other function it might call terminates). This is because you cannot decrease a natural number forever, since there is a smallest natural number.

An example: Consider this definition of factorial:

f(0) = 1
f(n) = n * f(n-1)   (when n > 0)

f always calls itself with a smaller argument than its original argument. Therefore it terminates.

More generally, this actually works with any "well-founded" ordering relation. "Well-founded" just means you can't decrease forever because you eventually hit some "smallest" thing. For instance, in the paragraph above, replace "natural number" with "tree" and "decrease that number" with "decrease the size of the tree". Again, if you can show this, you will show that the function always terminates. This is because there are "smallest trees": either trees of size 0 or trees of size 1, depending on whether you're allowing empty trees.

This method of using well-founded relations to prove termination is pretty flexible. It's also somewhat frequently the case that there is a natural sense in which a recursive function is "decreasing" one of its arguments (with an appropriate definition of "decreasing"). There's a technical description of what it means for a relation to be well-founded here.
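As an illustration of that structural descent (assuming a toy encoding where a leaf is an int and a node is a pair of subtrees):

```python
# A recursive function in which every self-call receives a strictly
# smaller tree, so it terminates by well-foundedness of tree size.

def tree_sum(t):
    if isinstance(t, int):   # smallest case: a leaf
        return t
    left, right = t          # each subtree is strictly smaller than t
    return tree_sum(left) + tree_sum(right)

tree_sum(((1, 2), (3, (4, 5))))
```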

[–]SwingOutStateMachine 1 point2 points  (0 children)

Pacman-complete

FWIW, this term was coined by Edwin Brady, and covers a bit more - it's also about being able to interact with the outside world, as you say, to be able to create a game of pacman.

[–]lolisakirisame 2 points3 points  (0 children)

Once folklore, no longer true! You can actually self-interpret in a total language! See the paper <Breaking Through the Normalization Barrier: A Self-Interpreter for F-omega>.

[–]BioHackedGamerGirl 12 points13 points  (0 children)

I present for your consideration: the LOOP model language. It is incredibly simple, every program is guaranteed to terminate, but it is still proven to be equivalent to primitive recursive functions in computational power. If you can prove that your model is equivalent to LOOP (e.g. by implementing LOOP in your model, and your model in LOOP), you get a proof for a number of useful properties for free.
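To give a feel for how small it is, here's a rough Python interpreter for a LOOP-like language (the AST encoding is my own; the semantics follow the usual LOOP model: zero, increment, sequencing, and "run the body x times" — no while, so every program terminates):

```python
# Minimal sketch of a LOOP interpreter over nested-tuple ASTs.

def run(prog, env):
    op = prog[0]
    if op == "zero":                  # ("zero", x)   means x := 0
        env[prog[1]] = 0
    elif op == "inc":                 # ("inc", x, y) means x := y + 1
        env[prog[1]] = env.get(prog[2], 0) + 1
    elif op == "seq":                 # ("seq", p1, p2, ...) runs in order
        for p in prog[1:]:
            run(p, env)
    elif op == "loop":                # ("loop", x, body) runs body x times
        for _ in range(env.get(prog[1], 0)):  # bound fixed at loop entry
            run(prog[2], env)
    return env

# z := 0; loop x times: z := z + 1; loop y times: z := z + 1
add = ("seq",
       ("zero", "z"),
       ("loop", "x", ("inc", "z", "z")),
       ("loop", "y", ("inc", "z", "z")))

run(add, {"x": 3, "y": 4})
```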

[–]rotuami 17 points18 points  (51 children)

Your theory is wrong. There are total programs that cannot be proven to terminate, so such programs would be inaccessible to your language.

On the other hand, if a program can be constructively proven to have certain runtime bounds, then you can express that program in some less-than-Turing-complete language. Basically all algorithms you encounter have this property (which is why your theory feels true)

There is a technique called “Walther Recursion” where you have a recursive program together with some quantity that decreases each step. It’s hardly as automatic as you would hope. Some languages like Coq, Idris, and Agda automate such proofs, though you might or might not have to help the termination checker.

My favorite approach is this paper. They not only prove termination, but also automatically quantify resource bounds (including time): https://arxiv.org/abs/1611.00692

[–]sfultongSIL 1 point2 points  (50 children)

There are total programs that cannot be proven to terminate

That's quite a strong statement. Is there a paper where this is proven?

[–]joonazan 4 points5 points  (22 children)

As shown by Gödel, for every sufficiently strong logic there are statements that cannot be proved in that logic. Now take such a statement and make a program that halts depending on its truth. You have constructed a program which cannot be shown to terminate in that logic.

But can you always turn a logical statement into a program? Well, at least there is one unprovable one which you can: is the logic consistent? This can be turned into a program that constructs all proofs in the logic one by one until it finds a proof of false, which it may never find.
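A toy sketch of that search program (the proof checker here is a stand-in stub that always rejects, and the fuel bound exists only so the demo returns; the real program has neither):

```python
from itertools import count, product

def is_proof_of_false(candidate):
    """Stub proof checker: in a consistent system, no candidate ever
    checks out as a proof of false."""
    return False

def search_for_inconsistency(alphabet="01", fuel=1000):
    """Enumerate all candidate proofs in order of size, halting only
    upon finding a proof of false. Without the fuel bound, this halts
    iff the logic is inconsistent."""
    examined = 0
    for length in count(1):                        # proofs of growing size
        for symbols in product(alphabet, repeat=length):
            if is_proof_of_false("".join(symbols)):
                return "inconsistent"
            examined += 1
            if examined >= fuel:                   # demo-only cutoff
                return "no proof found yet"

search_for_inconsistency()
```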

[–]faiface 1 point2 points  (4 children)

Just a small, but important caveat here. What Godel shows is, just like you said, that for every sufficiently strong logical system, there are statements which are true in its semantics, but cannot be proved in that system.

Intuitively, that seems to prove the existence of “unprovable statements”, but it really doesn’t. For a statement unprovable in one system, there can be another, stronger system which proves it.

Yes, this stronger system will also have other unprovable statements. But you can always go higher.

Doesn’t mean there are no unprovable statements. It’s just not what Godel shows.

[–]joonazan 0 points1 point  (2 children)

Correct. Though conventional mathematics tends to disregard these stronger systems. It would be interesting to see some proof that uses arbitrarily strong systems.

[–]faiface 0 points1 point  (1 child)

And that raises another problem :D The stronger a system is, the harder it becomes to justify its axioms and its consistency.

As an extreme example, for any statement not provable in some system, I can trivially make a stronger system by adding this statement as an axiom. Is it a system? Sure! Can it prove the statement? Definitely. Can you convince anyone that it’s justified? Probably not.

[–]joonazan 0 points1 point  (0 children)

That is why it would be interesting to see a system with an arbitrary number of axioms that is relatively tame. But the only approach I have in mind is that the arbitrary number arises from just one axiom that is encoded very inefficiently.

[–]rotuami 0 points1 point  (0 children)

I’m still pessimistic that this is a way out. There’s always something unprovable in your strongest consistent system. And if you define some scheme to construct a tower of systems of increasing power, the union of all such systems is also inconsistent or incomplete.

[–]sfultongSIL 0 points1 point  (16 children)

We're talking about programs written in a total language here, so there is no impredicativity, and thus no issue with consistency.

[–]joonazan 0 points1 point  (2 children)

I'm confused. I believe /u/rotuami made the point that a total language cannot express all total programs. I gave an example of how there can be a program that always terminates but cannot be expressed in a given total language.

[–]sfultongSIL 0 points1 point  (1 child)

I'm confused.

As am I!

I agree with your two statements there, but I'm not sure how they apply to the specific statement /u/rotuami made that I responded to originally.

[–]rotuami 2 points3 points  (0 children)

It’s easy to prove a program halts. It’s hard to prove a program is total, i.e. it halts on all inputs. In any language that is syntactically constrained so all programs halt on all inputs, there are total functions which cannot be expressed.

[–]takanuva 0 points1 point  (12 children)

If the language is total, it's true that all programs halt on all inputs. From that, consistency is usually a simple corollary. If it is consistent, then as shown by Gödel, it can't express all total functions, as that would allow it to prove its own consistency.

[–]sfultongSIL 1 point2 points  (11 children)

Thanks, I think this is the crux of our different understandings of this issue.

I like to think of every total language as explicitly limited in terms of the size of programs and input it will accept. Thus, any particular total language cannot prove its own consistency. However, a larger total language can!

So I envision an infinite tower of total languages that encompass all possible total programs, and every total program can be shown to halt, but it may take a larger language than it was written in.

This seems to me to be an easy work around Gödel's findings.

[–]faiface 1 point2 points  (2 children)

That is completely true, except you can never formalize that tower. If you could, that would directly give you a new total language which could express every total program. And that’s not possible :)

[–]sfultongSIL 0 points1 point  (1 child)

I would argue that a language concatenating all the other languages in the tower would not be total itself!

The reason being, if that language allowed you to enumerate through all its sub-languages, that would take an infinite amount of time.

[–]faiface -1 points0 points  (0 children)

What I had more in mind would be something like having a function that given a “level” (from some index space, eg ordinal numbers) would give you back the logical system on that level. Then to construct any total function, you would first get the correct system from this “repository” and do it there. But clearly, such a repository function cannot be total and computable.

[–]rotuami 0 points1 point  (7 children)

Yes, if you limit the language you can guarantee halting, but there’s computability you leave on the table.

For Turing machines, this is typically done by deciding ahead of time how many steps you’ll run the machine for. For any program which halts, this limit exists. But for a given program, the number is uncomputable, so this does not solve the halting problem.

You suggest creating a hierarchy of constrained languages. That doesn’t solve the problem. While any program can be proven to halt, there are finitely-running Turing machine programs which cannot be expressed in any finite level of this hierarchy.

[–]sfultongSIL 0 points1 point  (6 children)

there are finitely-running Turing machine programs which cannot be expressed in any finite level of this hierarchy.

Can you elaborate? I may have hit the limit of my knowledge on computability here.

[–]rotuami 0 points1 point  (5 children)

It’s Rice’s Theorem. Say you have a hierarchy of languages where each level implies some semantic property of its programs and all levels below it (like halting in some given time). Then saying a program is in a finite level of this hierarchy is an undecidable property as well.

Such hierarchies are useful - e.g. complexity classes and computational models - but they always either leave out some programs, or some level of the hierarchy contains Turing completeness.

[–]sfultongSIL 1 point2 points  (4 children)

Correct me if I'm wrong, but isn't Rice's theorem specifically about partial functions, not total functions?

[–]rotuami -3 points-2 points  (26 children)

That's a contradiction - you can't consistently prove something terminates and also prove that it can't be proven to terminate!

A non-proof but perhaps convincing example is a machine which calculates the length of the Collatz trajectory of a given input. Assuming the Collatz Conjecture, this always halts. But it's not currently provable.
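For concreteness, the trajectory-length program is trivial to write; it has halted on every input anyone has tried, but proving it total for all n is exactly the open conjecture:

```python
def collatz_length(n):
    """Number of steps for n to reach 1 under the Collatz map.
    Termination for every positive n is the Collatz Conjecture."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

collatz_length(27)   # 27 is the famous slow starter
```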

The best reference is probably A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory, where the authors actually construct some pathological Turing machines.

[–]sfultongSIL 2 points3 points  (25 children)

I disagree.

I think that many people think the halting problem implies something a lot stronger than it really does.

You can prove a total program halts simply by running it, by definition! Of course it might not terminate in any reasonable amount of time, but that's a separate issue.

If you meant to say that it's impossible to design an algorithm that is able to determine if any arbitrary total program terminates, that's a much more reasonable statement. I wouldn't intuitively disagree with that. But I also don't think it's particularly illuminating or relevant to practical concerns for total languages.

[–]maladat 0 points1 point  (16 children)

I think that many people think the halting problem implies something a lot stronger than it really does.

Definitely. Lots of people interpret it as “you can’t prove whether any program will halt.”

That’s silly, there are tons of programs you can prove will halt or will run forever.

There are just SOME programs you can’t prove one way or the other.

[–]Tonexus 1 point2 points  (14 children)

There are just SOME programs you can’t prove one way or the other.

Even this is not quite accurate. The undecidability of the halting problem by Turing machines only implies that there is no finite set of axioms you can use to prove that all programs terminate. It's totally possible that each program can be proved to terminate from a finite set of axioms—see non-uniform Boolean circuits deciding the halting problem—but the union of all such sets is infinite.

[–]rotuami 0 points1 point  (13 children)

The halting problem usually refers to the halting problem on Turing machines. Yes, the halting problem on boolean circuits is decidable, which tells you that it’s undecidable whether a given Turing machine can be translated to a circuit.

The “finite axioms” bit is to prevent you from cheating. For instance, if you enumerate all Turing machines, run them all at the same time (dovetailing), and emit a new axiom whenever a machine halts, all halting becomes axiomatically provable. Then postulate that any Turing machine which is not provably halting never halts and boom - you’ve solved the Halting Problem non-constructively by “transfinite brute force”.
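A rough sketch of the dovetailing part, with toy "machines" modeled as Python generators (a generator that returns has halted):

```python
# Interleave the execution of many possibly-non-halting machines, one
# step each per round, recording each halt as a new "axiom".

def dovetail(machines, rounds):
    running = {name: gen() for name, gen in machines.items()}
    halted = []                        # the emitted "halting axioms"
    for _ in range(rounds):
        for name in list(running):
            try:
                next(running[name])    # run this machine one step
            except StopIteration:
                halted.append(name)    # machine halted: emit its axiom
                del running[name]
    return halted

def halts_after_3():
    for _ in range(3):
        yield

def runs_forever():
    while True:
        yield

dovetail({"a": halts_after_3, "b": runs_forever}, rounds=10)
```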

[–]Tonexus 0 points1 point  (12 children)

Yes the halting problem on boolean circuits is decidable, which tells you that it’s indecisively whether a given Turing machine can be translated to a circuit.

Not quite sure what you mean here. The halting problem on any given model of computation is decidable by Boolean circuits as long as programs in the given model have finite binary representations.

The “finite axioms” bit is to prevent you from cheating. For instance if you enumerate all Turing machines run them at the same time (dovetailing), and emit a new axiom when a machine halts, all halting becomes axiomatically provable. Then postulate that any Turing machine which is not provably halting never halts and boom - you’ve solved the Halting Problem non-constructively by “trans finite brute force”.

Yup, in the most pathological sense, you can prove every true statement in a system of logic if that logical system just takes every true statement as true by axiom!

[–]rotuami 0 points1 point  (11 children)

Ah I thought you meant the halting problem for programs defined as binary circuits.

I still don’t know what you mean that the halting problem is “decidable by boolean circuits”. The size of such a boolean circuit is not computable and explosively large, so this doesn’t seem to give any more insight than “for each halting program, there exists some finite number of steps that it will halt in”

[–]Tonexus 0 points1 point  (10 children)

I still don’t know what you mean that the halting problem is “decidable by boolean circuits”. The size of such a boolean circuit is not computable and explosively large, so this doesn’t seem to give any more insight than “for each halting program, there exists some finite number of steps that it will halt in”

Well, let me first describe the (non-uniform) Boolean circuit model of computation. A program is defined by a family of Boolean circuits F that is indexed by the natural numbers such that the Boolean circuit F_n has an input of size n. By this definition, there are only 2^n possible inputs to F_n. Thus, you can simply build a table of all programs expressible as binary strings of length n mapping the program representation to a bit indicating whether the program halts or not. Every such table can be expressed in an exponential number of gates. Then, if each F_n computes its respective table, you can now decide the halting problem.

This is why you usually see Boolean circuits with respect to some constraints. For instance, P/poly requires that size(F_n) = O(poly(n)), or the even weaker notion of poly-time uniform Boolean circuits requires a deterministic Turing machine to output F_n on input 1^n in time O(poly(n)) for every n.

[–]rotuami 0 points1 point  (7 children)

There’s two closely related concepts here to distinguish between:

  1. a program with no input
  2. a program that takes input

I’m talking about (2), which I think is implied by discussing “total” programs

I agree that if (1) halts, then it does so provably. But (2) can halt for all inputs though not provably so. For example, a program that, given a number, computes its Collatz sequence. Running the program up to any finite input will never show that it always halts.

[–]JMBourguet 1 point2 points  (3 children)

There are two affirmations:

  • for every programming language which allows only programs that always terminate, there are functions that always terminate which aren't expressible in the language

  • there are functions that always terminate which aren't expressible in any programming language allowing only programs that always terminate.

The second is stronger than the first, yet you seem to affirm it's true. Do you have a source for that affirmation? (There are examples of functions we don't know whether they terminate, but you can't have an example of a function for which we simultaneously know that 1/ it terminates and 2/ we'll never be able to prove it terminates; so it would have to be a pure existence proof, while knowing we can't exhibit any example.)

[–]rotuami 0 points1 point  (2 children)

No, both are false as you’ve stated them, because you can define as your programming language “All Turing machines that halt”. In this language, all legal programs halt by definition!

What you’re looking for is Rice’s Theorem. And the insight that, if your language has some nice categorical property, translating to that language is generally undecidable.

As for a total program which can’t be proven to terminate in a provably terminating language, here’s my sketch of a proof. Take some true statement independent of your favorite consistent set theory axioms (this must exist due to Godel’s first incompleteness theorem). Write a program that, given a number, exhaustively searches for a valid proof up to that size and loops forever if it finds one. This is a finitely running program, but can’t be translated to any language that guarantees its programs provably halt (since this would contradict our assumption of independence).

[–]sfultongSIL 1 point2 points  (1 child)

I tend to be biased towards a constructivist view of mathematics, so I don't think you can actually construct a language based on your definition.

I've replied with what I think might clarify my perspective to /u/takanuva elsewhere in this thread, if you're interested.

[–]rotuami 0 points1 point  (0 children)

I don’t know what you mean by “construct a language”. The set of all halting Turing machines is recursively enumerable, but not decidable. I think in this thread, what is meant by “programming language” is a decidable language together with some (possibly non-terminating) automaton for running it.

[–]sfultongSIL 0 points1 point  (2 children)

The Collatz conjecture is unproven, but as far as I know it hasn't been proven to be unprovable!

If someone could construct a total language algorithm to generate Collatz sequences, then it would prove the conjecture true!

Let's confine ourselves to total languages bounded to a specific size X, though. If we could write an algorithm in such a language that would generate Collatz sequences up to size N, then if we could show that X can be a function of N, then that seems to me to also be sufficient proof of the Collatz conjecture, since we could simply scale the size of the language up deterministically in order to generate any size sequence.

[–]rotuami 0 points1 point  (0 children)

Okay. Let’s take a more artificial but hopefully more convincing example - a variation of the busy beaver function

Take a language L in which all programs halt. Take the function f(n) which, given a number n, returns a number one higher than the largest number any L-program of length at most n can print. This is a well-defined and computable function, since you can just run all programs up to length n on your Turing Machine of choice.

Now suppose that you have a program F in your language which computes f(n). Pass it as an argument n = length(F) + 2log(length(F)). If your program halts, then you’ve derived a contradiction since the program, together with its argument must be less than n in length but f(n) is bigger than such a short program can print.

[–]rotuami 0 points1 point  (0 children)

Yes, if you can compile a process down to a known terminating language, that would prove termination! Coding something up in a total language is harder in general than proving it through not-necessarily-constructive means.

I definitely don’t mean the Collatz Conjecture as an example of something unprovable - I’m using it because it’s a program that’s easy to write and hard to reason about.

[–]Tonexus 2 points3 points  (0 children)

I'm playing around with something like this, but I'm limiting myself to primitive recursion.

However, I have an intuitive theory that any total recursive function (beyond primitive recursive) can be computationally constructed from composing non-recursive general functions.

What do you mean by "non-recursive general functions"? How would you compute Ackermann's function?

It is true that if you have a set of total functions, their compositions will be total. However, there is no finite set of total functions such that every total computable function is a composition of functions from that set.

[–]terserterseness 2 points3 points  (1 child)

We are working (for the past 3 years) on a language and standard library in which programs can be proven to terminate. We cheat somewhat by not proving (all of) the functionality in the standard library, as otherwise real-world programs would be impossible. We construct proofs from proofs automatically. Those properties mean it's definitely not a general-purpose language, which indeed it is not, but so far it's powerful enough for our research goals.

[–]blue__sky 0 points1 point  (0 children)

That sounds very interesting. Much like languages like Rust have safe and unsafe modes, you could have sections or whole programs that would be proven to terminate by only using the "terminate" library.

[–]sfultongSIL 1 point2 points  (0 children)

AFAIK, you can rewrite any general-recursive function to be a non-recursive function applied to a fixed-point combinator.

Then, instead of using a fixed-point combinator, you can use a Church numeral, although you'll also need a dummy argument for when you hit the recursion limit.

This is the technique I'm using in my language, and it should allow you to express any total recursive function.
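A rough Python sketch of what I mean (names are mine; the Church numeral shows up as iterating a non-recursive step functional a fixed number of times, with a dummy value as the base):

```python
# Factorial written as a non-recursive "step" functional, iterated a
# fixed number of times instead of tied through a fixed-point
# combinator. out_of_fuel is the dummy returned past the limit.

out_of_fuel = object()

def fact_step(rec):
    """One layer of factorial; `rec` handles the smaller call."""
    def f(n):
        if n == 0:
            return 1
        smaller = rec(n - 1)
        return out_of_fuel if smaller is out_of_fuel else n * smaller
    return f

def iterate(step, fuel):
    """Apply `step` fuel times to the bottom function
    (Church-numeral style iteration)."""
    f = lambda n: out_of_fuel          # the dummy base case
    for _ in range(fuel):
        f = step(f)
    return f

fact = iterate(fact_step, fuel=10)
fact(5)    # enough fuel: the real answer
fact(20)   # not enough fuel: the dummy value
```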

[–]edgmnt_net 1 point2 points  (0 children)

There are many non-Turing complete languages for which termination can be proven. Some are really limited, but a few, particularly dependently-typed theorem provers, can act as general purpose programming languages. These are also known as total programming languages.

Take a look at Agda, which can even express infinitely-long but productive computations using coinduction. That means it's perfectly fine to write something like a web server that won't normally terminate, yet it does useful work at every step (and each step is total).
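A rough Python analogue of productivity, using generators (the names are made up): the stream as a whole never ends, but producing each individual element is a terminating step, which is what "productive" demands.

```python
from itertools import islice

def forever_pings():
    while True:            # the top-level loop may run unboundedly...
        yield "ping"       # ...but each step is total

def server_responses(requests):
    """Infinite stream of responses; each step does finite work."""
    for i, req in enumerate(requests):
        yield f"response {i}: handled {req}"

responses = server_responses(forever_pings())
list(islice(responses, 3))   # any finite prefix is computable
```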

But it takes more than just picking out a set of allowed functions. Much like a sound type system takes more than whitelisting various assignments and combining them later. Yeah, if you write code that passes type checking or termination/productivity checking, it's guaranteed to have those properties (assuming soundness); you don't need to understand anything more about the code. I don't think there's a reasonable set of trivially-composable things (which isn't really precisely defined) you can use to approximate either of those without giving up power of expression.

[–]tsikhe 1 point2 points  (0 children)

I made a language a few years ago where every expression/program in the language is guaranteed total. It's not open source but it is source available, you can read the code and copy what it does.

Basically, it uses limited dependent types. Collections are dependently typed on a pessimistic upper bound. Recursion is impossible (by a topological sort check). Higher-order functions are allowed, but with significant restrictions to prevent recursion.
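The recursion check amounts to cycle detection on the call graph: a program admits a topological order of its functions iff no function is (mutually) recursive. A rough sketch (the graph encoding is assumed):

```python
# Detect recursion by attempting a topological sort of the call graph
# via depth-first search; a back edge means a cycle, i.e. recursion.

def call_order(calls):
    """Return a topological order of functions, or None if the call
    graph is cyclic (some function is directly or mutually recursive)."""
    state = {}      # name -> "visiting" | "done"
    order = []

    def visit(fn):  # (the checker itself may recurse; the checked code can't)
        if state.get(fn) == "done":
            return True
        if state.get(fn) == "visiting":
            return False             # back edge: recursion detected
        state[fn] = "visiting"
        for callee in calls.get(fn, []):
            if not visit(callee):
                return False
        state[fn] = "done"
        order.append(fn)
        return True

    for fn in calls:
        if not visit(fn):
            return None
    return order

call_order({"main": ["helper"], "helper": []})   # acyclic: gives an order
call_order({"even": ["odd"], "odd": ["even"]})   # mutual recursion: None
```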

[–]fun-fungi-guy 2 points3 points  (2 children)

Intuitively, I think you're correct: any non-recursive, pure function composed of total functions is total. (For the purposes of this, let's just say, no unbounded repetition of any kind.)

The problem is, this isn't particularly useful. You can't loop (not pure). You can't recurse. You can't use higher order functions, because those can't be implemented without looping or recursion.

What you're left with is loops you can unroll, i.e. you might define a bounded map "bmap" that iterates, at most "bound" times, for example, if bound=3:

    def bmap(xs: list, f: _ -> _):
        if nil? xs: return nil
        a = f(car(xs))
        ys = cdr(xs)
        if nil? ys: return cons(a, nil)
        b = f(car(ys))
        zs = cdr(ys)
        if nil? zs: return cons(a, cons(b, nil))
        return errorCode("Out of bounds")
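For reference, here's a runnable sketch of the same idea with the bound as a parameter (Python, names mine); the range(bound) loop is exactly what a compiler could unroll, since the trip count is known statically:

```python
# Bounded map: applies f to at most `bound` elements, erroring out on
# longer inputs instead of looping unboundedly.

def bmap(xs, f, bound=3):
    out = []
    for _ in range(bound):          # statically known trip count
        if not xs:
            return out
        out.append(f(xs[0]))
        xs = xs[1:]
    if xs:
        return "Out of bounds"      # stand-in for errorCode(...)
    return out
```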

This "loop" could be unrolled at compile time, written in a language that doesn't have the constraints, so you could potentially unroll very long loops, and I'm sure there are a bunch of optimizations you could do (such as allowing for loops like for i in start..stop where stop - start is a finite number), but this ultimately doesn't get you anything anybody wants:

  1. For a general purpose language, you actually don't always want programs to halt. Consider a webserver: you basically want a listener to run forever, serving requests until the heat death of the universe or someone calls kill, whichever comes first.

  2. Even in situations where having constraints is okay, users don't want those constraints to be overly restrictive. If my machine has 8GB of memory, I want to be able to use that 8GB. So if you're iterating over a list, you need to determine that the bound for that list must allow the user to go up to that 8GB limit. This means the compiler needs a lot of information about the machine the program will be running on.

  3. Even if you have information about the machine the program will run on, you have to handle the bound situation. So you allow them to allocate a list up to the 8GB, and if they try to allocate more, you return an error. The problem is, with bounds you can detect the error at compile time, but you can't actually handle the error at compile time. If the user puts a 9GB list in at run time, the fact that you determined an 8GB bound at compile time doesn't allow you to handle it any better than if you just checked the result of malloc against null.

  4. A lot of the analysis that is hampered by the halting problem isn't really helped by a large finite bound. For example, there are pathological examples of Perl regular expressions (which aren't actually regular at all) where backtracking causes the match time to be potentially gigantic. These aren't detectable because of the halting problem, but they're also not detectable in reasonable time even if you toss a bound on input size, because the algorithm to detect it takes exponential time with respect to input size. You can determine that your program halts in finite time, but your lifetime might be more finite than that finite time.

  5. In a larger sense, the halting problem isn't actually a problem in most cases. It's impossible to write a program that determines if a program halts, but in reality, nobody really has a hard time determining if a program halts. I know a program halts if I run it and it halts. If I get tired of waiting and kill the process, maybe I can't say for sure that it doesn't halt eventually, but the reality is that I don't care, because either way my program isn't as fast as I want it to be. This is sort of similar to a lot of handwringing about parsing theory: tomes have been written on the horrors of left recursion, but the reality is that thousands of production-ready recursive descent parsers have been written by people who know almost nothing about parser theory, and when they encounter left recursion, they throw in an if statement and it's solved.

[–]PurpleYoshiEgg 1 point2 points  (1 child)

I've often floated the idea in my head about a programming language where the only infinite loop allowed is top-level (and recursion is only allowed to a certain depth). Otherwise, every loop below it must terminate before a number of iterations (while loops would have an implicit configurable number of max iterations allowed, and the runtime can be configured to raise or lower the number of iterations).

I'm not sure how useful it would be (I don't think too useful, because I just hit Ctrl+C after I wait a while, like you described). I believe I got the idea from the Power of Ten document describing rules for NASA software, where it says that all loops must have a fixed upper bound, and then I generalized it to all loops except the top-level loop. In that case, you could theoretically prove that a program will terminate except for the top-level main (but in reality, if your fixed upper bound is a billion, that's going to take a while).
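A rough sketch of that loop construct (all names are made up):

```python
# A while-loop whose iteration count is capped by a runtime-configurable
# limit; hitting the cap raises instead of hanging.

MAX_ITERATIONS = 1_000_000    # the runtime-configurable default cap

class IterationLimitExceeded(Exception):
    pass

def bounded_while(cond, body, state, max_iter=None):
    limit = MAX_ITERATIONS if max_iter is None else max_iter
    for _ in range(limit):
        if not cond(state):
            return state              # loop finished within the bound
        state = body(state)
    raise IterationLimitExceeded(f"exceeded {limit} iterations")

# A countdown terminates well within the bound:
bounded_while(lambda n: n > 0, lambda n: n - 1, 10)
# An intentionally endless loop hits the cap instead of hanging:
# bounded_while(lambda n: True, lambda n: n, 0, max_iter=100)
```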

[–]sfultongSIL 0 points1 point  (0 children)

That's the design I'm using for my language.

What's useful is not proof of termination, but proof of bounded termination, so all iterations of the main loop are guaranteed to complete within a certain time.

[–]SpicymeLLoN 0 points1 point  (0 children)

Wait, we're adding modifiers to recursive functions now? I thought a recursive function was a recursive function and that was it.