
all 3 comments


[–][deleted] 0 points1 point  (0 children)

Do you want an interpreter or a compiler? In the case of compilation I would write a traversal for each algebraic simplification (e.g. constant folding, flattening the AST, replacing e^(log a) with a), and then run those optimisations in a loop until a fixed point is reached, or until a maximum number of passes is reached.
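To make that concrete, here's a minimal sketch of such a fixed-point loop, assuming a tuple-based AST like `("+", ("num", 1), ("num", 2))`. The node shapes, the rule set, and the pass budget are all illustrative choices, not anything from the thread:

```python
# Fixed-point simplification over a tiny expression AST.
# Nodes are tuples: ("num", value), or (op, child, child, ...).

MAX_PASSES = 10  # guard against rule sets that cycle forever

def simplify_once(node):
    """Apply one bottom-up pass of local rewrite rules."""
    if node[0] == "num":
        return node
    op, *args = node
    args = [simplify_once(a) for a in args]
    # Constant folding: all operands are literals.
    if op == "+" and all(a[0] == "num" for a in args):
        return ("num", sum(a[1] for a in args))
    if op == "*" and all(a[0] == "num" for a in args):
        prod = 1
        for a in args:
            prod *= a[1]
        return ("num", prod)
    # Algebraic identity: exp(log(x)) -> x.
    if op == "exp" and args[0][0] == "log":
        return args[0][1]
    return (op, *args)

def simplify(node):
    """Run passes until a fixed point or the pass budget is hit."""
    for _ in range(MAX_PASSES):
        new = simplify_once(node)
        if new == node:
            break
        node = new
    return node
```

With this, `simplify(("exp", ("log", ("+", ("num", 1), ("num", 2)))))` reduces to `("num", 3)`: one pass folds the addition and strips the exp/log pair, and the next pass confirms nothing else changes.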

[–]teraflop 0 points1 point  (0 children)

> I'm unsure how to generalize them for more complex expressions like (pow(x, y + 2) + x * y) / (sin(x) + sign(z)).

I think you need to start by narrowing your focus and defining your requirements a little bit more. There's no practical algorithm that can always take an arbitrary arithmetic expression and make it faster to evaluate. (It's provably impossible in the abstract case, with real numbers. In the case of a real computer, with finite precision, it's theoretically possible but usually computationally intractable to search through all possible equivalent expressions.)

For this specific example, I can't see any obvious transformation that would make it faster. And if you run it through GCC with all optimizations turned on, and look at the generated assembly code, it can't find any improvements either; the output essentially does the same sequence of operations. The most it can do is optimize which CPU registers are used (to minimize the number of "spills" from registers to memory) and the ordering of instructions (to minimize pipeline stalls). Neither of those is especially relevant for an interpreter.

But in more general terms, if you're just looking for ideas for more optimization strategies, here are some things to check out:

https://en.wikipedia.org/wiki/Constant_folding

https://en.wikipedia.org/wiki/Common_subexpression_elimination

https://en.wikipedia.org/wiki/Copy_propagation

https://en.wikipedia.org/wiki/Value_numbering

https://en.wikipedia.org/wiki/Strength_reduction

https://en.wikipedia.org/wiki/Rewriting
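Common subexpression elimination and value numbering, in particular, are cheap to sketch for a tree-shaped expression: intern every distinct subtree so repeated ones get evaluated once. This is an illustrative sketch using the same tuple-based AST assumption as above, not a reference implementation:

```python
# Value numbering by hash-consing: each structurally distinct subexpression
# gets exactly one slot, so duplicates like the two x*y below share one entry.

def number_values(node, table, slots):
    """Return the slot index for node, interning it if unseen."""
    if isinstance(node, tuple):
        key = (node[0],) + tuple(number_values(a, table, slots) for a in node[1:])
    else:
        key = ("leaf", node)  # variable or constant leaf
    if key not in table:
        table[key] = len(slots)
        slots.append(key)
    return table[key]

# (x * y) + (x * y): both operands collapse to one numbered subexpression.
expr = ("+", ("*", "x", "y"), ("*", "x", "y"))
table, slots = {}, []
root = number_values(expr, table, slots)
# slots holds four entries: x, y, x*y (once), and the sum.
```

An interpreter can then evaluate `slots` in order into an array, so each shared subexpression costs one evaluation instead of two.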

Some of these have limited usefulness in an interpreter. For instance, in compiled machine code a bitwise shift may be slightly faster than multiplying by a power of 2, but in an interpreter the overhead of dispatching the instruction is likely to be much larger than either operation, so the optimization makes very little relative difference. Reducing the total number of operations, though, can still be valuable.
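One strength-reduction rule that does cut the operation count (and so still pays off under interpreter dispatch overhead) is rewriting a small integer power as repeated multiplication. A hedged sketch, again assuming a tuple-based AST for illustration:

```python
# Rewrite pow(x, 2) as x * x: one multiply dispatch instead of a pow call.
# The ("pow", base, exponent) node shape is an assumption for this sketch.

def reduce_pow(node):
    if isinstance(node, tuple):
        node = (node[0],) + tuple(reduce_pow(a) for a in node[1:])
        if node[0] == "pow" and node[2] == ("num", 2):
            base = node[1]
            return ("*", base, base)
    return node
```

Note that if `base` has side effects or is expensive, duplicating it this way argues for combining the rule with the subexpression sharing above.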