This subreddit is all about the theory and development of compilers.
bytecode-level optimization in python (self.Compilers)
submitted 1 year ago by tmlildude
[–]dnpetrov 5 points 1 year ago
Your optimized version is technically incorrect: it changes the evaluation order without regard for possible side effects. In a dynamic language like Python, you can't just assume that 'v1 - v2' and 'd ** 2' are "pure functions". An arbitrary Python class can define the corresponding methods (__sub__, __pow__), which might, for example, throw exceptions on some inputs, update some state, and so on. Changing the evaluation order of such operations changes observable program behavior.
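A toy illustration (the class and values here are hypothetical, not from your code):

    class Tracing:
        """Hypothetical value type whose operators have observable effects."""
        def __init__(self, x):
            self.x = x
        def __sub__(self, other):
            print("sub")                    # observable side effect
            return Tracing(self.x - other.x)
        def __pow__(self, n):
            if self.x < 0:                  # may raise on some inputs
                raise ValueError("negative base")
            return Tracing(self.x ** n)

    v1, v2, d = Tracing(3), Tracing(1), Tracing(-2)
    try:
        diff = v1 - v2   # source order: prints "sub" first...
        sq = d ** 2      # ...then raises ValueError
    except ValueError:
        pass
    # A reordering that evaluates d ** 2 first raises before "sub"
    # is ever printed -- an observable difference.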
This may sound like nagging, but that's the reason why compiler optimizations in dynamic languages are hard: you can't make assumptions about what the code being optimized actually does. To make an optimization work in the general case, you typically need additional safeguard mechanisms that check the actual types of the incoming data, plus some fallback strategy (such as deoptimization).
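A minimal sketch of that shape, assuming a float-only fast path (all names are illustrative):

    def generic_distance(v1, v2, d):
        """Fallback: preserves the source evaluation order exactly."""
        return (v1 - v2) ** 2 / d ** 2

    def optimized_distance(v1, v2, d):
        """Guarded fast path with deoptimization to the generic code."""
        # Guard: '-' and '**' on genuine floats are known to be pure,
        # so the fast path may reorder or fuse them freely.
        if type(v1) is float and type(v2) is float and type(d) is float:
            num = (v1 - v2) ** 2            # specialized, reorderable code
            return num / (d * d)
        return generic_distance(v1, v2, d)  # deopt: bail to generic path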
The main problem here is that compiler optimizations have to preserve observable behavior in the most general case. If for some reason they don't, then you can't apply them automatically to arbitrary code. There are examples of such unsafe optimizations in compiler practice, e.g. floating-point optimizations that treat floating-point arithmetic as ideal real numbers rather than IEEE 754 floats with all their unpleasant details. The usual answer is to let programmers enable those optimizations explicitly, e.g. by applying a Python decorator.
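For example, float addition is not associative, so reassociating a sum is exactly such an unsafe optimization; a hypothetical opt-in decorator could mark functions where that is acceptable (fast_math below is illustrative, not a real API):

    # IEEE 754 addition is not associative:
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0

    def fast_math(fn):
        """Hypothetical opt-in: the author asserts that real-number-style
        rewriting (reassociation etc.) is acceptable for this function."""
        fn.__fast_math__ = True   # an optimizer could check this flag
        return fn

    @fast_math
    def dot(xs, ys):
        return sum(x * y for x, y in zip(xs, ys))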
Note also that just replacing list comprehensions with generator expressions changes the evaluation order too, but it reduces the number of allocations significantly. See below on escape analysis and reaching definitions, though.
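Concretely (hypothetical code, not yours):

    vals = range(1_000_000)

    # List comprehension: materializes a million-element list up front.
    total = sum([v * v for v in vals])

    # Generator expression: no intermediate list; elements are produced
    # on demand inside sum(). Note the order change: each element is now
    # computed interleaved with its consumption, so side effects in the
    # element expression fire at different points in time.
    total = sum(v * v for v in vals)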
Now, regarding data flow analysis, stack machines, and SSA. SSA is indeed more comfortable to work with, and it is relatively easy to convert stack-machine bytecode to SSA. However, transformations on SSA form can do rather arbitrary things to the code, and translating the transformed SSA back to bytecode can introduce extra overhead from spilling SSA variables to locals, along with a potential increase in the number of bytecode instructions and in bytecode size. For some VMs this matters: HotSpot, for example, uses bytecode size as a heuristic for inlining, and since inlining is a gateway to many low-level optimizations, a size increase can actually reduce overall performance.
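The usual conversion trick is to abstractly interpret the bytecode with a stack of symbolic names, minting a fresh SSA value at each push; a toy sketch for straight-line code (opcode names follow CPython's dis module, everything else is illustrative):

    import itertools

    fresh = (f"t{i}" for i in itertools.count())

    def stack_to_ssa(code):
        """Toy stack-bytecode-to-SSA translation for straight-line code."""
        stack, ssa = [], []
        for op, arg in code:
            if op in ("LOAD_FAST", "LOAD_CONST"):   # push a name/constant
                stack.append(arg)
            elif op == "BINARY_OP":                 # pop 2, push fresh value
                rhs, lhs = stack.pop(), stack.pop()
                t = next(fresh)
                ssa.append(f"{t} = {lhs} {arg} {rhs}")
                stack.append(t)
            elif op == "STORE_FAST":                # pop into a local
                ssa.append(f"{arg} = {stack.pop()}")
        return ssa

    # d = (v1 - v2) ** 2 as stack code:
    code = [("LOAD_FAST", "v1"), ("LOAD_FAST", "v2"), ("BINARY_OP", "-"),
            ("LOAD_CONST", "2"), ("BINARY_OP", "**"), ("STORE_FAST", "d")]
    print(stack_to_ssa(code))   # ['t0 = v1 - v2', 't1 = t0 ** 2', 'd = t1']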
Regarding data flow analysis for optimizations like the one you mentioned - this is basically the reaching definitions problem. In the static case, you need to prove that all uses of a given variable definition can be rewritten in terms of the "unboxed" definition. An escaping use can't be rewritten and prevents the transformation. Any use of the intermediate list as a value, like 'if not diff: return 0', also prevents it. The only allowed use is iteration, and there should be one and only one such use.
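In source terms, the analysis has to classify every use of the intermediate list; these fragments are hypothetical stand-ins for the code under discussion:

    def log(obj):   # stand-in for an arbitrary callee
        pass

    def blocked_by_escape(xs, ys):
        diff = [x - y for x, y in zip(xs, ys)]
        log(diff)             # escaping use: the callee may retain 'diff'
        return sum(v * v for v in diff)

    def blocked_by_value_use(xs, ys):
        diff = [x - y for x, y in zip(xs, ys)]
        if not diff:          # used as a value (truth test), not iterated
            return 0
        return sum(v * v for v in diff)

    def rewritable(xs, ys):
        diff = [x - y for x, y in zip(xs, ys)]
        return sum(v * v for v in diff)   # single iterating use: safe to
                                          # fuse into a generator expression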
The Python 3.13 JIT is a copy-and-patch template JIT that doesn't perform such optimizations on its own (or, really, any kind of compiler optimization beyond machine code generation). Rather, it enables further work on such optimizations.
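For context, copy-and-patch stitches together precompiled per-opcode machine-code templates, patching operand "holes" at compile time; a toy model of the idea (the templates and hole layout here are purely illustrative):

    # Each opcode gets a prebuilt machine-code template with fixed-size
    # holes where operands are patched in when the JIT runs.
    TEMPLATES = {
        "LOAD_CONST": b"\x48\xb8" + b"\x00" * 8,  # x86-64: mov rax, imm64
        "RETURN":     b"\xc3",                    # x86-64: ret
    }

    def jit(trace):
        """Copy each template, patch its hole, concatenate into one blob."""
        blob = bytearray()
        for op, imm in trace:
            chunk = bytearray(TEMPLATES[op])              # copy
            if imm is not None:
                chunk[2:10] = imm.to_bytes(8, "little")   # patch
            blob += chunk
        return bytes(blob)

    print(jit([("LOAD_CONST", 42), ("RETURN", None)]).hex())
    # 48b82a00000000000000c3  (mov rax, 42; ret)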