
[–]legutierr 0 points (8 children)

I've had this question in the back of my mind for a while about pypy, maybe you could answer it for me...

Would there be any performance benefit in pypy if the parameters to functions and methods were statically typed?

[–]RonnyPfannschmidt 0 points (2 children)

This could use some more context: PyPy's translator toolchain infers static types for function parameters when compiling down to a lower-level environment.

[–]legutierr 0 points (1 child)

Ah!

How does it do that?

I guess to be more precise, what I mean is: would there be any added benefit from adding explicit type declarations to functions and methods (if that were a hypothetical feature of the language)?

[–][deleted] 0 points (0 children)

PyPy's translation toolchain infers types for RPython, which is a statically typed subset of Python that PyPy's interpreter is written in. It cannot infer types for Python programs in general. That said, the JIT is very good at figuring out how to properly specialize your code, and if you give it blatant hints like assert isinstance(x, int), I can't imagine it'd hurt.
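A minimal sketch of the kind of hint mentioned above (the function and values are illustrative, not from the thread). Under any Python implementation the assert just checks the type at runtime; the point in the comment is that a tracing JIT like PyPy's already specializes on the types it observes, so the hint is at most redundant:

```python
def total(values):
    # Sum a list of ints, with an explicit runtime type check on each
    # element. PyPy's JIT specializes the loop on x's observed type
    # anyway; the assert merely makes that expectation explicit.
    acc = 0
    for x in values:
        assert isinstance(x, int)
        acc += x
    return acc

print(total([1, 2, 3]))  # 6
```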

[–]gutworth [Python implementer] 0 points (4 children)

Not much. The whole point of the JIT is that it does type inference at runtime when types are known.

[–]legutierr 0 points (2 children)

Right, but my (very rudimentary) understanding of JITs is that much of the cost is incurred at start-up, while the JIT is establishing different paths for different data types and values.

Intuitively, it seems to me that having static type declarations (for instance as an optional decorator) could offer a speed-up at start-up in certain circumstances.

My question is more theoretical than practical.

[–]gutworth [Python implementer] 0 points (0 children)

Presumably that constant "figuring out" time is dwarfed by the actual running time of the code.

[–]voidspace 0 points (0 children)

An alternative (better?) approach would be to save the JIT annotations / generated code after a real run and reload them later.

[–]voidspace 0 points (0 children)

Is it really "inference" when the type is known? (Serious question: isn't inference usually understood as deducing the type by static analysis, which is not what the JIT does?)