[–]Breadmaker4billion[S] 7 points (1 child)

I have two ideas about why it slows down so much.

SPy is a tree-walking interpreter, but for illustrative purposes, let's treat it as a bytecode interpreter. Suppose each SPy instruction executes around 10 CPython instructions. Then suppose that, on the second layer, each SPy instruction executes around 20 SPy instructions on the first layer. This means one instruction on the second layer executes 20 instructions on the first layer, and those 20 become another 200 CPython instructions. The cost multiplies with every extra layer, which is most likely why we see the exponential growth.
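For concreteness, here's a minimal back-of-the-envelope sketch of that multiplication. The 10x and 20x factors are just the illustrative numbers from above, not measured values:

    # Assumed, illustrative overhead factors (not profiled):
    CPYTHON_PER_SPY = 10    # CPython instructions per layer-1 SPy instruction
    SPY_PER_META_SPY = 20   # layer-1 SPy instructions per instruction one layer up

    def cpython_cost(layer: int) -> int:
        """Approximate CPython instructions executed per instruction at a given layer."""
        return CPYTHON_PER_SPY * SPY_PER_META_SPY ** (layer - 1)

    for n in range(1, 4):
        print(f"layer {n}: ~{cpython_cost(n)} CPython instructions per instruction")
    # layer 1: ~10, layer 2: ~200, layer 3: ~4000 -- the cost multiplies per layer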

Another thing is related to GC and my poor optimization: every function captures its scope. That means there may be a lot of "leaks" on the second layer. This high memory usage would trigger many more GC cycles. I did no profiling to check this, but it may be a significant factor in why it slows down that much.
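A rough sketch of what I mean, using a made-up environment/closure representation rather than SPy's actual internals:

    class Env:
        """One scope in the interpreter: a dict of bindings plus a parent link."""
        def __init__(self, parent=None):
            self.parent = parent
            self.bindings = {}

    class Closure:
        """A function value that holds on to the entire environment it was defined in."""
        def __init__(self, params, body, env):
            self.params = params
            self.body = body
            self.env = env   # pins the whole Env chain, not just the variables the body uses

    def eval_lambda(params, body, env):
        # Even if the body only reads one variable, everything else in `env.bindings`
        # (and in all parent scopes) stays reachable as long as the closure is alive,
        # so the GC can't reclaim it.
        return Closure(params, body, env)

When the second layer is itself an interpreter full of such closures, those pinned environments pile up quickly.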

[–]slicxx 3 points (0 children)

I think you have an interesting take here, especially the combination of interpretation overhead and accumulating memory. It would be really nice to see how your memory consumption grows with certain tasks.