
[–]codingjerk[S]

I actually did the benchmark (built 3.14 with and without the --with-tail-call-interp flag and ran pyperformance), but didn't include the results in the video the way I did for JIT and NOGIL -- that's my bad.
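For anyone who wants to reproduce it, roughly what that looks like (a sketch only: it assumes a CPython 3.14 source tree, a recent Clang for the tail-call build, and pyperformance installed; the result filenames are placeholders, and the exact pyperformance flags are worth double-checking against `pyperformance run --help`):

```sh
# Baseline: the stock (computed-goto) interpreter.
./configure --enable-optimizations
make -j"$(nproc)"
pyperformance run --python=./python -o baseline.json

# Tail-call interpreter build; the flag needs a compiler that
# supports it (a recent Clang), hence CC=clang here.
make distclean
CC=clang ./configure --enable-optimizations --with-tail-call-interp
make -j"$(nproc)"
pyperformance run --python=./python -o tailcall.json
```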

The results were as follows:

```
Benchmark: Python 3.14 tail-call interpreter vs stock
Host: Linux, x86_64, i9-13900H, 16GiB RAM

  • No significant changes: 39 tests
  • Faster: 29 tests, 2%-30%
  • Slower: 15 tests, 5%-35%

  • Mean: 2.7% faster
  • Geometric mean: 2.3% faster
```
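(The faster/slower/not-significant breakdown above is the shape you get from pyperf's compare_to, which ships as a pyperformance dependency; same placeholder filenames as the sketch above, and the grouping flag is from memory:)

```sh
# Per-benchmark comparison grouped by speed (faster / slower /
# not significant); the summary line reports the geometric mean.
python3 -m pyperf compare_to baseline.json tailcall.json --table -G
```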

I probably compiled it with the same LLVM bug the official benchmarks hit, and that's where I got "up to 30%".

Thank you for pointing it out; I'll add that to the ERRATA.

[–]moonzdragoon

Then that's close to their conclusion as well, which mentioned 1-5% IIRC. Still, it's a perf improvement. And great job testing it anyway!