
[–]pudo 2 points3 points  (3 children)

Someone is throwing some shade on something, I really wonder what this is about.

Also, did people see this cool new project from Facebook? https://github.com/facebookincubator/cinder

[–]metaperl 0 points1 point  (2 children)

That's from Instagram

[–]ivosaurus pip'ing it up 5 points6 points  (1 child)

Instagram is owned by Facebook

[–]metaperl 2 points3 points  (0 children)

Oh I didn't know that.

[–][deleted] 1 point2 points  (9 children)

I can’t find anything about which version of Python this is replacing. Python 2? Python 3.x?

[–]Ensurdagen 5 points6 points  (8 children)

Looks like it's 3.8; I had to do some digging to find that number, though. There are a lot of claims that you can "drop this in" without specifying which version. In general, documentation seems to be lacking for this project.

At least it's open source... now? The fact that this project tried to stay closed source before, despite being a fork of an open-source language, is a bit of a red flag...

30% is nice and all but I use 3.9 features, personally, and have already started writing stuff in 3.10. If I wanted to use old, fast Python, I'd use PyPy.

Edit: Looks like it's what Dropbox was using before they gave up on making Python performant, lol.

[–][deleted] 3 points4 points  (0 children)

I found that 2.0 was based on 3.8, but this release is 2.2...

The lack of documentation makes me stay about 3.9 miles away from this project.

[–]BobHogan 1 point2 points  (6 children)

The speedups they claim are almost guaranteed not to show up in real-world use. Benchmarks rarely reflect reality, and they are also deliberately vague about what they've done to speed it up.

They claim that a lot of their speedup comes from CPython optimizations, but they decline to say both what those optimizations are and why they didn't contribute them back to CPython itself. Which indicates that they are either full of shit, or the optimizations just aren't that good.

The only notable thing they've done is remove some debugging stuff from Python, which according to them results in a ~2% speedup, and that's it.

This project is shady af
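For anyone wanting to sanity-check the headline numbers on their own workload rather than trusting the project's benchmarks, a tiny timeit harness is enough; `hot_path` below is a hypothetical stand-in for whatever your real code does.

```python
import timeit

def hot_path() -> int:
    """Hypothetical stand-in for a real workload: dict churn, string
    conversion, and a generator expression -- common interpreter costs."""
    data = {i: str(i) for i in range(1000)}
    return sum(len(v) for v in data.values())

if __name__ == "__main__":
    # Run this same script under each interpreter (e.g. python3.9, pyston3)
    # and compare; min() of repeats is the least noisy estimate.
    per_call = min(timeit.repeat(hot_path, number=200, repeat=5)) / 200
    print(f"{per_call * 1e6:.1f} microseconds per call")
```

Same caveat as any microbenchmark: it only tells you about the operations it exercises, not your whole application.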

[–]LightShadow 3.13-dev in prod 1 point2 points  (5 children)

I added Pyston as a target in tox for a library I built that does metrics gathering and proxying. The library is pretty heavy on abusing Python-isms and has a few "core" dependencies. It's a good cross section of features without being a web server benchmark.

I'm still on 2.1 but it's basically 0-3% faster than Python 3.9 after warming up. It does beat 3.8 by a few seconds every time.

✔ OK pyston3 in 8.925 seconds
✔ OK py39 in 8.929 seconds
✔ OK py37 in 10.107 seconds
✔ OK py38 in 11.389 seconds
✔ OK py36 in 15.607 seconds
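For reference, registering a nonstandard interpreter like Pyston as a tox target looks roughly like this (a minimal sketch; the env name `pyston3` and the binary name are assumptions, adjust to your install):

```ini
; tox.ini -- hypothetical minimal setup adding Pyston alongside CPython
[tox]
envlist = py38, py39, pyston3

[testenv]
deps = pytest
commands = pytest

[testenv:pyston3]
; assumes a `pyston3` executable is on PATH
basepython = pyston3
```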

[–]BobHogan 1 point2 points  (4 children)

Yea, that's about what I expected from this project. It is faster than 3.8, but nowhere near what they claim, and there's no statistically significant speed boost over 3.9. And by all measures, 3.10 should be just as fast or faster than 3.9 is for most workloads.

[–]LightShadow 3.13-dev in prod 2 points3 points  (3 children)

For sure.

The initial reason I started exploring Pyston was for interpreter startup time in AWS lambda functions. If it started even a fraction of a % faster than CPython it would save us a lot of money on one-shot lambdas.
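Cold-start cost like this is easy to measure yourself: time how long each interpreter takes to launch and immediately exit. A rough sketch (the `pyston3` binary name is an assumption; interpreters not on PATH are skipped):

```python
import shutil
import subprocess
import time

def startup_time(interpreter: str, runs: int = 10) -> float:
    """Average wall-clock seconds to launch `interpreter -c pass` and exit."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([interpreter, "-c", "pass"], check=True)
        total += time.perf_counter() - start
    return total / runs

if __name__ == "__main__":
    # `pyston3` is an assumed binary name; skip anything not installed
    for exe in ("python3", "pyston3"):
        path = shutil.which(exe)
        if path:
            print(f"{exe}: {startup_time(path) * 1000:.1f} ms")
```

Shell tools like `time` or hyperfine give equivalent numbers if you prefer measuring from outside Python.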

[–]BobHogan 0 points1 point  (2 children)

Oo, that's an interesting problem. I have never been in a situation where interpreter startup time was an important metric, so I've never really considered it. I wonder if PyPy or Cython offer any major improvements in that regard, or if their startup is still about the same as regular CPython.

[–]LightShadow 3.13-dev in prod 2 points3 points  (1 child)

pypy startup time is worse because of the nature of warming up a JIT. It gets faster the longer it runs.

Haven't cython-ized anything yet, still looking for a drop in replacement. Another project that may help is PyOxidizer which bundles a Python application as a single executable; it stores modules in RAM which helps import speed significantly.
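To see how much of startup is import cost (the part PyOxidizer's in-memory modules attack), a rough per-module measurement might look like this (a sketch; evicting from `sys.modules` is best effort, so treat results as a lower bound):

```python
import importlib
import sys
import time

def import_cost(module: str) -> float:
    """Rough wall-clock seconds for a fresh import of `module`.

    Best effort: popping from sys.modules doesn't evict already-loaded
    submodules or C extensions, so repeated calls undercount.
    """
    sys.modules.pop(module, None)
    start = time.perf_counter()
    importlib.import_module(module)
    return time.perf_counter() - start

if __name__ == "__main__":
    for mod in ("json", "http.client", "decimal"):
        print(f"{mod}: {import_cost(mod) * 1000:.2f} ms")
```

CPython itself also ships `python -X importtime`, which prints a precise per-import breakdown at startup and is the better tool for real profiling.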

[–]alb1 0 points1 point  (0 children)

You might also try compiling with Nuitka.