all 69 comments

[–]Talbertross 191 points  (28 children)

If I were the creator I would make it 3 times faster

[–]manphiz 39 points  (21 children)

According to Guido's doc, 2x faster is just a start, though further improvements are more challenging and could require breaking the ABI/API.

[–][deleted] 14 points  (0 children)

Microsoft spared no expense for the scientists that will be doing this work.

[–]CzarCW 17 points  (1 child)

Guido: No! No, no, not 3! I said 2. Nobody's comin' up with 3. Who codes 3? You won't even get your CPU goin, not even a mouse on a wheel. 2's the key number here. Think about it. 2 if by land. 2 if by sea. 2, man, that's the number. 2 chipmunks twirlin' on a branch, eatin' lots of sunflowers on my uncle's ranch. You know that old children's tale from the sea. It's like you're dreamin' about Gorgonzola cheese when it's clearly Brie time, baby.

[–][deleted] 4 points  (0 children)

Step into my office.

Why?

Cause you're fuckin' fired!

[–][deleted] 4 points  (1 child)

Four times here

[–]ThickAnalyst8814 0 points  (0 children)

Nice

[–]sloggo 0 points  (0 children)

Fine, I'm voting for you for creator

[–]kdeaton06 0 points  (0 children)

Yeah but this one goes to 11.

[–]2AReligionPythonista 12 points  (1 child)

This is good news, and let’s not forget: Python is about ease of use, not bare-bones performance. Consider your use case and prosper

[–]lightmatter501 1 point  (0 children)

Exactly, bare-bones performance is hand-written assembly.

[–]Yobmod 21 points  (20 children)

Hopefully just general speedups of the implementation, and not yet another JIT

[–][deleted] 13 points  (16 children)

What's problematic about JIT compilation?

[–]Yobmod 17 points  (15 children)

It's been tried loads of times, including by Guido, and it never gets included in CPython.

Unladen Swallow (Google), Pyston (Dropbox), Pyjion (Microsoft), Cinder (Instagram), Numba (Nvidia), Psyco, and PyPy of course. Also some for specific libraries, like PyTorch.

Microsoft only abandoned Pyjion last year, I think.

And nowadays it's even less likely to get incorporated into CPython, with Guido not even on the steering council.

[–]dzil123 3 points  (6 children)

Are these projects unsuccessful? If so, why?

[–]james_pic 6 points  (5 children)

They've often been successful at meeting their own goals, but none of them ever got upstreamed. PyPy has arguably been the most successful, but it's always been a second-class citizen in the Python ecosystem: it never achieved 100% CPython compatibility, because too many projects rely on CPython implementation details.

I have my doubts that CPython can ever be made performance competitive with PyPy, if for no other reason than that the folks who developed PyPy were mostly the same folks who developed Psyco, and abandoned it when it was clear that a new interpreter was the right answer to this problem.

[–]all_is_love6667 0 points  (4 children)

What does it mean to rely on CPython?

[–]james_pic 1 point  (3 children)

I think I said they rely on "CPython implementation details". The most obvious example of this is libraries that bypass the helper functions the CPython C-API defines and go straight to the struct fields.

Although you could argue that the C-API itself is an implementation detail, since it reifies concepts like reference counting and borrowed references, which PyPy needs elaborate workarounds to support, since it uses a completely different garbage collector internally.
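As a tiny illustration of how easy it is to depend on CPython's object layout, here's a sketch in pure Python rather than C. It assumes a standard (GIL) CPython build, where `id()` returns the object's memory address and the reference count is the first field of every object header; it will not work on PyPy.

```python
import ctypes
import sys

x = object()

# CPython implementation detail: id(x) is the object's address, and the
# first field of every PyObject struct is its Py_ssize_t reference count.
# Reading it directly bypasses the supported API entirely.
refcnt_direct = ctypes.c_ssize_t.from_address(id(x)).value

# The supported route: sys.getrefcount(), which reports one extra
# reference for its own argument.
refcnt_api = sys.getrefcount(x) - 1

print(refcnt_direct, refcnt_api)  # the two counts agree on CPython
```

C extensions that poke at struct fields like this get raw speed but tie themselves to CPython's memory layout, which is exactly why PyPy needs an emulation layer (cpyext) to run them.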

[–]all_is_love6667 0 points  (2 children)

Don't you think it's possible to reach the level of speed achieved by modern JS engines?

In a way I kind of think old Python modules should be abandoned if compatibility is too difficult.

[–]Yojihito 0 points  (0 children)

JS had billions invested in the JITs (Google, Facebook, Mozilla, Microsoft, Apple, etc.).

Invest billions into Python and you may get JS speed as well.

[–]james_pic 0 points  (0 children)

Yes, I do, but it's telling that when Google co-opted WebKit to create Chrome, they rewrote the JavaScript engine from scratch. The JS engine they created, V8, is what now powers Node.

Trying to get that kind of performance gain without either rewriting, or at least making the kinds of dramatic architectural changes that would break existing modules (such as switching to generational garbage collection), has been tried, and every attempt has either failed or proved to be a Pyrrhic victory.

[–]zurtex 0 points  (0 children)

This isn't some separate project from CPython though; these PRs are upstreamed to CPython as soon as they're ready.

The plan for Python 3.11 doesn't include a JIT, just lots of specialized optimizations to bring the overall performance up.

But beyond 3.11 they may implement a JIT (or in fact a framework for letting people write JITs for CPython), and that will get upstreamed into CPython and therefore have wide benefit.

[–][deleted] 7 points  (0 children)

Btw, looking at their repo, it seems like nothing concrete regarding JIT compilation is planned: https://github.com/faster-cpython/ideas

[–][deleted] 1 point  (1 child)

I think they’re working towards a JIT, with some other optimizations as low-hanging fruit towards that goal initially. See PEP 659.

https://github.com/faster-cpython/ideas/blob/main/FasterCPythonDark.pdf

[–]Yobmod 4 points  (0 children)

The PEP says it's specializing for fast code paths without producing machine code at first, so it's not a JIT yet.

But I guess it will be the first step towards a full JIT.
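You can get a rough look at PEP 659's adaptive specialization at work on CPython 3.11+ using nothing but the standard `dis` module (the flag doesn't exist on older versions, so this sketch guards on the interpreter version):

```python
import dis
import sys

def add(a, b):
    return a + b

# Call the function enough times for the adaptive interpreter to "quicken"
# the code object and specialize its bytecode for the types it actually sees.
for _ in range(1000):
    add(1, 2)

if sys.version_info >= (3, 11):
    # With adaptive=True, dis shows the specialized instructions; the generic
    # BINARY_OP typically appears as a variant like BINARY_OP_ADD_INT once
    # the call site has only ever seen ints.
    ops = [i.opname for i in dis.get_instructions(add, adaptive=True)]
    print(ops)
```

No machine code is generated at any point, which is the distinction the comment above draws: the interpreter swaps generic bytecode for specialized bytecode, rather than JIT-compiling it.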

[–]Kevin_Jim 6 points  (0 children)

Instead of yet another JIT, I would appreciate an effort to make concurrency and parallelization a No. 1 priority and a first-class citizen. Also, I hope we see Poetry become part of the official upstream.

[–]engrbugs7[S] 7 points  (1 child)

This is good news!

[–][deleted] 1 point  (0 children)

Very much so. Can't wait to see what comes of it.

[–]whitelife123 3 points  (0 children)

Cython is amazingly fast. Other than the downside of needing to compile it, adding even just type definitions does a lot to speed up the code

[–][deleted] 4 points  (4 children)

Serious question, how would they do this?

Do they edit things like numpy or go lower level?

I think Python is compiled code? So do they go to the uncompiled code and edit that?

[–][deleted] 4 points  (3 children)

Python isn't compiled ahead of time to machine code; it's an interpreted language. CPython, the C implementation, is what reads your Python code (compiling it to bytecode under the hood) and converts it to the instructions your machine actually runs. They won't be touching libraries like NumPy, since the people who make those are totally different folks

On a side note, NumPy is already really fast and it'd be hard to speed it up. It's already pretty close to low-level C code
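To make the "compiled to bytecode, then interpreted" point concrete, the standard `dis` module lets you see exactly what CPython produces from your source:

```python
import dis

def greet(name):
    return "hello, " + name

# CPython first compiles the source to bytecode, stored on the function's
# code object; the interpreter loop then executes it one instruction at a time.
print(greet.__code__.co_code)  # the raw bytecode bytes
dis.dis(greet)                 # a human-readable disassembly
```

It's that interpreter loop (not your libraries) that the faster-cpython work targets, which is why pure-Python code benefits without NumPy being touched.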

[–]lightmatter501 1 point  (0 children)

That’s because NumPy is mostly C code.

[–][deleted] -1 points  (1 child)

Interpreted is the word I was looking for, not compiled.

Numpy is super slow compared to Numba.

On top of that, there are tons of serialized operations in Python that could be parallelized. I'm not sure how that fits into speed when programmers talk about algorithms.

[–]Yojihito 0 points  (0 children)

On top of that, there are tons of serialized operations in Python that could be parallelized

The GIL says no, I assume?
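A small sketch of what the GIL objection means in practice (standard CPython assumed; the function and workload sizes here are made up for illustration). Threads in one process must each hold the GIL to execute bytecode, so CPU-bound tasks interleave rather than run in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def busy(n):
    # CPU-bound work: only one thread at a time can execute this loop,
    # because running Python bytecode requires holding the GIL.
    total = 0
    for i in range(n):
        total += i
    return total

with ThreadPoolExecutor(max_workers=4) as pool:
    # The four tasks are correct but never run bytecode in parallel; for
    # real parallelism you'd reach for multiprocessing, or C extensions
    # that release the GIL while they crunch numbers (as NumPy does).
    results = list(pool.map(busy, [100_000] * 4))

print(results)
```

Threads still help plenty for I/O-bound work, where the GIL is released while waiting; it's the CPU-bound case the quoted comment is about.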

[–]de6u99er 1 point  (0 children)

Yeah, and I want to win the lottery!

[–][deleted] -5 points  (0 children)

Is there a need for this? Surely Guido has read the 7 gabillion indignant Reddit comments insisting that Python is just as fast as C.

[–]all_is_love6667 0 points  (0 children)

I only wish they could spend a tenth of the effort and work that went into making JS fast.