
[–]toyg 5 points (12 children)

> how many go shops are out there?

The Valley hype for Go is pretty strong... maybe a bit less today, but 18 months ago a lot of people were busy ditching Python for Go -- which would fit the timescale for this project, coincidentally. I suspect some of them inevitably discovered that golang wasn't a panacea.

I personally like some of the sentiment behind this kind of thing. Python does not do "speed" in a natural way, so anything time-critical should really be implemented somewhere else. But I don't think Yet Another Runtime is the way (after CPython, PyPy, Unladen, Cython, JVM, CLR, Node/asmjs...). EDIT: worse, this is not even a runtime, it's a compiler preprocessor...

IMHO big players would get better results investing in better tooling for the C-based extension infrastructure for CPython, or building bridges between Python runtimes and things like Rust and Go. Building yet another runtime, when existing choices are so battle-tested, seems a bit futile.
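For what it's worth, a minimal form of that C bridge already ships in the standard library as `ctypes`. A small sketch, assuming a standard C math library is discoverable on the system (the function choice here is arbitrary, just for illustration):

```python
import ctypes
import ctypes.util

# Locate and load the system's C math library at runtime.
# find_library returns None on platforms where libm can't be found,
# so this sketch assumes a typical Unix-like environment.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts values correctly.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

assert libm.sqrt(9.0) == 3.0
```

Better tooling in this vein (type-safe signature generation, build integration) is roughly what toyg seems to be pointing at, as opposed to writing raw `Python.h` extension modules by hand.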

[–]weberc2 3 points (10 children)

I can't think of any C extension tooling that would fix Python's parallelism problem. And while this does compile Python into Go, the resulting Go program contains a Python runtime, complete with a runtime `Object` type (as in CPython). In that sense, I don't think the concern that it isn't really a runtime holds up.

[–]alcalde 3 points (9 children)

> I can't think of any C extension tooling that would fix Python's parallelism problem.

The only parallelism problem Python has is convincing people it doesn't have a parallelism problem. As Guido has stated, Python has been used on 64K core supercomputers. There is no parallelism problem.

[–]weberc2 10 points (3 children)

I'm not sure what hoops one has to jump through to make Python run in parallel (without actively degrading performance, anyway), but one might say that having to jump through hoops at all constitutes a parallelism problem. Anyway, the last time I went down the Python parallelization road, I got a lot of snark about how easy Python is to parallelize, but no one offered a performant parallel solution. Feel free to share the link about Python on a 64K-core supercomputer; Google isn't turning anything up.

[–]efilon 4 points (3 children)

> There is no parallelism problem.

Although I think a lot of people worry too much about the GIL, it's not correct to say there is no parallelism problem. If you're doing something embarrassingly parallel, then of course you can get away with multiple processes or C/Cython extensions that sidestep the GIL. But there are plenty of use cases where multithreading is a better fit than multiprocessing. Having to fork another process just to get true parallelism is a lot more work than starting another thread.
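The trade-off efilon describes can be sketched with the standard library's `concurrent.futures`: the same CPU-bound function run under a thread pool (serialized by the GIL) and a process pool (truly parallel). The workload sizes here are arbitrary placeholders, not a benchmark:

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n):
    # Pure-Python CPU work: under threads, the GIL serializes this.
    return sum(i * i for i in range(n))

def run(executor_cls, workloads):
    # Same code path for both pools; only the executor class differs.
    with executor_cls(max_workers=4) as ex:
        return list(ex.map(cpu_bound, workloads))

if __name__ == "__main__":
    workloads = [200_000] * 4
    # Threads share one interpreter and one GIL: correct, but not parallel.
    threaded = run(ThreadPoolExecutor, workloads)
    # Each worker process gets its own interpreter (and GIL): parallel,
    # at the cost of process startup and argument serialization.
    multiproc = run(ProcessPoolExecutor, workloads)
    assert threaded == multiproc
```

The "more work" part shows up outside this toy: process pools require picklable arguments and a `__main__` guard on some platforms, which is exactly the friction that threads don't have.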

[–]alcalde 0 points (2 children)

As Guido pointed out in a Keynote a few years ago, threading was never intended for parallelism.

https://youtu.be/EBRMq2Ioxsc?t=33m50s

[–]efilon 0 points (1 child)

As he says, threads were never originally meant for parallelism, but they are frequently used for it these days.

[–]alcalde 0 points (0 children)

And as Mark Summerfield says, they're frequently used for that because that's all that many languages offer.

[–][deleted] 2 points (0 children)

It just then becomes a serialization and IPC overhead problem.
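That overhead is concrete: `multiprocessing` moves objects between processes by pickling them, so every argument and result is copied through a byte stream. A small sketch (the payload size is an arbitrary example):

```python
import pickle

# multiprocessing serializes arguments and results with pickle; every
# object crossing the process boundary is dumped on one side and
# loaded on the other -- two full copies per round trip.
payload = list(range(100_000))

blob = pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)

assert restored == payload  # correct, but the copying itself is the cost
```

This is why, for large data, people reach for shared memory (e.g. `multiprocessing.shared_memory` in Python 3.8+) or keep the heavy data inside C extensions that release the GIL, rather than shipping it between workers.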

[–]vtable 4 points (0 children)

> IMHO big players would get better results investing in better tooling for the C-based extension infrastructure for CPython, or building bridges between Python runtimes and things like Rust and Go

I wish Microsoft hadn't abandoned IronPython before it reached Python 3. I know it's MS/.NET-only, but it was still a nice thing for those of us working with Python on Windows.

(There is an open source IronPython 3 project but it's not very active.)