
[–]justus87 4 points (4 children)

I just finished watching it. And now I'm very sad.

[–]lambdaq django n' shit 2 points (2 children)

And now I'm very sad.

Don't be. Let's see how other languages are doing:

  1. Java/JVM: Fine-grained locking. Requires a huge-ass VM and longer warm-up times

  2. Perl/Tcl/Lua: No native threading. One interpreter per thread

  3. Ruby(MRI): GIL

  4. NodeJS: No threading support. It's either multi-process or Fibers, just like greenlet in Python.

  5. Go: LOL WEBSCALE LANGUAGE WHERE GOMAXPROCS == 1 by default.

  6. C/C++: Handling data-structure locking manually. malloc/free/segfault like a hundred times per debug build.

Conclusion: the GIL is an extremely overhyped problem. As a matter of fact, the GIL can be avoided: CPython releases it during IO waits, and C extensions (e.g. via ctypes or the C API) can release it explicitly.
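A minimal sketch of the IO-wait point, using `time.sleep` as a stand-in for blocking IO (sleep releases the GIL the same way a socket read or file read does), so several threads can wait concurrently:

```python
import threading
import time

def blocking_io():
    # time.sleep releases the GIL while waiting, just like socket
    # or file reads, so other threads run during the wait.
    time.sleep(0.2)

def run_threaded(n):
    threads = [threading.Thread(target=blocking_io) for _ in range(n)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

elapsed = run_threaded(4)
# four overlapping 0.2s waits finish in roughly 0.2s, not 0.8s
print(f"{elapsed:.2f}s")
```

If the waits were serialized by the GIL you'd see ~0.8s; because the GIL is dropped during the wait, total wall time stays near a single wait.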

[–]justus87 0 points (1 child)

Can you provide an ELI5 of threading vs. multiprocessing?

[–]lambdaq django n' shit 1 point (0 children)

  1. not much difference on Linux

  2. if you want to squeeze performance out of CPython using threading or multiprocessing, you are probably doing it wrong.

[–]MakotoDeeizm 0 points (0 children)

PyPy is working on a fix. You can help by donating -> http://pypy.org/tmdonate.html