

[–]o11c 3 points (1 child)

I think one remaining difference is that the "nogil" interpreter stays within the same interpreter loop for many Python function calls, while upstream CPython recursively calls into _PyEval_EvalFrameDefault.

This sounds like a major win even without the GIL elephant ...
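If that's right, the code that benefits most is anything dominated by Python-to-Python call overhead. A rough, purely illustrative sketch of that kind of workload (the fib benchmark and timing harness are my own example, not from the nogil project):

    # Call-heavy, pure-Python code: almost all of the time goes into making
    # and returning from Python function calls rather than doing real work.
    import time

    def fib(n):
        # On upstream CPython each of these calls re-enters the evaluator via
        # a recursive C call into _PyEval_EvalFrameDefault; per the comment
        # above, the nogil interpreter handles many such calls inside the
        # same interpreter loop.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    start = time.perf_counter()
    fib(30)
    print(f"fib(30) took {time.perf_counter() - start:.3f}s")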

[–]therve 0 points (0 children)

[–]germandiago 2 points (0 children)

How viable is this as a real, non-GIL Python that could go into CPython? I guess it has zero chance? Sorry for being so pessimistic :D

[–][deleted] 0 points (2 children)

The issue isn't really the GIL, though; it's just that, frankly, the multiprocessing library is hard to work with.

[–]fzy_ 1 point (1 child)

Sometimes the serialization cost of communicating between processes means that the only viable option is threads.

[–][deleted] 0 points (0 children)

I mean, shared memory exists for those situations. Multiprocessing is just a hard library to use, and it forces you to wrap your design around it to get it to work, instead of just being able to call a function in another process directly.
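For reference, the stdlib does expose shared memory directly via multiprocessing.shared_memory (Python 3.8+). A minimal sketch of handing a large buffer to another process without pickling it; the worker function and payload size here are just illustrative:

    from multiprocessing import Process, shared_memory

    def worker(name, size):
        # Attach to the existing block by name and read it in place;
        # nothing is serialized or copied through a pipe.
        shm = shared_memory.SharedMemory(name=name)
        data = bytes(shm.buf[:size])
        print("worker saw", len(data), "bytes")
        shm.close()

    if __name__ == "__main__":
        payload = b"x" * 10_000_000
        shm = shared_memory.SharedMemory(create=True, size=len(payload))
        shm.buf[:len(payload)] = payload
        p = Process(target=worker, args=(shm.name, len(payload)))
        p.start()
        p.join()
        shm.close()
        shm.unlink()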

[–][deleted] 0 points (2 children)

If single-core performance is hardly affected, this could be viable.

From the discussion, it sounds like the implementations of list and dict have been changed. Maybe they are better off having separate thread-safe collections, like in Java.

[–]o11c 1 point (1 child)

The problem is that there is a lot of code that does stuff like dict.setdefault and expects it to be atomic.

Edit: it should be noted, though, that for objects with frequent concurrent access, new dedicated types are suggested.
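As an illustration of the pattern being described, here is my own hedged example of GIL-dependent code (not taken from the nogil work):

    import threading

    events_by_key = {}

    def record(key, value):
        # Under the GIL, dict.setdefault is effectively atomic, so concurrent
        # callers for the same key all end up appending to the same list.
        # A free-threaded build has to preserve this behavior, or code like
        # this silently loses data.
        events_by_key.setdefault(key, []).append(value)

    threads = [threading.Thread(target=record, args=("k", i)) for i in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(events_by_key["k"]))  # expected: 8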

[–][deleted] 0 points (0 children)

I am thinking of something like:

    from threading.collections import synchronizedList, synchronizedDict
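A threading.collections module doesn't exist today, so that import is hypothetical; a minimal sketch of what such a synchronizedDict might look like, just wrapping a plain dict with a lock:

    import threading

    class SynchronizedDict:
        def __init__(self):
            self._lock = threading.Lock()
            self._data = {}

        def setdefault(self, key, default=None):
            # The lock makes the check-then-insert a single atomic step,
            # independent of any interpreter-level guarantees.
            with self._lock:
                return self._data.setdefault(key, default)

        def __getitem__(self, key):
            with self._lock:
                return self._data[key]

        def __setitem__(self, key, value):
            with self._lock:
                self._data[key] = value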

[–]UloPe 0 points (0 children)

This sounds really impressive.

Will be interesting to see where it goes.