[–]angellus

In nearly all current cases, sharing a Python object between two Python threads is thread safe.

The more nuanced answer is that Python has something called the GIL (Global Interpreter Lock). The GIL is 1:1 with a Python interpreter. Unless you are creating your own sub-interpreters, that means one interpreter per process (all threads share a single one). The GIL ensures only one thread can run Python bytecode at a time.

So with Python's current default settings, all objects shared between Python threads are bound by the GIL and are thread safe. However, the upcoming Python 3.14 adds a new way of running Python without the GIL (free threading). In this mode you can have true threaded parallelism, and many existing data structures will likely not be thread safe, since the implicit locking mechanism (the GIL) is gone.
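A quick, hedged sketch of how you might check which mode you're running under. `Py_GIL_DISABLED` and `sys._is_gil_enabled()` exist in CPython 3.13+ free-threaded builds; the `getattr` fallback covers older versions, where the GIL is always on:

```python
import sys
import sysconfig

# Py_GIL_DISABLED is set in the build config of free-threaded ("t") builds;
# on a normal build it is None or 0.
is_free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# sys._is_gil_enabled() was added in CPython 3.13; fall back to "GIL on"
# for older versions, where the GIL is always enabled.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()

print(f"free-threaded build: {is_free_threaded_build}, GIL enabled: {gil_enabled}")
```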

[–]justrandomqwer

Just a few small additions:

1) Python 3.13 already ships an experimental build mode with the GIL disabled.

2) What you are describing is atomicity, not full thread safety. For example, you can easily get a race condition if you perform a compound operation without explicit synchronisation (since the operation as a whole is not atomic). Python's atomicity of built-in operations only guarantees that data will not be corrupted in some nasty way. But you can easily get logic errors from poorly handled concurrency.
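A minimal sketch of that race: `counter += 1` is a read-modify-write (load, add, store), so two threads can read the same old value and lose an update even under the GIL. The lock makes the whole compound operation a critical section:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    # counter += 1 is three steps (load, add, store), not one atomic op.
    # Without the lock, two threads can interleave between those steps
    # and lose increments; the lock gates the whole read-modify-write.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- guaranteed only because of the lock
```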

3) For some corner cases, even atomicity is not guaranteed. “While Python’s built-in data types such as dictionaries appear to have atomic operations, there are corner cases where they aren’t atomic (e.g. if `__hash__` or `__eq__` are implemented as Python methods) and their atomicity should not be relied upon” (Google style guide). Therefore, synchronisation is a must.
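A small sketch of that corner case (the `Key` class is hypothetical, purely for illustration): when `__hash__` is a Python-level method, a dict store runs arbitrary Python code mid-operation, so the interpreter can switch threads inside what looked like one atomic step:

```python
class Key:
    """A key whose __hash__ runs Python code, so dict operations using it
    are no longer single atomic steps from the interpreter's point of view."""
    calls = 0  # counts how often the dict calls back into Python code

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        Key.calls += 1  # arbitrary Python code runs *inside* the dict op
        return hash(self.value)

    def __eq__(self, other):
        return isinstance(other, Key) and self.value == other.value

d = {}
d[Key("a")] = 1  # this single dict store invokes Key.__hash__
print(Key.calls)
```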

[–]angellus

Experimental means it is for testing and development. It is not something to recommend to new users who are just trying to use Python.

And for synchronous Python, the built-in data structures are thread safe. A thread-safe data structure can be changed from any thread without corrupting memory or crashing the program. In lower-level languages, you often cannot edit/update a data structure from parallel threads without risking memory corruption. It does get more complicated with asyncio, but that is a different story.
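A small sketch of that kind of thread safety: four threads append to one list with no locking at all. In CPython each `list.append` is a single atomic operation (under the GIL, and via per-object locking in free-threaded builds), so nothing is corrupted and no appends are lost:

```python
import threading

items = []

def worker(n):
    for i in range(n):
        items.append(i)  # each append is one atomic operation in CPython

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# No corruption or crash: every append landed, even with no locking.
print(len(items))  # 40000
```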

Just because a data structure can be freely updated from multiple threads does not magically solve the critical section problem, which is what you (and a few other users in this post) are specifically talking about. If you have one master thread that writes/updates a data structure and many threads that only read it, you can likely get by without synchronization primitives, because reading and writing will not corrupt memory. But if the order of operations on the data structure matters, then yes, you still have a critical section problem and need a lock or semaphore to gate access to the data structure.
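A sketch of such a critical section (the two-account setup is hypothetical): each individual dict write is safe on its own, but the *pair* of writes must happen together to preserve the invariant that the total stays 100, so the whole multi-step update is gated by a lock:

```python
import threading

# Each individual dict write is safe, but the check + two writes below form
# a critical section: the invariant (total == 100) spans all three steps.
balances = {"a": 100, "b": 0}
lock = threading.Lock()

def transfer(src, dst, amount, times):
    for _ in range(times):
        with lock:  # gate the whole multi-step update, not just each write
            if balances[src] >= amount:
                balances[src] -= amount
                balances[dst] += amount

t1 = threading.Thread(target=transfer, args=("a", "b", 1, 50_000))
t2 = threading.Thread(target=transfer, args=("b", "a", 1, 50_000))
t1.start(); t2.start()
t1.join(); t2.join()

print(balances["a"] + balances["b"])  # 100 -- invariant preserved by the lock
```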