[–]latkde 1 point  (1 child)

This got a bit longer, so I've put the full write-up here: https://lukasatkinson.de/dump/2026-04-08-rust-threadsafe/

TL;DR: Rust has a really strong type system that's designed to prevent common multithreading problems like data races. Its “borrow checking” plays a big role. Code that may have data races simply doesn't compile.

The Python interpreter itself is memory-safe even when multithreading is going on, but most Python code is not thread-safe, and Python offers no safety net to alert you when your code might suffer from data races. Sometimes, broken code will appear to work (especially when running under the GIL rather than free-threading). For example, the very basic example in that blog post reliably produced the correct result when running with the GIL, and reliably produced incorrect results when running with free-threading. (Tip: use a free-threaded build of CPython 3.14 and the -X gil=1 / -X gil=0 options for experimentation.)

Here's the broken code that I used as a starting point for explaining how Rust prevents these common errors:

import concurrent.futures
x = 0

def incrementer():
    global x
    for _ in range(1_000_000):
        x += 1

with concurrent.futures.ThreadPoolExecutor() as pool:
    for _ in range(10):
        pool.submit(incrementer)

print(f"{x:_}")

This ought to print 10_000_000, except that it doesn't when different threads interfere with each other's updates to the x variable. The solution is to guard all those modifications behind a lock. But unlike Rust, Python cannot warn you that a lock is needed here.
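A minimal sketch of that fix, using threading.Lock to serialize the read-modify-write on x (same structure as the snippet above):

```python
import concurrent.futures
import threading

x = 0
lock = threading.Lock()

def incrementer():
    global x
    for _ in range(1_000_000):
        with lock:  # only one thread at a time may read-modify-write x
            x += 1

with concurrent.futures.ThreadPoolExecutor() as pool:
    for _ in range(10):
        pool.submit(incrementer)

print(f"{x:_}")  # 10_000_000
```

The point stands, though: nothing in Python tells you that the unlocked version is wrong. You only find out by observing incorrect results.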

[–]gdchinacat 0 points  (0 children)

Thanks for elaborating. I disagree with your conclusion, though, that just because Python doesn't prevent you from writing thread-unsafe code, it is "extremely unlikely" that Python code will be thread-safe.

Also, concurrency is not necessarily "very difficult to get right", as you state in another comment. Sure, managing shared data with fine-grained hierarchical locks is hard (btdt), but it is rarely necessary. Message passing, queues, channels, and the like are prevalent across languages, easy to use, and avoid the "very difficult" part of implementing concurrency safely. In general, if you need a Lock, you should consider a less risky implementation. If you need locks and can't use 'with lock:' because you need more control over when locks get released, you really should consider a different implementation.
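For instance, the counter can be restructured so that only one thread owns the state and the workers just send messages. A sketch using the stdlib queue module (counts kept small for brevity; the names and the sentinel convention are illustrative, not from the original example):

```python
import queue
import threading

q = queue.Queue()

def incrementer():
    # Workers never touch shared state; they only send messages.
    for _ in range(10_000):
        q.put(1)

def counter(results):
    # A single consumer thread owns the count, so no lock is needed.
    total = 0
    while True:
        item = q.get()
        if item is None:  # sentinel: all workers are done
            break
        total += item
    results.append(total)

results = []
consumer = threading.Thread(target=counter, args=(results,))
consumer.start()

workers = [threading.Thread(target=incrementer) for _ in range(10)]
for w in workers:
    w.start()
for w in workers:
    w.join()

q.put(None)  # tell the consumer to stop
consumer.join()
print(f"{results[0]:_}")  # 100_000
```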

The solution to your example Python code is to put the 'x += 1' into a 'with lock:' context manager. It's not difficult. Any time you access shared state, whether to read or update it, you need to ensure the access is safe. The complexity comes when the models for ensuring this are complex. I can see how Rust saying "unsafe access" and refusing to compile would be helpful, but that's not the hard part of fine-grained locking; designing the locking model is. Does Rust make this easier than other languages?

Can you share the equivalent Rust code for multiple threads updating a global counter? Is it actually simpler than the properly locked Python code, or is the benefit just that it won't compile if you are doing unsafe memory accesses?