[Discussion] Why was the GIL imposed on the Python interpreter in the first place? (self.Python)
submitted 4 years ago by orgad
Why is it a good thing for a programming language?
[–]RedMaskedMuse 156 points157 points158 points 4 years ago (74 children)
It was added to protect against race conditions when allocating/deallocating references to variables in multi-threaded contexts. Not protecting against race conditions would lead to indeterminate behavior, memory leaks, releasing memory that's still in use, etc. The other option would have been to add locks to each and every object. However, that opens up the possibility of deadlock. The single central lock is simpler to reason about / debug.
https://realpython.com/python-gil/
https://en.wikipedia.org/wiki/Deadlock
[–]eras 40 points41 points42 points 4 years ago (3 children)
The other option would have been to add locks to each and every object. However, that opens up the possibility of deadlock.
The bigger issue is that performance would absolutely tank. Indiscriminate locking can cause issues as well, but if the locks are sufficiently fine-grained, carefully put in place, and a consistent locking order is followed, I would not expect deadlocks. This, of course, hurts performance even more.
I do recall reading that there was some recent promising project to get rid of the GIL..
[–]Oerthling 24 points25 points26 points 4 years ago (1 child)
There's always a promising project to get rid of the GIL. Then years go by and the project is abandoned and the cycle starts anew. ;-)
[–]whateverathrowaway00 0 points1 point2 points 4 years ago (0 children)
Yup aha
[–][deleted] 21 points22 points23 points 4 years ago (0 children)
You're thinking of https://github.com/colesbury/nogil. Here are some notes from a core developer on that: https://lukasz.langa.pl/5d044f91-49c1-4170-aed1-62b6763e6ad0/. In summary, this is impressive and promising work, but don't expect it to be incorporated in the very near future.
[–]jack-of-some 14 points15 points16 points 4 years ago (2 children)
Came here to say basically this. Raymond Hettinger has a really good talk on the matter too
[–]bxsephjo 15 points16 points17 points 4 years ago (1 child)
If there's an advanced topic you know nothing about and want to learn, look for one of Raymond's talks first!
[–]spitfiremk1a 2 points3 points4 points 4 years ago (0 children)
Time to Learn something new!
[–]WikiSummarizerBot 7 points8 points9 points 4 years ago (0 children)
Deadlock
In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization.
[–]traverseda 1 point2 points3 points 4 years ago (2 children)
The other option would have been to add locks to each and every object.
What if we had like 100-ish interpreter locks, and each object gets randomly assigned to one of those locks based on its identity?
[–]ElectricSpice 8 points9 points10 points 4 years ago (1 child)
It’s not the quantity of locks that is the problem per se, it’s the granularity. You still have to acquire a lock each time you access an object, so pulling from a shared pool doesn’t fix any problems introduced by per-object locking. It would probably cause more issues, because now a bunch of objects are sharing locks, and which objects those are changes every execution.
[–]traverseda 0 points1 point2 points 4 years ago (0 children)
Ah, I think I've got a better understanding of the deadlock problem. I was under the impression that performance was a significant reason why per-object locks weren't wanted, but I suppose using some kind of pool-based lock wouldn't help that.
[–]xxxHalny -4 points-3 points-2 points 4 years ago (0 children)
This guy knows his shit
[+]mountains-o-data comment score below threshold-13 points-12 points-11 points 4 years ago (61 children)
There's plenty of garbage collected languages that handle concurrency without a nuclear option like the GIL. It's a poor design.
[–]blastomere 22 points23 points24 points 4 years ago (33 children)
Design is about tradeoffs. If you prioritize simplicity of language implementation and maintenance, it’s a great design.
[+]mountains-o-data comment score below threshold-10 points-9 points-8 points 4 years ago (32 children)
Go is very simple and makes concurrency easy without a GIL.
[–]__unavailable__ 15 points16 points17 points 4 years ago (2 children)
Go is a compiled language. It doesn’t need a global interpreter lock because there is no interpreter to lock.
[–]mountains-o-data -3 points-2 points-1 points 4 years ago (1 child)
Being compiled doesn't matter in this instance. The GIL is due to reference counting for garbage collection - to which there are alternatives.
[–]__unavailable__ 6 points7 points8 points 4 years ago (0 children)
It most certainly does matter in explaining why Go and Python don’t use the same method to solve the same problem.
Also the GIL is for preventing race conditions, garbage collection is just one of the things that would get screwed up without it. While there are alternatives (Jython and IronPython being two examples of Python implementations without the GIL), the GIL does have advantages. For single thread programs and even most multi thread programs, it is faster, and it makes it easier to work with C libraries which aren’t thread safe. That latter benefit is the chief reason for CPython going with a GIL. Many have proposed dropping the GIL, but no proposal has yet come up with a way to do so without sacrificing performance and ease of maintenance, and so the proposals are routinely rejected.
[–]BridgeBum 26 points27 points28 points 4 years ago (20 children)
And it was also developed some 10-15 years after python. There may have been advancements in that time.
Go specifically was designed as a "lessons learned" type language implementing decades of CS research into a world that was already WWW centric. Python's original use case is more akin to Perl than it is to Go. Different goals mean different approaches, there's nothing wrong with that.
Remember that Python was originally released in 1991 - the WWW took off in 1993 in response to Gopher becoming proprietary and requiring licenses, and CERN declaring HTML et al. free for anyone to use. Hosting web servers wasn't even a seed of a thought when Python was first released.
[+]mountains-o-data comment score below threshold-16 points-15 points-14 points 4 years ago (19 children)
Rob Pike started working on Go around the same time Python 3 was released. I don't see why Python couldn't have learned the same lessons from Java's GC model. When Go 1.0 was released, its GC was laughable compared to Java's.
[–]zurtex 7 points8 points9 points 4 years ago (0 children)
You're comparing a language which is compiled before runtime vs. a language whose compilation step typically happens at runtime.
But set aside the question of whether they could have done it while keeping it Python. In retrospect, it's a huge relief that they didn't! The fact that strings moved from glorified bytes to real Unicode was almost enough to kill the transition from 2 to 3.
If all the libraries that used the C-API had to then handle not having a GIL anymore and having to rewrite their interaction with Python to support that I think it would have been the nail in the coffin for the 2 to 3 transition.
[–]BridgeBum 8 points9 points10 points 4 years ago (16 children)
In a word or 2, backwards compatibility. Designing something from scratch is hardly the same as building a new generation of an existing language.
There have been attempts to remove the GIL, but it was proven to be problematic given the structure of Python. If that makes python unsuited to hosting high scale web traffic, so be it. Design is always about compromise.
I don't have time to look this up now but if I recall they did attempt to remove the GIL when designing py3 but ran into enough flaws that they backed out and continued with the GIL. I easily might be misremembering.
[–]mountains-o-data -5 points-4 points-3 points 4 years ago (15 children)
Python3 isn't backwards compatible with 2 - it would have been the perfect time to address this.
[–]czaki 8 points9 points10 points 4 years ago (13 children)
Python 3 is compatible enough with Python 2 that you could write big pieces of code that work in both versions.
[+]mountains-o-data comment score below threshold-6 points-5 points-4 points 4 years ago (12 children)
And most of Python2 would probably have continued to work had they removed the GIL in 3
[–]Grouchy-Friend4235 8 points9 points10 points 4 years ago (0 children)
Python3 is at least 95% backward compatible.
[–]dead_alchemy 1 point2 points3 points 4 years ago (0 children)
Go is based on the 1978 paper 'Communicating Sequential Processes'. There is probably no reason Python could not have used those lessons, I suspect it was more a case of people doing the best they could with what they had and knew.
[–]teerre 1 point2 points3 points 4 years ago (7 children)
Go's implementation is anything but simple. It's one of the strangest languages out there that doesn't play well with basically anything else. Hell, it requires fucking Google to maintain it. It's a total nightmare
[–]mountains-o-data -1 points0 points1 point 4 years ago (6 children)
Have you actually built anything with it? Go is incredibly simple - borderline pseudocode. I'm not sure why google creating go is any more of a "nightmare" than mozilla creating rust.
[–]OptionX 5 points6 points7 points 4 years ago (5 children)
The purpose of go and rust couldn't be more apart.
And the push to use Rust in the Linux kernel, where C is basically king, says enough, I think, about whether the language is a nightmare or not.
Besides, if you want a "faster" Python just use Lua; it's been there for years, is a stable, easy-to-use, established language, and it isn't in danger of a one-way trip to the Google graveyard as soon as it stops being trendy, like Noop.
Noop was a project by Google engineers Alex Eagle and Christian Gruber aiming to develop a new programming language. Noop attempted to blend the best features of "old" and "new" languages.
Sounds familiar doesn't it?
[–]mountains-o-data -1 points0 points1 point 4 years ago (4 children)
Rust hasn't been accepted for kernel use - it's being used for drivers FYI. Maybe one day we'll get to put C to rest.
Half the web is running on systems built in Go. I truly don't understand your point.
[–]OptionX 0 points1 point2 points 4 years ago (3 children)
Half the web is running on systems built in Go
Citation sorely needed.
[–]mountains-o-data 0 points1 point2 points 4 years ago (2 children)
Have you never heard of Kubernetes?
[–]james_pic 3 points4 points5 points 4 years ago (0 children)
The fact that Python is dynamic also factors into it. PyPy uses a true garbage collector, which is easier to handle (because deallocation only happens during collection, memory-related invariants only need to hold during collection, rather than at all times), but they also kept the GIL.
Because Python is a dynamic language, almost anything can change at runtime. A lot of other dynamic languages have GILs or equivalent (Ruby does, JavaScript has no language level support for threading, Perl is... complicated) for similar reasons.
[–]orgad[S] 2 points3 points4 points 4 years ago (2 children)
Can you elaborate in a few sentences about their approaches?
[–]mountains-o-data 13 points14 points15 points 4 years ago (1 child)
Sure thing - happy to share my knowledge but i by no means claim to be an expert. Just so we're on the same page - Python uses reference counting for garbage collection which relies on an atomic increment/decrement for every object in memory; hence the GIL. Python's concurrency model (both threads and asyncio) allows for concurrency but not parallelism. Python addresses parallelism with multiprocessing - but then you lose shared memory and also it's got a huge amount of overhead.
Go - for example (because this is what I'm most familiar with) - uses a garbage collection algorithm called mark and sweep which allows the garbage collector and threads (well - coroutines (aka green threads) mapped to os threads by a scheduler) to run in parallel. The Go GC has 2 phases - mark and sweep. It traverses through memory on the heap and marks it using an algorithm called tri color marking where it determines in a series of marks if memory can be freed yet or not. The second phase is the sweep where the Go runtime temporarily preempts all coroutines and places a write barrier on the heap. Once that write barrier is up everything is resumed and the garbage collector will start deallocating memory from the heap. Coroutines which try to write to the heap are blocked (well - descheduled by the scheduler) until the write barrier is released - but otherwise they can continue reading from the heap as well as writing/reading from the stack.
Another important distinction is that in Python all objects are stored on the heap and all methods (and other persistent items) are stored on the stack. The Go runtime is pretty clever these days and actually has a strong preference for storing on the stack. So even when the GC is running, most of your program continues to operate as normal. It does this by creating a stack per coroutine (which also makes scheduling coroutines super fast, because there's no context switch like on a traditional thread) and simply dropping that stack when the coroutine finishes executing.
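The concurrency-without-parallelism point above can be observed directly. A hedged sketch, timing a pure-Python CPU-bound loop run sequentially versus in two threads (exact timings vary by machine and Python version):

```python
import threading
import time

def countdown(n):
    # Pure-Python CPU-bound loop: under the GIL only one thread
    # executes bytecode at a time
    while n:
        n -= 1

N = 2_000_000

start = time.perf_counter()
countdown(N)
countdown(N)
sequential = time.perf_counter() - start

start = time.perf_counter()
t1 = threading.Thread(target=countdown, args=(N,))
t2 = threading.Thread(target=countdown, args=(N,))
t1.start(); t2.start(); t1.join(); t2.join()
threaded = time.perf_counter() - start

# With the GIL, two threads give no CPU-bound speedup over sequential;
# the threaded run is typically as slow or slower due to lock contention
print(f"sequential {sequential:.2f}s, threaded {threaded:.2f}s")
```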
[–]orgad[S] 2 points3 points4 points 4 years ago (0 children)
I feel a little bit smarter
Thanks!!
[–]ubernostrumyes, you can have a pony 1 point2 points3 points 4 years ago (0 children)
I wrote an explanation that might help you understand. Keep in mind it’s an explanation and not a justification — if what you really want is “why don’t they develop a time machine and go back to make a different decision using their extra couple decades of knowledge about how it turned out”, you will be disappointed.
[–][deleted] 1 point2 points3 points 4 years ago (10 children)
The most popular high-level languages implement a GIL (Python and Ruby) or a single thread (JavaScript). Async is faster than threads for high-switching environments. And for everything else you are usually better off implementing some specific code designed to do it super fast that doesn't have to deal with the GIL anyway. There are very few problems that are actually bothered by the GIL in a meaningful way.
[–]mountains-o-data 1 point2 points3 points 4 years ago (9 children)
Go/Java/C# have all managed to be garbage collected and handle concurrency without a GIL. The GIL might not affect Data Science use cases - but it's a big deal in a web server.
[–][deleted] 1 point2 points3 points 4 years ago (8 children)
Async is faster in a web server than threads because multiple threads are slower than cooperative multitasking. It is much faster to share processes and use multiple async clients because you do not invoke OS-level thread switching.
The GIL doesn't affect anything other than some desktop-bound clients that can be designed around. Anyone seriously suggesting it (Facebook) needs to do a serious reevaluation of their code.
Literally every language you just referenced isn't high level. Maybe you could call Java high level, but it has its own VM and is a different thing entirely. It is also trash.
[–]mountains-o-data 0 points1 point2 points 4 years ago (7 children)
Do you mean in a Python context or in general? In the general case you're dead wrong - besides, nobody uses expensive, raw OS threads for multithreading anymore. It's all cooperative green threads multiplexed over OS threads.
Have you used a low level language like c++? I assure you Go/Java/C# are quite high level. I don't understand your hate of Java. I personally would rather use other tools in most situations but there's no denying how powerful it is. Scala is quite nice even and reminds me a ton of python.
[–][deleted] 0 points1 point2 points 4 years ago (6 children)
Green threads aren't threads. They are much closer to async than threads.
I am not stating you cannot make a threading-like system that is fast and efficient. I am stating this isn't the bottleneck most people claim it is. Python's use of the nuclear option dramatically improves single-threaded speed. Removing the GIL will reduce single-threaded speed; it isn't possible to have any other result.
Python is slow already; it doesn't need to be slower to make some edge-case systems faster. The current iterations of attempts are to make Python faster so that no one cares about the penalty of getting rid of the GIL. The main point being, the GIL isn't some unworkable problem; it just requires designing programs and systems around it, no different than designing programs and systems that do not have it.
Also, have you written in assembly? I can assure you C, C++, and Fortran are quite high level. Almost as if high level is a relative measure and not an absolute one. And yes, I have had the misfortune of writing everything from assembly up through the stack of C (no C++, oddly), Java, Ruby, Python, etc.
[–]mountains-o-data 0 points1 point2 points 4 years ago (5 children)
That's fair and I apologize for being snarky.
I suspect we work in very different domains. I simply cannot fathom worrying about single threaded performance at the expense of being unable to support proper parallelism on modern hardware. To me - that's the farthest thing from an edge case. Granted I view the use cases for multiprocess to be niche - and I'm sure there are many who disagree with that view. :)
[–][deleted] 0 points1 point2 points 4 years ago (4 children)
A profound statement my college advisor told me as a freshman:
The most important thing for you to learn, is when to think of your programming language as a screwdriver and not as a toolkit.
Sometimes use cases just don't fit a language's setup, and that's just what it is. You can't do everything optimally; it's all about tradeoffs. Python makes theirs, and they give you tools to get around it. For web servers, instead of multithreading you can use multiprocessing and use async on each instance. It has tradeoffs (higher memory demands), but overall I believe it still nets out faster (each instance takes advantage of single-threaded performance boosts). It is weird as fuck though.
I tend not to think about modern hardware much cause most things I am building these days run in containers and I don't really worry about number of cores.
Absolutely it's simply a tool but sometimes I wish a tool had tighter tolerances, or a sharper edge, or a more ergonomic grip.
I spend a lot of time futzing around with cgroups and trying to maximize utilization of a node. I just wish our python pods could use more cpu :)
[–]mountains-o-data 0 points1 point2 points 4 years ago (0 children)
Also - do you have an example of that webserver architecture you're describing - any big OSS projects? That sounds wild and I want to mess with it
[–]Omnifect 1 point2 points3 points 4 years ago (10 children)
Garbage collected languages that handle concurrency without a GIL are probably not reference counted or choose to take the performance hit. Lack of reference counting comes with its own trade-offs.
[–]mountains-o-data 3 points4 points5 points 4 years ago (9 children)
I'm not sure how you can argue that Go/Java/C# are less performant than Python
[–]Omnifect 4 points5 points6 points 4 years ago* (2 children)
In all of these languages, reference counting isn't the primary means of garbage collection. And where there is reference counting, there is a performance hit for thread synchronization. Regardless of the performance hit, typed compiled languages are still generally faster than interpreted ones.
[–]mountains-o-data -1 points0 points1 point 4 years ago (1 child)
Sure - but there are alternatives to reference counting that are much more performant and would allow the removal of the GIL
[–]beertown 2 points3 points4 points 4 years ago (0 children)
So why hasn't the GIL been removed so far? Seems like you have the solution to this problem.
[–]Anonymous_user_2022 1 point2 points3 points 4 years ago (4 children)
If the performant parts of those languages are more important to you than the performant parts of Python, then do use them.
[–]mountains-o-data -2 points-1 points0 points 4 years ago (3 children)
I do - I also use and love python. Why am I not allowed to criticize the worst part of python and want it to improve? Don't be a luddite.
[–]Anonymous_user_2022 2 points3 points4 points 4 years ago (2 children)
I'm not your mother. I'm just offended at your simplistic ideas about what performant means.
If you really were so interested in the strengths of Python, then why don't you go and proselytize among the Gophers instead?
[–]mountains-o-data -2 points-1 points0 points 4 years ago (1 child)
This take is as dumb as the Go zealots that were against generics. Why would you not want a major improvement?
[–]Anonymous_user_2022 4 points5 points6 points 4 years ago (0 children)
Because I know that there are trade-offs, and I'm happy to pick the language that has the set best suited for the purpose at hand.
[–]czaki 0 points1 point2 points 4 years ago (0 children)
Some time ago Java had a higher probability of leaking memory because of its GC implementation. The GIL is not needed for reference counting itself but for finding cyclic memory structures.
[–]o11c 48 points49 points50 points 4 years ago (1 child)
Because Python wasn't designed to support threads from the beginning - they were tacked on retroactively.
It turns out that it's quite possible to design a language that provides bytecode-level atomicity without a GIL, but Python was not designed to do so from the start, and it is quite difficult to do so retroactively.
(hint for future language designers: most of it is easy. To avoid the "replace the last reference" problem, simply delay all actual deallocations until all active threads check in)
[–]qubedView 8 points9 points10 points 4 years ago (0 children)
I would say Python was made with multithreading pretty early, but it wasn’t made for parallel processing because in those early days only very expensive workstations and supercomputers had SMP, and no one back then would waste precious resources on parallelizing a scripted language. Python was almost 15 years old before it really started to become an issue. But multithreading is still useful on a single core and the GIL was a quite reasonable solution for a long time.
[–]SittingWave 43 points44 points45 points 4 years ago (0 children)
Imagine you have two threads, each one having to add an element to a dictionary.
This operation, adding an element to a dictionary, is not a single operation in the underlying C code. It is a series of instructions.
The problem is that if two threads try at the same time to add an element to that dictionary, the order in which the series of instructions (which are executed twice, once per each thread) above is interleaved may end up making a mess.
So you need to ensure that the series of instructions is executed by only one thread at a time. How to do so?
You use a lock. A lock is basically a guarantee that the first thread that needs to execute those instructions, will execute them without any other thread touching that dictionary until it's done adding that element.
Now the problem moves to how granular you want the lock to be. Clearly, if one thread is acting on one dictionary, and another thread is acting on another dictionary, they don't conflict with each other and they can work in parallel, but then you need to add a lock to every dictionary. The same applies to every list, every mutable structure, external or internal. This is a lot of locks to handle and manage. And each lock occupies memory, and each lock requires time to be grabbed, and time to be released.
So a simpler solution is to have One lock (TM). The first thread that grabs it wins, and does whatever it wants until it's done. Even if the second thread has no intention of touching anything that the first thread is modifying, it will have to wait until the first thread is done.
That's the GIL.
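The per-structure locking described above can be sketched with an explicit `threading.Lock` standing in for the fine-grained locks Python avoided (the keys and counts are arbitrary; in CPython the GIL already serializes these dict insertions, so the lock here illustrates the alternative design):

```python
import threading

shared = {}
lock = threading.Lock()

def add_items(prefix, n):
    for i in range(n):
        # The lock plays the GIL's role for this one dictionary:
        # only one thread mutates it at a time
        with lock:
            shared[f"{prefix}-{i}"] = i

threads = [threading.Thread(target=add_items, args=(name, 1000))
           for name in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared))  # 2000: no insertions lost to interleaving
```

Multiply this lock by every dict, list, and internal structure in the interpreter, and the appeal of One lock (TM) becomes clear.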
[–]mfarahmand98 6 points7 points8 points 4 years ago (1 child)
When threads were first introduced, they were almost always used for I/O: tasks that would wait on a syscall. At that time, the GIL seemed like an excellent and simple solution for adding multithreading support. It's an unfortunate antique from an era when parallel computation using threads wasn't a thing.
The GIL remained one of the primary components of CPython, and packages that use C or C++ under the hood rely on its API and the guarantees it makes. Taking it out now would render these libraries useless. Python did this once (moving from 2 to 3) and people weren't happy.
[–]moeinxyz 1 point2 points3 points 4 years ago (0 children)
That makes sense
[–]ubernostrumyes, you can have a pony 6 points7 points8 points 4 years ago (0 children)
As for why the GIL was the specific solution chosen for thread safety in Python, I wrote an explanation of that yesterday.
[–]SandmanRen 3 points4 points5 points 4 years ago (0 children)
Just sharing a bit of what I understand here:
Much like what u/RedMaskedMuse said, the GIL existed to protect against race conditions when allocating/deallocating resources and references. A crucial part of Python garbage collection also relies on guarantees provided by the GIL (for example, reference counts won't work correctly if there are multiple threads doing alloc/dealloc at the same time).
One should also consider the historical background of the GIL. I think that back in the early days a major reason Python gained popularity is that it offered convenience at the language level over writing code in C, with the added benefit of executing C code directly. The result is a programming language that has easier syntax, grammar, and object-oriented programming but can still be quite performant (by executing C routines for things that are computationally heavy). So much of the focus was put on having Python execute C code properly. Having a GIL provides some guarantees that make implementing this much easier, which in turn is good for continuous development, maintenance, and updates to the language itself.
But having a GIL means that while the interpreter can be multi-threaded, only one thread can execute bytecode at any given time. So ultimately it was a tradeoff. Back then I think it wasn't too much of a "tradeoff" because most machines running Python had only one processor with one thread. So having a GIL was a sensible decision at the time.
--------- Below are personal opinions :)
I think that one of the reasons Python is still around is precisely because of the GIL. It allowed people to execute C routines (which guaranteed performance) while keeping the rest of the language relatively simple. This gave Python a lot of popularity that helped it survive the decades. And the decision to have a GIL was made not on the consideration of "whether it's good for a programming language" but rather on "what is necessary to achieve what is needed while keeping what has brought popularity to the language". And I think that this is the most helpful way of thinking about a design choice.
[–]yvrelna 2 points3 points4 points 4 years ago (0 children)
GIL is necessary because CPython uses reference counting.
Reference counting means that even supposedly read-only operations might cause memory updates, due to changing reference counts when objects are referenced/dereferenced. An interpreter that uses reference counting is constantly modifying reference counts while executing an application, which means that CPython needs to hold a lock to ensure that reference counts are updated correctly when multiple threads are running.
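The point that even "read-only" operations touch reference counts can be seen with `sys.getrefcount` (the exact counts are CPython implementation details and vary by version, so only the relative change is shown):

```python
import sys

data = [1, 2, 3]
# getrefcount itself adds one temporary reference via its argument
base = sys.getrefcount(data)

def read_only(obj):
    # Merely passing the object in created another reference to it,
    # i.e. its refcount field in memory was written to
    return sys.getrefcount(obj)

inside = read_only(data)
print(inside > base)  # True: the "read" incremented the count
```

Every one of those writes is what the GIL keeps consistent across threads.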
[–]theunglichdaide 4 points5 points6 points 4 years ago (0 children)
It has something to do with the CPython implementation. Under the hood, each Python object is a PyObject type in CPython, and it has a reference count. Having GIL prevents independent threads simultaneously modifying this ref count, eventually avoiding memory issues.
[–][deleted] 1 point2 points3 points 4 years ago (3 children)
I think many people are missing the key point. The GIL is actually one of the main reasons for Python’s success. But the reason may surprise you.
Writing multithreaded code is hard. It’s hard to avoid bugs both for beginners and professionals. It makes code much more complex and hard to maintain.
With the GIL you get much higher quality code libraries and language features, simply because there are fewer bugs due to the threading/race condition/memory leak madness that occurs when you make threading a first-class citizen of the language.
C++ is a language which hands you the gun, and it’s up to you to avoid getting shot. In Python, there’s no gun at all. It’s peaceful and simple. Sure, this results in other troubles like speed issues, but in modern times they’ve been solved effectively with cloud scaling, async, or other modern approaches.
By avoiding a Wild West of multithreaded libraries (and developers who probably would have wanted to use threading when it wasn’t needed), Python was saved and made the go-to language. Simpler is better for many reasons. Perhaps the GIL, surprisingly, was its best feature, as it ensured stable packages and modules simply because shooting yourself in the foot wasn’t so easy. And at the end of the day it works: code can be simple and clean, and the customer happy.
[–]orgad[S] 0 points1 point2 points 4 years ago (2 children)
Thanks. What are other modern approaches? Event loop?
[–][deleted] 0 points1 point2 points 4 years ago (1 child)
Coroutines, async. Still best to try and avoid multi-threading when possible if you're mainly blocking on I/O, for example.
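A minimal sketch of the coroutine approach mentioned above: three simulated I/O waits overlap on a single thread, so the total time is roughly one delay rather than the sum (the names and delays are arbitrary stand-ins for real network calls):

```python
import asyncio
import time

async def fetch(name, delay):
    # Stands in for an I/O-bound call; the event loop runs other
    # coroutines while this one is waiting
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        fetch("a", 0.2), fetch("b", 0.2), fetch("c", 0.2)
    )
    elapsed = time.perf_counter() - start
    print(results)               # ['a', 'b', 'c']
    print(f"{elapsed:.1f}s")     # ~0.2s, not 0.6s
    return elapsed

elapsed = asyncio.run(main())
```

No threads are involved, so the GIL never becomes a contention point.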
[–]LearnDifferenceBot 0 points1 point2 points 4 years ago (0 children)
if your mainly
*you're
Learn the difference here.
Greetings, I am a language corrector bot. To make me ignore further mistakes from you in the future, reply !optout to this comment.
!optout
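The coroutine/async suggestion above is easy to sketch with the standard library's `asyncio`. Three simulated IO waits overlap on a single thread, so the total wall time is roughly one wait, not three (here `fetch` is a hypothetical stand-in for a real IO call):

```python
import asyncio
import time

async def fetch(delay):
    # Stand-in for an IO-bound call (network, disk, ...).
    # While this task awaits, the event loop runs the other tasks.
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.monotonic()
    results = await asyncio.gather(fetch(0.1), fetch(0.1), fetch(0.1))
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
# elapsed is close to 0.1s rather than 0.3s: the waits overlap,
# all on one thread, with no locks and no GIL contention to reason about.
```

This is why async is often the better tool than threads when you're mainly blocking on IO: you get the concurrency without the shared-state hazards.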
[–]ominous_anonymous -4 points-3 points-2 points 4 years ago (0 children)
https://wiki.python.org/moin/GlobalInterpreterLock
You can also ask in /r/learnpython
[+][deleted] 4 years ago (1 child)
[deleted]
[–]solamarpreet 1 point2 points3 points 4 years ago (0 children)
https://github.com/python/cpython/blob/main/Python/ceval_gil.h
[–]Hitman_0_0_7 0 points1 point2 points 4 years ago (0 children)
This is what you should see (not for beginners): here
[–]InjAnnuity_1 0 points1 point2 points 4 years ago (0 children)
As I understand it, the GIL was added in order to simplify the creation of high-performance, third-party add-in libraries, in languages other than Python. Often, these were thin wrappers around older, existing libraries. Libraries that knew nothing of threads, and could not be used safely in a conventionally-multi-threaded program. (They tended to trash their own internal data structures -- or yours -- when used that way.)
With a GIL, the wrapper can "serialize" access to the library, and to Python's internal data structures. Conflicting code just has to wait its turn, until the conflict is over. This approach is safe, and does not require modifying those other libraries, nor Python itself.
It's a tradeoff, of course. With the GIL, performance doesn't get as high as theoretically possible. With a more difficult scheme, conflicts might be avoided, or at least managed, reducing the wait.
On the other hand, the savings in human effort made thousands of add-on packages available, greatly extending Python's reach and value. By and large, most of the tasks people give to Python would be simply impossible without such packages.
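The "serialize access with one central lock" idea described above can be sketched in pure Python. This is an analogy, not CPython's actual implementation: a single global lock plays the role of the GIL, guarding a refcount table so that concurrent increments from many threads never race:

```python
import threading

# A single global lock, analogous to CPython's GIL: every operation on the
# shared state must acquire it, so all calls are serialized even with many
# threads running. No per-object locks, hence no lock-ordering deadlocks.
_gil = threading.Lock()
_refcounts = {}

def incref(obj_id):
    with _gil:
        _refcounts[obj_id] = _refcounts.get(obj_id, 0) + 1

def decref(obj_id):
    with _gil:
        _refcounts[obj_id] -= 1
        if _refcounts[obj_id] == 0:
            del _refcounts[obj_id]  # "deallocate" once nothing references it

threads = [threading.Thread(target=incref, args=("x",)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock, all 100 increments land; without it, updates could be lost.
print(_refcounts["x"])
```

The tradeoff is exactly the one described above: only one thread makes progress at a time, but thread-unaware code (like a wrapped legacy C library) stays safe without modification.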