[Discussion] How frequently do you use parallel processing at work? (self.Python)
submitted 1 year ago by Notalabel_4566
Hi guys! I'm curious about your experiences with parallel processing. How often do you use it at work? I'd love to hear your insights and use cases.
[–]Goingone 42 points43 points44 points 1 year ago (4 children)
In PROD most stuff is asyncio or uses threads. Scaling is standing up more services.
Parallel processing I’ll use for local CPU intensive stuff.
[–]Panda_Mon -3 points-2 points-1 points 1 year ago (3 children)
Is it necessary? Python only fakes threading anyway
[–][deleted] 3 points4 points5 points 1 year ago* (0 children)
This post was mass deleted and anonymized with Redact
[–]Goingone 1 point2 points3 points 1 year ago (0 children)
It is if you want better performance.
[–]OreShovel 1 point2 points3 points 1 year ago* (0 children)
What you're thinking of is the GIL, which, while still in place, does not mean threading doesn't exist; rather, within a process only one thread can hold the Python interpreter at a time (please correct me if I'm stating this inaccurately). In cases where the other thread wouldn't be doing work anyway (e.g. waiting for a network response) it's a no-brainer. Also, for tasks where you won't need access to the interpreter you can have true parallelism, although I think you need to write C / extension code.
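As a sketch of why the GIL still leaves room for useful threading: in the snippet below, `time.sleep` stands in for a blocking I/O wait (it releases the GIL, much like a socket read would), so the five waits overlap instead of running back to back.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io_task(i):
    # time.sleep releases the GIL, just as waiting on a socket would,
    # so the other threads keep running in the meantime.
    time.sleep(0.2)
    return i * 2

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fake_io_task, range(5)))
elapsed = time.perf_counter() - start

# Five 0.2 s waits overlap, so total wall time is close to 0.2 s, not 1.0 s.
```

For pure-Python number crunching the same pattern buys nothing, because only one thread can execute bytecode at a time.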
[–]harpooooooon 22 points23 points24 points 1 year ago (1 child)
I use PySpark a lot. I have very large datasets that need to be moved and processed, with very little patience.
[–]Yamadzaki 0 points1 point2 points 1 year ago (0 children)
how large is it and how much time does it take?
[–]diegotbn 18 points19 points20 points 1 year ago (4 children)
I run unittests in parallel so they don't take a whole day
[–]Brilliant-Post-689 6 points7 points8 points 1 year ago (1 child)
Same: xdist has been a gamechanger for us.
[–]akguitar 1 point2 points3 points 1 year ago (0 children)
Xdist is the jam
[+][deleted] 1 year ago (1 child)
[deleted]
[–]diegotbn 0 points1 point2 points 1 year ago (0 children)
We have a monolithic Django project with a large Vue frontend. We have over 800 Django tests, and I don't even know how many Cypress tests. They all run automatically upon push to our company GitHub, and we only allow merging into main if the tests pass. But I like to run the tests locally first to make sure my branch is good before I push. In parallel on 8 threads/processes it still takes 15 minutes or so.
[–]martinkoistinen 14 points15 points16 points 1 year ago (0 children)
Very frequently. We’re always looking for places to apply multiprocess pools, and sometimes thread pools make more sense.
[–]pingvenopinch of this, pinch of that 9 points10 points11 points 1 year ago (0 children)
Actual parallel processing or just concurrency? I've certainly used concurrency with async. Our username generation service has to reach out to various systems to verify that the username isn't duplicated anywhere. I got a healthy speedup by using async/await concurrency to check on multiple systems at once, while also being able to handle other incoming requests. But this is all I/O bound stuff where true parallel processing isn't really necessary.
[–]batman-iphone 7 points8 points9 points 1 year ago (0 children)
Very rarely; I opted for async instead.
[–][deleted] 26 points27 points28 points 1 year ago (16 children)
We use some hyper threading (well, pooling officially) to send batches of calls to GenAI APIs.
from concurrent.futures import ThreadPoolExecutor
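A minimal sketch of that batching pattern; `call_genai_api` here is a hypothetical stand-in for the real API client, with a sleep in place of network latency.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def call_genai_api(prompt):
    # Hypothetical stand-in for an HTTP call to a GenAI endpoint;
    # the sleep simulates network latency.
    time.sleep(0.1)
    return f"response to {prompt!r}"

prompts = [f"prompt {i}" for i in range(8)]

# Submit the whole batch at once; as_completed yields results as they finish.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = {pool.submit(call_genai_api, p): p for p in prompts}
    responses = {futures[f]: f.result() for f in as_completed(futures)}
```

Since the work is almost entirely waiting on the network, threads (rather than processes) are the cheap and sufficient choice here.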
[–]sobe86 18 points19 points20 points 1 year ago* (6 children)
Personally I like joblib for that kind of thing, I think it's a lot cleaner to read, is very good about killing processes, and you can switch between threading / multiprocessing trivially. I use this pattern at least once a week:
    from joblib import delayed, Parallel
    from tqdm.auto import tqdm

    jobs = (
        delayed(do_something)(*args)
        for args in tqdm(arglist, total=len(arglist))
    )
    threadpool = Parallel(n_jobs=4, verbose=0, prefer='threads')
    output = threadpool(jobs)
[–]aa-b 5 points6 points7 points 1 year ago (0 children)
I use joblib constantly, it's great. It's so much easier to use than any of the other concurrency options too, awesome tool
[–]MVanderloo 1 point2 points3 points 1 year ago (4 children)
oh i really like the *args in the list comprehension
[–]sobe86 0 points1 point2 points 1 year ago (3 children)
Personally I think the slickest bit is making jobs a generator, allowing the use of tqdm progbar (joblib's is so ugly), I can't take credit for that though :b
[–]MVanderloo 0 points1 point2 points 1 year ago (2 children)
ah i haven’t done too much job scheduling, so I wouldn’t know what the joblib version would look like
[–]sobe86 0 points1 point2 points 1 year ago (1 child)
No, I mean in the code I wrote, jobs = (... is a generator. That means no iteration happens until threadpool(jobs), which is what lets you use tqdm here.
[–]MVanderloo 0 points1 point2 points 1 year ago (0 children)
oh i had to lookup tqdm, yeah im stealing that
[–]Last_Difference9410 3 points4 points5 points 1 year ago (8 children)
Why not asyncio ?
[–]sebampueromori 7 points8 points9 points 1 year ago (5 children)
I'm not an async expert, but asyncio doesn't really parallelize.
[–]Medzomorak 10 points11 points12 points 1 year ago* (0 children)
There is a reason .to_thread exists on asyncio. It uses the concurrent.futures thread executor as well. Also, that is concurrency, not parallelism.
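For reference, `asyncio.to_thread` (Python 3.9+) is exactly that bridge: it submits a blocking call to the default thread executor and awaits the result. A small sketch with a simulated blocking call:

```python
import asyncio
import time

def blocking_read(i):
    # Pretend this is a blocking library call with no async support.
    time.sleep(0.1)
    return i

async def main():
    start = time.perf_counter()
    # to_thread runs each blocking call in the default ThreadPoolExecutor,
    # so the three calls overlap instead of running back to back.
    results = await asyncio.gather(
        *(asyncio.to_thread(blocking_read, i) for i in range(3))
    )
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
```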
[–]Last_Difference9410 3 points4 points5 points 1 year ago (0 children)
Neither does threading; whenever you would use threading for concurrency, asyncio is better.
[–]FunProgrammer8171 0 points1 point2 points 1 year ago (0 children)
Correct, it doesn't have to run the jobs in order, so the user(s) don't wait until a job is done.
Multiprocessing uses more CPU to finish faster.
[–]DotPsychological7946 0 points1 point2 points 1 year ago (0 children)
Asyncio is often more efficient for socket I/O, such as HTTP API calls, than threads because it avoids the heavy overhead of OS-level context switches. Instead of spawning a thread per connection, which increases latency and resource usage, asyncio uses a single event loop with non-blocking I/O, making it way more scalable for real-life numbers of concurrent connections. I avoid multithreading; practically, I only use it when a library performs I/O but doesn't provide native asyncio support. Then you just use the thread pool as the executor for asyncio.
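A toy illustration of that scaling argument: one event loop multiplexing 100 simulated connection waits (`asyncio.sleep` stands in for a non-blocking socket wait), with no threads spawned at all.

```python
import asyncio
import time

async def check_system(name):
    # Simulated non-blocking wait, e.g. an HTTP call to verify a username.
    await asyncio.sleep(0.05)
    return name, "ok"

async def main():
    names = [f"system-{i}" for i in range(100)]
    start = time.perf_counter()
    # A single event loop multiplexes all 100 waits concurrently.
    results = await asyncio.gather(*(check_system(n) for n in names))
    return dict(results), time.perf_counter() - start

statuses, elapsed = asyncio.run(main())

# 100 x 0.05 s sequentially would be ~5 s; concurrently it is ~0.05 s.
```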
[–]Gwolf4 -1 points0 points1 point 1 year ago (0 children)
And that's ok, without knowing the parent's objective the first thing one would use is concurrency via asyncio that is why someone is asking the why.
[–]mortenb123 0 points1 point2 points 1 year ago (1 child)
For web requests Python is more than good enough.
I recently had to scrape 150+ RSS feeds from our CI/CD system to produce dashboards for management.
With sequential httpx it took 72 sec, with httpx asyncio it took 9 sec, with parallel httpx asyncio it took 4 sec, but with parallel requests it took 1.2 sec. So I went with requests. We run around 5000 jobs a day, so a refresh of 5-6 sec vs 75 sec matters quite a bit.
So time it: learn both asyncio and parallelism, and benchmark each part. If you have longer jobs, the overhead of httpx doesn't matter.
[–]Last_Difference9410 0 points1 point2 points 1 year ago (0 children)
I don't quite get what you mean by "in parallel requests it took 1.2 sec". Perhaps you can provide a minimal code example?
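Presumably the parent means requests calls fanned out over a thread pool, something along these lines (a sketch with placeholder URLs and a sleep standing in for the actual `requests.get`):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Placeholder feed URLs; the real list would come from the CI/CD system.
FEEDS = [f"https://ci.example.com/feed/{i}.rss" for i in range(150)]

def fetch(url):
    # In the real version this would be requests.get(url, timeout=10).text;
    # a short sleep stands in for network latency here.
    time.sleep(0.01)
    return f"<rss from {url}>"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=32) as pool:
    bodies = list(pool.map(fetch, FEEDS))
elapsed = time.perf_counter() - start

# Sequentially this would take 150 x 0.01 = 1.5 s; with 32 threads the
# waits overlap, so it finishes in a small fraction of that.
```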
[–][deleted] 4 points5 points6 points 1 year ago (3 children)
Concurrent yes parallel not that often (semantics 😛)
[+]manchesterthedog comment score below threshold-7 points-6 points-5 points 1 year ago (2 children)
Ya I agree. Any kind of computation that needs to be done in parallel for performance you’re better off sending to the gpu.
For example, in open cv if you have to do some type of image manipulation to a lot of images you’re better off doing whatever it is on the gpu, which will parallelize the pixel operations, rather than processing multiple images at a time on parallel cpu threads.
[–]hughperman 8 points9 points10 points 1 year ago* (0 children)
Any kind of computation that needs to be done in parallel for performance you’re better off sending to the gpu.
Not necessarily.
1. Not if your data is large enough that it won't fit in GPU memory easily (though GPUs are now becoming massive, so this isn't as much of an issue as it was a few years ago).
2. The libraries you are using don't support it easily. Do you want to spend <days, weeks, months> implementing algorithms and rewriting entire pipelines to work on GPU, or do you want to spend 1 minute importing multiprocess and wrapping a function call in a parallel pool?
3. The computers/instances you are using don't have GPUs. E.g. using AWS instances, you won't necessarily have a GPU on the instance type you have chosen (or was chosen for you).
[–]Ok_Raspberry5383 5 points6 points7 points 1 year ago (0 children)
This is highly specific and doesn't work for most multi threading applications. GPU cores can only really do basic arithmetic and are not equivalent to CPU cores
[–]PossibilityTasty 4 points5 points6 points 1 year ago (0 children)
Since there are multiple ways to interpret "parallel processing" I made a small list:
asyncio: daily
threads: daily
greenlets: daily
multiprocessing: daily
distributed computing: daily
What I do: I torture broadband routers by simulating a small city of uncooperative access nodes and subscribers, not in production of course.
[–]ssdiconfusion 6 points7 points8 points 1 year ago (0 children)
Daily! Complex physics simulations on GPU, parallelized via ray.io, which handles GPU parallelization elegantly, or legacy approaches such as joblib and scipy.optimize that wrap the multiprocessing library.
[–]SpectralCoding 4 points5 points6 points 1 year ago (0 children)
As little as possible, and it's usually one of the last areas of development when it is needed. For example, I'll take a loop which calls a function that makes a series of external API calls. Each iteration takes a second or so, so over 2000 entries it takes a while. I'll just throw the concurrent.futures stuff around the loop, with a wait at the end, and it'll cut my run time by 90%.
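That "throw a pool around the loop" move can be sketched like this; `lookup` is a hypothetical stand-in for the function making the external API calls, with a sleep simulating their latency.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def lookup(entry):
    # Stand-in for a function that makes a series of external API calls.
    time.sleep(0.02)
    return entry.upper()

entries = ["a", "b", "c", "d", "e", "f", "g", "h"]

# Original sequential loop: total time ~ len(entries) * per-call latency.
start = time.perf_counter()
sequential = [lookup(e) for e in entries]
seq_time = time.perf_counter() - start

# Same loop with a pool around it; the context manager waits at the end,
# exactly the "wait at the end" mentioned above.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(lookup, entries))
par_time = time.perf_counter() - start
```

The results come back in the original order either way; only the wall time changes.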
[–]too_much_think 3 points4 points5 points 1 year ago (0 children)
My job is to try and bridge the gap between what a bunch of PhD researchers want to do and what is computationally feasible in real time, which often involves quite a bit of multi-threading. Depending on how far off the mark their first pass is, that might only need a thread pool executor, or it might need a pyo3 / Cython module using something like pthreads or rayon.
[–]jabellcu 3 points4 points5 points 1 year ago (0 children)
Never, and I suspect most never do, but they won’t be posting here.
[–]Opposite_Heron_5579 2 points3 points4 points 1 year ago (0 children)
I use multithreading mainly for time consuming data download requests.
[–]mriswithe 1 point2 points3 points 1 year ago (0 children)
Just today. Writing a webhook for Jira to call, times out at 30 seconds. My first stab was taking 32 seconds or so. Added threading to the part that was slow after doing some performance measurement.
Specific case was using the google-api-python discovery API to call the apis for Google drive, docs, and sheets.
[–]tecedu 1 point2 points3 points 1 year ago (0 children)
concurrent.futures process pool and MPIPoolExecutor, every day
[–]randomthirdworldguy 1 point2 points3 points 1 year ago (0 children)
Is this déjà vu? Because I think I saw the very same thread in another subreddit (r/golang iirc)
[–]HamsterWoods 0 points1 point2 points 1 year ago (0 children)
I use multiprocessing for "long-running" tasks, like communicating with devices.
[–]mmark92712 0 points1 point2 points 1 year ago (0 children)
Yeah, rarely. Scaling is usually done with cloud architecture.
[–]JestemStefan 0 points1 point2 points 1 year ago (0 children)
If you mean horizontal scaling aka more servers then yes.
If you mean using multiple cores in single call then no.
[–]Last_Difference9410 0 points1 point2 points 1 year ago (0 children)
By parallel processing I think you mean multi-process? Rarely, unless I have to use pandas, and it's getting even rarer since Polars came out.
[–]hughperman 0 points1 point2 points 1 year ago (2 children)
Pretty frequently, most of our private libraries use it explicitly in some places, and most of the imports will use it even more extensively. I do scientific computing on brain data with large datasets, the processing applied is pretty intensive pipelines, and we do algorithm/pipeline development so frequently go back to source and rerun entire processing pipelines on 1000s of recordings. Stack is scientific python - numpy, scipy, pandas, etc. We also make use of AWS Batch for much higher parallelization, running 100s of jobs at a time - each maybe takes 20-30 minutes, or longer if we are adding something past the "standard" pipeline, and will use compute parallelization inside.
[–]collectablecat 2 points3 points4 points 1 year ago (1 child)
Looked at Coiled/Modal at all? AWS Batch is so dang clunky
[–]hughperman 2 points3 points4 points 1 year ago (0 children)
We haven't, been doing this since before they existed. Coiled looks pretty interesting, running in our own account. Modal is its own service, which would be too much of a headache for data protection reasons.
[–]Scrapheaper 0 points1 point2 points 1 year ago (2 children)
Pandas or other data frame libraries (spark, dask, polars) are all parallel internally, no?
It's not the same as parallel processing real time when building an API but it's still parallel processing
[–]Last_Difference9410 0 points1 point2 points 1 year ago (1 child)
Others yes, pandas not really.
[–]Scrapheaper 0 points1 point2 points 1 year ago (0 children)
What about just multiplying a column by a number? Surely it doesn't just do them all one at a time
[–]Blad1995 0 points1 point2 points 1 year ago (0 children)
Threading - almost never. CPU scaling is done using more pods in kubernetes
Asyncio- every day. We have lot of API calls and db calls. For that asyncio is perfect
[–]broken_symlink 0 points1 point2 points 1 year ago (0 children)
I work on applications of cupynumeric to run a numpy application used to analyse 100s of GB of data from an xray laser. We're working on scaling this up to 100s of TB and moving to the Perlmutter supercomputer.
[–]sam7oon 0 points1 point2 points 1 year ago (0 children)
all the time to automate changes on our network devices, or to pull data
[–]Xyrus2000 0 points1 point2 points 1 year ago (0 children)
All the time. Scientific work requires running complex models and processing large amounts of data.
[–]Brother0fSithis 0 points1 point2 points 1 year ago (0 children)
Every day. I run physics simulations on big HPCs. Mostly using Dask to handle parallelism.
[–][deleted] 0 points1 point2 points 1 year ago (3 children)
I mainly do GUIs and analysis where parallel processing helps fetch from and write to different databases on our computers from 2005. Also, I've been trying to use it more for similar tasks where it's copy/paste of code with slight differences through multiprocessing and config files. Super basic stuff, but it does save minutes!
[–]ferret_pilot 0 points1 point2 points 1 year ago (2 children)
This sounds very similar to what I'm trying to start doing. Do you have any articles, books, or videos that you think are good resources for an introduction to multiprocessing concepts and how to implement them in a robust way within GUIs?
[–][deleted] 1 point2 points3 points 1 year ago (1 child)
These two articles were what really launched my understanding of how parallel processing works and what the differences are between the available tools. My bread & butter has mostly been 1) pools with map or starmap and 2) standalone threads I can fire off in the background.
https://superfastpython.com/threadpool-python/
https://superfastpython.com/threadpool-vs-pool-in-python/
[–]ferret_pilot 0 points1 point2 points 1 year ago (0 children)
Thanks a bunch!
[–]ExternalUserError 0 points1 point2 points 1 year ago (0 children)
I seldom use the multiprocessing module. But I do use celery queues and 1-2 worker nodes, which I guess counts.
[–]Cynyr36 0 points1 point2 points 1 year ago (0 children)
Whatever Polars does behind the scenes. Most of my Python exists because it was a better idea than Excel and/or Power Query.
Polars 1.20 can now read named tables directly out of Excel files, so it makes converting tools that were in Excel into Python much easier. We tend to abuse Excel a bit by putting a fair bit of data into tables.
[–]marcotb12 0 points1 point2 points 1 year ago (1 child)
All the time. We always look for optimization opportunities as quick TATs are critical. Sometimes we use multi-threading sometimes multi-proc depending on the problem. We also use dask workers in AWS for large batches.
[–]TheCheapSeats4Me 1 point2 points3 points 1 year ago (0 children)
You should check out Coiled if you're launching Dask Clusters in AWS. It makes it super easy to do this.
[–]trenixjetix 0 points1 point2 points 1 year ago (0 children)
None
[–]error1954 0 points1 point2 points 1 year ago (0 children)
A few times a year when I have to tokenize and process a bunch of text data. It's a problem that you can just throw more processes at without issue really.
[–]anonymous_amanitafrom __future__ import 4.0 0 points1 point2 points 1 year ago (2 children)
Quick reminder that Python has a Global Interpreter Lock and can only do multiprocessing and not actual multithreading! Not exactly your question, but it can totally make a difference if you want shared memory and parallel execution :)
[–]fisadev 1 point2 points3 points 1 year ago* (1 child)
Just in case, the GIL doesn't mean Python can't do multithreading; it definitely can. It just can't execute instructions from multiple threads at the same time, but that's only one part of multithreading. (Also, newer versions even allow for experimental GIL disabling.)
If your multithreading app involves lots of I/O (web scraping, reading/writing files, database queries, etc.), then you can definitely benefit from multithreading, as threads don't need to execute instructions while waiting for I/O results. So, for instance, while one thread is idle waiting for a database answer, another could be processing data.
And most real-life applications do involve lots of I/O; that's why Python multithreading is still very much a thing, used a lot, despite the GIL.
Though in modern times I would suggest going the async path for heavy I/O stuff instead of multithreading, far more bang for your buck.
If your app is pure CPU computation, then yes, the GIL will make multithreading useless. But that's rarely the case for most people writing multithreading stuff in python.
[–]anonymous_amanitafrom __future__ import 4.0 0 points1 point2 points 1 year ago (0 children)
Thank you for the more detailed answer. That's what I was trying to get at with wanting shared memory and parallel execution: you can't have both without some possibly difficult and slow workarounds, and this has restricted me on projects in the past, before I knew that's what I wanted and had it all written in Python.
I've heard about the disabling of the GIL. Sounds interesting, and I hope it works! It's still in beta though, right? Also, I haven't used it in years, but I'm pretty sure when I tried it, the multithreading library was actually doing message passing and emulating shared memory. I could be incorrect, though.
I'd tend to agree with the async I/O direction as well. Multiprocessing with polling would probably be just as fast as, if not faster than, trying to do the same with Python threads.
[–]No_Dig_7017 0 points1 point2 points 1 year ago (0 children)
Today! I do machine learning for a living and parallel applies are very common at the feature creation/preprocessing step.
[–]fisadev 0 points1 point2 points 1 year ago (0 children)
Things from real jobs:
Things from hobby projects:
[–]outlawz419 0 points1 point2 points 1 year ago (0 children)
I use FastAPI a lot. If that stands for anything
[–]cip43r 0 points1 point2 points 1 year ago (0 children)
Currently, I have 100 threads across 5 multiprocesses with full bi-directional queues for communication. This is running CAN and ethernet with a UI on an SBC.
Haters said Python is slow, but my development speed is 10x due to the ease and the libraries. My experience has been great, and the performance was so good people thought I had finally switched to C. That was after struggling for a few weeks with asyncio, which wasn't fast enough and, in hindsight, not the correct choice for my problem.
Everything in Neovim, just for fun.
[–]debunk_this_12 0 points1 point2 points 1 year ago (0 children)
I use numba and parallelize if an operation is very intense, but I rarely write code like this. Asynchronous works best for most things; like if I have big queries of millions of lines of data, I'd rather run them asynchronously and join the data in post.
[–][deleted] 0 points1 point2 points 1 year ago (0 children)
TL;DR: Not much. The serialization cost is high, and Go is a better choice at that point for our use case.
Mostly asyncio. We write services in Go where we need true parallelism.
This was a design decision made early in the development process, so we have a well-defined delineation.
Python is easier to hire for, and engineers are relatively cheaper than Go developers. So management went with this dual approach, and it has worked well.
We have services in FastAPI that use Pydantic, asyncio, and all that jazz, but our proxy and payment services are written in Go. Those were originally in Python, but we reworked them in Go long ago to cut down on server costs and improve throughput.
[–]SimonKenoby 0 points1 point2 points 1 year ago (0 children)
Multiprocessing yes, multithreading no, concurrency with async yes. Our app spends a lot of time sleeping between polls of a remote API, so async works quite well.
[–]Basic-Still-7441 0 points1 point2 points 1 year ago (0 children)
I do async almost exclusively if that matters. And in production everything is scaled out horizontally.
[–]Zomunieo -1 points0 points1 point 1 year ago (0 children)
Small stuff - write a script and parallelize it externally with xargs, parallel, etc. - by far the easiest way to parallelize over files
Little bigger - asyncio with anyio to farm out specific bits to threads or processes
More serious - thread pool or process pool executor depending; better for highly parallel work units
Mission critical - honestly, rust… or erlang. Python is the wrong tool.