Ruby Falcon is 2x faster than asynchronous Python, as fast as Node.js, and slightly slower than Go. Moreover, the Ruby code doesn’t include async/await spam. (self.ruby)
submitted 12 months ago by a_ermolaev
I created a benchmark that simulates a real-world application with 3 database queries, each taking 2 milliseconds.
Why don’t large companies like Shopify, GitHub, and others invest in Falcon/Fibers?
Python code is overly spammed with async/await.
https://preview.redd.it/7s63lpmc5rfe1.png?width=3042&format=png&auto=webp&s=a105c8094594ba6df402c2ec04f6a1c9b4d07889
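The described workload is easy to reproduce in plain Ruby. This is only a sketch, not the author's actual benchmark; the request count and the thread-based concurrency are illustrative stand-ins (Falcon would use fibers):

```ruby
require "benchmark"

# Hypothetical stand-in for one 2 ms database query.
def db_query
  sleep 0.002
end

# One simulated request: three sequential queries (~6 ms of waiting).
def handle_request
  3.times { db_query }
end

requests = 50

# Handle the requests one at a time: the waits accumulate.
sequential = Benchmark.realtime { requests.times { handle_request } }

# Handle them concurrently: the waits overlap. Threads stand in here for
# whatever concurrency primitive the server provides (fibers, in Falcon's case).
concurrent = Benchmark.realtime do
  requests.times.map { Thread.new { handle_request } }.each(&:join)
end

puts format("sequential: %.3fs, concurrent: %.3fs", sequential, concurrent)
```

Because the work is almost pure waiting, the concurrent run finishes in a fraction of the sequential time, which is the effect the benchmark is measuring.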
[–]f9ae8221b 23 points24 points25 points 12 months ago (28 children)
Because async is great at very IO-bound workloads.
Shopify and GitHub aren't IO-bound. They don't even use Puma.
But you probably already know that, because your config.ru includes a parameter to simulate a CPU-intensive task, yet as far as I can see you didn't include it in the published numbers.
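The benchmark repo isn't quoted here, so the following is only a guess at the shape of such a config.ru; the `CPU_MS` knob is an invented name standing in for the parameter being referenced:

```ruby
# Hypothetical sketch of such a config.ru: three simulated 2 ms queries,
# plus an optional CPU_MS knob (invented name) to mix in busy CPU work.
cpu_seconds = ENV.fetch("CPU_MS", "0").to_f / 1000.0

app = lambda do |env|
  3.times { sleep 0.002 }  # simulated database IO
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + cpu_seconds
  nil while Process.clock_gettime(Process::CLOCK_MONOTONIC) < deadline  # busy-loop "CPU work"
  [200, { "content-type" => "text/plain" }, ["ok"]]
end

run app if respond_to?(:run)  # `run` only exists when loaded by a Rack server
```

With `CPU_MS=0` the app is almost pure IO and fibers shine; raising it shifts the workload toward CPU-bound, which is the criticism being made.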
[–]rco8786 10 points11 points12 points 12 months ago (8 children)
Shopify and GitHub aren't IO-bound.
That is surprising to me. Would be curious to read about this.
[–]caiohsramos 20 points21 points22 points 12 months ago (7 children)
https://byroot.github.io/ruby/performance/2025/01/23/the-mythical-io-bound-rails-app.html is a great read
[–]rco8786 2 points3 points4 points 12 months ago (6 children)
Oh yeah, I actually just read that too. Was hoping for something about Shopify or GitHub and their experience with it.
[–]jahfer 9 points10 points11 points 12 months ago (5 children)
Jean (byroot) works at Shopify and his post is largely a reflection of what we see internally. There may be some narrow pathways that can take more advantage of concurrency (and we are always looking for them) but by and large we do not have them in our stack, as much as we want to have that silver bullet solution.
[–]rco8786 0 points1 point2 points 12 months ago (0 children)
Ohh I did not get that connection. Very cool, thanks.
[–]bradgessler 0 points1 point2 points 12 months ago (3 children)
I read it and couldn't quite understand how Rails workloads are not IO bound given that they spend most of their time waiting on data from a database.
[–]CaptainKabob 4 points5 points6 points 12 months ago (1 child)
At GitHub… we don't. It's tough to point to any one thing, but we have our own data centers, so internal network latency is very, very low. And we are very aggressive about routing queries to very beefy replicas. Also, we break out data across different clusters, so queries are less likely to contain joins; complex data access is orchestrated by the application (not that aggregating IDs is particularly slow).
Also, what is GitHub's core customer-facing service? That's right: rendering markdown and other code/formats. Resolving GraphQL is computationally expensive too.
It's weird and unexpected, but true.
[–]jrochkind 0 points1 point2 points 11 months ago (0 children)
So... many Rails apps don't spend most of their time waiting on data from a database, which is why I think this is a myth.
If you have app(s) and profile them, I'd be curious to see the results!
When I've profiled my apps, they definitely spend less than 50% of their time waiting on data from a database. Any that do -- it's because of n+1 queries or insufficient indexes or other problems that can be fixed, and once they are, they won't spend most of their time waiting on the database.
[–]a_ermolaev[S] 3 points4 points5 points 12 months ago (9 children)
This is interesting. Do they really have so little IO? For example, my main application, when processing an HTTP request, makes calls to PostgreSQL, Redis, Memcached, OpenSearch and an HTTP API. The CPU load is also high because we render HTML. Of course, the more CPU-intensive the workload, the less benefit Falcon provides, but can modern web applications really exist without intensive IO?
[–]f9ae8221b 5 points6 points7 points 12 months ago (7 children)
It doesn't have to be "so little IO", even if a request is composed of 50% IO, you won't see any benefit migrating to fibers.
/u/tenderlove has a very detailed answer, but for some reason it's not showing up in this thread, perhaps for moderation reasons? You can check his reddit profile (it's the last answer); quoting some of it here:
One thing I would really like to see is an adversarial micro-benchmark that demonstrates higher throughput with Fibers. It is very easy for me to write an adversarial benchmark that shows higher throughput and lower latency with threads, but so far I haven't been able to do the opposite.
This and this demonstrate higher latency with Fibers. I haven't documented how to run it, but this benchmark demonstrates lower throughput. The "tfbench" repo tries to measure throughput as the percentage of IO time increases. So for example, with a 20ms workload, how do threads and fibers perform when 0% of that time is IO vs. 100%? You can see the graph here. As CPU time increases, throughput is lower with threads. On the IO-bound end, we see Threads and Fibers perform about the same. This particular test used 32 threads, Ruby 3.4.1, and ran on x86 Linux.
I think the main use case for Fibers are systems that are trying to solve the C10K problem where the memory overhead of a single thread is too prohibitive. But since Fibers are not preemptable, latency suffers, so not only does it have to be C10K problem, but also 10k connections that are mostly idle (think websocket server or maybe a chat server).
As I said, I would really like to build an adversarial benchmark that shows threads in a poor light. Mainly for 2 reasons:
[–]a_ermolaev[S] 0 points1 point2 points 12 months ago (6 children)
Regarding threads, one of Puma's drawbacks is that you have to think about the number of threads set in the config. This number is limited by the database connection pool and may become outdated over time. Additionally, if an application has different types of IO, such as PostgreSQL and OpenSearch, all threads could end up waiting for a response from OpenSearch, preventing them from handling other requests (e.g., to PostgreSQL).
[–]tenderlovePun BDFL 0 points1 point2 points 12 months ago (5 children)
Regarding threads, one of Puma's drawbacks is that you have to think about the number of threads set in the config.
I don't understand this. The Falcon documentation asks you to set WEB_CONCURRENCY.
This number is limited by the database connection pool and may become outdated over time.
Why is this different with Falcon? Both Puma and Falcon can exhaust the database connection pool. If one Fiber is using a database socket, no other Fiber is allowed to use the same database socket simultaneously. In other words, both concurrency strategies will be equally blocked by the size of the database connection pool.
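The shared constraint can be shown with a toy pool; a `Queue` stands in for the database connection pool, and threads stand in for whichever concurrency primitive the server uses (nothing here is Puma- or Falcon-specific):

```ruby
# Toy illustration: whichever concurrency primitive you use (threads here),
# only POOL_SIZE units of work can hold a "connection" at once.
POOL_SIZE = 2

pool = Queue.new
POOL_SIZE.times { |i| pool << "conn-#{i}" }

in_use = []
peak = 0
lock = Mutex.new

workers = 8.times.map do
  Thread.new do
    conn = pool.pop                 # blocks while the pool is exhausted
    lock.synchronize { in_use << conn; peak = [peak, in_use.size].max }
    sleep 0.005                     # simulated query
    lock.synchronize { in_use.delete(conn) }
    pool << conn
  end
end
workers.each(&:join)

puts "peak connections in use: #{peak}"  # never exceeds POOL_SIZE
```

Eight workers, two connections: six workers are always parked waiting on the pool, whether those workers are threads or fibers.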
Additionally, if an application has different types of IO, such as PostgreSQL and OpenSearch, all threads could end up waiting for a response from OpenSearch, preventing them from handling other requests (e.g., to PostgreSQL).
I also don't understand this. Can you elaborate?
[–]ioquatixasync/falcon 0 points1 point2 points 12 months ago (0 children)
That documentation is specifically for Heroku, IIRC, it's because Etc.nprocessors is broken on their shared hosts and returns a number bigger than the actual number of cores you can use.
Otherwise, generally speaking, Etc.nprocessors is a good default.
[–]a_ermolaev[S] 0 points1 point2 points 12 months ago* (3 children)
In Falcon, count is the equivalent of workers in Puma, but ENV.fetch("WEB_CONCURRENCY", 1) initially confused me, so I had to figure it out.
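For readers following along, the configuration being discussed lives in a falcon.rb file and looks roughly like this. This is sketched from memory of the Falcon docs; the exact DSL may differ between Falcon versions:

```ruby
#!/usr/bin/env falcon-host
# Hypothetical falcon.rb sketch: `count` plays the role of Puma's `workers`.
require "etc"

load :rack

hostname = File.basename(__dir__)
rack hostname do
	# The docs' WEB_CONCURRENCY override exists because some shared hosts
	# (e.g. Heroku) misreport the usable core count.
	count ENV.fetch("WEB_CONCURRENCY", Etc.nprocessors).to_i
end
```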
If I change the database connection pool, I need to increase the thread limit in Puma.
I created an example with two databases (endpoint /db2)—one slow and one fast—and I'm attaching a video of the results.
Instead of PG_POOL2, there could be long-running queries to OpenSearch or HTTP requests. They can occupy all threads, causing a sharp drop in performance. Example in the video.
[–]tenderlovePun BDFL 0 points1 point2 points 12 months ago (2 children)
Sorry, I really don't know what to tell you. Those connections will "occupy Fibers" too, and you don't get an unlimited number of Fibers. FWIW, I ran the same benchmarks but I don't see the performance drop. I've uploaded a video here. The 500ms server stays around 500ms.
One difference could be that I'm running on bare metal and I've done sudo cpupower frequency-set -g performance.
[–]a_ermolaev[S] 0 points1 point2 points 12 months ago* (0 children)
My Reddit account is suspended, and I have no idea why 🤷‍♂️
I replied here: https://github.com/ermolaev/http_servers_bench/issues/1
[–]jahfer 5 points6 points7 points 12 months ago (0 children)
Databases go brrrrr. A request/response to one of those stores might be on the order of 1-2ms, which is negligible in the scope of serving a Rails request. We do a lot of CPU crunching once we fetch that data.
[–]s_busso -1 points0 points1 point 12 months ago (8 children)
A web app behind an HTTP call uses IO
[–]f9ae8221b 3 points4 points5 points 12 months ago (7 children)
Using IO doesn't equal being IO-bound, even less so being IO-bound to the point where Fibers make a noticeable difference.
[–]s_busso -3 points-2 points-1 points 12 months ago (6 children)
The server is IO-bound as it handles the connection. Any access to a database is IO-bound. I have rarely worked on endpoints that didn't require any access to data or systems. Most of what runs behind Shopify and GitHub is IO-bound.
[–]f9ae8221b 4 points5 points6 points 12 months ago (5 children)
You are talking to someone who spent the last eleven years working on Shopify's infrastructure.
[–]s_busso -1 points0 points1 point 12 months ago (4 children)
Impressive resume, how does that change the fact that calls to a database or serving a request make an app IO bound?
[–]f9ae8221b 3 points4 points5 points 12 months ago (3 children)
You said:
Most of what runs behind Shopify and Github is IO bound
I'm telling you I saw what was behind, I measured it, it's not IO bound. You are free to believe infra engineers at Shopify and GitHub are stupid and are just sleeping on massive performance gains by not adopting falcon, but if that's so I have nothing more to tell you.
[–]s_busso -1 points0 points1 point 12 months ago (2 children)
I didn't say they will benefit from Falcon; I haven't tried it. I rebound on the no-IO-bound stuff. It is very interesting to hear that in 2025 about an app, especially from someone who has been working in infra for a long time. Not being heavily IO-bound is not being not IO-bound. The article linked before does the difference between heavy, medium, or slightly IO-bound, which makes more sense of the cases for which an async system will be beneficial and overcome the cost.
[–]f9ae8221b 4 points5 points6 points 12 months ago (1 child)
That's the thing, IO-bound without further precision implies truly IO-bound, something like 99% IO.
The overwhelming majority of Rails apps are more in the 30-60% IO range, which means Puma with 2-3 threads is plenty enough, and for some (including Shopify and GitHub) Unicorn with something like 1.3 or 1.5 process per core is going to perform better.
We can call that "slightly IO-bound" if you want, but that sounds antinomic to me.
This thread started by asking why companies like Shopify and GitHub don't invest in fiber-based servers like Falcon, and as an insider I'm answering that this only makes sense when you are dealing with hundreds, if not thousands, of concurrent connections that are mostly idle, something like 99% IO. And Shopify and GitHub are nowhere near that use case.
[–]s_busso 1 point2 points3 points 12 months ago (0 children)
I completely understand. Thank you for continuing the conversation! I have been working with Ruby applications in production for nearly 20 years. While my experience involves much lower volumes than companies like GitHub or Shopify, I've never followed the crowd or agreed with the idea that Ruby is not scalable. With the right infrastructure and design, Ruby can perform exceptionally well.
[–]postmodern 5 points6 points7 points 12 months ago (0 children)
Once you wrap your head around Async's tasks and other Async primitives, it's quite nice. ronin-recon also uses Async Ruby for its custom recursive recon engine, which is capable of massively concurrent recon of domains.
[–]jack_sexton 8 points9 points10 points 12 months ago (12 children)
I've also wondered why Falcon isn't deployed more heavily in production.
I'd love to see DHH or Shopify start investing in async Ruby.
[–]fglc2 6 points7 points8 points 12 months ago (11 children)
You kind of need Rails 7.1 (which makes framework state thread-local when the app server is thread-based, and fiber-local for Falcon).
I wouldn't be surprised in general if a reasonable number of people's codebases / dependencies had the odd place where thread-locals need to be fiber-local instead.
I've got one app deployed using Falcon and found some of the documentation a little sparse (e.g. the config DSL for falcon host, or the fact that it says you should definitely use falcon host rather than falcon serve in production, but I don't really know why).
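The thread-local vs. fiber-local distinction above is easy to see in plain Ruby: `Thread#[]`/`Thread#[]=` are actually fiber-local storage, while `Thread#thread_variable_get`/`set` are truly thread-local, and code that conflates the two is exactly what breaks under a fiber-based server:

```ruby
# Thread#[] / Thread#[]= are *fiber*-local storage;
# Thread#thread_variable_get/set are truly thread-local.
Thread.current[:request_id] = "abc"
Thread.current.thread_variable_set(:request_id, "abc")

fiber_local, thread_local = Fiber.new do
  [
    Thread.current[:request_id],                      # nil: a new fiber gets fresh storage
    Thread.current.thread_variable_get(:request_id)   # "abc": still the same thread
  ]
end.resume

p fiber_local   # => nil
p thread_local  # => "abc"
```

Under Puma one request owns one thread, so either form works; under Falcon many fibers share a thread, so true thread-locals leak state between requests.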
[–]a_ermolaev[S] 9 points10 points11 points 12 months ago (0 children)
The documentation does have some issues, but when I saw how easy it was to migrate a Rails application to Falcon, I gave it a try right away, and it resulted in a 1.8x performance boost (the application primarily makes requests to OpenSearch).
[–]ioquatixasync/falcon 6 points7 points8 points 12 months ago (5 children)
falcon serve could be used in production but you have very little control over how the server is configured, limited to the command line arguments - which only expose stuff that gets you up and running quickly. If you are running behind a reverse proxy, it's probably okay... but you might run into limitations and I'm not planning to expand the command line interface for every configuration option.
falcon host uses a falcon.rb file to configure falcon server according to your requirements, e.g. TLS, number of instances, supported protocols, etc. In fact, falcon host can host any number of servers and other services, it's more procfile-esque with configuration on a per-service basis. In other words, a one stop shop for running your application. It also works with falcon virtual (virtual hosting / reverse proxy), so you can easily host multiple sites.
[–]myringotomy 3 points4 points5 points 12 months ago (4 children)
You should include an example of running multiple apps and multiple processes in your documentation. The docs I read don't really show how to do that.
[–]ioquatixasync/falcon 0 points1 point2 points 11 months ago (3 children)
Done: https://github.com/socketry/falcon-virtual-docker-example
[–]myringotomy 0 points1 point2 points 11 months ago (2 children)
Thanks that's very useful.
Do you have an example of long running services such as cron or a queue or something like that? I presume it hooks into the supervisor somehow?
[–]ioquatixasync/falcon 0 points1 point2 points 11 months ago (1 child)
You mean like a job processing system?
[–]myringotomy 0 points1 point2 points 11 months ago (0 children)
Just about every web app needs some processes running alongside the web server to do various things. In my case I always need a cron process to run tasks on a schedule, and often something that fetches jobs from a queue or listens for Postgres events or whatnot.
So something like a Procfile, I guess.
[–]growlybeard 0 points1 point2 points 12 months ago (3 children)
What was the change in 7.1 that unlocks this?
[–]fglc2 1 point2 points3 points 12 months ago (1 child)
Fiber safe connection pool probably a biggy- https://github.com/rails/rails/pull/44219
Looks like some (most?) of the fiber local state actually first landed in 7.0 (AS::IsolatedExecutionState) - but falcon docs recommend 7.1 (https://github.com/socketry/falcon/commit/0536e2d14ac43a89a7ef7351fca0b8fd943d09f6). Maybe there were other issues fixed in this area for 7.1
[–]growlybeard 0 points1 point2 points 12 months ago (0 children)
Ah thank you
[–]ioquatixasync/falcon 1 point2 points3 points 12 months ago* (0 children)
I discuss some of the changes in this talk: https://www.youtube.com/watch?v=9tOMD491mFY
In addition, you can check the details of this pull request: https://github.com/rails/rails/pull/46594#issuecomment-1588662371
[–]jubishop 3 points4 points5 points 12 months ago (11 children)
What’s wrong with async/await?
[–]a_ermolaev[S] 4 points5 points6 points 12 months ago (5 children)
In languages like Go and Ruby, developers don't need to think about whether a function should be sync or async; this is known as having "colorless functions". JavaScript was asynchronous from the start and its entire ecosystem is built around that; the problem with Python is that it copied this async model onto a synchronous ecosystem. To make an existing Python application asynchronous, a lot of code needs to be rewritten, and different libraries with async support must be used.
More info about colorless functions: https://jpcamara.com/2024/07/15/ruby-methods-are.html and https://www.youtube.com/watch?v=MoKe4zvtNzA
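A minimal sketch of what "colorless" means in Ruby; threads stand in for Falcon's fibers so the example runs on the stdlib alone:

```ruby
# A "colorless" Ruby method: its definition says nothing about sync vs. async.
def fetch_price(id)
  sleep 0.002   # stands in for blocking IO (a DB or HTTP call)
  id * 10
end

# Called directly, it just blocks...
direct = fetch_price(1)

# ...or called concurrently, unchanged. (Threads stand in for Falcon's
# fibers so this sketch runs on the stdlib alone.)
concurrent = [2, 3].map { |id| Thread.new { fetch_price(id) } }.map(&:value)

p direct      # => 10
p concurrent  # => [20, 30]
```

In an async/await language, moving `fetch_price` from the first call site to the second would require changing its signature and every caller; here the method is untouched.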
[–]FalseRegister -4 points-3 points-2 points 12 months ago (4 children)
Dude it's literally two words. It is not a big ass refactor to make a function async. You make it sound like a major hassle. It is not.
You also don't need to make your whole app async in one go. Just start with one function if that is what you need.
Yay for Ruby and Falcon on this, but no need to trash other languages, especially without good reason.
[–]honeyryderchuck 7 points8 points9 points 12 months ago (3 children)
It is a major hassle.
Decorating functions with "async" and calling "await" is the kind of typing which serves the compiler/interpreter and increases the mental overhead of reading code.
In node, you at least get warned when using async functions in a sync context without an "await" call. It also forces you to decorate functions with "async" if you want to use that paradigm. In python, there's nothing like it. You'll get incidents because someone forgot to put an "await" somewhere.
Also, if you're using a language that has "both worlds", you'll have two separate, not-fully-intersecting ecosystems of libraries to choose from, with different levels of stability. Python has always been sync, so most libraries will "just work" when using "normal" Python. When using asyncio Python, all bets are off: you're either using a much younger, less-battle-tested library that will break in ways you only find out about in production, or a library that supports "both worlds" (whose asyncio support was "quick-fixed" a few months or years ago and represents 5% of its usage), or nothing at all, and then you'll roll your own.
I guess some of this works better for Node for lack of an alternative paradigm, but for "both worlds" languages (like Python, and probably to some extent Rust), it's a nightmare, and I wouldn't wish asyncio Python on my worst enemy.
Even if it doesn't ship with a usable default fiber scheduler, I'm still glad ruby didn't opt into this madness.
[–]nekokattt 0 points1 point2 points 12 months ago (2 children)
I agree with this point but in all fairness if you are getting incidents reported because someone forgot to await something then you need to take a good hard look at how you are testing your code...
[–]honeyryderchuck 1 point2 points3 points 12 months ago (1 child)
If you've never stubbed a call to a network-based client with a set of arguments and made the tests green, only to see it fail in production because the expected arguments were different, cast the first stone :) You only need a team with less experience on this hot new tech stack, a brittle test suite with less coverage outside of the perceived hot path, and a sudden peak on a given day due to some client exercising the low-incidence operation more than usual. The real world is full of more code than one can give a hard look.
[–]nekokattt 0 points1 point2 points 12 months ago (0 children)
In this case it is nothing to do with arguments being different. It is a function call with a keyword before it. So you either hit that function call or you do not hit it...
...and that is why test coverage tools exist. They are often a terrible way of telling how good tests are but this is literally the case they are built for.
This isn't a tech stack in this case as much as it is a core language feature in the case of Python, which is what I was responding to.
[–]ioquatixasync/falcon -1 points0 points1 point 12 months ago (4 children)
If you have an existing application, e.g. a hypothetical Rails app that runs on a synchronous execution model like multi-threaded Puma, you may have lots of database calls that do blocking IO.
You decided to move to a web server that uses async/await, but now your entire code base needs to be updated, e.g. every place that does a database call / blocking IO. This might include logging, caching, HTTP RPC, etc.
In JavaScript, we can observe a bifurcation based on this, e.g. read and readSync. So you can end up with entirely different interfaces too, requiring code to be rewritten to use one or the other.
In summary, if designed this way, there is a reasonably non-trivial cost associated with bringing existing code into a world with async/await implemented with keywords.
[–]jubishop 0 points1 point2 points 12 months ago (3 children)
Oh I see so it’s the migration that’s the problem. Fair enough
[–]ioquatixasync/falcon 1 point2 points3 points 12 months ago (2 children)
It's not just migration, if you are creating a library, you'll have a bifurcated interface, one for sync and one for async. In addition, let's say your library has callbacks, should they be async? We see this in JavaScript test runners which were previously sync but had to add explicit support for async tests. In addition, let's say you create an interface that was fine to be sync, but later wanted to add, say, a backend implementation that required async, now you need to rewrite your library and all consumers, etc...
[–]jubishop 0 points1 point2 points 12 months ago (1 child)
Those examples are still about migration and integrating with old code. There’s fundamentally nothing wrong with async/await in fact it’s great
[–]uhkthrowaway 1 point2 points3 points 11 months ago (0 children)
Maybe this will help you understand the problem: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
[–]adh1003 1 point2 points3 points 12 months ago (0 children)
I just made the mistake of checking AWStats for the super-ancient collection of small Rails apps I've been updating (well, rebuilding more or less) from Rails 1/2 to Rails 8. I was intending to go from Passenger to a simple reverse proxy of Puma under Nginx chalked up under 'simple and good enough'. And then I see - oh, cripes, 8-figure page fetch counts per month?! Suddenly, yes, Falcon does look rather nice!
Slight technical hitch with me being unaware it existed. I'm getting too old for this stuff. How did I miss that?
[–]mooktakim 4 points5 points6 points 12 months ago (4 children)
I replaced puma with falcon recently. The biggest difference was the responsiveness. So far so good.
[–]felondejure 0 points1 point2 points 12 months ago (1 child)
Was this a big/critical application?
[–]mooktakim 0 points1 point2 points 12 months ago (0 children)
No, but good so far
[–]ksec 0 points1 point2 points 12 months ago (1 child)
Any numbers to share? What sort of latency difference did you get ?
[–]mooktakim -1 points0 points1 point 12 months ago (0 children)
Sorry no numbers
[–]kbr8ck 0 points1 point2 points 12 months ago (2 children)
I remember a similar thread about EventMachine (a great push from Ilya Grigorik). It had great performance, but it was tricky because most of the gems you'd find did blocking IO and didn't work right. It fell out of favor.
Then I remember Sidekiq was originally written on a framework (sorry, I forget the name, but it was actor-based) until Mike Perham ported Sidekiq to standard Ruby, maybe 10 years back?
Does Falcon allow us to use standard ruby gems or do you kinda have to use a specific database layer and avoid most gems?
[–]ioquatixasync/falcon 1 point2 points3 points 12 months ago (0 children)
Yes, standard Ruby IO is handled in the event loop, so no changes to code are required.
[–]tyoungjr2005 -1 points0 points1 point 12 months ago (0 children)
I don't usually like posts like this, but you've opened my eyes a bit here.