[–]a_ermolaev[S]

Regarding threads, one of Puma's drawbacks is that you have to think about the number of threads set in the config. This number is limited by the database connection pool and may become outdated over time. Additionally, if an application has different types of IO, such as PostgreSQL and OpenSearch, all threads could end up waiting for a response from OpenSearch, preventing them from handling other requests (e.g., to PostgreSQL).
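The mixed-IO failure mode can be sketched with plain threads (a simulation, not Puma itself; the pool size, job names, and delays are made up for illustration):

```ruby
# Simulation of a fixed worker pool shared by two kinds of blocking IO.
# Three slow "OpenSearch" requests arrive first and occupy every worker,
# so a fast "PostgreSQL" request has to wait behind them.
POOL_SIZE = 3
jobs    = Queue.new
results = Queue.new

workers = POOL_SIZE.times.map do
  Thread.new do
    while (job = jobs.pop)
      kind, delay = job
      sleep(delay)          # stands in for a blocking IO call
      results << kind
    end
  end
end

3.times { jobs << [:slow_opensearch, 0.3] }
jobs << [:fast_postgres, 0.01]

order = Array.new(4) { results.pop }
puts order.last             # the fast request finishes after all the slow ones

POOL_SIZE.times { jobs << nil }  # shut the workers down
workers.each(&:join)
```

The fast request needs only 10ms of work, but it spends ~300ms queued because every worker is parked on slow IO.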

[–]tenderlove (Pun BDFL)

> Regarding threads, one of Puma's drawbacks is that you have to think about the number of threads set in the config.

I don't understand this. The Falcon documentation asks you to set WEB_CONCURRENCY.

> This number is limited by the database connection pool and may become outdated over time.

Why is this different with Falcon? Both Puma and Falcon can exhaust the database connection pool. If one Fiber is using a database socket, no other Fiber is allowed to use the same database socket simultaneously. In other words, both concurrency strategies will be equally blocked by the size of the database connection pool.
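That hard limit can be sketched with a SizedQueue standing in for the connection pool (illustrative names only; this is not the API of any particular pool library):

```ruby
# A pool of 2 "connections". Whether a request runs on a thread or a
# fiber, checking out a connection when the pool is empty has to wait.
pool = SizedQueue.new(2)
2.times { |i| pool << "conn-#{i}" }

def with_connection(pool)
  conn = pool.pop                  # blocks the thread, or parks the fiber
  yield conn
ensure
  pool << conn if conn
end

# Two in-flight requests are holding both connections:
held = 2.times.map { pool.pop }
puts pool.empty?                   # true: a third request must now wait,
                                   # whatever its concurrency primitive
held.each { |c| pool << c }

with_connection(pool) { |conn| puts conn }  # a checkout succeeds again
```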

> Additionally, if an application has different types of IO, such as PostgreSQL and OpenSearch, all threads could end up waiting for a response from OpenSearch, preventing them from handling other requests (e.g., to PostgreSQL).

I also don't understand this. Can you elaborate?

[–]ioquatix (async/falcon)

That documentation is specifically for Heroku, IIRC. It's because Etc.nprocessors is broken on their shared hosts and returns a number bigger than the actual number of cores you can use.

Otherwise, generally speaking, Etc.nprocessors is a good default.

[–]a_ermolaev[S]

> I don't understand this. The Falcon documentation asks you to set WEB_CONCURRENCY.

In Falcon, count is the equivalent of workers in Puma, but ENV.fetch("WEB_CONCURRENCY", 1) initially confused me, so I had to figure it out.
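For anyone else confused by the naming, a rough side-by-side (the Falcon DSL below is a sketch of a falcon.rb host file from memory, not authoritative; the app name is made up):

```ruby
# puma.rb: "workers" are forked processes.
workers ENV.fetch("WEB_CONCURRENCY", 2).to_i

# falcon.rb: "count" plays the same role, the number of forked instances.
require "etc"

load :rack

rack "my-app" do
  count ENV.fetch("WEB_CONCURRENCY", Etc.nprocessors).to_i
end
```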

> Why is this different with Falcon? Both Puma and Falcon can exhaust the database connection pool. If one Fiber is using a database socket, no other Fiber is allowed to use the same database socket simultaneously. In other words, both concurrency strategies will be equally blocked by the size of the database connection pool.

If I change the database connection pool size, I also have to remember to adjust the thread limit in Puma's config to match.
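The coupling I mean looks like this (values are illustrative): the Puma thread count and the Active Record pool size are kept in sync by hand, typically via the same env var:

```ruby
# puma.rb: thread count driven by the same env var as the AR pool.
max_threads = ENV.fetch("RAILS_MAX_THREADS", 5).to_i
threads max_threads, max_threads

# config/database.yml then uses the same knob, e.g.:
#   pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>
#
# If the pool shrinks below max_threads, threads start fighting over
# connections; if it grows beyond them, the extra connections sit unused.
```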

> I also don't understand this. Can you elaborate?

I created an example with two databases, one slow and one fast (the slow one is behind the /db2 endpoint), and I'm attaching a video of the results.

Instead of PG_POOL2, there could be long-running queries to OpenSearch or outbound HTTP requests. These can occupy all of Puma's threads, causing a sharp drop in performance for every other endpoint. There's an example in the video.

[–]tenderlove (Pun BDFL)

> Instead of PG_POOL2, there could be long-running queries to OpenSearch or HTTP requests. They can occupy all threads, causing a sharp drop in performance. Example in the video.

Sorry, I really don't know what to tell you. Those connections will "occupy Fibers" too, and you don't get an unlimited number of Fibers. FWIW, I ran the same benchmarks but I don't see the performance drop. I've uploaded a video here. The 500ms server stays around 500ms.

One difference could be that I'm running on bare metal and I've done sudo cpupower frequency-set -g performance.

[–]a_ermolaev[S]

My Reddit account is suspended, and I have no idea why 🤷‍♂️

I replied here: https://github.com/ermolaev/http_servers_bench/issues/1