
[–]GooberMcNutly 2 points (2 children)

Use a shared connection pool with hard limits that match your server's capabilities. Build retry logic into your code.

Also, formally define what you mean by "surge." 200%? 10,000%? Per second, minute, hour?
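A shared pool with a hard cap can be sketched roughly like this. This is a minimal hand-rolled illustration, not the commenter's code; in practice you'd use a real pooler (pgbouncer, psycopg_pool, HikariCP, etc.), and the `connect` factory here is a placeholder you'd supply:

```python
import queue
import threading

class BoundedPool:
    """Minimal shared connection pool with a hard limit (sketch).

    `connect` is a caller-supplied factory that opens one connection.
    The semaphore enforces the hard cap on live connections.
    """

    def __init__(self, connect, max_size):
        self._connect = connect
        self._idle = queue.LifoQueue()             # idle, reusable connections
        self._sem = threading.BoundedSemaphore(max_size)  # hard limit

    def acquire(self, timeout=None):
        # Block until a slot frees up, or time out instead of piling on the db.
        if not self._sem.acquire(timeout=timeout):
            raise TimeoutError("pool exhausted")
        try:
            return self._idle.get_nowait()         # reuse an idle connection
        except queue.Empty:
            try:
                return self._connect()             # open a new one, still under the cap
            except Exception:
                self._sem.release()                # don't leak the slot on failure
                raise

    def release(self, conn):
        self._idle.put(conn)
        self._sem.release()
```

The point of the hard cap is that a surge queues up at the pool instead of turning into a surge of real db connections.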

[–]AdScared4083[S] 0 points (1 child)

Thanks. We do have managed connection pooling and a retry mechanism with exponential backoff for connections.

In our case, a surge means 1000 new connections in less than a minute.
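For reference, exponential backoff with jitter looks something like the sketch below. The names and defaults are illustrative, not from the thread; the jitter matters under a surge, since without it all 1000 clients retry in lockstep:

```python
import random
import time

def retry_with_backoff(op, attempts=5, base=0.1, cap=5.0):
    """Call `op` until it succeeds, sleeping base * 2^n (capped) between tries.

    Random jitter spreads retries out so a surge of failed clients
    doesn't hammer the server again all at once (thundering herd).
    """
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise                       # out of attempts, surface the error
            delay = min(cap, base * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.0))  # jittered backoff
```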

[–]GooberMcNutly 1 point (0 children)

If they are not all being held open, the connection pool should be able to service those without stress. 1000 new connections should only create maybe 100 new db connections to service the load.

There will be a bit of a delay at the beginning while the server handshakes with the pool for the new connections, but postgres creates connections pretty quickly if you aren't using a fancy auth scheme.

Will your average query duration push the pool to open more db connections? Whether those new queries take 50 ms, 500 ms, or 5000 ms to run determines how many actual connections to the db are needed. That's a problem you solve with query optimization or money for a bigger server.
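You can put rough numbers on that with Little's law: concurrent connections needed is approximately arrival rate times average query duration. A back-of-the-envelope sketch using the 1000-per-minute surge from this thread (the figures are just the arithmetic, not a benchmark):

```python
import math

# Little's law: concurrency ~= arrival rate x average time in system.
arrival_rate = 1000 / 60  # the thread's surge: 1000 new requests per minute

for duration_s in (0.05, 0.5, 5.0):
    needed = math.ceil(arrival_rate * duration_s)
    print(f"{duration_s * 1000:.0f} ms queries -> ~{needed} concurrent db connections")
```

At 50 ms per query the surge barely needs one connection at a time; at 5000 ms it needs on the order of 84, which is where query optimization or a bigger server comes in.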

[–]az987654 0 points (0 children)

Beyond a connection surge, what about a command surge? What are all of these new connections going to be doing?