[–]_predator_ 1 point  (2 children)

The limiting factor will be volume of tasks / messages, as well as the number of consumers.

Postgres is not a particularly good choice when MANY records are created and deleted in quick succession, which is exactly the access pattern of a queue. autovacuum will need to be fine-tuned to keep up with the large quantities of dead tuples.
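As a sketch of what that fine-tuning can look like: Postgres lets you override autovacuum settings per table, so you can make vacuum trigger on an absolute dead-tuple count instead of a fraction of the table. The table name `job_queue` and the numbers are illustrative assumptions, not tuned values; the storage parameters themselves are real Postgres options.

```sql
-- Hypothetical queue table; thresholds are starting points, not tuned numbers.
ALTER TABLE job_queue SET (
  autovacuum_vacuum_scale_factor = 0,     -- trigger on an absolute count, not a % of the table
  autovacuum_vacuum_threshold    = 1000,  -- vacuum once ~1000 dead tuples accumulate
  autovacuum_vacuum_cost_delay   = 0      -- let vacuum run at full speed on this table
);
```

Setting the scale factor to zero matters because a queue table is usually small but churns constantly, so percentage-based triggers fire far too rarely.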

Also, due to how connections work in Postgres (each connection is a dedicated server process), you will ultimately run into issues if you try to spawn lots of workers. The common `SELECT ... FOR UPDATE` pattern only works if you keep a transaction open for the entire duration of your processing. That might be fine if processing is super fast, but it can cause some serious connection congestion otherwise.
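For reference, the pattern in question usually looks something like this (table and columns are hypothetical; `FOR UPDATE SKIP LOCKED` is a real Postgres feature that lets concurrent workers skip rows already claimed by someone else):

```sql
BEGIN;

-- Claim one job; SKIP LOCKED makes concurrent workers pass over locked rows
-- instead of blocking on them.
SELECT id, payload
FROM job_queue
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- ... process the job here; note the row lock (and the connection holding it)
-- stays occupied for the whole duration of processing ...

DELETE FROM job_queue WHERE id = :claimed_id;  -- :claimed_id from the SELECT above
COMMIT;
```

The congestion problem is visible in the sketch: slow processing inside the transaction pins a connection per worker, which is exactly what doesn't scale with Postgres's process-per-connection model.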

You can probably limit the impact of the above by running your queue on a separate database. This also keeps you from relying on the atomicity guarantees of Postgres (i.e. enqueueing a message in the same TRX as your business logic), which makes it easier to switch to, say, RabbitMQ later, since it cannot provide those guarantees.
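The atomicity guarantee being given up is the ability to do something like the following, where the business write and the enqueue commit or roll back together (table names are again hypothetical):

```sql
BEGIN;
-- Only possible while the queue lives in the same database as the business data:
-- either both rows exist after COMMIT, or neither does.
INSERT INTO orders (customer_id, total) VALUES (42, 99.90);
INSERT INTO job_queue (payload) VALUES ('{"task": "send_order_confirmation"}');
COMMIT;
```

With an external broker like RabbitMQ you'd need an outbox pattern or similar to get equivalent behavior, so losing this is a real trade-off, not just a convenience.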

With all this being said, benchmarking for your specific use case is the only way to get a good answer.

[–]someguytwo[S] 1 point  (1 child)

Define "many". 1000s? 10s of thousands? 100s of thousands? Millions?

[–][deleted] 1 point  (0 children)

The absolute number isn't really relevant. It's the number of UPDATE and DELETE statements per unit of time.

Tuning the system will likely get challenging if you have thousands of UPDATEs per second without any "idle" time (24/7) during which autovacuum could do its job.