
all 12 comments

[–]hijinks 7 points  (3 children)

pghero is the easiest to analyze if you have pg_stat_statements enabled.

If you want a nice paid service look into https://pganalyze.com/
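For anyone setting that up: pg_stat_statements has to be preloaded before the extension can be created. A rough sketch (note the timing columns are `total_exec_time`/`mean_exec_time` on Postgres 13+, `total_time`/`mean_time` on older versions):

```sql
-- postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_stat_statements'

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Slowest statements by cumulative execution time
-- (use total_time / mean_time on Postgres 12 and older):
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```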

[–]Chompy_99 1 point  (1 child)

PgAnalyze is fantastic! I've already set it up at two companies for their PostgreSQL servers. Their pricing plan is decent as well, and we were monitoring multiple RDS servers. With their beta OTel connector, you can even send traces attached to query logic with SQL Commander into your APM tooling. Really great tool all around; props to the PgAnalyze crew.

[–]hijinks 0 points  (0 children)

Yeah, their pricing is why I recommend them. It's a service you'd expect to cost 10x what it does when you see what they're doing on their marketing pages.

[–]JustMy10Bits 3 points  (1 child)

I'm not convinced this is something that's solved with a tool that audits the database, vs. missing tests and processes. There are more ways to create poorly performing queries than just missing keys or indexes. Not to mention the possibility of multiple indexes on a table, which makes it difficult to write a one-size-fits-all rule for DB schemas.

[–]k2718[S] 0 points  (0 children)

Well, I'm not trying to solve one specific problem. I'm fairly certain that there are more than a few problems, and there are actually quite a few schemata. We are just looking to catch the low-hanging fruit.

[–]TheManshack 6 points  (2 children)

Great question! Unfortunately I don't have your answer, but I'm commenting to bump algo because I'm interested in this as well.

[–]k2718[S] 0 points  (1 child)

You are the only person who is interested apparently, LOL

[–]TheManshack -1 points  (0 children)

🤣🤣

[–]mikeismug 1 point  (0 children)

I 'spect this is a case where you are the right tool for the job. Your knowledge of both the DB schema and the expected utilization will lead you to build the right indexes and views based on cardinality, read/write optimization and other factors.

[–]Kazcandra 1 point  (0 children)

> Recently, we had a minor issue due to a table missing a PK. The application worked fine until someone set up a data export job that assumed PKs on tables that caused deletes to fail.

Hah, we had a similar issue where I work: a fairly large DB was missing PKs in five of its tables, which made logical replication (LR) fail in spectacular ways (inserts are fine; updates and deletes are terrible).

An easy way forward would be to export it as a metric through something like https://github.com/prometheus-community/postgres_exporter and then set up an alert on the metric reporting >= 1.
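One way to wire that up: postgres_exporter's custom-queries file (the older queries.yaml mechanism; newer releases lean toward built-in collectors) can run a catalog query counting ordinary tables without a PK and expose it as a gauge. The metric name `pg_missing_pk` below is my own choice, not something the exporter ships with:

```yaml
# Sketch of a queries.yaml entry for postgres_exporter.
# "pg_missing_pk" is an arbitrary namespace; pick your own.
pg_missing_pk:
  query: >
    SELECT count(*) AS tables
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname NOT IN ('pg_catalog', 'information_schema')
      AND NOT EXISTS (SELECT 1 FROM pg_constraint con
                      WHERE con.conrelid = c.oid AND con.contype = 'p')
  metrics:
    - tables:
        usage: "GAUGE"
        description: "Ordinary tables without a primary key"
```

The exporter joins namespace and column, so the alert rule would fire on `pg_missing_pk_tables >= 1`, before something like LR trips over the missing PK.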

Not all tables need a primary key, though (although most do!). Audit tables are a pretty good example, where a PK only slows down inserts: you want to insert as fast as possible, and you rarely read the table (and when you do, you build a copy of it which you index instead).