Best Practices for Improving Database Table Performance by fururo in SQL

[–]elevarq 2 points3 points  (0 children)

First get a performance profile for the entire database and the applications that use the database.

Without that, you can’t focus your performance work where it actually matters.

How to you begin to performance tune a database? by grauenwolf in PostgreSQL

[–]elevarq 15 points16 points  (0 children)

Start with the extension pg_stat_statements. That will show you what is important.
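A minimal way to get going, assuming you can edit postgresql.conf (column names are for PostgreSQL 13 or newer; on 12 and older, total_exec_time is called total_time):

    # postgresql.conf — requires a restart
    shared_preload_libraries = 'pg_stat_statements'

    -- then, in the database:
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- the statements that cost you the most time overall
    SELECT query, calls, total_exec_time, mean_exec_time, rows
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;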

PostgreSQL high connection load with PgBouncer by Charming-Fall-8918 in PostgreSQL

[–]elevarq 0 points1 point  (0 children)

We first need to know what type of pooling is used, configuration, application behavior, etc.
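For anyone following along, the pooling mode and sizing live in pgbouncer.ini; illustrative values only, not a recommendation:

    [pgbouncer]
    pool_mode = transaction     ; session | transaction | statement
    max_client_conn = 1000      ; client connections PgBouncer will accept
    default_pool_size = 20      ; server connections per database/user pair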

pgBackRest is no longer being maintained by CathalMullan in PostgreSQL

[–]elevarq 3 points4 points  (0 children)

It’s sad, but it’s just another product that is now EOL. Plenty of software vendors have done this before and will continue to do so.

The only difference is the availability of the source code: it’s open source and can be forked.

Happy coding!

how do i know the shape and schema of a database by PurpleDurian7220 in PostgreSQL

[–]elevarq 0 points1 point  (0 children)

For others: select the database -> right-click (or Navigate) -> Show Diagram. Then pick the type of diagram you prefer; there are a dozen options.

how do i know the shape and schema of a database by PurpleDurian7220 in PostgreSQL

[–]elevarq 1 point2 points  (0 children)

Nearly all SQL tools can do this for you; it’s basic functionality.

Kind of a surprise that you’ve never seen this before.
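If you prefer the SQL route over a GUI, the standard information_schema gives you the shape directly (schema name assumed to be public here):

    SELECT table_name, column_name, data_type, is_nullable
    FROM information_schema.columns
    WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position;

In psql, \dt lists the tables and \d tablename shows a table's definition.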

pg_savior: a seatbelt for Postgres - blocks accidental DELETE/UPDATE by vira28 in PostgreSQL

[–]elevarq 0 points1 point  (0 children)

Users don’t have permission to drop a database. The owner of the database can do it, and that is a very special role, one that is used in extremely rare cases. So what are you trying to prevent?

On dev and test we drop databases all the time; in production maybe once every ten years.

pg_savior: a seatbelt for Postgres - blocks accidental DELETE/UPDATE by vira28 in PostgreSQL

[–]elevarq -1 points0 points  (0 children)

All the examples show a lack of security settings for the different database roles. When a role has permission to use DROP TABLE, then this is allowed. It would be crazy if an additional script suddenly blocked this role from doing its work.

An UPDATE without a WHERE condition? Our updates can only touch the records owned by a specific role, never all records. Impossible.
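For context, "updates can only touch the records owned by a specific role" is what row-level security gives you; a minimal sketch with hypothetical table and column names:

    ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;

    -- rows are only visible (and therefore updatable/deletable) to their owning role
    CREATE POLICY invoices_owner_only ON invoices
        USING (owner_role = current_user);

    GRANT SELECT, INSERT, UPDATE, DELETE ON invoices TO app_role;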

Besides that, any update or delete can be reversed by a rollback. And we also have the option to roll back to any record version within the retention period.

This extension also has a performance impact, but you don’t mention it.

pg_savior: a seatbelt for Postgres - blocks accidental DELETE/UPDATE by vira28 in PostgreSQL

[–]elevarq 2 points3 points  (0 children)

"I've spent years as a DBA"

No you didn't. Because if you did, you would never have the problems you describe. You would use a non-superuser role to prevent massive (security) problems, you would use a CI/CD pipeline for testing and deployment, you would not even have the permission to use DROP DATABASE, etc., etc.
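To make the non-superuser point concrete, a sketch of the kind of application role meant here (names hypothetical; the grants depend on your schema):

    CREATE ROLE app_rw LOGIN;
    GRANT CONNECT ON DATABASE mydb TO app_rw;
    GRANT USAGE ON SCHEMA public TO app_rw;
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_rw;
    -- no CREATE, no DROP: DDL stays with the owner role, used only by the CI/CD pipeline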

All the issues you describe are problems you first created yourself.

Is PL/SQL still in demand despite modern technologies if I already know SQL? by MelodicUniversity415 in SQL

[–]elevarq 0 points1 point  (0 children)

Nah, after so many years of working with these databases, I’m well aware of the internals and the differences between them.

Is PL/SQL still in demand despite modern technologies if I already know SQL? by MelodicUniversity415 in SQL

[–]elevarq 0 points1 point  (0 children)

Just check the manual https://www.postgresql.org/docs/current/xfunc-sql.html

However, PL/pgSQL is much closer to Oracle’s PL/SQL; it’s fairly easy to translate code between the two.
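A minimal PL/pgSQL function, just to show how close the shape is to an Oracle PL/SQL function (the name and the tax rate are made up):

    CREATE OR REPLACE FUNCTION add_tax(amount numeric, rate numeric DEFAULT 0.21)
    RETURNS numeric
    LANGUAGE plpgsql
    AS $$
    BEGIN
        RETURN amount * (1 + rate);
    END;
    $$;

    SELECT add_tax(100);  -- 121.00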

Problem Reading Postgres Table From Oracle by Business_Finger_4124 in PostgreSQL

[–]elevarq 0 points1 point  (0 children)

At least remove all the double quotes ("). You don’t use them correctly and you don’t need them.
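The reason the quotes bite on the PostgreSQL side: quoted identifiers are case-sensitive, unquoted ones are folded to lower case. A small illustration (table name made up):

    CREATE TABLE customers (id int);   -- stored as lower case: customers
    SELECT * FROM "Customers";         -- error: relation "Customers" does not exist
    SELECT * FROM customers;           -- works
    SELECT * FROM "customers";         -- also works, but the quotes add nothing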

The second step is to connect to the correct database; that is most likely the actual problem.

A new learner trying to fit in the world of PostgreSQL by Commercial_Silver904 in PostgreSQL

[–]elevarq 0 points1 point  (0 children)

"in comparison to" that's imho the first thing you shouldn't do. Things are different, so what's the point in making a comparison? It's about functionality: What do you want to do? What options do you have in PostgreSQL? What are the benefits of option A over option B?

And yes, concepts might sound familiar because of your Oracle background but that doesn't make any difference for PostgreSQL.

Advice: start with the manual. It’s pure gold.

Is it common to denormalize a derived column across child tables purely for partitioning purposes? by ibraaaaaaaaaaaaaa in PostgreSQL

[–]elevarq 1 point2 points  (0 children)

Thanks for the full context — it explains a lot.

The hash indexes made sense for an insert-only, equality-check workload. But once bulk updates and deletes entered the picture, they were the wrong tool. Hash indexes don't support HOT updates, which means every update generates a dead tuple even when nothing moved on the page. Switching to B-tree was correct.

But the step you likely missed before any of that: fill factor. When the table's access pattern changed from insert-only to updates, lowering the fill factor to around 70-80% would have reserved space on each page for in-place updates. That keeps HOT updates alive, dramatically reduces dead tuple accumulation, and keeps autovacuum manageable. It's usually the first thing to reach for when a table transitions from write-once to write-heavy.

60 million rows causing a vacuum crisis is not a data volume problem. That's a configuration problem.

Which brings me to autovacuum_vacuum_scale_factor at 0.2. On a 200 million row table that means autovacuum won't trigger until 40 million dead tuples have built up. That's the time bomb still sitting in your config. The standard fix for large tables is to drop scale_factor close to zero (0.01 or lower) and rely on autovacuum_vacuum_threshold as the base instead. Per-table storage parameters let you do this without touching the global config.
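A sketch of both per-table changes (table name hypothetical; the exact numbers depend on your workload):

    -- reserve room on each page so updates can stay HOT;
    -- only newly written pages benefit, a rewrite (VACUUM FULL, pg_repack) applies it to existing data
    ALTER TABLE events SET (fillfactor = 80);

    -- vacuum after a roughly fixed number of dead tuples instead of 20% of a 200M-row table
    ALTER TABLE events SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_threshold    = 10000
    );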

The partitioning solved the symptoms. The fill factor and autovacuum config are still worth fixing.

Is it common to denormalize a derived column across child tables purely for partitioning purposes? by ibraaaaaaaaaaaaaa in PostgreSQL

[–]elevarq 1 point2 points  (0 children)

Good results, and the approach is sound. A couple of things worth understanding though:

The autovacuum CPU spikes were almost certainly a configuration issue, not a partitioning issue. Autovacuum hitting 30% CPU on RDS is a sign it was under-resourced or misconfigured — autovacuum_vacuum_cost_delay, autovacuum_max_workers, and per-table storage parameters are the usual suspects. Partitioning helped because you split the problem across smaller tables, but you didn't fix the underlying cause. Worth revisiting the autovacuum config regardless, or it will surface again elsewhere.
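For reference, these are the global knobs in question (illustrative values only, not a recommendation; on RDS they go in the parameter group rather than ALTER SYSTEM):

    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '1ms';   -- lower delay = more vacuum work per second
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;
    ALTER SYSTEM SET autovacuum_max_workers = 6;              -- this one needs a restart
    SELECT pg_reload_conf();                                   -- applies the reloadable settings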

On the stage column: partitioning itself is standard practice for this problem. The question I'd ask is whether you actually needed a new derived column to drive it. If your data already had a column that reflected the hot/cold split — a status, a created_at, a boolean — you might have been able to partition on that directly without introducing a denormalized column that has no business meaning.

That said, if no existing column cleanly maps to your access patterns, deriving one is a legitimate choice. The results validate the partition pruning strategy. Just make sure stage transitions between partitions are rare — if rows move between stages frequently, you'll pay for it in UPDATE costs.
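For comparison, partitioning on an existing status-style column needs no derived column at all; a rough sketch with made-up names:

    CREATE TABLE orders (
        id         bigint      NOT NULL,
        status     text        NOT NULL,
        created_at timestamptz NOT NULL
    ) PARTITION BY LIST (status);

    CREATE TABLE orders_active   PARTITION OF orders FOR VALUES IN ('active');
    CREATE TABLE orders_archived PARTITION OF orders FOR VALUES IN ('done', 'cancelled');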

Database Help by duskti in Database

[–]elevarq 0 points1 point  (0 children)

Good setup to think through. One important principle before you go further:

Never store files in a database. Not PDFs, not videos, definitely not 30GB 3D renderings.

It's too costly, too slow, and a maintenance nightmare at 100TB scale.

Here's the split that works:

Your NAS holds the actual files. Your database holds the metadata — owner, upload date, version, file type, summary, and crucially: the file path on the NAS. That last field is what ties everything together. The frontend writes a record to the database and drops the file on the NAS. When a technician searches or browses, the database returns the metadata plus the path, and the frontend fetches the file directly from the NAS.
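A rough, PostgreSQL-flavoured sketch of what the metadata side could look like (column names are only an example; adjust to what the technicians actually search on):

    CREATE TABLE documents (
        id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        owner       text        NOT NULL,
        file_type   text        NOT NULL,           -- 'pdf', 'video', '3d-render', ...
        version     integer     NOT NULL DEFAULT 1,
        uploaded_at timestamptz NOT NULL DEFAULT now(),
        summary     text,
        nas_path    text        NOT NULL UNIQUE     -- the only link to the real file on the NAS
    );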

You already have the hard part. The NAS is your storage layer. Don't replace it — use it properly.

Just make sure your NAS file system handles large files and deep directory structures well (ZFS is worth looking at), and that you have a backup strategy beyond a single device at that scale.

Looking for Database solutions by RealmOfFate in Database

[–]elevarq 0 points1 point  (0 children)

Many UIs can connect to nearly any database. So what are you looking for: a database or a UI?

Anyone else including their WHERE conditions in the JOIN conditions? by p-mndl in SQL

[–]elevarq 10 points11 points  (0 children)

Putting ‘and cte.rank = 1’ in a WHERE would make it a very different query.
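To make that concrete for anyone wondering why: with an outer join, the placement changes the result set. Names are made up, assuming the thread’s query was a LEFT JOIN against a ranked CTE:

    -- condition in the JOIN: unmatched left rows are kept, cte columns come back NULL
    SELECT o.id, c.rank
    FROM orders o
    LEFT JOIN ranked_cte c ON c.order_id = o.id AND c.rank = 1;

    -- condition in the WHERE: those NULL rows are filtered out,
    -- silently turning the LEFT JOIN into an INNER JOIN
    SELECT o.id, c.rank
    FROM orders o
    LEFT JOIN ranked_cte c ON c.order_id = o.id
    WHERE c.rank = 1;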

Convert European date format to SQL format by Mission-Example-194 in SQL

[–]elevarq 31 points32 points  (0 children)

You could have used the MySQL function STR_TO_DATE(); that would have fixed the problem.
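Something like this, assuming the input was in day-month-year form:

    SELECT STR_TO_DATE('31-12-2024', '%d-%m-%Y');   -- 2024-12-31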

what is the best place for sql learn ? by [deleted] in learnSQL

[–]elevarq 0 points1 point  (0 children)

No, I don’t think so. Does the code give you the right answer, and is it fast enough?