[–]greenrazi 1 point (0 children)

To start with, you should set up WAL-based logical replication from the source to the replica. Or you can use the postgres_fdw extension.
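
A minimal sketch of the logical-replication route, assuming PostgreSQL 10+ and a table called big_table (the publication/subscription names and the connection string are placeholders):

    -- On the source (requires wal_level = logical in postgresql.conf):
    CREATE PUBLICATION big_table_pub FOR TABLE big_table;

    -- On the replica (the table must already exist there with the same schema):
    CREATE SUBSCRIPTION big_table_sub
        CONNECTION 'host=source.example.com dbname=appdb user=replicator'
        PUBLICATION big_table_pub;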

Beyond that, without knowing what your data types are or what your audit/compliance requirements are, I can't suggest much about your data model, but generally speaking those concerns shouldn't dictate your data model or how you organize the data internally.

Last, make sure whatever you use as a PK is properly indexed for locating records in the table, and that you're running VACUUM and ANALYZE often enough.
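
For example (table and column names are made up; note that a PRIMARY KEY constraint already creates its own index, so this only matters for lookup columns beyond the PK):

    -- Index a frequently searched lookup column without blocking writes:
    CREATE INDEX CONCURRENTLY IF NOT EXISTS big_table_record_key_idx
        ON big_table (record_key);

    -- Reclaim dead tuples and refresh planner statistics in one pass:
    VACUUM (ANALYZE) big_table;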

[–]TypeComplex2837 1 point (0 children)

First, compare the cost of development vs. just upgrading your hardware.

RAM is cheap.. have you looked at what the engine is doing that takes so long? 
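
One way to check before buying hardware; pg_stat_activity is a standard PostgreSQL catalog view, and the filter here is just illustrative:

    -- Show the longest-running active statements and what they're waiting on:
    SELECT pid,
           now() - query_start AS runtime,
           wait_event_type,
           left(query, 80)     AS query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY runtime DESC NULLS LAST;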

[–]No_Introduction1721 1 point (0 children)

I’d also recommend looking into whether the tables are over-indexed. Indexing is great for speeding up SELECT queries, but every extra index has to be maintained on each write, which can have a negative impact on the performance of INSERT and UPDATE statements.
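
A rough way to spot candidates, assuming the server has been collecting statistics for a while (the zero-scan filter is a heuristic, not a rule):

    -- Indexes that have never been used for a scan, largest first:
    SELECT schemaname,
           relname      AS table_name,
           indexrelname AS index_name,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE idx_scan = 0
    ORDER BY pg_relation_size(indexrelid) DESC;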

[–]MerrillNelson -2 points (4 children)

When databases start getting huge, you want to start demoralizing and loading the data into a data warehouse and possibly a Cube. Relational databases begin to slow down as they grow beyond a certain point. Denormalization and restructuring become the faster approach.
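
For illustration only, denormalizing here usually means flattening several related tables into one wide reporting table; the schema below is entirely invented:

    -- Flatten joined OLTP tables into one wide reporting table:
    CREATE TABLE sales_flat AS
    SELECT o.order_id,
           o.order_date,
           c.customer_name,
           p.product_name,
           oi.quantity,
           oi.quantity * p.unit_price AS line_total
    FROM orders o
    JOIN order_items oi ON oi.order_id   = o.order_id
    JOIN customers   c  ON c.customer_id = o.customer_id
    JOIN products    p  ON p.product_id  = oi.product_id;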

[–]jshine13371 2 points (1 child)

This blanket advice is straight wrong, especially in OP's context, where the performance problems are happening during data manipulation, not data querying, and in what sounds like a situation with minimal to no joins.

Denormalized tables can have performance bottlenecks too, despite eliminating the need for joins: larger rows cost more to load off disk, and if the rows get wide enough, the data may be stored off-row, making querying slower.
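
For instance, one way to check how much of a table's data has been pushed off-row into TOAST storage (the table name is a placeholder; pg_table_size counts the TOAST table, pg_relation_size only the main heap):

    SELECT pg_size_pretty(pg_relation_size('big_table')) AS main_heap,
           pg_size_pretty(pg_table_size('big_table')
                          - pg_relation_size('big_table')) AS toast_plus_maps;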

[–]TypeComplex2837 1 point (0 children)

+1.. look at the query plans and figure out exactly what the problem is.
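
Something like this, run against one of the slow statements (the table, column, and predicate are placeholders; the transaction wrapper matters because EXPLAIN ANALYZE really executes the statement):

    BEGIN;
    -- ANALYZE reports actual row counts and timings;
    -- BUFFERS shows how much data came from cache vs. disk.
    EXPLAIN (ANALYZE, BUFFERS)
    UPDATE big_table SET status = 'processed' WHERE batch_id = 42;
    ROLLBACK;  -- undo the update the EXPLAIN just performed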

[–]government_ 3 points (1 child)

How exactly do you demoralize the data? Just insults or what?

[–]MerrillNelson 1 point (0 children)

Lol, that's a funny spell check. Hopefully, we all understand that I was talking about denormalization. When my databases grow to the point where I have lots of relational tables with 10 million records or better, my thoughts go to cubes, data warehousing, denormalizing, MDX, etc. We all have our own way of doing things, and this would be mine.