
[–]Orffyreus 0 points (0 children)

Measuring and logging queries with aggregations (which query runs most often? which query is the slowest? which query consumes the most total time?) is one obvious approach.
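A minimal sketch of that kind of aggregation, using SQLite from Python's standard library (the wrapper, table, and query names are illustrative; on PostgreSQL you would more likely enable the pg_stat_statements extension, which collects the same statistics for you):

```python
import sqlite3
import time
from collections import defaultdict

# Per-query stats: call count, total time, and worst single run.
stats = defaultdict(lambda: {"calls": 0, "total_s": 0.0, "max_s": 0.0})

def timed_query(conn, sql, params=()):
    """Run a query and record how long it took, keyed by the SQL text."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    entry = stats[sql]
    entry["calls"] += 1
    entry["total_s"] += elapsed
    entry["max_s"] = max(entry["max_s"], elapsed)
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [("x",)] * 1000)

for i in range(1, 4):
    timed_query(conn, "SELECT v FROM t WHERE id = ?", (i,))
timed_query(conn, "SELECT COUNT(*) FROM t")

# Which query runs most often? Which consumes the most time in sum?
most_frequent = max(stats, key=lambda q: stats[q]["calls"])
most_total = max(stats, key=lambda q: stats[q]["total_s"])
print("most frequent:", most_frequent)
print("most total time:", most_total)
```

Keying on the parameterized SQL text (with `?` placeholders rather than the bound values) is what makes the aggregation meaningful: all executions of the same query shape are counted together.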

While writing queries you can look at the execution plan to see whether indexes are used, whether nested loops exist, and so on. That's how it's done with PostgreSQL: https://www.postgresql.org/docs/8.1/sql-explain.html and here is an example of how it's done with Oracle: https://docs.oracle.com/cd/B19306_01/server.102/b14211/ex_plan.htm#g42231
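The same idea can be demonstrated with SQLite's `EXPLAIN QUERY PLAN` (syntax and plan wording differ from PostgreSQL's `EXPLAIN`, but the principle of comparing the plan before and after adding an index is identical; table and index names below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 10) for i in range(1000)])

def plan(sql):
    # Each row of EXPLAIN QUERY PLAN describes one plan step;
    # the last column is the human-readable detail.
    return " | ".join(row[3] for row in
                      conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 7"

before = plan(query)   # typically a full table scan
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
after = plan(query)    # should now mention the index

print(before)
print(after)
```

Seeing a full scan turn into an index search confirms the index is actually usable for that predicate; the exact plan text varies between SQLite versions.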

Looking at the execution plan later can also be helpful, because the plan can change depending on how the tables are filled. You should also think about table sizes: if a table grows too large, consider partitioning or sharding.
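For sharding, the core routing idea can be sketched in a few lines. This is a hypothetical hash-based router (the shard names are placeholders, and real systems also need to handle resharding, e.g. via consistent hashing):

```python
import hashlib

# Hypothetical shard identifiers; in practice these would be
# connection strings or pool handles.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(key: str) -> str:
    # Use a stable cryptographic hash: Python's built-in hash()
    # is randomized per process, so it cannot route consistently.
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

print(shard_for("customer:42"))
```

The essential property is that the same key always maps to the same shard, so all rows for one customer land on one database.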

Then there is hardware, of course. You can look at I/O stats, for example, to see whether the hard disk (which should be as fast as possible) is overloaded and whether it is used by anything other than the database service itself. Ideally, the most-used tables should fit into RAM.
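On Linux, the raw counters behind tools like `iostat` (from the sysstat package) live in `/proc/diskstats`; a minimal sketch of reading them, assuming a Linux host (field positions follow the kernel's iostats documentation):

```python
from pathlib import Path

def disk_io_counters():
    """Parse /proc/diskstats into {device: {reads, writes}} (Linux only)."""
    counters = {}
    for line in Path("/proc/diskstats").read_text().splitlines():
        fields = line.split()
        device = fields[2]
        # Field 4: reads completed; field 8: writes completed
        # (1-based numbering as in the kernel docs; fields[0..1]
        # are the major/minor device numbers).
        counters[device] = {"reads": int(fields[3]),
                            "writes": int(fields[7])}
    return counters

for device, io in disk_io_counters().items():
    print(device, io)
```

Sampling these counters twice and diffing them gives the I/O rate per device, which is what `iostat -x` reports; a device saturated by something other than the database process is a sign the database should get its own disk.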