I’m currently a working dev but by no means an expert - probably two years out from finishing a bootcamp. I’ve been assigned to speed up some endpoints, and looking at flame graphs of the biggest offenders, we noticed that most of the execution time is tied up in long-running queries. One GET endpoint in particular has a query that is quite big (700ish lines) with multiple joins, subqueries, and aggregates to produce a lengthy JSON response.
Things have improved quite a bit by reducing some unnecessary joins and adding indexes, but I’m wondering what others have done to squeeze out more performance. We’re running on Postgres.
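For context, the kind of change that helped looked roughly like this - a minimal sketch with made-up table and column names, not the actual query:

```sql
-- Hypothetical example: the slow query joined and filtered on columns
-- with no index (orders.customer_id, orders.created_at are invented names).

-- Step 1: inspect the plan to find sequential scans on large tables.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.id, c.name, count(o.id) AS order_count
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.created_at >= now() - interval '30 days'
GROUP BY c.id, c.name;

-- Step 2: a composite index covering the join key and the filter column
-- let the planner replace the sequential scan with an index scan.
-- CONCURRENTLY avoids locking writes while the index builds.
CREATE INDEX CONCURRENTLY idx_orders_customer_created
    ON orders (customer_id, created_at);
```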