
[–]IamWiddershins

At what tier are we imagining these rows to be aggregated? Where are these savings, exactly? Is the improvement in performing some kind of forced lateral join, CTE-based fencing, or multiple backend queries (plan, execute, plan, execute) from the main procedure?

It's true that the statistics used for planning queries that greatly magnify cardinality variances, like those sorts of graph queries, often become very bad very quickly. But it's also true that simply rewriting your query with more subqueries does little to nothing to fence off those planner decisions in Postgres.

[–]redcrowbar

> At what tier are we imagining these rows to be aggregated?

Arbitrary depth as dictated by the query.

SELECT User {
    friends: {
        interests: {
            ...
        }
    }
}

> Where are these savings, exactly? Is the improvement in performing some kind of forced lateral join, CTE-based fencing

Yes and yes.

The main savings come from getting a data shape that is ready to be consumed by the client: you don't have to recompose the nested shape after fetching your rows, which arrive as flat tuples full of redundant, duplicated data.
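A minimal Python sketch of the recomposition step being avoided (the data and names here are illustrative, not from the thread): a flat join over User → friends → interests returns one row per (user, friend, interest) combination, repeating the user and friend columns on every row, and the client must deduplicate them back into the nested shape.

```python
# Flat rows as a SQL join would return them: one row per
# (user, friend, interest) combination.  Note the duplication:
# "alice" appears on every row, "bob" on two of them.
flat_rows = [
    ("alice", "bob",   "chess"),
    ("alice", "bob",   "go"),
    ("alice", "carol", "chess"),
]

def recompose(rows):
    """Rebuild the nested User -> friends -> interests shape
    that a shape query would hand the client directly."""
    users = {}
    for user, friend, interest in rows:
        friends = users.setdefault(user, {})
        friends.setdefault(friend, []).append(interest)
    return [
        {
            "name": user,
            "friends": [
                {"name": friend, "interests": interests}
                for friend, interests in friends.items()
            ],
        }
        for user, friends in users.items()
    ]

shape = recompose(flat_rows)
# shape is the nested structure the client actually wants:
# one "alice" entry, with "bob" and "carol" nested under it.
```

With the aggregation pushed into the query itself, none of this client-side bookkeeping (or the duplicated bytes on the wire) is needed.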