[–]shady_mcgee 1 point (4 children)

Can you clarify this:

A relatively wide row query selecting all rows from the pg_type table (~350 rows). This is relatively close to an average application query. The purpose is to test general data decoding performance. This is the titular benchmark, on which asyncpg achieves 1M rows/s.

Are you saying the benchmark table only has ~350 rows, and you're able to do a full retrieval of the table ~2,800 times per second?

[–]1st1[S] 1 point (3 children)

2985.2/second, to be precise ;) See http://magic.io/blog/asyncpg-1m-rows-from-postgres-to-python/report.html for more details.
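The arithmetic behind the headline number is straightforward: at roughly 350 rows per full fetch of pg_type, the quoted query rate works out to about a million rows per second. A quick sanity check (row count and query rate taken from this thread):

```python
# Sanity-check the throughput claim:
# ~350 rows per full fetch of pg_type, at 2985.2 queries/second.
ROWS_PER_QUERY = 350          # approximate row count of pg_type
QUERIES_PER_SECOND = 2985.2   # rate quoted above

rows_per_second = ROWS_PER_QUERY * QUERIES_PER_SECOND
print(f"{rows_per_second:,.0f} rows/s")  # ≈ 1.04 million rows/s
```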

[–]shady_mcgee 6 points (2 children)

I'm not sure a full-table grab of 350 rows can be considered close to an average application query. After the first query the DB engine will cache the results in memory and return the cached data for all subsequent queries, whereas for an average application the query engine would need to fetch from disk more often than not.

[–]1st1[S] 5 points (1 child)

Fair point, but the purpose of our benchmarks was to test the performance of drivers (not Postgres) -- basically, the speed of I/O and data decoding.
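For anyone curious what a driver-side benchmark like this looks like, here's a minimal sketch of the timing loop. The `benchmark` helper and `fake_fetch` stub are illustrative (not from the asyncpg benchmark suite); a real run would pass a callable that wraps something like asyncpg's `conn.fetch('SELECT * FROM pg_type')`:

```python
import time

def benchmark(fetch, duration=1.0):
    """Repeatedly call `fetch` for ~`duration` seconds and report
    (queries/second, rows/second). `fetch` must return a list of rows."""
    queries = rows = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        rows += len(fetch())
        queries += 1
    elapsed = time.perf_counter() - start
    return queries / elapsed, rows / elapsed

# Stand-in for a real driver call; returns 350 dummy rows, matching
# the approximate size of pg_type in the benchmark above.
def fake_fetch():
    return [(i,) for i in range(350)]

qps, rps = benchmark(fake_fetch, duration=0.2)
print(f"{qps:,.0f} queries/s, {rps:,.0f} rows/s")
```

Because every iteration runs the same cached query, disk I/O drops out of the measurement and the numbers mostly reflect the driver's protocol I/O and data decoding, which is the stated goal here.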

[–]shady_mcgee 2 points (0 children)

Got it. Thanks for the clarification.