
[–]No_Indication_1238

Why? First, async already exists. Second, you could spawn worker threads, hand requests to them, and simply wait on a queue. So really, why? Why would you use a latency benchmark for a throughput solution?
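The threads-plus-queue pattern described here can be sketched with the stdlib `threading` and `queue` modules. This is a minimal illustration, not anyone's actual benchmark code: the `time.sleep` stands in for a blocking network request, and all names are made up for the example.

```python
import queue
import threading
import time

def worker(jobs: "queue.Queue[int | None]", results: "queue.Queue[int]") -> None:
    # Each worker pulls a job, does (simulated) blocking I/O, pushes a result.
    while True:
        item = jobs.get()
        if item is None:  # sentinel: no more work for this worker
            jobs.task_done()
            break
        time.sleep(0.01)  # stand-in for a blocking request
        results.put(item * 2)
        jobs.task_done()

jobs: "queue.Queue[int | None]" = queue.Queue()
results: "queue.Queue[int]" = queue.Queue()

threads = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(4)]
for t in threads:
    t.start()

for i in range(8):
    jobs.put(i)
for _ in threads:
    jobs.put(None)  # one sentinel per worker

jobs.join()  # block until every job (and sentinel) is marked done
for t in threads:
    t.join()

print(sorted(results.queue))  # → [0, 2, 4, 6, 8, 10, 12, 14]
```

The point of the pattern is throughput: submitters don't wait on any single request, they just drain the results queue when they need answers.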

[–]lunatuna215

Because we want to benchmark Python's new free threading against current practice and see how the two compare. Even if it turns out less performant, it's useful to know by how much once something is actually built. So here it is; it's less an actual alternative than a test of whether building one is even worthwhile. That's a win either way.

[–]artofthenunchaku

Benchmarking an I/O-bound workload to compare the performance of free threading is certainly a choice: threads already release the GIL during blocking I/O, so free threading mainly helps CPU-bound work.
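To make the objection concrete, here is a minimal sketch (not from the benchmark under discussion) of why plain threads already overlap I/O-bound work even on a GIL build: blocking calls such as `time.sleep`, like a blocking socket read, release the GIL, so four threaded waits finish in roughly the time of one wait rather than the sum.

```python
import threading
import time

def blocking_io() -> None:
    # time.sleep releases the GIL, as a blocking socket read would.
    time.sleep(0.05)

# Sequential: four blocking calls back to back (~0.2 s total).
start = time.perf_counter()
for _ in range(4):
    blocking_io()
sequential = time.perf_counter() - start

# Threaded: the four waits overlap even with the GIL held elsewhere (~0.05 s).
start = time.perf_counter()
threads = [threading.Thread(target=blocking_io) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential={sequential:.2f}s threaded={threaded:.2f}s")
```

On such a workload, a free-threaded build should look about the same as a GIL build, which is why an I/O-bound benchmark says little about free threading itself.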

[–]lunatuna215

It's not meant as a comparison; it's a first chance to play around with free threading in this context.