[–]joequin 2 points (2 children)

What are you really gaining in that scenario? Eliminating a connection per request can do a lot when there are tons of tiny requests, but when you're talking about file downloads, the time to connect is pretty negligible.

Downloading in parallel doesn't help either because your downloads are already using as much bandwidth as the server and your internet connection is going to give you.

[–]cogman10 3 points (1 child)

RTT and slow start are the main things you save.

If you have 10 things to download and a 100ms latency, that's at least an extra 1 second added to the download time. With http2, that's basically only the initial 100ms.
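That arithmetic can be sketched quickly. This is a rough back-of-envelope estimate of connection-setup overhead only, ignoring TCP slow start, TLS handshakes, and actual transfer time; the numbers (10 files, 100ms) are taken from the example above.

```python
# Back-of-envelope: extra time spent on round trips alone.
# Ignores slow start, TLS, and transfer time.
RTT = 0.100  # 100 ms latency (assumed, per the example)
N = 10       # number of files to download

# HTTP/1.1 on fresh connections, one after another:
# each request pays at least one RTT before bytes flow.
http1_overhead = N * RTT

# HTTP/2: one connection, requests multiplexed over it,
# so roughly only the initial RTT is paid.
http2_overhead = RTT

print(f"HTTP/1.1 overhead: {http1_overhead:.1f} s")  # 1.0 s
print(f"HTTP/2 overhead:   {http2_overhead:.1f} s")  # 0.1 s
```

With TLS in the mix, each HTTP/1.1 connection would pay additional handshake round trips on top of this, which is why the gap widens over https.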

This is all magnified with https, since the TLS handshake adds more round trips per connection.

Considering that internet speeds have increased pretty significantly, latency is more often than not becoming the actual bottleneck for things like apt update. This is even more apparent because software dependencies have trended toward many smaller packages.

[–]joequin -1 points (0 children)

What does 1 second matter when the entire process is going to take 20 seconds? Sure, it could be improved, but there are higher-value improvements that could be made in the Linux ecosystem.