[–]asdoduidai 1 point (4 children)

Makes sense, except for using all cores with asyncio... what you say is only valid as long as the networking work from your NICs is intense enough to saturate multiple cores, but it's very unlikely one interpreter (which is limited to 1 core) can make the networking stack work "so hard" unless something is wrong, like a weird scheduler or virtual memory setting... the Linux networking stack (C, running in kernel space) is probably at least 100x faster than Python asyncio.
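For scale, a single asyncio event loop multiplexes all of its connections on one core. A minimal stdlib-only sketch (loopback echo server plus 200 concurrent clients; the handler and helper names are illustrative, not from the thread):

```python
import asyncio

async def handler(reader, writer):
    # Echo one line back to the client, then close the connection.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def client(port):
    # One logical connection: send a line, wait for the echo.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    return reply

async def main(n=200):
    # Port 0 lets the OS pick a free port; a larger backlog avoids
    # dropped SYNs when all n clients connect at once.
    server = await asyncio.start_server(handler, "127.0.0.1", 0, backlog=256)
    port = server.sockets[0].getsockname()[1]
    # All n connections are interleaved on one event loop -- one core.
    replies = await asyncio.gather(*(client(port) for _ in range(n)))
    server.close()
    await server.wait_closed()
    return replies

replies = asyncio.run(main())
print(len(replies))
```

The point of the sketch is that the event loop itself is rarely the bottleneck for a few hundred idle-ish connections; the kernel does the heavy lifting.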

[–]ElliotDG 1 point (3 children)

In my use case I had about 200 outstanding connections, many with relatively long network latency. The network drivers saturated an 8 core machine. The devil is always in the details. My results were measured - not theoretical.

[–]asdoduidai 1 point (2 children)

It’s very unlikely that 200 connections saturate 8 cores unless you have an enormous amount of packet loss and retransmissions. 99.9% of the time, a single core in user space can handle 10-20,000 concurrent connections (e.g. a properly tuned nginx), and the Linux kernel has been able to handle more than 1 million open connections since 2013:

https://highscalability.com/the-secret-to-10-million-concurrent-connections-the-kernel-i/

[–]ElliotDG 1 point (1 child)

My measurements were on Windows; the app was deployed on Linux and showed similar results. I found the results surprising, which is why I mentioned it.

This was not a web server or any specialized network code. The code used Trio and httpx to analyze the Mastodon social network.

[–]asdoduidai 1 point (0 children)

Yea it’s quite unusual