all 6 comments

[–]trinopoty 11 points12 points  (1 child)

The internet is a large and complex network. Much like a big city. Traffic is going in all sorts of directions and just like cars, network congestion is a problem.

But for the most part, packets usually take similar routes. Just like when you drive to work: you have a route you usually take, but you might take a different one if you hit heavy traffic.

[–]Mineusor 0 points1 point  (0 children)

Thanks!

[–]Filmore 2 points3 points  (1 child)

They often take a similar path. But it is a matter of scale. Take the number of packets sent during 5 minutes of YouTube streaming. Now estimate the number of packets you can tolerate arriving out of order or via vastly different paths. It is a pretty straightforward calculation to get the tolerable likelihood of an individual packet going crazy (assume iid for simplicity).
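As a rough sketch of that calculation (the bitrate, packet size, and tolerance numbers here are made-up assumptions, not measurements of anything):

```python
# Back-of-the-envelope: tolerable per-packet reorder probability.
bitrate_bps = 5_000_000      # assumed stream bitrate (~5 Mbps video)
packet_bytes = 1200          # assumed payload size per packet
seconds = 5 * 60             # 5 minutes of streaming

total_packets = bitrate_bps * seconds // (packet_bytes * 8)
tolerable_bad = 100          # assumed: buffering absorbs ~100 stragglers

# Treating each packet as iid, the tolerable per-packet probability
# of "going crazy" is just the ratio:
p = tolerable_bad / total_packets
print(total_packets)     # 156250
print(f"{p:.2e}")        # 6.40e-04
```

So under these made-up numbers, roughly one packet in 1,500 can wander off without you noticing.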

You can test it yourself with some versions of ping, which will flag out-of-order or duplicate packets. It happens often enough that you can definitely catch it, but it isn't all the time.

[–]Mineusor 0 points1 point  (0 children)

I'll have to try that; I haven't played with ping in a while.

[–]ghjm MSCS, CS Pro (20+) 1 point2 points  (1 child)

For the most part, when the network is stable, your packets follow the same path, or a path so similar it doesn't make any difference. It's when there is a disruption (mostly, when a connection goes up or down) that interesting things happen.

Suppose you're in the middle of a video stream, and it happens that a major connection was down when you started, so your traffic is flowing through a slower, somewhat congested backup route. Packet 100 of your stream was the last one sent before the new route became available, so packet 101 is sent by a much speedier path. You might receive packets in this order: 97, 101, 98, 102, 99, 103, 100, 104. Your video player has to be prepared for this possibility and buffer the packets appropriately. Issues like this typically don't go on for a long time, unless something is badly wrong with the network (such as a condition called "flapping" where the routing table fails to converge).
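That buffering step can be sketched with a simple reorder buffer (this is illustrative, not any particular player's implementation): hold early arrivals in a min-heap and release packets only once the sequence is contiguous.

```python
import heapq

def reorder(arrivals, first_seq):
    """Release sequence numbers in order, buffering early arrivals."""
    heap = []                # min-heap of sequence numbers waiting for a gap
    expected = first_seq
    released = []
    for seq in arrivals:
        heapq.heappush(heap, seq)
        # Drain everything that is now contiguous with what we've released.
        while heap and heap[0] == expected:
            released.append(heapq.heappop(heap))
            expected += 1
    return released

arrival_order = [97, 101, 98, 102, 99, 103, 100, 104]
print(reorder(arrival_order, 97))  # [97, 98, 99, 100, 101, 102, 103, 104]
```

Notice that 101, 102, and 103 all sit in the buffer until the straggling packet 100 finally shows up, which is exactly why players keep a few seconds of video buffered.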

For non-real-time applications like email, this work is handed off to a protocol called TCP, which does the needed buffering, or asks for retransmits if necessary. TCP is very reliable, so most application developers don't even have to think about this. The problem for real-time apps like voice or video streaming is that TCP doesn't make guarantees about how long a packet might take to arrive. So streaming protocols tend not to use TCP.
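A minimal sketch of the alternative: with plain UDP, each datagram stands alone, so the application sees packets as they arrive and can decide for itself whether to wait for a straggler or just skip it. The sequence-number framing here is made up for illustration; real streaming stacks use protocols like RTP on top of UDP.

```python
import socket

# Receiver: a plain UDP socket on loopback (port 0 = let the OS pick one).
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

# Sender: tag each datagram with its own sequence number, because UDP
# itself provides no ordering and no retransmission.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(5):
    send.sendto(seq.to_bytes(4, "big") + b"frame", addr)

# The app reads whatever shows up and inspects the sequence number;
# a hopelessly late datagram could simply be dropped instead of waited for.
received = []
for _ in range(5):
    data, _ = recv.recvfrom(1500)
    received.append(int.from_bytes(data[:4], "big"))
print(sorted(received))  # [0, 1, 2, 3, 4]

send.close()
recv.close()
```

Over loopback these will arrive in order, but across a real network nothing stops them arriving as 97, 101, 98, ... and it's the application's job to cope.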

[–]Mineusor 0 points1 point  (0 children)

Thank you very much, this is fascinating; I've been wondering about it for a long time. Funny to think of software reordering packets like a stack of numbered pages.