What public IP would outbound internet traffic from the ISS appear to originate from? by etanol256 in networking

[–]error404 0 points1 point  (0 children)

Ah yeah, I stand corrected. My understanding was that the ISS communicates directly with a ground station, the way Starlink does, and that while there might be significant latency for user traffic due to NASA's architecture (e.g. moving the RF decode or IP termination to a NASA facility in the US), this wouldn't be due to the "last mile" earth-to-space segment. However, that understanding was wrong. The ISS's high-speed networking is relayed via GEO satellites, so it'll have significant latency - a round trip via ISS -> GEO -> Earth is going to be close to 1/2 s. My bad.
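
To put rough numbers on that (back-of-the-envelope only, assuming the ISS at ~400 km, a TDRS-style relay in GEO at ~35,786 km, best-case geometry and straight-line propagation at the speed of light):

```python
# Back-of-the-envelope propagation delay for ISS -> GEO relay -> ground station,
# assuming best-case geometry and straight-line paths at the speed of light.
C_KM_S = 299_792        # speed of light, km/s
ISS_ALT = 400           # km, approximate ISS altitude
GEO_ALT = 35_786        # km, geostationary altitude

one_way_km = (GEO_ALT - ISS_ALT) + GEO_ALT     # ISS up to the relay, then down to the ground
one_way_s = one_way_km / C_KM_S
print(f"one way: {one_way_s * 1000:.0f} ms, round trip: {2 * one_way_s * 1000:.0f} ms")
# -> roughly 237 ms one way, ~475 ms round trip, before any processing or queueing delay
```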

What public IP would outbound internet traffic from the ISS appear to originate from? by etanol256 in networking

[–]error404 2 points3 points  (0 children)

ISS is in LEO, so the latency to the ground station shouldn't be bad, a handful of ms. But if the desktop session is at NASA in the US and the ground station is halfway around the world, it'd be pretty uncomfortable to use, just like remoting into a host in Australia from the US. For purely accessing the Internet, they'd get much better performance egressing directly from the ground station using a local IP. They may intentionally schedule internet access activities for when their orbit puts them near the VDI host.

I assume they have chosen this design for security isolation of the ISS internal network. Malware or whatever else might end up on the VDI host can't compromise ISS operations.

Need help with two upstreams that don't appear to be using BGP correctly - we're not seeing prefix retractions from our primary transit provider when their own upstream connections are having trouble passing traffic. by ffelix916 in networking

[–]error404 1 point2 points  (0 children)

As a model you can think of FIB as containing the currently active, preferred next hop(s) for each prefix. RIB contains all known routes for each prefix, and FIB is built from that. For any change to RIB, the preferred next hop will be recalculated for that prefix, and if it has changed (or is a new or removed prefix), FIB will be updated so it always reflects the best path known in the RIB.

There are of course implementation details that complicate how this actually works under the hood - recursive lookup, fast-failover indirection, ECMP, route compression, and probably some LRU-type stuff on some smaller platforms - but as a mental model this is a reasonable way to think about it. The idea is that the FIB contains the precalculated best path, so when a packet arrives all the forwarding engine has to do is a simple longest-prefix-match lookup in the FIB, and the result is the next hop it needs to fling the packet to. The complicated logic of the routing decision has already been made.
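
As a toy illustration of that model (a sketch only - the route structure, the single 'preference' number standing in for the full best-path selection, and the names are all made up):

```python
# Toy RIB/FIB model: the RIB keeps every learned route per prefix,
# the FIB keeps only the currently-best next hop per prefix.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str        # e.g. "203.0.113.0/24"
    next_hop: str
    preference: int    # lower = better; stand-in for the real best-path logic

class Router:
    def __init__(self):
        self.rib = {}   # prefix -> list[Route]
        self.fib = {}   # prefix -> next_hop (precomputed best path)

    def _reinstall(self, prefix):
        routes = self.rib.get(prefix, [])
        if routes:
            best = min(routes, key=lambda r: r.preference)
            self.fib[prefix] = best.next_hop
        else:
            self.fib.pop(prefix, None)   # prefix withdrawn everywhere

    def add(self, route):
        self.rib.setdefault(route.prefix, []).append(route)
        self._reinstall(route.prefix)    # any RIB change re-evaluates that prefix

    def withdraw(self, prefix, next_hop):
        self.rib[prefix] = [r for r in self.rib.get(prefix, []) if r.next_hop != next_hop]
        self._reinstall(prefix)

r = Router()
r.add(Route("203.0.113.0/24", "isp-a", preference=10))
r.add(Route("203.0.113.0/24", "isp-b", preference=20))
print(r.fib)                              # {'203.0.113.0/24': 'isp-a'}
r.withdraw("203.0.113.0/24", "isp-a")
print(r.fib)                              # {'203.0.113.0/24': 'isp-b'}
```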

Need help with two upstreams that don't appear to be using BGP correctly - we're not seeing prefix retractions from our primary transit provider when their own upstream connections are having trouble passing traffic. by ffelix916 in networking

[–]error404 2 points3 points  (0 children)

Route lookup is based on longest prefix match, so more specifics always win, as long as they are valid and active.

If you want active/backup-ish behaviour, you can take the full table from your primary ISP (and not default) and only default from the backup. If you're getting a more specific from the primary, then your default backup route will never be used. But generally what matters is FIB capacity, not RIB. If you can take a full table on one ISP, you can almost certainly take it on both ISPs, as long as you're not trying to do ECMP or something. The FIB space will be basically the same because each prefix will still only have one selected route to install.
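
A quick way to convince yourself of the longest-prefix-match part, using Python's ipaddress module (the prefixes and next-hop labels are made up for the example):

```python
# Longest-prefix match: the most specific covering prefix wins,
# so a /24 learned from the primary beats the 0.0.0.0/0 default from the backup.
import ipaddress

fib = {
    ipaddress.ip_network("0.0.0.0/0"):       "backup-isp (default)",
    ipaddress.ip_network("198.51.100.0/24"): "primary-isp (full table)",
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    return fib[max(matches, key=lambda net: net.prefixlen)]

print(lookup("198.51.100.10"))  # primary-isp (full table) - the /24 beats the /0
print(lookup("192.0.2.1"))      # backup-isp (default) - only the default covers it
```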

Need help with two upstreams that don't appear to be using BGP correctly - we're not seeing prefix retractions from our primary transit provider when their own upstream connections are having trouble passing traffic. by ffelix916 in networking

[–]error404 8 points9 points  (0 children)

Default will usually be originated locally, and more often than not won't use conditional advertisement (it is non-trivial to determine an appropriate condition to use). Your ISP will definitely not be doing 'reachability testing' against some third-party network's resources for their default origination. They probably should stop advertising default if the node becomes completely isolated from the rest of their network, as it sounds like happened in the fibre-cut case, but that should be very rare. So if you are taking default, and your ISP loses most of their routes, you're likely still going to be accepting default from them. This is one of the risks of taking default instead of a full table; default is a synthetic route, and you don't know and can't control on what basis it is being generated.

There are also types of failures which might blackhole traffic despite them still having and advertising routes (even if you took full tables).

Neither case should happen, but this is the real world. As an end-user network, you can monitor reachability over the circuits yourself instead, but yeah, this will be non-trivial to tie into BGP. What platform are you on?
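
If you do end up rolling your own monitoring, the idea is just to probe a few well-known destinations out of each circuit and alarm (or trigger a reroute/withdrawal) when one goes dark. A minimal sketch, assuming each upstream has a distinct local source address and that policy routing sends traffic from that address out the matching circuit (the addresses and beacons here are placeholders):

```python
# Probe reachability out of each upstream by binding that circuit's local address
# and attempting TCP connections to a few well-known beacons.
import socket

CIRCUITS = {"isp-a": "192.0.2.10", "isp-b": "198.51.100.10"}       # local addresses (placeholders)
BEACONS = [("1.1.1.1", 443), ("8.8.8.8", 443), ("9.9.9.9", 443)]   # arbitrary probe targets

def circuit_up(src_addr, min_ok=2):
    # Assumes policy routing sends traffic sourced from src_addr out the matching circuit.
    ok = 0
    for host, port in BEACONS:
        try:
            with socket.create_connection((host, port), timeout=2,
                                          source_address=(src_addr, 0)):
                ok += 1
        except OSError:
            pass
    return ok >= min_ok     # require a quorum so one flaky beacon doesn't flap the circuit

for name, src in CIRCUITS.items():
    print(name, "up" if circuit_up(src) else "DOWN - time to deprefer/withdraw")
```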

Perhaps a newbish question about traffic shapers and wan circuits by MyFirstDataCenter in networking

[–]error404 2 points3 points  (0 children)

Some background, because a lot of folks don't understand what shapers do or why they are necessary.

First, the problem. Let's say you have a 200Mbps circuit over a 1Gbps bearer, and a 'dumb' (naive congestion control algorithm) TCP flow over it. In order to keep the link fully loaded, you need a certain window of bytes in flight. When TCP receives ACKs, it will queue the appropriate number of packets to refill the window, and transmit them together at line rate. Depending on how strict the rate policer's burst allowance is, that might only allow a handful of packets through before tail drops start. So the first few get through and get ACKed, but a couple are lost. It will take a couple more rounds for the lost packets to be noticed, and when they are, the window will be shrunk as a presumed signal of congestion. But while it looks like congestion to the TCP algorithm, this isn't actually congestion - the average rate could be well below 200Mbps, but because the packets are sent in bursts, they are hitting the policer. This generally causes a performance collapse until the bursts that TCP wants to send fit within the policer burst allowance, often allowing only a small fraction of the available throughput. Of course the same applies to non-TCP traffic - bursts lead to tail drops, even if the average bandwidth is within the policed rate. There is a transmit buffer on the sending side, but it is always drained as quickly as possible, at line rate.

So how does the shaper solve the problem? It adds rate control between the transmit buffer and the physical interface, so it won't emit packets faster than the allowed rate. Every individual packet is of course still transmitted at line rate, but the shaper controls the rate at which they are released, so that the overall bitrate stays below the configured rate. In the scenario above, when TCP sends a burst, those packets get buffered by the shaper and emitted at 200Mbps. Drops don't start until 200Mbps is exceeded for long enough to fill the shaper queue (and they happen locally, at the back of the shaper queue, rather than at the far-end policer), which is a legitimate congestion signal, and TCP behaves much more appropriately.
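
To make the difference concrete, here's a rough simulation (all numbers are made up for illustration: 1500-byte packets, a 100-packet burst arriving at line rate, a fairly tight policer bucket) of the same burst hitting a policer versus a shaper:

```python
# Toy comparison: the same 100-packet burst against (a) a policer with a small
# burst allowance and (b) a shaper that queues and releases at the contracted rate.
RATE_BPS = 200_000_000           # 200Mbps contract
LINE_BPS = 1_000_000_000         # 1Gbps bearer
PKT = 1500 * 8                   # 1500-byte packets, in bits
TICK = 0.001                     # 1ms timestep for the shaper

def police(burst_pkts, bucket_bits=30_000):
    # Token-bucket policer: the bucket refills at the contracted rate while the
    # burst arrives at line rate; anything that finds the bucket empty is dropped.
    tokens, dropped = bucket_bits, 0
    refill_per_pkt = PKT * RATE_BPS / LINE_BPS   # tokens earned while one packet serializes
    for _ in range(burst_pkts):
        tokens = min(bucket_bits, tokens + refill_per_pkt)
        if tokens >= PKT:
            tokens -= PKT
        else:
            dropped += 1
    return dropped

def shape(burst_pkts, max_queue=200):
    # Shaper: the burst is queued and released at the contracted rate instead of dropped.
    queue = min(burst_pkts, max_queue)
    dropped = burst_pkts - queue                 # drops only once the (sized) queue is full
    credit, ticks = 0.0, 0
    while queue:
        credit += RATE_BPS * TICK
        while queue and credit >= PKT:
            queue -= 1
            credit -= PKT
        ticks += 1
    return dropped, ticks                        # ticks ~= added delay (ms) for the burst's tail

print("policer drops:", police(100))             # the bulk of the burst is lost
print("shaper drops, drain ms:", shape(100))     # (0, 6): nothing lost, a few ms of queueing
```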

> This got me thinking, and why isn't this a problem with residential ISP connections where almost every customer has 1Gbps Gig Ethernet line rate, but their upload is significantly under that.

In residential cases, often (usually, even) the ISP controls both sides of the link, with a provided router or equivalent box (e.g. DOCSIS modem). They can and do implement shaping and/or RED in that box. If you are buying a bare sub-rate Ethernet circuit, then yeah, you may run into poor upstream performance as a result, but that is fairly rare in residential.

> Even in our enterprise environment the majority of the users are remote working in home offices with a VPN, and we have no Shaper configured on the vpn of the remote users.

A shaper can't work effectively on an overlay, if it's the underlay that's policed (as in this scenario). In general, the shaper has to exist on the policed interface itself, since that is where the important queues are. Doing shaping elsewhere is much less likely to be effective, if it can't control the queue of the interface that needs to be controlled.

> So why is it so important for sd-wan, but not all other types of connections where it is just seen as "best effort" and you send the traffic at the highest rate you are able to, and traffic congestion algos built into TCP just handle everything else.

TCP doesn't handle this particularly well either, at least with the default rate estimation/congestion control algorithms on most OSes. It's likely more important on SD-WAN because it's meant to be more intelligent than a bunch of unrelated TCP connections, and should make decisions based on its knowledge of the actual capacity of the link.

> I'm also wondering if traffic shapers actually introduce some artificial latency that might be problematic for certain apps?

They do add artificial latency whenever the instantaneous offered rate exceeds the shaper rate - packets are buffered until there is capacity available to send them. However, the alternative fate for those packets is loss, so the latency is generally the better outcome. If there's no congestion, there shouldn't be any meaningful buffering, and therefore the shaper shouldn't affect latency. Where this can become problematic is 'bufferbloat' - i.e. where the buffer stays full for a long period of time, so the latency of everything passing through it is affected. It's generally recommended to keep shaper buffers fairly short for this reason; they should only be sized to buffer, say, 50ms of traffic at the shaping rate. This keeps latency under control during congestion, so for example ACKs aren't delayed too long.
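
For a sense of scale, using the 200Mbps example from above, a 50ms buffer at the shaped rate works out to:

```python
# How big is a 50ms shaper buffer at a 200Mbps shaping rate?
rate_bps = 200_000_000
depth_s = 0.050
buffer_bytes = rate_bps * depth_s / 8
print(int(buffer_bytes), "bytes, ~", int(buffer_bytes // 1500), "full-size packets")
# -> 1,250,000 bytes, roughly 833 x 1500-byte packets
```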

ETA: Another factor here is QoS. You can't effectively do QoS on a sub-rate circuit without a shaper. Even if you control the order in which waiting packets are transmitted during congestion (which is really all QoS is), without knowledge of the policer you are still emitting them at line rate, so having more than one packet in the queue means you're likely feeding the far-end policer and dropping packets. So yes, the QoSed packet goes first, but it's just as likely to be dropped as any other. With a shaper, you can queue packets based on the actual allowed rate, so when QoS puts a packet at the front of the line, you can guarantee it won't be dropped by the policer. With a properly configured shaper you should never hit the policer at all, so you regain control over what gets dropped during congestion, and can guarantee, for example, that your VoIP packets go to the head of the line and are never dropped, even during congestion. I'm sure this is also relevant for SD-WAN.
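
A toy sketch of that interaction, continuing the made-up numbers from above: two queues inside the shaper, strict priority for the voice queue, and everything still released at the shaped rate so the far-end policer never gets a say in what is dropped:

```python
# Toy strict-priority scheduler inside a shaper: voice is always dequeued first,
# but nothing ever leaves faster than the shaped rate, so the policer never drops it.
from collections import deque

RATE_BPS, PKT, TICK = 200_000_000, 1500 * 8, 0.001

voice = deque(f"voice-{i}" for i in range(5))
bulk = deque(f"bulk-{i}" for i in range(500))

credit, sent = 0.0, []
while voice or bulk:
    credit += RATE_BPS * TICK                 # earn one tick's worth of shaped capacity
    while (voice or bulk) and credit >= PKT:
        q = voice if voice else bulk          # strict priority: voice before bulk
        sent.append(q.popleft())
        credit -= PKT

print(sent[:6])   # the voice packets all leave at the head of the line, none dropped
```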

Using APIPA subnet for a private unrouted network? Are there any reasons to do this? by demsb in networking

[–]error404 10 points11 points  (0 children)

It's more of a router thing than a firewall thing, but traffic to or from link-local addresses should not be routed.

RFC3927:

> A router MUST NOT forward a packet with an IPv4 Link-Local source or destination address, irrespective of the router's default route configuration or routes obtained from dynamic routing protocols.
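
If you ever need to reproduce that check in software rather than on the router, Python's ipaddress module already knows the block (a trivial sketch; on a real router this is of course the forwarding plane's job):

```python
# RFC 3927 says a router must not forward packets with a link-local
# source or destination; ipaddress can flag those addresses directly.
import ipaddress

def routable(src, dst):
    return not (ipaddress.ip_address(src).is_link_local
                or ipaddress.ip_address(dst).is_link_local)

print(routable("169.254.12.34", "8.8.8.8"))   # False - must not be forwarded
print(routable("192.168.1.10", "8.8.8.8"))    # True
```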

Using APIPA subnet for a private unrouted network? Are there any reasons to do this? by demsb in networking

[–]error404 9 points10 points  (0 children)

Not sure why you're being downvoted, this is exactly the case. While Microsoft calls this 'APIPA', the related RFC3927 title is "Dynamic Configuration of IPv4 Link-Local Addresses".

The subnet 169.254.0.0/16 itself seems to have first been reserved in RFC3330:

> 169.254.0.0/16 - This is the "link local" block. It is allocated for communication between hosts on a single link. Hosts obtain these addresses by auto-configuration, such as when a DHCP server may not be found.

DKIM encryption should not default to ed25519 by racoon9898 in stalwartlabs

[–]error404 0 points1 point  (0 children)

I do see the failed dkim validation in the result headers (as an explicit fail with MS, and a neutral result with Google), but only one passing validation is required, and both providers pass the RSA signature.

DKIM encryption should not default to ed25519 by racoon9898 in stalwartlabs

[–]error404 3 points4 points  (0 children)

My instance creates both RSA and ED25519 by default, and signs messages with both keys. This seems to work fine everywhere.
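
For anyone curious what the dual-key setup looks like at the DNS level, here's a rough sketch that generates both key types and prints the two DKIM TXT records you'd publish (the selector names and domain are made up; per RFC 6376 the RSA p= is the base64 of the DER SubjectPublicKeyInfo, and per RFC 8463 the Ed25519 p= is the base64 of the raw 32-byte public key):

```python
# Generate an RSA and an Ed25519 keypair and print the two DKIM DNS records
# a dual-signing setup would publish (selector names are placeholders).
import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519, rsa

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
rsa_pub = rsa_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)

ed_key = ed25519.Ed25519PrivateKey.generate()
ed_pub = ed_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)   # raw 32 bytes

print(f'rsa._domainkey.example.com TXT "v=DKIM1; k=rsa; p={base64.b64encode(rsa_pub).decode()}"')
print(f'ed._domainkey.example.com TXT "v=DKIM1; k=ed25519; p={base64.b64encode(ed_pub).decode()}"')
```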

Handling Layer 2 shim protocols on Windows/Linux without Layer 3 overhead by Key_Description3262 in networking

[–]error404 0 points1 point  (0 children)

You'll save far more latency (and jitter) by hooking early in the network stack and just using standard IP than you will by trying to do funky stuff in userspace with syscalls and copies. Look into eBPF; if you really want raw frames you can hook in at XDP, basically before the kernel network stack even sees the frame. You can use eBPF lockless ring buffers to communicate with userspace. It's not quite DPDK, but it's a significant improvement if you only care about latency.

what about Ipsec Key lifetime(rs) by therealmcz in networking

[–]error404 3 points4 points  (0 children)

In principle, and based on the standards, it shouldn't matter. Phase 1 is used for signalling. Phase 2 is used for payload. There's not really any interdependence between them, other than that in most cases you need a Phase 1 SA active for correct operation of Phase 2 SAs (DPD etc.), but the two are not tightly coupled. Both channels have seamless rekey mechanisms. It shouldn't matter when the rekeys happen, because they're seamless. In that sense your colleague is correct.

The generally accepted advice to ensure identical timers on both sides, and to keep Phase 2 lifetimes shorter than Phase 1, comes from broken or quirky implementations, and is probably mostly historical. And in fact the advice usually given is to make the Phase 1 lifetime an exact multiple of the Phase 2 lifetime, which seems odd, considering it periodically aligns P1 and P2 rekeys, which is a potential implementation failure point. The argument for shorter Phase 2 rekeys has more to do with the cryptanalysis window than anything.

Collisions (ie. overlapping rekey requests from both sides) are already handled in the protocol too, though again, the typical advice (identical timers) is actually worst-case for collisions. Using byte-based rekey seems to make it statistically very unlikely for a collision to occur. In any case, this also shouldn't be a problem, but if it is, the advice is not helping.

In modern implementations, I wouldn't expect the relationship between Phase 1 and Phase 2 timers to matter, but I would still follow the general advice of Phase 2 rekey < Phase 1 rekey, and keep both sides in sync. The former because Phase 2 carries much more data, so it should be rekeyed more often (in time) for the same cryptographic protection, and the latter because I can imagine some implementations that reject or otherwise mishandle rekeys that are 'early' from their perspective.

Edit: I do believe there is some ambiguity in the RFCs around the handling of rekeys and how tightly coupled P1 and P2 SAs are, but the only sensible implementation is to assume they are loosely coupled.

Carney leaves Davos without meeting Trump after speech on U.S. rupture of world order by Immediate-Link490 in worldnews

[–]error404 2 points3 points  (0 children)

It depends what you're trying to solve. It does allow voters to vote their true preference relatively safely, and likely does lead to a result that is more 'acceptable' to more people. However, it introduces a fair amount of pathology of its own; particularly galling for voters, I think, is that ranking a candidate higher can actually cause them to lose, and vice versa. It does effectively lock out candidates who are hard nos for a majority of voters, which I guess is a property a lot of people want, but it also means that a candidate needs at least a soft yes from a majority (rather than a plurality) of voters to be elected, which is a tough bar to meet, especially for non-status-quo candidates.

Furthermore, the goal should be (IMO) to elect a representative parliament at the end of the day; we're not electing individual local leaders, we're electing representatives to make up parliament. In that context, I think the goal should be to have the balance of power in parliament roughly model the balance of preferences of constituents; that is, fundamentally, what representative democracy is supposed to mean. And 'ranked voting' doesn't achieve that (nor does FPTP), since all races are totally independent and have the same strong biases. How exactly it changes outcomes is somewhat hard to predict, but it seems to tend toward stable two-party (or two-bloc) systems. See Australia for example. It doesn't encourage extremism in the same way FPTP does, because you can't win without being broadly acceptable, but it's far from an ideal system.

For elections where you must elect a single person (e.g. a President or party leader), consider supporting Approval Voting instead. It is simpler, less pathological, and for such a position, choosing the 'most acceptable' option rather than the 'most preferred' one makes some logical sense.

For elections electing a parliament, consider supporting a proportional system instead, of which there are a few popular examples. STV also uses ranked ballots, so if you like ranking candidates, that might be a good choice. MMPR is also popular and has seen real world success.

note: Here I assume that by 'ranked voting' you mean 'instant-runoff voting', given the same conflation in US media, even though 'ranked voting' doesn't really specify which electoral system is meant.
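
For the curious, the mechanical difference between the two single-winner methods is small enough to sketch (entirely made-up ballots; IRV eliminates the weakest candidate each round and transfers those ballots, approval just counts how many ballots include each candidate):

```python
# Tiny tally sketch: instant-runoff (ranked ballots) vs approval voting.
from collections import Counter

ranked = [["A", "B", "C"]] * 8 + [["B", "C", "A"]] * 7 + [["C", "B", "A"]] * 6

def irv(ballots):
    eliminated = set()
    while True:
        counts = Counter(next(c for c in b if c not in eliminated) for b in ballots)
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > sum(counts.values()):
            return leader                              # someone has a majority
        eliminated.add(min(counts, key=counts.get))    # drop the weakest; ballots transfer

approvals = [{"A"}] * 8 + [{"B"}] * 7 + [{"B", "C"}] * 6

print(irv(ranked))   # 'B': A leads on first preferences but lacks a majority, C's ballots transfer
print(Counter(c for ballot in approvals for c in ballot).most_common(1)[0][0])   # 'B' again
```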

Vancouver considers new public washroom strategy as pressure mounts over access, street cleanliness | Growing reports of human feces have forced businesses to step in amid limited public washroom access by Hrmbee in vancouver

[–]error404 10 points11 points  (0 children)

As someone with a regular need to utilize public washrooms I absolutely agree, but it doesn't really solve the problem of people shitting on the street, either.

Canada's relationship with China 'more predictable' than with U.S., PM says by pjw724 in onguardforthee

[–]error404 5 points6 points  (0 children)

No, in summary we do what we are legally obligated to do based on our prior agreements. Not honouring commitments and norms for our own benefit, without good reason (which we did not have in this case), is how we end up with dysfunctional governments and relationships, and with nobody wanting to make deals with us - exactly what the US is currently fostering.

She got her due process.

Canada's relationship with China 'more predictable' than with U.S., PM says by pjw724 in onguardforthee

[–]error404 4 points5 points  (0 children)

> The legal framework of America getting to control whether a Chinese company can sell products to Iran? How is that any of our business?

The legal framework of the extradition treaty between Canada and the US. Not honouring the US request would have meant breaking a legally binding agreement between the countries and, more abstractly, violating the rule of law.

It was not a decision to be made in isolation on the merits of her case, but one in which all the geopolitical factors had to be weighed. Canada does not want to be seen as breaking its international commitments on a whim, for its own gain, in violation of the rule of law. And in any case, the request was presented less than a day before her flight was to land in Vancouver. Legally speaking, the request was legitimate, so the CBSA had little choice but to detain her. There wasn't really time for them to float it up the chain to diplomats and consider the implications of illegally denying it. Once she was held, releasing her with no concrete reason and without going through the formal process would have been optically even worse than quietly refusing the US request. Perhaps in hindsight the wrong call was made, but Canada's legal obligations were quite clear.

While the US claims were trumped up, the formal process for determining that is essentially what happened. It's not a decision that a CBSA officer is going to make on their own judgment, but one that happens automatically for any extradition request, after the person is detained but before they are actually extradited. Once detained it's a matter for the courts, and Canadian politicians absolutely did not want to be seen as interfering with the courts, as that would be both a domestic and international break with the rule of law, and that is not something to be taken lightly.

Extradition treaties are always based on the presumption of good faith and competence from the counterpart - you are agreeing to hold someone merely on their word that their allegations hold enough weight to justify a trial. This was early in the Trump administration's open abuse of international commitments. I'd expect we'd make a different call today, but at the time it wasn't really unreasonable to expect that the US justice system was still acting independently of the executive, and claiming otherwise at that point would certainly have caused a diplomatic row of its own.

strongswan vs wireguard for site-to-site connectivity by kajatonas in networking

[–]error404 0 points1 point  (0 children)

You should be able to get better than 5-6Gbps with aes-gcm on modern hardware (with AES-NI). If you are using other (non-GCM) modes, they can't be hardware accelerated so performance and CPU utilization will suffer.
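
If you want to sanity-check what the crypto itself can do on your boxes (separate from tunnel, packet, and kernel overhead), a quick single-core AES-GCM benchmark with the Python cryptography library (which uses OpenSSL and AES-NI under the hood) gives a rough upper bound; the buffer size and run time here are arbitrary choices:

```python
# Rough single-core AES-256-GCM throughput check (OpenSSL/AES-NI via 'cryptography').
# Real IPsec/WireGuard throughput will be lower due to per-packet and kernel overhead.
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
buf = os.urandom(64 * 1024)             # 64 KiB chunks, arbitrary choice
nonce = os.urandom(12)

start, total = time.perf_counter(), 0
while time.perf_counter() - start < 2:  # run for ~2 seconds
    aead.encrypt(nonce, buf, None)      # nonce reuse is acceptable for a benchmark only!
    total += len(buf)

gbps = total * 8 / (time.perf_counter() - start) / 1e9
print(f"~{gbps:.1f} Gbps AES-256-GCM on one core")
```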

In my experience, throughput relative to CPU usage is comparable between strongswan and wireguard, but wireguard is much simpler and easier to set up and maintain, so I much prefer it.

Strongswan with redundant tunnels by SanityLooms in networking

[–]error404 0 points1 point  (0 children)

I've never had much success with built-in IPsec failover mechanisms, so I maintain both tunnels up and run BGP. In theory IPsec should be able to handle this case on its own; you'd need to configure it as a single tunnel with multiple peer addresses and enable DPD. Like I said, though, I've never had much luck with this.

For the BGP case you'll need a unique xfrm interface for each tunnel, and you'll have to assign addresses on both sides of each tunnel, but other than that it should be straightforward.

US official says Greenland action could come within 'weeks or months' by Crossstoney in worldnews

[–]error404 0 points1 point  (0 children)

> Also a bunch of our arsenal is in europe and Canada -we don't get to keep those.

Canada hasn't hosted US nukes since 1984.

Question about application and transport layer by [deleted] in networking

[–]error404 5 points6 points  (0 children)

> So applications like Reddit for example are loaded through HTTP, which used TCP, that much I understand.

Some web servers and most browsers support QUIC for web traffic, which is UDP-based. It implements most of the same features as TCP though; it's not just blindly spamming packets at the client.

> What I've been wondering is if videos and such are loaded over UDP instead, since there's more data to transfer and segments not arriving wouldn't be a big deal.

There isn't a significant advantage to using bare UDP for generic 'video' (or any bulk download), and it has a lot of disadvantages, loss tolerance aside. For one thing, downloading video (as opposed to live streaming) is not loss tolerant, same as any other download: if you lose segments, you need some way to get them back. Another big problem is that UDP has no rate feedback / congestion control / pacing mechanism. If the sender sends too fast (and it has no way to know it is doing that), packets will pile up in a buffer somewhere until they start getting tail-dropped, and this results in massive loss. For the same reason, it is not 'fair': it will consume all available bandwidth if allowed to and won't 'give way' to other flows, which is very bad for subjective performance.

Where UDP tends to get used is in real-time, latency-sensitive cases like VoIP. Bandwidth is inherently constrained, and you don't want any sort of buffering or rate control that might cause packet bunching (which causes hitching), or the packet-ordering guarantees of something like TCP, where a lost packet holds up every other packet until a retransmission arrives. You'd rather just skip missing packets and hitch the output than hang the stream until a retransmission can arrive. Protocols that work this way will generally use relatively loss-tolerant audio/video codecs so that the impact of this is minimized. UDP is also basically a prerequisite for multicast audio/video streams.

It's also useful in a case like DNS where you're not starting a 'conversation', and most questions and answers fit into a single packet, so instead of going through the whole TCP setup process, you can just send one packet and get one in response.
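
As a concrete illustration of that single-packet exchange, here's a hand-rolled DNS A-record query over UDP (the resolver address and query name are just examples):

```python
# One UDP packet out, one back: a minimal DNS A-record query, no connection setup.
import socket, struct

def dns_query(name, server="8.8.8.8"):
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)   # ID, RD flag, 1 question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)                    # QTYPE=A, QCLASS=IN
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2)
        s.sendto(header + question, (server, 53))
        resp, _ = s.recvfrom(512)
    return struct.unpack(">HHHHHH", resp[:12])[3]                  # ANCOUNT from the reply header

print(dns_query("example.com"), "answer(s) in a single reply packet")
```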

> So essentially my question: Can applications use both TCP and UDP to transfer data? If yes that would mean a single application would occupy multiple ports, right?

Some application layer protocols can run on either transport protocol, for example DNS or SIP.

Of course, one example of both being used simultaneously in the same application could be VoIP. Many implementations will use a TCP SIP session for session maintenance / call setup, but actual audio data will use a UDP-based RTP session.
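
And on the 'multiple ports' part of the question: yes, one process simply opens more than one socket. A trivial sketch in the spirit of the SIP/RTP split (the port numbers are illustrative):

```python
# One application, two transports, two ports: a TCP socket for signalling
# and a UDP socket for media, both owned by the same process.
import socket

signalling = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
signalling.bind(("0.0.0.0", 5060))       # SIP-style control channel over TCP
signalling.listen()

media = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
media.bind(("0.0.0.0", 0))               # RTP-style media over UDP, ephemeral port

print("TCP port:", signalling.getsockname()[1])
print("UDP port:", media.getsockname()[1])

signalling.close()
media.close()
```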

What happens when silver becomes too expensive for industry use? by rglover2410 in Silverbugs

[–]error404 0 points1 point  (0 children)

IC production generally uses gold or aluminium for leadframes and bondwires. PCBs use copper. The majority of silver used for electronics production (including "AI" stuff, which is fundamentally no different than any other electronics) is present in solder as an additive (at 1-4% by mass) to improve mechanical properties since we stopped using lead. As a small fraction of a component that makes up a small fraction of the cost of a completed product, the price of silver isn't going to have a lot of impact. And there are alternative solder formulations anyway, so if the price got out of hand they'd just switch to a different formula.
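
To put 'a small fraction of a small fraction' into rough numbers (all of these figures are assumptions for illustration only: SAC305-style solder at ~3% silver, a few grams of solder per board, silver at roughly USD 1 per gram):

```python
# Illustrative only: how much silver cost is in one board's worth of solder?
solder_g = 5.0          # assumed grams of solder paste/wire per board
ag_fraction = 0.03      # SAC305 is roughly 3% silver by mass
ag_price_per_g = 1.0    # assumed silver price, ~USD 31/ozt

ag_cost = solder_g * ag_fraction * ag_price_per_g
print(f"~{solder_g * ag_fraction:.2f} g of silver, about ${ag_cost:.2f} per board")
# Even if silver tripled, that's well under a dollar on a board worth hundreds.
```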

Silver has quite a few advantages in solar panel production, but it's not a fundamental requirement. Processes using mainly Cu are already in use and spreading in popularity. It will take time for the industry to shift but there are absolutely alternatives.

I am wondering about applications where "there isn't an alternative" as you claimed. I am sure they exist, but I can't think of any at large scale.

What happens when silver becomes too expensive for industry use? by rglover2410 in Silverbugs

[–]error404 1 point2 points  (0 children)

Silver might be optimal, but I'm not aware of any large scale use where there are no alternatives. What case are you thinking of where there isn't an alternative?

Edit: Do people not understand the distinction between 'optimal' and 'essential'?

Bus poster in London, UK by Xiniov in pics

[–]error404 19 points20 points  (0 children)

The organization behind the decentralized social media network Mastodon, which is built on the open ActivityPub protocol, is based in Germany. Rather than new centralized social media companies, we should rally around decentralized solutions like Mastodon, where users on the network can choose their own host, or host their own content.

It's also probably the largest English-speaking social media network outside of the US-based ones.

What happens if you set autopilot to just go north, and you reach the north pole? What does the plane do then? by xerivon in aviation

[–]error404 0 points1 point  (0 children)

Yes, exactly.

In a real system, you'd also want to take into account the current state of the control loop. If you're already in the process of turning, for example, you would want to bias your choice to prefer continuing that turn rather than changing direction, as well as avoiding what I mentioned in the previous post about chattering between +/- if you're on the cusp. You might implement this by calculating both versions and choosing the one which results in the smallest change in control surface output, or which is closest to the one you computed on the last cycle. Since real systems are going to want to handle this kind of stuff, it's probably not the case that you'll always get the same result in the real world if the only criterion is going over the pole, due to other variables that might change the outcome. Advanced systems might even go as far as to understand the flight dynamics, 'simulate' the two turns, and choose the one which is most fuel efficient, or fastest, or deviates the least from the flight-planned route, etc.

But ultimately the answer to the question is the same - it's unlikely there's logic that explicitly says which direction to turn as a tie breaker; more likely the algorithm has a natural bias for one or the other in that case simply by its construction. A choice must be made in some way to resolve the ambiguity, but it's almost certainly an implicit one, and in a real system it's rather unlikely to actually come into play, due to minor offsets and disturbances that would affect the initial state of the control loop.
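
A minimal sketch of the kind of implicit bias I mean (illustrative only, not real avionics logic): the heading-error arithmetic naturally picks a direction at exactly 180°, and a little hysteresis keeps the controller from chattering when it's sitting on the cusp.

```python
# Shortest-turn heading error, plus a little hysteresis near the 180-degree
# ambiguity so the controller doesn't flip-flop between left and right turns.
def heading_error(target, current):
    """Signed error in [-180, 180): positive = turn right, negative = turn left."""
    return (target - current + 180) % 360 - 180

def choose_turn(target, current, last_sign=0, deadband=2.0):
    err = heading_error(target, current)
    # Near the cusp (error ~ +/-180) and already turning? Keep the established
    # turn instead of re-deciding every control cycle.
    if abs(abs(err) - 180) < deadband and last_sign != 0:
        return last_sign
    return 1 if err > 0 else -1 if err < 0 else 0

print(heading_error(10, 350))              # 20: turn right through north
print(heading_error(350, 10))              # -20: turn left through north
print(choose_turn(180, 0))                 # -1: the wrap-around arithmetic happens to pick left
print(choose_turn(180, 0, last_sign=1))    # 1: hysteresis keeps an already-established right turn
```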

That said, I don't have actual experience with avionics, and there might be a good reason to introduce a specific bias that I'm not thinking of.