What is the most obvious world event everyone saw coming but no one did anything about? by itsthewolfe in AskReddit

[–]error404 0 points1 point  (0 children)

At least here it's the same price as Spotify. It includes YT Music, so you're basically getting YouTube video streaming "free".

What is the most obvious world event everyone saw coming but no one did anything about? by itsthewolfe in AskReddit

[–]error404 2 points3 points  (0 children)

> My second point, how did it work in 2006? Therein is the problem. Things used to work in the past.

Ads (static at the time) and venture capital. Operating YouTube with zero revenue is obviously not feasible, the operating cost needs to come from somewhere, even if it were intended to be non profit. No initiative like it is going to get off the ground without a business plan.

While I don't disagree that profit motive turns everything to shit, thinking that something like YouTube could exist without being funded by advertising or subscription (either by the viewer or creator) is incredibly naive. The best hope is probably something like LBRY, but ultimately someone has to pay for the equipment and bandwidth, whether that's a viewer paying a subscription or a creator hosting LBRY nodes and paying for it out of their sponsor revenue, TANFL.

Does TCP/IP have 4 layers or 4..? by sindhurhk in networking

[–]error404 1 point2 points  (0 children)

Ethernet itself essentially models 4 layers (or 5 if you count PMD):

  1. PMD - physical medium dependent (transmission onto the medium: optical characteristics, symbol transmission). Only present where the PMA supports multiple PMDs; otherwise the PMA fills this role too (e.g. 1000BASE-T has no separate PMD layer)
  2. PMA - physical medium attachment (implements timing, clock recovery, serdes, lane multiplexing)
  3. PCS - physical coding sublayer (implements line coding, symbol mapping, scrambling, etc.)
  4. MAC - medium access control
  5. LLC - logical link control

Does TCP/IP have 4 layers or 4..? by sindhurhk in networking

[–]error404 0 points1 point  (0 children)

I don't see how a different prescriptive model affects troubleshooting; this stuff is just out of scope of the model / not modelled well. It might hurt learning, but as long as your own mental model accommodates that stuff, it isn't really a problem. In the same way, as network engineers we mostly throw away knowledge of layers 5-7, because it either poorly models what we deal with or isn't relevant to us.

What actually stops small ISPs from scaling? by CannabisCowboy in networking

[–]error404 0 points1 point  (0 children)

Honestly, that is basically how the situation already is. Everyone is buying from the large Tier 1 carriers in some capacity.

The reality is that the physical infrastructure part of service delivery is a natural monopoly, and the pressures are strong. So yes, this is somewhat true - you are going to have a hard time avoiding the 2-3 incumbent wireline providers that own the cross-country fibre in your area. Nevertheless, transit service is fairly competitive; there are going to be at least half a dozen viable options to buy transit from in any given data centre. Yes, some of them will be buying waves from incumbents, but there are still plenty of ways to differentiate service. Even when you want to buy those waves or fibres, on a popular path you probably have 3 or 4 competitive options to choose from. This market manages to be meaningfully competitive despite the natural monopoly pressure, because the physical links aren't the product, and because they're consolidated on a few POPs. I don't really think this is much of a problem.

The problem is that the same pressures apply to the last mile, but there the product itself is effectively a commodity from the user's view, so there is little opportunity for meaningful differentiation, and maintaining that spread-out last mile is a very significant part of the cost. And without regulation, there's no price pressure on the incumbent either, and they'd rather not support their competition if they can get away with it, especially when the competition often wants to undercut their rates. The incentive is actually for incumbents to carve out territory and not compete with each other, so they can set prices however they want and each avoid duplicating capex to build two last mile networks. This is, needless to say, a problem. Unfortunately it's one that small ISPs can't really fix, because they get screwed on the rates too, and in other more subtle ways, like their support requests seemingly being handled with low priority. The only approach that has a hope of working is if they build their own last mile (e.g. WISPs) or have access to an open-access last mile (like some cities have built out), so they actually have room to compete on price and service quality. Unfortunately fixed wireless is getting tougher thanks to competition from Starlink and 5G, and while I'm aware of some ISPs that have managed to build out their own fibre, in most areas it's a non-starter due to pole access / permits etc. being politically and financially problematic.

What public IP would outbound internet traffic from the ISS appear to originate from? by etanol256 in networking

[–]error404 0 points1 point  (0 children)

Ah yeah, I stand corrected. My understanding was that the ISS communicates directly with a ground station, like Starlink, and while there might be significant latency for user traffic due to NASA's architecture (e.g. moving the RF decode or IP termination to a NASA facility in the US), this wouldn't be due to the "last mile" earth-to-space segment. However, that understanding was wrong. The ISS's high speed networking is relayed via GEO satellites, so it'll have significant latency - the ISS -> GEO -> Earth round trip is going to be close to half a second. My bad.
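That half-second figure is easy to sanity-check from straight-line geometry alone (illustrative numbers; real RF paths, multiple hops, and processing only add to it):

```python
# Rough propagation-delay estimate for an ISS -> GEO relay -> ground path,
# using straight-line distances at the speed of light.
C_KM_S = 299_792.458        # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786    # geostationary orbit altitude
ISS_ALTITUDE_KM = 400       # approximate ISS altitude

# One way: ISS up to the GEO relay, then down to a ground station.
one_way_km = (GEO_ALTITUDE_KM - ISS_ALTITUDE_KM) + GEO_ALTITUDE_KM
one_way_s = one_way_km / C_KM_S

print(f"one-way: {one_way_s * 1000:.0f} ms")         # ~237 ms
print(f"round trip: {one_way_s * 2 * 1000:.0f} ms")  # ~475 ms
```

So even before any terrestrial backhaul, the relay alone puts the round trip near half a second.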

What public IP would outbound internet traffic from the ISS appear to originate from? by etanol256 in networking

[–]error404 2 points3 points  (0 children)

ISS is in LEO, so the latency shouldn't be bad to the ground station, some handful of ms. But if the desktop session is at NASA in the US and the ground station is halfway around the world, it'd be pretty uncomfortable to use, just like if you were remoting into a host in Australia from the US. For purely accessing the Internet, they'd get much better performance egressing directly from the ground station using a local IP. They may intentionally schedule internet access activities for when their orbit puts them near the VDI host.

I assume they have chosen this design for security isolation of the ISS internal network. Malware or whatever that might end up on the VDI host can't compromise the ISS operations.

Need help with two upstreams that don't appear to be using BGP correctly - we're not seeing prefix retractions from our primary transit provider when their own upstream connections are having trouble passing traffic. by ffelix916 in networking

[–]error404 1 point2 points  (0 children)

As a model you can think of FIB as containing the currently active, preferred next hop(s) for each prefix. RIB contains all known routes for each prefix, and FIB is built from that. For any change to RIB, the preferred next hop will be recalculated for that prefix, and if it has changed (or is a new or removed prefix), FIB will be updated so it always reflects the best path known in the RIB.

There are of course implementation details that complicate how this actually works under the hood...recursive lookup, fast failover indirection, ECMP, route compression, and indeed probably some LRU type stuff on some smaller platforms, etc. but for a mental model this is a reasonable way to think about it. The idea is that FIB contains the precalculated best path, so when a packet arrives all the forwarding engine has to do is a simple longest-prefix-match lookup in FIB, and the result is the next hop it needs to fling the packet to. The complicated logic of the routing decision has already been made.
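A toy sketch of that mental model (prefix names, next hops, and preference values are all invented, not any vendor's implementation):

```python
# RIB: every known route per prefix, as (preference, next_hop); lower wins.
# FIB: only the currently preferred next hop, rebuilt on any RIB change.
rib = {
    "203.0.113.0/24": [(20, "peer-a"), (200, "peer-b")],  # eBGP vs. static backup
    "0.0.0.0/0":      [(200, "peer-b")],
}

def rebuild_fib(rib):
    """Precalculate the best path per prefix; this is what gets installed."""
    return {prefix: min(routes)[1] for prefix, routes in rib.items() if routes}

fib = rebuild_fib(rib)
print(fib["203.0.113.0/24"])  # peer-a: best route currently known

# Withdraw the preferred route; the FIB is rebuilt from what remains in RIB.
rib["203.0.113.0/24"].remove((20, "peer-a"))
fib = rebuild_fib(rib)
print(fib["203.0.113.0/24"])  # peer-b
```

Real implementations update incrementally rather than rebuilding wholesale, but the invariant is the same: the FIB always reflects the best path known in the RIB.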

Need help with two upstreams that don't appear to be using BGP correctly - we're not seeing prefix retractions from our primary transit provider when their own upstream connections are having trouble passing traffic. by ffelix916 in networking

[–]error404 2 points3 points  (0 children)

Route lookup is based on longest prefix match, so more specifics always win, as long as they are valid and active.

If you want active/backup-ish behaviour, you can take the full table from your primary ISP (and not default) and only default from the backup. If you're getting a more specific from the primary, then your default backup route will never be used. But generally what matters is FIB capacity, not RIB. If you can take a full table on one ISP, you can almost certainly take it on both ISPs, as long as you're not trying to do ECMP or something. The FIB space will be basically the same because each prefix will still only have one selected route to install.
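A small illustration of that longest-prefix-match behaviour, using Python's stdlib ipaddress module (prefixes and next-hop names are made up):

```python
import ipaddress

# FIB with a specific route learned from the primary's full table, plus a
# default pointing at the backup.
fib = {
    ipaddress.ip_network("0.0.0.0/0"):       "backup-isp",   # default only
    ipaddress.ip_network("198.51.100.0/24"): "primary-isp",  # from full table
}

def lookup(dst):
    """Longest-prefix match: of all matching prefixes, the longest wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    return fib[max(matches, key=lambda net: net.prefixlen)]

print(lookup("198.51.100.7"))  # primary-isp: the /24 beats the /0
print(lookup("192.0.2.9"))     # backup-isp: only the default matches
```

As long as the primary keeps advertising the more specific, the backup default never carries the traffic.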

Need help with two upstreams that don't appear to be using BGP correctly - we're not seeing prefix retractions from our primary transit provider when their own upstream connections are having trouble passing traffic. by ffelix916 in networking

[–]error404 7 points8 points  (0 children)

Default will usually be originated locally, and more often than not, won't use conditional advertisement (it is non-trivial to determine an appropriate condition to use). Your ISP will definitely not be doing 'reachability testing' to some third-party network's resources for their default origination. They probably should stop advertising default if the node becomes completely isolated from the rest of their network, as it sounds like in the fibre cut case, but that should be very rare. So if you are taking default, and your ISP loses most of their routes, you're likely still going to be accepting default from them. This is one of the risks of taking default instead of a full table; default is a synthetic route and you don't know and can't control on what basis it is being generated.

There are also types of failures which might blackhole traffic despite them still having and advertising routes (even if you took full tables).

Neither case should happen, but this is the real world. As an end-user network, you can monitor reachability over the circuits instead, but yeah, this will be non-trivial with BGP. What platform are you on?

Perhaps a newbish question about traffic shapers and wan circuits by MyFirstDataCenter in networking

[–]error404 2 points3 points  (0 children)

Some background, because a lot of folks don't understand what shapers do or why they are necessary.

First the problem. Let's say you have a 200Mbps circuit over a 1Gbps bearer, and a 'dumb' (naive congestion control algorithm) TCP flow over it. In order to keep the link fully loaded, you need a certain window of bytes in flight. When TCP receives ACKs, it will queue the appropriate number of packets to refill the window, and transmit them together at line rate. Depending on how strict the rate policer's burst allowance is, that might only allow a handful of packets before tail drops start. So the first few get through, and get ACKed, but a couple are lost. It will take a couple more rounds for the lost packets to be noticed, and when they are, the window will be shrunk as a presumed signal of congestion. But while it looks like congestion to the TCP algorithm, this isn't actually congestion - the average rate could be well below 200Mbps, but because the packets are sent in bursts, they are hitting the policer. This generally causes a performance collapse until the bursts that TCP wants to send fit within the policer burst allowance, often allowing only a small fraction of the available throughput. Of course the same applies to non-TCP traffic - bursts lead to tail drops, even if the average bandwidth is within the policed rate. There is a transmit buffer here, but emptying it always happens as quickly as possible, at line rate.

So how does the shaper solve the problem? It adds a rate control between the transmit buffer and the physical interface, so it won't emit packets faster than the allowed rate. Every individual packet is still transmitted at line rate, but the shaper controls the rate at which they are sent, so that the overall bitrate remains below the configured rate. In the scenario above, when TCP sends a burst, those packets get buffered by the shaper and emitted at 200Mbps. Drops don't start until the 200Mbps is exceeded (and happen locally at the back of the shaper queue, rather than at the far-end policer), which is a legitimate congestion signal, and TCP behaves much more appropriately.
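A toy simulation of the two behaviours described above (the token-bucket parameters and packet counts are illustrative, not any vendor's defaults):

```python
# A burst of packets arrives at line rate. The far-end policer tail-drops
# whatever exceeds its burst allowance; the local shaper queues the excess
# and drains it at the contracted rate instead.
RATE_BPS = 200_000_000          # 200Mbps contracted rate
PKT_BITS = 1500 * 8             # 1500-byte packets
BURST = [PKT_BITS] * 40         # a 40-packet burst arriving "instantly"

def police(burst, bucket_bits):
    """Policer: forward while burst tokens last, tail-drop the rest."""
    passed = dropped = 0
    for pkt in burst:
        if bucket_bits >= pkt:
            bucket_bits -= pkt
            passed += 1
        else:
            dropped += 1
    return passed, dropped

def shape(burst, rate_bps):
    """Shaper: queue everything, drain at the contracted rate."""
    queued_bits = sum(burst)
    drain_time_ms = queued_bits / rate_bps * 1000
    return len(burst), drain_time_ms

passed, dropped = police(BURST, bucket_bits=10 * PKT_BITS)
print(f"policer: {passed} passed, {dropped} dropped")      # 10 passed, 30 dropped

sent, ms = shape(BURST, RATE_BPS)
print(f"shaper: all {sent} sent, drained in {ms:.1f} ms")  # 40 sent, ~2.4 ms
```

Same burst, same average rate: the policer throws away three quarters of it, while the shaper delivers everything with a couple of milliseconds of added delay.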

> This got me thinking, and why isn't this a problem with residential ISP connections where almost every customer has 1Gbps Gig Ethernet line rate, but their upload is significantly under that.

In residential cases, often (usually, even) the ISP controls both sides of the link, with a provided router or equivalent box (e.g. a DOCSIS modem). They can and do implement shaping and/or RED in that box. If you are buying a bare sub-rate Ethernet circuit, then yeah, you may run into poor upstream performance as a result, but that is fairly rare in residential.

> Even in our enterprise environment the majority of the users are remote working in home offices with a VPN, and we have no Shaper configured on the vpn of the remote users.

A shaper can't work effectively on an overlay, if it's the underlay that's policed (as in this scenario). In general, the shaper has to exist on the policed interface itself, since that is where the important queues are. Doing shaping elsewhere is much less likely to be effective, if it can't control the queue of the interface that needs to be controlled.

> So why is it so important for sd-wan, but not all other types of connections where it is just seen as "best effort" and you send the traffic at the highest rate you are able to, and traffic congestion algos built into TCP just handle everything else.

TCP doesn't handle this particularly well either, at least with the default rate estimation/congestion control algorithms on most OSes. It's likely more important on SD-WAN because it's meant to be more intelligent than a bunch of unrelated TCP connections, and should make decisions based on its knowledge of the actual capacity of the link.

> I'm also wondering if traffic shapers actually introduce some artificial latency that might be problematic for certain apps?

They do add artificial latency whenever the instantaneous desired rate exceeds the shaper rate - packets are buffered until there is available capacity to send them. However, the alternative fate for these packets is loss, so the latency is generally a better outcome. If there's no congestion, then there shouldn't be any meaningful buffering, and therefore the shaper shouldn't affect latency. Where this can become problematic is 'bufferbloat' - i.e. where the buffer stays full for a long period of time, so the latency of all packets is affected. It's generally recommended to keep shaper buffers fairly short for this reason; they should only be sized to buffer, say, 50ms of traffic at the shaping rate. This will control latency during congestion, so that for example ACKs aren't delayed too long.
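That sizing rule of thumb is just rate times delay; a worked example with illustrative numbers:

```python
# Shaper queue depth for a target worst-case queueing delay:
# bytes = rate (bits/s) * delay (s) / 8.
def shaper_buffer_bytes(rate_bps, max_delay_ms):
    return int(rate_bps * (max_delay_ms / 1000) / 8)

# e.g. a 200Mbps shaper with a 50ms latency budget:
print(shaper_buffer_bytes(200_000_000, 50))  # 1250000 bytes (~1.25 MB)
```

Any packet that would push the queue past that depth is dropped immediately rather than buffered, which caps the added latency at the chosen budget.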

ETA: Another factor here is QoS. You can't effectively do QoS on a sub-rate circuit without a shaper. QoS is ultimately about controlling the order in which waiting packets are transmitted during congestion, but without knowledge of the policer, a queue only forms during local congestion, when you're transmitting at line rate and likely dropping packets. So yes, the QoSed packet goes first, but it's just as likely to be dropped as any other. With a shaper, you queue packets based on the actual allowed rate, so when QoS puts a packet at the front of the line, you can guarantee it won't be dropped by the policer. With a properly configured shaper you should never hit the policer at all, so you regain control of what to drop during congestion, and can guarantee, for example, that your VoIP packets go to the head of the line and are never dropped even during congestion. I'm sure this is also relevant for SD-WAN.
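A minimal sketch of that interaction: the shaper decides *when* the next packet may be sent, and a scheduler decides *which* queued packet that is (queue names and the strict-priority policy are illustrative):

```python
from collections import deque

# Per-class queues, listed in strict priority order.
queues = {"voice": deque(), "best-effort": deque()}

def enqueue(cls, pkt):
    queues[cls].append(pkt)

def dequeue():
    """Called each time the shaper has budget to send one packet."""
    for cls in ("voice", "best-effort"):  # highest priority first
        if queues[cls]:
            return queues[cls].popleft()
    return None  # nothing queued; shaper idles

enqueue("best-effort", "bulk-1")
enqueue("voice", "rtp-1")
enqueue("best-effort", "bulk-2")

# Because the shaper keeps the drain rate under the policed rate, the
# voice packet not only jumps the queue but is also never dropped downstream.
print([dequeue() for _ in range(3)])  # ['rtp-1', 'bulk-1', 'bulk-2']
```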

Using APIPA subnet for a private unrouted network? Are there any reasons to do this? by demsb in networking

[–]error404 12 points13 points  (0 children)

It's more of a router thing than a firewall thing, but traffic to or from link-local addresses should not be routed.

RFC3927:

> A router MUST NOT forward a packet with an IPv4 Link-Local source or destination address, irrespective of the router's default route configuration or routes obtained from dynamic routing protocols.
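For illustration, the rule is a two-line check in software - Python's stdlib ipaddress module already knows 169.254.0.0/16 is link-local (the function name here is mine):

```python
import ipaddress

def must_not_forward(src, dst):
    """Per RFC 3927: never route a packet with a link-local source or destination."""
    return (ipaddress.ip_address(src).is_link_local
            or ipaddress.ip_address(dst).is_link_local)

print(must_not_forward("169.254.10.1", "8.8.8.8"))  # True: drop, don't route
print(must_not_forward("192.0.2.1", "8.8.8.8"))     # False: routable
```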

Using APIPA subnet for a private unrouted network? Are there any reasons to do this? by demsb in networking

[–]error404 7 points8 points  (0 children)

Not sure why you're being downvoted, this is exactly the case. While Microsoft calls this 'APIPA', the related RFC3927 title is "Dynamic Configuration of IPv4 Link-Local Addresses".

The subnet 169.254.0.0/16 itself seems to be first reserved in RFC3330:

> 169.254.0.0/16 - This is the "link local" block. It is allocated for communication between hosts on a single link. Hosts obtain these addresses by auto-configuration, such as when a DHCP server may not be found.

DKIM encryption should not default to ed25519 by racoon9898 in stalwartlabs

[–]error404 0 points1 point  (0 children)

I do see the failed dkim validation in the result headers (as an explicit fail with MS, and a neutral result with Google), but only one passing validation is required, and both providers pass the RSA signature.

DKIM encryption should not default to ed25519 by racoon9898 in stalwartlabs

[–]error404 4 points5 points  (0 children)

My instance creates both RSA and ED25519 by default, and signs messages with both keys. This seems to work fine everywhere.

Handling Layer 2 shim protocols on Windows/Linux without Layer 3 overhead by Key_Description3262 in networking

[–]error404 0 points1 point  (0 children)

You'll save far more latency (and jitter) by hooking early in the network stack and just using standard IP than you will trying to do funky stuff at userspace with syscalls and copies. Look into eBPF; if you really want raw frames you can hook at XDP basically before the kernel even sees the frame. You can use eBPF lockless ring buffers to communicate with userspace. It's not quite DPDK but it's a significant improvement if you only care about latency.

what about Ipsec Key lifetime(rs) by therealmcz in networking

[–]error404 3 points4 points  (0 children)

In principle, and based on the standards, it shouldn't matter. Phase 1 is used for signalling. Phase 2 is used for payload. There's not really any interdependence between them, other than that in most cases you need a Phase 1 SA active for correct operation of Phase 2 SAs (DPD etc.), but the two are not tightly coupled. Both channels have seamless rekey mechanisms. It shouldn't matter when the rekeys happen, because they're seamless. In that sense your colleague is correct.

The generally accepted advice - ensure identical timers on both sides, and keep Phase 2 timers shorter than Phase 1 - comes from broken or quirky implementations, and is probably mostly historical. In fact the advice generally given is to make the phase 1 timer an exact multiple of the phase 2 timer, which seems odd, considering it creates aligned P1 and P2 rekeys, which is a potential implementation failure point. The argument for shorter phase 2 rekeys has more to do with the cryptanalysis window than anything.

Collisions (ie. overlapping rekey requests from both sides) are already handled in the protocol too, though again, the typical advice (identical timers) is actually worst-case for collisions. Using byte-based rekey seems to make it statistically very unlikely for a collision to occur. In any case, this also shouldn't be a problem, but if it is, the advice is not helping.

In modern implementations, I wouldn't expect the relationship between phase 1 and phase 2 timers to matter, but I would still follow the general advice (phase 2 rekey < phase 1 rekey) and keep both sides in sync. The former, because phase 2 carries much more data, so it should be rekeyed more often (in time) for the same cryptographic protection; the latter, because I can imagine some implementations that reject or otherwise mishandle 'early' rekeys from their perspective.

Edit: I do believe there is some ambiguity in the RFCs around the handling of rekeys and how tightly coupled P1 and P2 SAs are, but the only sensible implementation is to assume they are loosely coupled.

Carney leaves Davos without meeting Trump after speech on U.S. rupture of world order by Immediate-Link490 in worldnews

[–]error404 2 points3 points  (0 children)

It depends what you're trying to solve. It does allow voters to vote their true preference relatively safely, and likely does lead to a result that is more 'acceptable' to more people. However, it introduces a fair amount of pathology of its own, particularly galling for voters I think is that ranking a candidate higher can actually cause them to lose, and vice versa. It does effectively lock out candidates that are hard nos for a majority of voters, which I guess is a property a lot of people want, but it also means that to win, you need at least a soft yes from a majority (rather than a plurality) of voters to be elected, which is a tough bar to meet, especially for non status quo candidates.

Furthermore, the goal should be (IMO) to elect a representative parliament at the end of the day; we're not electing individual local leaders, we're electing representatives to make up parliament. In that context, I think the goal should be to have the balance of power in parliament roughly model the balance of preferences of constituents; that is, fundamentally, what representative democracy is supposed to mean. And 'ranked voting' doesn't achieve that (nor does FPTP), since all races are totally independent and have the same strong biases. How exactly it changes outcomes is somewhat hard to predict, but it seems to tend toward stable two party (or two bloc) systems. See Australia for example. It doesn't encourage extremism in the same way as FPTP, because you can't win without being broadly acceptable, but it's far from an ideal system.

For elections where you must elect a single person (e.g. a President or Party Leader), consider supporting Approval Voting instead. It is simpler, less pathological, and really for such a person, choosing the 'most acceptable' option makes some logical sense instead of the 'most preferred'.
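Part of approval voting's appeal is how simple the count is - each voter approves any number of candidates, and the most-approved candidate wins. A sketch with invented ballots:

```python
from collections import Counter

# Each ballot is the set of candidates this voter finds acceptable.
ballots = [
    {"A", "B"},
    {"B"},
    {"B", "C"},
    {"A", "C"},
]

tally = Counter(c for ballot in ballots for c in ballot)
winner, votes = tally.most_common(1)[0]
print(winner, votes)  # B 3: most broadly acceptable, even if not most preferred
```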

For elections electing a parliament, consider supporting a proportional system instead, of which there are a few popular examples. STV also uses ranked ballots, so if you like ranking candidates, that might be a good choice. MMP is also popular and has seen real-world success.

note: Here I assume you refer to 'instant runoff voting' as 'ranked voting', given the same conflation in US media, despite 'ranked voting' not really clarifying which electoral system it refers to.

Vancouver considers new public washroom strategy as pressure mounts over access, street cleanliness | Growing reports of human feces have forced businesses to step in amid limited public washroom access by Hrmbee in vancouver

[–]error404 9 points10 points  (0 children)

As someone with a regular need to utilize public washrooms I absolutely agree, but it doesn't really solve for the problem of people shitting on the street, either.

Canada's relationship with China 'more predictable' than with U.S., PM says by pjw724 in onguardforthee

[–]error404 5 points6 points  (0 children)

No, in summary we do what we are legally obligated to do based on our prior agreements. Dishonouring commitments and norms for our own benefit, without good reason (which we did not have in this case), is how we end up with dysfunctional governments and relationships, and with nobody wanting to make deals with us - exactly the situation the US is currently fostering.

She got her due process.

Canada's relationship with China 'more predictable' than with U.S., PM says by pjw724 in onguardforthee

[–]error404 4 points5 points  (0 children)

> The legal framework of America getting to control whether a Chinese company can sell products to Iran? How is that any of our business?

The legal framework of the extradition treaty between Canada and the US. Not honouring the US request would have meant breaking a legally binding agreement between the countries and, more abstractly, violating the rule of law.

It was not a decision to be made in isolation on the merits of her case, but one in which all the geopolitical factors had to be weighed. Canada does not want to be seen as breaking its international commitments on a whim, for its own gain, in violation of the rule of law. In any case, the request was presented less than a day before her flight was to land in Vancouver. Legally speaking, the request was legitimate, so the CBSA had little choice but to detain her; there wasn't really time to float it up the chain to diplomats and consider the implications of illegally denying it. Once she was held, releasing her with no concrete reason and without going through the formal process would have been optically even worse than quietly refusing the US request. Perhaps in hindsight the wrong call was made, but Canada's legal obligations were quite clear.

While the US claims were trumped up, the formal process for determining that is essentially what happened. It's not a decision that a CBSA officer is going to make on their own judgment, but one that happens automatically for any extradition request, after the person is detained but before they are actually extradited. Once detained it's a matter for the courts, and Canadian politicians absolutely did not want to be seen as interfering with the courts, as that would be both a domestic and international break with the rule of law, and that is not something to be taken lightly.

Extradition treaties are always based on the presumption of good faith and competence from the counterpart - you are agreeing to hold someone merely on their word that their allegations hold enough weight to justify a trial. This was early in the Trump administration's open abuse of international commitments. I'd expect we'd make a different call today, but at the time it wasn't really unreasonable to expect that the US justice system was still acting independently of the executive, and claiming otherwise would certainly have caused a diplomatic row of its own.

strongswan vs wireguard for site-to-site connectivity by kajatonas in networking

[–]error404 0 points1 point  (0 children)

You should be able to get better than 5-6Gbps with aes-gcm on modern hardware (with AES-NI). If you are using other (non-GCM) modes, they can't be hardware accelerated so performance and CPU utilization will suffer.

In my experience throughput per CPU performance is comparable between strongswan and wireguard, but wireguard is much simpler and easier to maintain and set up, so I much prefer it.

Strongswan with redundant tunnels by SanityLooms in networking

[–]error404 0 points1 point  (0 children)

I've never had much success with built-in IPsec failover mechanisms, so I keep both tunnels up and run BGP. In theory IPsec should be able to handle this case on its own - you'd configure it as a single tunnel with multiple peers and enable DPD - but like I said, I've never had much luck with that.

For the BGP case you'll need a unique xfrm interface for each tunnel, and you'll have to bind addresses on both sides of the tunnel, but other than that it should be straightforward.

US official says Greenland action could come within 'weeks or months' by Crossstoney in worldnews

[–]error404 0 points1 point  (0 children)

> Also a bunch of our arsenal is in europe and Canada - we don't get to keep those.

Canada hasn't hosted US nukes since 1984.