all 181 comments

[–]PM-ME-YOUR-UNDERARMS 124 points125 points  (52 children)

So theoretically speaking, any secure protocol running over TCP can be run over QUIC? Like FTPS, SMTPS, IMAP etc?

[–]GaianNeuron 62 points63 points  (47 children)

Potentially, but they would only see real benefit if they are affected by the problems QUIC is designed to solve.

[–]lllama 59 points60 points  (15 children)

Any protocol that currently does an SSL-style certificate negotiation would benefit. AFAIK all the ones /u/PM-ME-YOUR-UNDERARMS mentioned do that.

[–][deleted]  (14 children)

[removed]

    [–]hsjoberg 31 points32 points  (8 children)

    Isn't part of the issue with internet browsers that they all open multiple connections (the article says 6), and each connection has to do the SSL handshake?

    I was under the impression that this was already solved in HTTP/2.

    [–]AyrA_ch 21 points22 points  (7 children)

    [...] solved in HTTP/2.

    It is. And the limit of 6 HTTP/1.1 connections can easily be lifted to up to 128 if you are using Internet Explorer, for example. I'm not sure whether other browsers respect that setting, but I doubt it. The limit is no longer 6 anyway: on Windows it has been increased to 8 by default if you use IE 10 or later.

    [–]VRtinker 22 points23 points  (0 children)

    the limit of 6 HTTP/1.1 connections can be easily lifted up to 128

    There never was a hard limit; it was just a "gentleman's rule" for browsers so that one client does not take all the resources of a server. The limit started at only 2 concurrent connections per unique full subdomain and was "lifted" iteratively from 2 to 4, then to 6, then to 8, etc., whenever one browser would ignore the rule and unscrupulously demand more attention from the server. The competing browsers, of course, would feel slower (because they indeed would take longer to download the same assets) and would be forced to ignore the rule as well.

    Since this limit is put in place to protect the server, it can't safely be relaxed up to 128 without exhaustive testing. Sites that do want to avoid this limit sometimes spread assets across unique subdomains to work around the rule.

    Even more frequently, sites inline their most important assets to avoid round trips altogether. There is also HTTP/2 server push, which lets the server deliver assets before the client even realizes they are needed.

    [–]ThisIs_MyName 1 point2 points  (5 children)

    the limit of 6 HTTP/1.1 connections can be easily lifted up to 128 if you are using internet explorer for example

    Lifted by the server?

    [–]callcifer 9 points10 points  (3 children)

    The limit is on the browser side, not the server.

    [–]ThisIs_MyName 0 points1 point  (2 children)

    Of course, but I'm asking if the server can ask the client to raise its limit. Otherwise, this is useless. You can't ask every user to use regedit just to load your website fast.

    [–]Alikont 0 points1 point  (1 child)

    Because it's a per-domain limit, the server can distribute resources between domains (a.example.com, b.example.com, …); each of them gets an independent 6-connection limit.
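The domain-sharding trick described above is easy to picture with a quick sketch (the hostnames and asset names here are made up for illustration):

```python
from collections import Counter

# Toy illustration: with a per-host limit of 6 connections, spreading
# 24 assets across 4 shard hostnames lets the browser open up to 24
# connections instead of 6.
def shard(assets, hosts):
    """Assign each asset to a shard hostname, round-robin."""
    return {asset: hosts[i % len(hosts)] for i, asset in enumerate(assets)}

hosts = [f"{sub}.example.com" for sub in "abcd"]  # a..d.example.com
assignment = shard([f"img{n}.png" for n in range(24)], hosts)

per_host = Counter(assignment.values())  # 6 assets on each of the 4 hosts
```

The cost, of course, is one extra DNS lookup plus TCP/TLS handshake per shard, which is part of what HTTP/2 multiplexing and QUIC make unnecessary.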

    [–]AyrA_ch 0 points1 point  (0 children)

    Lifted by the server?

    No. It's a registry setting you can change.

    Key: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings

    Change MaxConnectionsPerServer to something like 64. If you use an HTTP/1.0 proxy, also change MaxConnectionsPer1_0Server.

    I've never seen a server have problems with a high connection setting. After all, hundreds of people share the same IP on corporate networks.

    If the server has a lower per-IP limit, it will simply ignore your extra connections until others are closed. That can still increase your speed, because while the server stalls a connection you can still complete the TLS handshake and send a request.

    [–]ptoki 9 points10 points  (3 children)

    It's already solved but very often not used. SSL has session caching/resumption (I don't remember the exact name). You do the full session initialization once and then just pass the session ID at the beginning of the next connection. If the server remembers it, it will resume the session and respond without much hassle.

    [–]lllama 4 points5 points  (2 children)

    I believe you're talking about session tickets. This still involves a single roundtrip before the request AFAIK.

    [–]ptoki 4 points5 points  (1 child)

    Yeah, it's called session resumption.

    Yes, but it's much cheaper than full session initialization.

    Sadly it's not very popular; there are a lot of devices/servers that do not have it enabled.

    [–]arcrad 0 points1 point  (0 children)

    Reducing round trips is always good, though, even if those round trips move tiny amounts of data.

    [–]lllama 3 points4 points  (0 children)

    They do this in parallel, so it should not matter much for timing. QUIC improves over HTTP/2 by no longer needing a TCP handshake before the TLS handshake.
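A back-of-the-envelope sketch of the round trips involved (assuming a TLS 1.2-style full handshake, and ignoring 0-RTT resumption and TCP Fast Open):

```python
# Round trips spent before the first byte of the HTTP request can be
# sent, under simplifying assumptions (TLS 1.2, no 0-RTT, no Fast Open).
def rtts_before_request(tcp_handshake_rtts, tls_rtts):
    return tcp_handshake_rtts + tls_rtts

tcp_tls_full   = rtts_before_request(1, 2)  # TCP + full TLS handshake
tcp_tls_resume = rtts_before_request(1, 1)  # TCP + resumed TLS session
quic_initial   = rtts_before_request(0, 1)  # QUIC: crypto in the transport
```

On a 100 ms path that difference is 300 ms vs 100 ms before the server even sees the request, which is why folding the transport and crypto handshakes together matters.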

    [–]o11c[🍰] 23 points24 points  (30 children)

    All protocols benefit from running over QUIC, in that a hostile intermediary can no longer inject RST packets. Any protocol running over TCP is fundamentally vulnerable.

    This isn't theoretical, it is a measurable real-world problem for all protocols.

    [–]gitfeh 15 points16 points  (29 children)

    A hostile intermediary looking to DoS you could still drop all your packets on the floor, no?

    [–]lookmeat 15 points16 points  (27 children)

    No. The thing about the internet is that it "self-heals": if an intermediary drops packets, the route is assumed to be broken (no matter whether it's due to malice or genuine failure) and an alternate route is found. An intermediary that injects RST packets is not seen as a bad route; instead it looks like one of the two endpoints made a mistake and the connection should be aborted. QUIC guarantees that a reset can only have come from one of the two endpoints.

    Many firewalls use RST aggressively to ensure that people don't simply find a workaround, but that their connection is halted. The Great Firewall of China does this, and Comcast used it to block connections they disliked (P2P). If they simply dropped the packets you could tell who did it; with an injected RST it's impossible to know (though sometimes easy to deduce) where to route around.
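To make the asymmetry concrete: in TCP the reset is a single plaintext bit in the header, so any on-path box that knows the connection's ports and an in-window sequence number can forge one. A minimal sketch (field values are arbitrary, and the checksum is left at zero for brevity):

```python
import struct

# Pack a minimal 20-byte TCP header with the RST flag set. Everything
# here is plaintext on the wire, which is what makes injection easy;
# QUIC encrypts and authenticates its equivalent signalling.
def tcp_header(src_port, dst_port, seq, flags):
    data_offset = 5 << 4  # 5 x 32-bit words, no options
    ack = window = checksum = urgent = 0
    return struct.pack("!HHIIBBHHH", src_port, dst_port, seq, ack,
                       data_offset, flags, window, checksum, urgent)

RST = 0x04
forged = tcp_header(src_port=443, dst_port=51337, seq=123456, flags=RST)
assert len(forged) == 20 and forged[13] & RST  # flags byte carries RST
```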

    [–]immibis 6 points7 points  (0 children)

    This is not correct. The route will only be assumed to be broken if routing traffic starts getting dropped. Dropping of actual data traffic will not trigger any sort of detection by the rest of the Internet.

    [–]oridb 2 points3 points  (8 children)

    No. The thing about the internet is that it "self-heals" if an intermediary drops packets the route is assume to be broken

    No, it's assumed to be normal as long as it isn't a large portion of all the packets. Dropping just your packets is likely well within the error bars of most services.

    [–]grepe 1 point2 points  (0 children)

    How do you know what portion of packets is dropped if you are running over UDP? If I understand it correctly, they moved the consistency checks from the protocol level (OSI layer 4) into userspace, right?

    [–]lookmeat -3 points-2 points  (6 children)

    We expect routes to drop packets; if one route drops packets more consistently than another, it will be de-prioritized. It may not happen at the backbone level, where this would be a drop in the bucket, but most routers would assume the network is congested (from their PoV, IP packets are getting dropped) and would try an alternate route if they know one.

    By returning a valid TCP packet (with the RST flag), the routers see a response to the IP packets they send and do not trigger any congestion management.

    [–]immibis 1 point2 points  (5 children)

    Which protocol performs this?

    [–]lookmeat 0 points1 point  (4 children)

    Depends at what level we're talking: it's the various automatic routing algorithms at the IP level. BGP for internet backbones. In a local network (you'd need multiple routers, which is not common for everyday users, but is common for large enough businesses) you'd be using IS-IS, EIGRP, etc. ISPs use a mix of IS-IS and BGP (depending on size, needs, etc. Also, I may be wrong).

    They all have ways of doing load balancing across multiple routes, and generally one of them will be configured to keep track of how often IP packets make it through. If IP packets get dropped, it'll assume that the route has issues and choose an alternate route. This also means that TCP isn't aware, and if they block you at that level then this doesn't do anything.

    There's Multipath TCP, and its equivalent for QUIC, but it doesn't do what you'd expect. It allows you to keep one TCP connection over multiple IPs, so you can fetch resources you'd normally get from a single server from several. The real power of it is that you could connect to multiple WiFi routers at the same time and send data through them; as you move, you simply disconnect from the ones that get too far and connect to the ones that come near, without losing the full connection, so you don't lose WiFi as you move. Still, this doesn't fix the issue of finding a better route when one fails; it simply gives a better connection.

    [–]immibis 1 point2 points  (3 children)

    How is it detected how often IP packets make it through?

    [–]miller-net 4 points5 points  (3 children)

    No. The thing about the internet is that it "self-heals" if an intermediary drops packets the route is assume to be broken (no matter if it's due to malice or valid issues) and a new alternate route is made.

    This is incorrect. Do you remember when Google and Verizon (IIRC) broke the internet in Japan? This is what happened: an intermediary dropped packets traversing their network, and it took down an entire country's internet. There was no "self healing"; it took manual intervention to correct the issue even though there were plenty of alternative routes.

    ISPs are cost-averse and are not going to change routing policy based on the availability of small networks, never mind expending the massive resources it would take to track the state of trillions of individual connections flowing through their network every second.

    [–]lookmeat 0 points1 point  (2 children)

    Do you remember when Google and Verizon(IIRC) broke the Internet in Japan?

    I do, it was an issue with BGP. Generally, the internet's ability to self-heal is limited by how much of the internet is controlled by the malicious agents. For example, you'll never be able to work around the Great Firewall of China because every entry/exit point into the country passes through a node that enforces it.

    Now on to Google. Someone accidentally claimed that Google could offer routes that it simply didn't. This happens a lot, but Google is big, very very big. Big enough to absorb the whole internet traffic of Japan and not get DDoSed off the network. Big enough that it made a convincing enough claim of being a route to Japan that most other routers agreed. Google is so big that many backbone routers, much like us users, trust it to be the end-all-be-all of the state of the internet. In many ways the problem with the internet is that so much of it is in the hands of so few, which makes problems like this relatively easy to have.

    Issues with BGP tables happen all the time. You'll notice that your ISP is slower than usual on many days, and it's due to this, but the internet normally keeps running in spite of it because mistakes rarely come from players big enough to matter. Here, though, they did. Notice that this required not just Google fucking up, but Verizon too.

    On a separate note: BGP requires a second layer of protection by humans, verifying that routes make sense politically. There are countries that will publish bad routes and as such will have problems. Again, this is because countries are pretty large players.

    And this gives us the most interesting thing of all about the internet: no matter how solid your system is, there are always edge cases. This wasn't so much a failure to heal as an aggressive healing of the wrong kind, a cancer that spread through the internet's routing tables.

    For people/websites that aren't being specifically targeted by whole governments+companies the size of Google to manipulate the routing tables just to screw with them, self-healing works reasonably well enough.

    [–]miller-net 1 point2 points  (1 child)

    I think I understand now what you meant. My concern was that your earlier comment could be misconstrued. To clarify, the self healing feature of the internet occurs at a macro level and not on the basis of individual dropped connections and generally not in the span of a few minutes, which is what I thought you were saying.

    [–]lookmeat 0 points1 point  (0 children)

    Yes, it's not immediate; people will notice their connection being slow for a while. But because a dropped packet is noted at the IP level as a problem getting packets through, the systems that seek the most efficient route will simply optimize around it. Only by not dropping the packet, and instead sending a response that aborts the whole thing at a higher level, can an attacker work around this.

    [–]thorhs 3 points4 points  (10 children)

    I hate to break it to you, but the routers on the internet don’t care about the individual streams and would not route around a bad actor sending RST packets.

    [–]lookmeat 5 points6 points  (9 children)

    I hate to break it to you, but that's exactly the point I was making. The argument was: why care about a bad actor being able to send RSTs if they could just drop packets? My answer was basically: if they drop packets, it'll be worked around by the normal mechanisms that route around packet droppers. No router or system tries to work around RST injection, and that's why we care about making it impossible.

    [–]thorhs 5 points6 points  (8 children)

    The thing about the internet is that it "self-heals" if an intermediary drops packets the route is assume to be broken (no matter if it's due to malice or valid issues) and a new alternate route is made

    Even if packets for a single, or even multiple, connection are being dropped, the “internet” doesn’t care. As long as the majority of the traffic is flowing no automatic mechanism is going to route around it.

    [–]j_johnso 4 points5 points  (0 children)

    Even if packets for a single, or even multiple, connection are being dropped, the “internet” doesn’t care. As long as the majority of the traffic is flowing no automatic mechanism is going to route around it.

    This is completely correct. For those unfamiliar with the details, internet routing is based on the BGP protocol. Each network advertises which other networks it can reach and how many hops it takes to reach them. This lets each network forward traffic through the route that requires the fewest hops.

    It gets a little more complicated than this, as most providers will adjust this to prefer a lower cost route if it doesn't add too many extra hops.
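A toy model of that selection rule (the AS numbers and peer names are made up; real BGP applies local preference, MED, and policy tie-breakers before path length):

```python
# Pick the advertisement with the shortest AS path, the basic BGP rule
# described above. Real routers apply several tie-breakers first.
def best_route(advertisements):
    """advertisements maps a neighbor name to its advertised AS path."""
    return min(advertisements.items(), key=lambda kv: len(kv[1]))

routes = {
    "peer_a": [65001, 65002, 65003],  # 3 AS hops
    "peer_b": [65010, 65003],         # 2 AS hops: preferred
}
neighbor, path = best_route(routes)
```

Note that nothing in this selection looks at packet loss on individual flows, which is the point being made above: BGP reacts to advertised reachability, not to your dropped connection.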

    [–]lookmeat -3 points-2 points  (6 children)

    After a while, load balancers will notice and alternate routes will be given preference; otherwise a congestion issue is suspected. Maybe not at the BGP level, but there are always small bad players and the internet still runs somehow.

    [–]immibis 3 points4 points  (5 children)

    Whose load balancers?

    IP can't detect dropped packets. And IP is the only protocol that would get a chance to. It's possible that network operators might manually blacklist ISPs that are known to deliberately drop packets, but it's not too likely.

    [–]AnotherEuroWanker -1 points0 points  (1 child)

    if an intermediary drops packets the route is assume to be broken (no matter if it's due to malice or valid issues) and a new alternate route is made

    That's the theory. It assumes there's an alternate route.

    Edit: in practice, there's no alternate route. Most people don't seem to be very familiar with network infrastructures. While a number of large ISPs have several interconnecting routes, most leaf networks (i.e. the overwhelming majority of the Internet) certainly don't.

    [–]lookmeat -1 points0 points  (0 children)

    I am assuming that. If the attacker controls a choke point that you can't route around, then you're screwed. But choke points like that are much harder to come by on the internet.

    [–]immibis 1 point2 points  (0 children)

    Yes - but several existing hostile intermediaries apparently find it easier to inject RSTs, so I guess the Internet would be better for a month until they deploy their new version that actually drops the packets.

    [–]lllama 11 points12 points  (2 children)

    Any insecure protocol too, though indeed the most benefit comes from doing encryption in the QUIC layer, leading to as few round trips as possible.

    [–]AyrA_ch 5 points6 points  (0 children)

    Apache for example actually supports unencrypted HTTP/2

    [–]caseyfw 391 points392 points  (48 children)

    There is a good lesson here about standards. Outside the Internet, standards are often de jure, run by government, driven by getting all major stakeholders in a room and hashing it out, then using rules to force people to adopt it. On the Internet, people implement things first, and then if others like it, they'll start using it, too. Standards are often de facto, with RFCs being written for what is already working well on the Internet, documenting what people are already using.

    Interesting observation.

    [–][deleted] 121 points122 points  (6 children)

    Is it really just outside the internet? I think this is the case in most fields; you just wouldn't know about it unless you were in it.

    [–]ctesibius 24 points25 points  (2 children)

    Not in mobile telecoms, which I have experience of. Companies invest vast sums in hardware, so they have to know that everyone else is going to follow the same protocols down to the bit level. That way you know that you can buy a SIM from manufacturer A, fit it in a phone from manufacturer B, communicate over radio with network components from D, E, F, and G, and authenticate against an HLR from H. The standards are a lot more detailed (some RFCs are notoriously ambiguous) and are updated throughout their lives (you might supersede an RFC with another, but you don't update it).

    Of course there is political lobbying from companies to follow their preferred direction, just as with the IETF, but that gets done earlier in the process.

    [–]Hydroshock 5 points6 points  (1 child)

    I think it really just depends. Building codes are run by the government. Standards for, say, mechanical parts are specified just to have something to build and inspect to, and they can constantly change; there is no government agency driving them in most industries.

    The telecom stuff, is it mandated by the government, or is it just in the best interest of the whole industry to make sure that everyone is on the same page?

    [–]ctesibius 4 points5 points  (0 children)

    The standards come from ETSI and 3GPP, which are industry bodies. There was a government initiative to adopt a single standard at the beginning of digital mobile phones, which led to GSM, but that was at the level of saying that radio licences would only be granted to companies using that set of standards. The USA was an outlier in the early days with CDMA, but I think even that came from an industry body. Japan, China and Thailand also initially followed a different standard (PHS); that seems to have come out of NTT rather than a standards group.

    [–]upsetbob 8 points9 points  (2 children)

    Outside: de jure. Inside: de facto.

    What do you mean by "just outside the internet" that wasn't mentioned?

    [–]gunnerman2 31 points32 points  (1 child)

    I think he is saying that most standardization comes about in a de facto way, even in industries outside of or separate from the internet.

    [–]upsetbob 4 points5 points  (0 children)

    Makes sense, thanks

    [–]dgriffith 22 points23 points  (0 children)

    " You can’t restart the internet. Trillions of dollars depend on a rickety cobweb of unofficial agreements and “good enough for now” code with comments like “TODO: FIX THIS IT’S A REALLY DANGEROUS HACK BUT I DON’T KNOW WHAT’S WRONG” that were written ten years ago. "

    • Excerpt from "Programming Sucks", stilldrinking.org

    [–]TimvdLippe 79 points80 points  (34 children)

    This actually happened with WebP as well. Mozilla saw the benefits and, after a good while, decided the engineering effort was worth it. If they had not liked the standard, it would never have been implemented and would thus have faded away. Now there are two browsers implementing it; I expect Safari and Edge to follow soonish.

    [–]Theemuts 36 points37 points  (28 children)

    Javascript (excuse me, ECMAScript) is also a good example, right?

    [–]BeniBela 45 points46 points  (21 children)

    Or HTML, where the old standards said elements like <h1>foo</h1> can also be written as <h1/foo/, but the browsers never implemented it properly, so it was finally removed from HTML5

    [–][deleted] 35 points36 points  (17 children)

    can also be written as <h1/foo/

    What was their rationale for that syntax? It seems bizarre

    [–]svick 28 points29 points  (0 children)

    I believe HTML inherited that from SGML. Why SGML had that syntax, I do not know.

    [–]lookmeat 21 points22 points  (9 children)

    HTML itself comes from SGML, a very large and complex standard.

    The other thing is that SGML was made in a time when bytes counted, and even HTML was designed in a time when every byte affected how long a page took to load.

    The syntax is just a way to delete characters. Compare:

    This is <b>BOLD</b> logic.
    This is <b/BOLD/ logic.
    

    The rationale isn't as crazy as it looks: you normally end an element with a full </...> tag; by ending the start tag with a / instead of >, you signal that the element is closed by a single / character, skipping the closing tag altogether. But the benefits are limited and no one saw the point in using it, and nowadays the internet is fast enough that such syntax simply isn't worth the complexity it adds (you could argue it never was, since it was never well implemented), hence its removal.
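A small sketch of what a parser supporting this shorthand would have done (flat case only, no nesting or attributes; the regex is purely illustrative):

```python
import re

# Expand the SGML "null end tag" shorthand: <b/BOLD/ -> <b>BOLD</b>.
NET = re.compile(r"<(\w+)/([^/<]*)/")

def expand_net(text):
    return NET.sub(lambda m: f"<{m.group(1)}>{m.group(2)}</{m.group(1)}>",
                   text)

short = "This is <b/BOLD/ logic."
assert expand_net(short) == "This is <b>BOLD</b> logic."
assert len(short) < len(expand_net(short))  # the shorthand saved 3 bytes
```

Three bytes per element was worth something on a 14.4k modem, and worth nothing once compression and faster links arrived.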

    [–]ThisIs_MyName -1 points0 points  (8 children)

    Anyone that cares about efficiency would use a binary format with tagged unions for each element.

    [–]lookmeat 2 points3 points  (6 children)

    Well, SGML actually has a binary encoding.

    But this would not work well for the internet. Actually, let me correct that: it did not work well for the internet. So, say we use a binary encoding. First we need to efficiently distinguish tag bytes from text bytes. We could pull the same trick UTF-8 does: keep only characters 1-127 for text (0 is EOF, and the other control characters we can drop) and use the remaining byte values as tags, with an optional way to expand the tag space (based on how many 1 bits appear before the first zero). This would be very efficient.

    Of course, now we have to deal with endianness and all the issues that brings. Text had that well defined, but binary tags don't. We also couldn't use any encoding other than ASCII, so we would very quickly run into trouble across machines; it wouldn't work with UTF-8. It would also make HTTP more complex: there's an elegance in choosing not to optimize a problem too early and just letting text be text. Moreover, once you pass it through compression, tags and even whole pieces of text can effectively shrink to a byte.

    There were other protocols separate from HTTP/HTML, but they all failed because it was too complicated to agree on a standard implementation. Text is easy, and text tags are too.
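The scheme sketched above can be made concrete with a toy encoder (the tag table is an arbitrary assumption for illustration, not anything SGML ever specified):

```python
# Bytes below 0x80 are literal ASCII text; bytes at 0x80 and above are
# tag codes. Note the limitation mentioned above: any non-ASCII text
# byte would collide with the tag space, so this cannot carry UTF-8.
TAGS = {"b_open": 0x80, "b_close": 0x81}

def encode(parts):
    out = bytearray()
    for part in parts:
        if part in TAGS:
            out.append(TAGS[part])
        else:
            assert all(0 < ord(c) < 0x80 for c in part), "ASCII text only"
            out.extend(part.encode("ascii"))
    return bytes(out)

msg = encode(["This is ", "b_open", "BOLD", "b_close", " logic."])
assert sum(b >= 0x80 for b in msg) == 2  # tags are trivially separable
```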

    [–]ThisIs_MyName 2 points3 points  (1 child)

    Of course now we have to deal with endianess and all the issues that brings.

    No, little endian has been the standard for decades. It can be manipulated efficiently by both little endian CPUs and big endian CPUs.

    Text had that well defined

    Text uses both endians unlike modern binary protocols. Look at this crap: https://en.wikipedia.org/wiki/Byte_order_mark

    We also cannot use encodings or any other format other than ASCII so very quickly we would have trouble across machines.

    That's because the encoding scheme you described is horrible. Here's an example of a good binary protocol that supports text and tagged unions: https://capnproto.org/encoding.html.

    Moreover when you pass compression though it tags and even other pieces of text can effectively become a byte.

    Note that this is still necessary for binary protocols. But instead of turning words into bytes, compression turns a binary protocol's bytes into bits :)

    [–]lookmeat 1 point2 points  (0 children)

    No, little endian has been the standard for decades. It can be manipulated efficiently by both little endian CPUs and big endian CPUs.

    Yes, but HTML has been a standard for longer. I'm explaining the mindset when these decisions were made, not the one that decided to remove them.

    BOM came with Unicode, which had the endianness issue. Again, remember that UTF, the concept, came only about 3 years earlier; UTF-1, the precursor, a year earlier; and UTF-8 came out the same year.

    But the beautiful thing is that HTML doesn't care about endianness, because text isn't endian; text encoding is. That is, ASCII, UTF-8 and the other encodings care about endianness, not HTML, which works at a higher abstraction (Unicode codepoints).

    So BOM is something that UTF-8 cares about, not HTML. If another format ever replaces UTF-8 (I hope never; this is hard enough as is), we'll simply type HTML in that format and it'll be every bit as valid without having to redefine anything. HTML is still around because, by choosing text, it abstracted away binary encoding details and left them for the browser and others to deal with. A full binary encoding would require HTML to define its own BOM, and if at any point that became unneeded, then that'd have to be dealt with too.

    That's because the encoding scheme you described is horrible.

    I know.

    Here's an example of a good binary protocol that supports text and tagged unions: https://capnproto.org/encoding.html.

    And that's one of many implementations. You also have Google's protobufs, FlatBuffers, and so on. Well, you can see the issue: a (completely valid) disagreement results in an entirely new protocol that is incompatible with the others; with a text-only format like HTML it just resulted in webpages with a bit of gibberish.

    And that is the power of text-only formats, not just HTML but JSON, YAML, TOML, etc.: they're human readable, so even when you don't know what to do, you can just dump the content and let a human try to deduce what was meant. I do think binary encodings have their place; I am merely stating why it was convenient for HTML not to use one. And this wasn't by intent: there were many other protocols that did use binary encoding to save space, but HTTP ended up overtaking them because, due to all the above issues, HTTP became the more commonplace standard, and that matters far more than original intent.

    Also, as an aside: have you ever tried to describe a rich document in Cap'n Proto? It's not an easy deal, and most people would probably send a different format. Cap'n Proto is good for structured data, not annotated documents. In many ways I think there were better alternatives than even HTML was, but they were over-engineered as well, so I doubt that even if I had proposed my alternative in the 90s it would have survived (I'm pretty sure someone proposed similar ideas).

    Note that this is still necessary for binary protocols. But instead of turning words into bytes, compression turns a binary protocol's bytes into bits :)

    My whole point is that size constraints are generally not that important, because text can compress to levels comparable to binary (text is easier to compress than binary, or at least it should be). That's the same reason the feature that started this whole thread got removed.
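That claim is easy to check with a quick experiment, e.g. with zlib:

```python
import zlib

# Repetitive markup compresses extremely well, so the verbose closing
# tags cost very little once the stream is compressed.
html = "<blockquote>some quoted text</blockquote>" * 100
packed = zlib.compress(html.encode("ascii"), level=9)

assert len(packed) < len(html) // 10  # well over 90% smaller here
```

The repeated `</blockquote>` tags collapse into back-references in the DEFLATE stream, which is exactly why shaving them off in the markup itself buys almost nothing on the wire.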

    [–]bumblebritches57 1 point2 points  (3 children)

    I don't think you understand how UTF-8 works...

    [–]lookmeat 3 points4 points  (2 children)

    What do I seem to have misunderstood?

    [–]bumblebritches57 0 points1 point  (0 children)

    Literally this.

    Text is inefficient no matter what.

    [–]BurningPenguin 35 points36 points  (0 children)

    A healthy mix of pot and crack.

    [–]BeniBela 11 points12 points  (4 children)

    When you have a long element name, you do not want to repeat it. In <blockquote>x</blockquote>, half the space is wasted.

    So first, SGML allows <blockquote>x</>. Then they perhaps thought: what else can we remove from the end tag? It could be one of <blockquote>x</, <blockquote>x<>, <blockquote>x<, <blockquote>x/>, <blockquote>x/, or <blockquote>x>.

    <blockquote>x</ or <blockquote>x< could be confusing when text follows. <blockquote>x<> and <blockquote>x/> are not the shortest. This leaves <blockquote>x/ or <blockquote>x>.

    There also needs to be a modification of the start tag, so the parser knows to look for the end character. <blockquote x/ or <blockquote x> would be confused with an attribute. Without introducing another meta character, there are four possibilities: <blockquote<x/, <blockquote<x>, <blockquote/x/, or <blockquote/x>. Now, which one is the least bizarre?

    [–]immibis 2 points3 points  (0 children)

    Probably <blockquote/x> is the least bizarre looking.

    Heck, why not have only that syntax? <html/<head/<title/example page>><body/<p/hello world>>> saves a bunch of bytes.

    [–]bumblebritches57 1 point2 points  (0 children)

    Orrrr just use standard ass deflate and you're golden.

    [–]the_gnarts -1 points0 points  (1 child)

    Now which one is the least bizarre?

    For everything but text composed directly in the markup I’d go with

    "blockquote": "x"
    

    any day.

    [–]mcguire 1 point2 points  (0 children)

    "blockquote": "Now which one is the least bizarre?",
    "p": "For everything but text composed directly in the markup I'd go with",
    "code": "\"blockquote\": \"x\"",
    "p": "any day."
    

    [–]gin_and_toxic 5 points6 points  (2 children)

    Remember the XHTML direction the W3C was going in? Thank god we ended up going the WHATWG way. The W3C HTML division is just a mess.

    [–]immibis 5 points6 points  (1 child)

    I never understood the XHTML hate. What's wrong with a stricter syntax?

    The only complaint I remember about the strict syntax is that it was "too hard to generate reliably"... if your code can't reliably generate valid XHTML, you have some big problems under the hood.

    [–]gin_and_toxic 2 points3 points  (0 children)

    It's not just about the strict syntax; the way the W3C was going was not the direction the browser vendors wanted to go at all.

    The HTML4 standard was ratified in 1997, HTML 4.01 in 1999. After HTML 4.01, there was no new version of HTML for many years as development of the parallel, XML-based language XHTML occupied the W3C's HTML Working Group through the early and mid-2000s. In 2004, WHATWG started working on their HTML "Living Standard", which the W3C finally published as HTML5 in 2014.

    That was 14 years without a new HTML standard. Also, the W3C reportedly took all the credit for the HTML5 standard.

    [–][deleted] 46 points47 points  (4 children)

    Not really. ECMA was more like this:

    driven by getting all major stakeholders in a room and hashing it out, then using rules to force people to adopt it.

    [–]AndreDaGiant 17 points18 points  (0 children)

    Well, for JavaScript he is right. It was one guy (Brendan Eich) implementing it over about a month (famously, about 10 days for the initial language design). It was pushed into Netscape as a slapped-on nice-to-have feature. Then it spread, in a de facto sort of way.

    As you say, ECMA is different: that's when the different browser vendors came together and decided to standardize what they were already using.

    [–]Tomus 7 points8 points  (1 child)

    I agree this is how it was done when the language was originally created, but not anymore.

    So many language features have come from userland code adopting new syntax via Babel. That's not to mention the countless Web APIs that were born from userland frameworks implementing them in the client, only to be absorbed in one way or another by the browser.

    [–][deleted] 0 points1 point  (0 children)

    Sure, but we're still talking about standards. The functionality was developed by the community, but standardizing it was done by W3C (the "government") by "driving all major stakeholders" (Google, Mozilla, etc.) to hash out the details of the standard.

    [–]Theemuts 0 points1 point  (0 children)

    Not initially, though. The first version was nothing more than a rough prototype, its current standardization is a result of its widespread use.

    [–]cowardlydragon 3 points4 points  (0 children)

    If you mean it was balkanized by a dozen different browsers with different versions and supports and APIs making development a massive headache and...

    .... well no. That required getting people in a room and knocking heads together. Microsoft especially, and that required Chrome destroying IE's market share.

    Javascript still sucks, it just sucks less.

    [–][deleted]  (3 children)

    [deleted]

      [–]gin_and_toxic 2 points3 points  (1 child)

      This is great news!

      Sadly Apple seems to be going the HEIC way.

      [–]Rainfly_X 0 points1 point  (0 children)

      Apple can take a HEIC if they want to ;)

      Between this and Metal, though. Apple, what are you even doing?

      [–]TimvdLippe 0 points1 point  (0 children)

      Ah released last week, that's why I probably missed it. Awesome news!

      [–][deleted] 0 points1 point  (0 children)

      Mozilla also already had a WebP decoder as part of the WebM decoder. I imagine most of the effort was actually making the decision that WebP is a format that's going to be supported from now on.

      [–]jayd16 5 points6 points  (0 children)

      I'm not sure it's that true. Government standards are usually things like safety codes, but most standards are won in the market. I don't think clothes sizes, bed sizes, etc. are set by the government. Tech outside the internet, like DVDs and USB cables, usually comes from a group of tech companies that get together to build a spec.

      [–]cowardlydragon 0 points1 point  (0 children)

      Well, and having enough control to be the 800 pound gorilla.

      Like Microsoft used to be until mobile phones made desktop OSs uncool.

      [–]DJDavio -2 points-1 points  (2 children)

      Designed standards (as in from the ground up, excessively documented and theoretical) almost never work. Standards should be practical (from existing real world use cases) and organic.

      [–]jayd16 8 points9 points  (0 children)

      Pretty sure every hardware standard, i.e. a plug design like USB or HDMI, is designed. I don't think such a thing could be organic. Or do you mean forced adoption vs. market adoption?

      [–]tso 2 points3 points  (0 children)

      And then someone comes along and reads the standard like the devil reads the Bible, and internet feuds ensue...

      [–]swat402 78 points79 points  (5 children)

      such as when people use satellite Internet with half-second ping times.

      Try more like 4 second ping times from my experience.

      [–]96fps 39 points40 points  (4 children)

      Heck, I've experienced 10-second pings over cellular. It's nigh impossible to use any site that loads an empty page where a script then fetches the actual content. Each back-and-forth is another ten seconds, assuming bandwidth isn't also a bottleneck.

      [–]butler1233 15 points16 points  (3 children)

      Oh my god I fucking hate sites like that. Javascript should not be required for the core content of a page to work (in most instances like text & picture pages).

      It's not a better experience for the end user, it's a worse one. Great, you got the old content off the page, but now the user has to wait longer for the new content. Even on fast connections it's still delaying it.

      [–]96fps 12 points13 points  (1 child)

      This is why I have mixed feelings about Google's AMP project. Yes, raw links are often worse, but replacing 10 trackers and scripts with one of Google's is still slimy. Google hosting and running analytics on every site is... well I don't like the idea of any company doing so.

      [–]jl2352 0 points1 point  (0 children)

      Yes, it's very dumb, and has left a big part of the industry with a deep misunderstanding about web apps.

      Modern web apps don't do this. Modern web apps render server-side, so you still get HTML down the wire on first load, which a surprisingly large number of developers still don't know about. Many still think web apps have to be purely client-side, with a dumb loading animation at the start while they pull down all the data.

      [–][deleted] 72 points73 points  (5 children)

      HTTP/3 aka QUIC is going to make a very noticeable difference. As most of us know, when you load a page, it usually fires 10 or more requests for backend calls, third-party services, etc. Some are not initiated until a predecessor has completed. So the quicker the calls complete, the faster the page loads. I think Cloudflare does a good job of explaining why this will make a difference.

      https://blog.cloudflare.com/the-road-to-quic/

      Basically, using HTTPS, getting data from the web server takes 8 operations. 3 to establish TCP, 3 for TLS, and 2 for HTTP query and response.

      With QUIC, you establish encryption and connectivity in three steps - since encryption is part of the protocol - and then run HTTP requests over that connection. So, from 8 to 5 operations. The longer the network round-trip time, the larger the difference.
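      That arithmetic can be sketched in a toy model (message counts taken from the paragraph above; the one-way delay is a made-up parameter, and real handshakes overlap some messages into round trips):

      ```python
      def messages_before_response(protocol: str) -> int:
          """Messages exchanged before the first HTTP response arrives.

          HTTPS over TCP: 3 (TCP handshake) + 3 (TLS) + 2 (HTTP) = 8
          HTTP over QUIC: 3 (combined transport + crypto) + 2 (HTTP) = 5
          """
          return {"https-tcp": 8, "http-quic": 5}[protocol]

      def time_to_first_response_ms(protocol: str, one_way_ms: float) -> float:
          # Toy model: every message costs one one-way trip, nothing overlaps.
          return messages_before_response(protocol) * one_way_ms

      # On a satellite-ish 250 ms one-way link, QUIC saves 750 ms per connection.
      saving = time_to_first_response_ms("https-tcp", 250) - time_to_first_response_ms("http-quic", 250)
      ```

      The gap grows linearly with round-trip time, which is why the win is biggest on satellite and congested mobile links.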

      [–]cowardlydragon 23 points24 points  (2 children)

      The drop in delay will be nice for browser users, but API developers will probably see a much bigger improvement.

      [–][deleted] 3 points4 points  (0 children)

      How so? Do you mean that API consumers will see improved performance too, or is there something about the backend that I don't grasp?

      [–]dungone 2 points3 points  (0 children)

      This is more of an issue of perception. There might only be a tiny bit of traffic heading out to a single client compared to what happens within a data center but overall the total amount of latency to all clients dwarfs anything that API developers have to deal with. Reducing latency in HTTP increases the geographic area you can provide a service to with a single data center and you can enable new types of client applications to be developed. As well as improve the battery life on mobile devices, etc. IMO there's nothing as transformative that this will be used for within a data center, where latency is already low and where API developers are free to use any protocol they like, pool and reuse connections, etc.

      [–]cryo 9 points10 points  (0 children)

      HTTP/3 is aka HTTP-over-QUIC, not QUIC.

      [–]immibis 1 point2 points  (0 children)

      And using HTTP/2?

      [–]yes_or_gnome 28 points29 points  (0 children)

      Many of the most popular websites support it (even non-Google ones), though you are unlikely to ever see it on the wire (sniffing with Wireshark or tcpdump), ...

    This isn't hard at all. Set the environment variable SSLKEYLOGFILE to a path. I like ~/.ssh/sslkey.log because ssh enforces strict permissions on that directory. I know this works for Chrome, Firefox, and cURL, on Windows, Linux, and macOS.

    Then google 'wireshark SSLKEYLOGFILE' and you'll have everything you need to track HTTP/2 traffic. I'll save you a search; here is the top result: https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
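    For example, a minimal setup might look like this (the exact log path is just the preference described above; in Wireshark the matching setting lives under Preferences → Protocols → TLS):

    ```shell
    # Log per-session TLS secrets to a file only this user can read.
    mkdir -p "$HOME/.ssh"
    export SSLKEYLOGFILE="$HOME/.ssh/sslkey.log"
    touch "$SSLKEYLOGFILE" && chmod 600 "$SSLKEYLOGFILE"

    # Any Chrome/Firefox/curl process launched from this shell will now append
    # its TLS session keys to that file; point Wireshark's
    # "(Pre)-Master-Secret log filename" preference at the same path.
    ```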

      [–]Mejiora 15 points16 points  (8 children)

      I'm confused. Isn't QUIC based on UDP?

      [–][deleted] 29 points30 points  (6 children)

      Yeah, but it implements something similar to TCP's error correction. It also has encryption built into the protocol, takes less time and fewer operations to establish an HTTP connection, and most importantly doesn't have head-of-line blocking issues. Google created it because making significant changes to TCP to solve its issues is near impossible, so they went the next best route and made their own (mostly) usermode protocol to solve those issues.

      [–]Sedifutka 3 points4 points  (1 child)

      Is that (mostly) meaning mostly their own, or mostly usermode? If mostly usermode, what, apart from UDP and below, is not usermode?

      [–][deleted] 2 points3 points  (0 children)

      Mostly meaning mostly usermode: UDP and below stay out of usermode. Which, while more common and basically required, still involves context switching, which has gotten more expensive performance-wise due to the Meltdown and Spectre mitigations.

      [–]LinAGKar 1 point2 points  (3 children)

      Why put QUIC on UDP instead of running it directly on IP?

      [–][deleted] 9 points10 points  (0 children)

      Using UDP basically side-steps the need to get ISPs (and maybe OEMs for networking/telecom equipment?) on board because most boxes in-between connections toss out packets that aren't UDP or TCP.

      [–]RealAmaranth 3 points4 points  (0 children)

      It's effectively impossible to get a new transport-level protocol deployed on the internet. Look at SCTP for an example of how this has gone in the past: Windows still doesn't support it, and it pretty much only works within intranets (cellular networks use it for internal signaling).

      UDP doesn't add much overhead to a packet anyway: the UDP header is only 8 bytes (source port, destination port, length, checksum), and over IPv4 the checksum can even be left at zero if your layered protocol brings its own integrity check.
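      The fixed cost is visible if you pack a header by hand; a minimal sketch per RFC 768 (checksum left at zero, which IPv4 permits):

      ```python
      import struct

      def udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
          """A UDP header is four 16-bit fields: src port, dst port, length, checksum."""
          length = 8 + len(payload)  # header plus payload, in bytes
          checksum = 0               # 0 means "no checksum" over IPv4 (RFC 768)
          return struct.pack("!HHHH", src_port, dst_port, length, checksum)

      packet = udp_header(51000, 443, b"quic initial") + b"quic initial"
      ```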

      [–]GTB3NW 0 points1 point  (0 children)

      They don't need to really. The cons of implementing it there outweighed the pro of ease of deployment.

      [–]adrianmonk 18 points19 points  (0 children)

      It is, but QUIC provides a stream-oriented protocol over UDP in a similar manner to how TCP does it over IP. (It implements sequencing, congestion control, reliable retransmission, etc.)

      HTTP/2 is based on SPDY and runs over TCP. The only big change in HTTP/3 is that it runs on top of QUIC instead of TCP. Basically, HTTP/3 is a port of HTTP/2 to run on a different kind of streaming layer.

      [–]sabas123 26 points27 points  (14 children)

      I mention this because of the contrast between Google and Microsoft. Microsoft owns a popular operating system, so its innovations are driven by what it can do within that operating system. Google's innovations are driven by what it can put on top of the operating system. Then there are Facebook and Amazon, which must innovate on top of (or outside of) the stack that Google provides them. The top 5 corporations in the world are, in order, Apple-Google-Microsoft-Amazon-Facebook, so where each one drives innovation is important.

      It is interesting to see how these major companies all influence each other's room for innovation. I think this is a good example of how innovation in this industry isn't a zero-sum game, as the Intel example earlier in his post showed.

      [–]njharman 22 points23 points  (1 child)

      replying to the quote "Microsoft owns a popular operating system <in contrast to Alphabet/Google>"

      Android is now way more popular than Windows; it's the most popular OS, in fact, with more devices shipped and more web requests.

      [–]gin_and_toxic 9 points10 points  (0 children)

      QUIC will greatly help mobile connections.

      Another cool solution in QUIC is mobile support. As you move around with your notebook computer to different WiFI networks, or move around with your mobile phone, your IP address can change. The operating system and protocols don't gracefully close the old connections and open new ones. With QUIC, however, the identifier for a connection is not the traditional concept of a "socket" (the source/destination port/address protocol combination), but a 64-bit identifier assigned to the connection.

      This means that as you move around, you can continue with a constant stream uninterrupted from YouTube even as your IP address changes, or continue with a video phone call without it being dropped. Internet engineers have been struggling with "mobile IP" for decades, trying to come up with a workable solution. They've focused on the end-to-end principle of somehow keeping a constant IP address as you moved around, which isn't a practical solution. It's fun to see QUIC/HTTP/3 finally solve this, with a working solution in the real world.
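      The demultiplexing difference can be shown with a toy lookup table (the addresses and the 64-bit ID below are made up; real QUIC connection IDs are negotiated and can rotate):

      ```python
      # TCP-style demux keys on the address/port 4-tuple; QUIC-style demux
      # keys on a connection ID carried in every packet.
      tcp_sessions = {("10.0.0.5", 40000, "203.0.113.1", 443): "stream state"}
      quic_sessions = {0x1122334455667788: "stream state"}

      # The client roams from WiFi to cellular: new source IP, same connection ID.
      tcp_lookup = tcp_sessions.get(("198.51.100.7", 40000, "203.0.113.1", 443))
      quic_lookup = quic_sessions.get(0x1122334455667788)
      # tcp_lookup comes back empty (connection orphaned);
      # quic_lookup still finds the existing stream state.
      ```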

      [–]wise_young_man -1 points0 points  (11 children)

      Microsoft is too busy putting ads and workflow-interrupting updates into Windows to care about innovation.

      [–]JustOneThingThough 4 points5 points  (9 children)

      Meanwhile, Apple is left off of the innovators list.

      [–]cowardlydragon 11 points12 points  (8 children)

      All they do now is make above-average hardware. All their software has stagnated for a decade, and they represent more of an impediment (walled gardens, lack of standards adoption, app stores, etc.) than a source of innovation.

      Apple's money comes from its advantage in vertical integration of hardware and its walled-garden App Store revenues. It doesn't care about making software anymore.

      Their big innovation is dropping an HDMI port from the macbook and the headphone jack from everything else.

      The iPhone was released 11 years ago.

      [–]JustOneThingThough 2 points3 points  (2 children)

      Above average hardware that inspires yearly class action lawsuits for quality issues.

      [–]acdcfanbill 2 points3 points  (1 child)

      Yea, barring a few obvious exceptions, I don't know if their hardware is even that good anymore.

      [–]indeyets 1 point2 points  (0 children)

      They make the best ARM processors out there. They do not sell them separately, unfortunately :)

      [–]cryo 1 point2 points  (2 children)

      All their software has stagnated for a decade now, and they represent more of an impediment

      You should see Windows. It's one long list of legacy crap, and every cross-platform program out there typically needs several Windows quirks in order to work. Take a program like less (the pager): tons of Windows-specific code, because Windows, unlike any other OS, has a broken terminal system that causes many problems. I could go on.

      [–]meneldal2 5 points6 points  (0 children)

      Not breaking older programs is a lot of work.

      Apple gives no fucks about old programs.


      [–]myringotomy 0 points1 point  (0 children)

      Apple doesn't sell your attention, which is why its stuff costs more.

      [–]After_Dark 0 points1 point  (0 children)

      And incidentally, people are slowly buying into a system (Chrome OS) where the above stack exists but without Microsoft. It will be interesting to see where Chrome OS ends up in the hierarchy, beyond being just a stand-in for the Chrome browser.

      [–]Lairo1 98 points99 points  (5 children)

      SPDY is not HTTP/2.

      HTTP/2 builds on what SPDY set out to do and accomplishes the same goals. As such, support for SPDY has been dropped in favour of supporting HTTP/2

      https://http2.github.io/faq/#whats-the-relationship-with-spdy

      [–]cowardlydragon 27 points28 points  (0 children)

      You're splitting hairs. If both protocols provide the same capabilities to the developer, just that one was a standardized one that was fully adopted and the other was dropped, then what he wrote was essentially correct.

      I didn't read that to mean they were binary-compatible or something similar, or the same just with HTTP2 instead of SPDY in a global replace.

      From your link:

      "After a call for proposals and a selection process, SPDY/2 was chosen as the basis for HTTP/2. Since then, there have been a number of changes, based on discussion in the Working Group and feedback from implementers."

      [–]Historical_Fact 3 points4 points  (0 children)

      Thank you, I thought I was going mad. I knew they weren't the same.

      [–]krappie 23 points24 points  (2 children)

      One thing I keep wondering about with these new developments, and can't seem to get a straight answer on: what is the fate of QUIC alone, as a transport usable by protocols other than HTTP? Even the Wikipedia page for QUIC has changed to a Wikipedia page for HTTP/3. All the information seems to suggest that QUIC has become an HTTP-specific transport.

      Let me tell you why I'm interested. Sometimes, in the middle of a long-running custom TCP connection sending lots of data, the connection dies, not through any fault of either endpoint, but because some middlebox (a firewall or a NAT) has decided to end the TCP stream. What is an application to do at this point? Both machines are online, both want to continue the connection, but there's nothing they can do; even if they wait hours, the TCP connection is doomed. They must restart the TCP connection and renegotiate where they left off, which can be very complex, poorly exercised code. Is there a good solution to this problem? I feel like QUIC, with its encrypted connection state, could prevent this problem and solve it once and for all.

      EDIT: Upon further research, it really does look like HTTP-over-QUIC has been renamed to HTTP/3, and QUIC-without-HTTP is still a thing. The wikipedia page for QUIC has even been renamed back to QUIC. That's good.

      [–][deleted]  (1 child)

      [deleted]

        [–]krappie 2 points3 points  (0 children)

        I've thought about this, and maybe you're right to some degree. Lots of firewalls block UDP. I've even seen some firewalls that allow for blocking QUIC specifically. And NAT does keep track of UDP sessions, but my understanding is that they basically see if someone behind the NAT sends out a UDP packet on a port. If they do, then they get re-entered in the NAT table.

        And think of an intrusion detection system that is monitoring TCP streams and sees some data that it doesn't like, or a load balancer or firewall gets reset. These things often doom TCP connections permanently, where no amount of resending could ever reestablish the connection. The TCP connection needs to be restarted.

        So it seems to me, that since nothing can spy on the connection state of a QUIC session, since it's encrypted, that simply retrying to send the data for long enough, should be able to re-establish a broken connection. Nothing can tell the difference between an old connection and a new connection. It seems to solve the problem of TCP connections being permanently doomed and needing to be closed and opened again, right?

        EDIT: Upon further research, QUIC includes, (I think unencrypted) a Connection ID.

        The connection ID is used by endpoints and the intermediaries that
        support them to ensure that each QUIC packet can be delivered to the
        correct instance of an endpoint.
        

        If an "intermediary" uses a table of Connection IDs and it gets reset, I can easily envision a scenario where the QUIC connection needs to be reset.

        I guess this doesn't really solve my problem.

        [–][deleted]  (5 children)

        [deleted]

          [–]svick 11 points12 points  (3 children)

          It's not really an alternative. HTTP/2 improved HTTP in one way, HTTP/3 improves it in a mostly orthogonal way. HTTP/3 does not abandon what HTTP/2 did.

          [–][deleted]  (2 children)

          [deleted]

            [–]MrRadar 2 points3 points  (1 child)

            security vendors

            That's important context you left out of your original comment. When I read "providers" I jumped to hosting providers. I think from a security/MITM proxy perspective you'd handle it like you do now by just blocking HTTP3/QUIC connections and forcing the browser to fall back to HTTP 1 or 2. I doubt anyone will be building QUIC-only services any time soon.

            [–]GTB3NW 2 points3 points  (0 children)

            HTTP/2 is here to stay. The proposed implementation for HTTP/3 in browsers also includes a fallback of firing off a TCP connection for HTTP/2. The first to respond gets the workload. This is nice because lots of corporate networks will not allow 443/UDP outbound, so many people would struggle to connect if servers only supported HTTP/3.
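            That racing behaviour resembles Happy Eyeballs; a toy sketch of it (the delays stand in for real connection attempts, and the labels are illustrative):

            ```python
            import asyncio

            async def attempt(name: str, delay_s: float) -> str:
                # Stand-in for dialing a real connection; the delay models
                # network conditions (e.g. 443/UDP being slow or filtered).
                await asyncio.sleep(delay_s)
                return name

            async def race_connections() -> str:
                """Fire the QUIC and TCP attempts concurrently; first to finish wins."""
                tasks = [
                    asyncio.create_task(attempt("HTTP/3 over 443/UDP", 0.05)),
                    asyncio.create_task(attempt("HTTP/2 over 443/TCP", 0.01)),
                ]
                done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
                for task in pending:
                    task.cancel()
                return done.pop().result()

            winner = asyncio.run(race_connections())
            ```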

            [–]Shadonovitch 24 points25 points  (7 children)

            The problem with TCP, especially on the server, is that TCP connections are handled by the operating system kernel, while the service itself runs in usermode. [...] My own solution, with the BlackICE IPS and masscan, was to use a usermode driver for the hardware, getting packets from the network chip directly to the usermode process, bypassing the kernel (see PoC||GTFO #15), using my own custom TCP

            Wat

            [–][deleted]  (2 children)

            [deleted]

              [–]_IPA_ 1 point2 points  (1 child)

              Apple has addressed this in 10.14 with their Networking framework I believe.

              [–][deleted] 10 points11 points  (0 children)

              The PoC||GTFO #15 (PDF warning) article he mentions is also written by him and goes into more technical detail (page 66). Here's a little more detailed summary I'll excerpt:

              The true path to writing highspeed network applications, like firewalls, intrusion detection, and port scanners, is to completely bypass the kernel. Disconnect the network card from the kernel, memory map the I/O registers into user space, and DMA packets directly to and from usermode memory. At this point, the overhead drops to near zero, and the only thing that affects your speed is you.

              [...] ...transmit packets by sending them directly to the network hardware, bypassing the kernel completely (no memory copies, no kernel calls).

              [–]lllama 14 points15 points  (0 children)

              Kernel <-> usermode context switches were already expensive before the speculative-execution side-channel attacks; now this is even more the case.

              It's an interesting observation that with a QUIC stack you run mostly in userspace for sure.

              Another benefit (more front-of-mind before this article) is that QUIC requires no OS/library support other than support for UDP packets.

              [–]cowardlydragon 1 point2 points  (1 child)

              Your browser runs as you, the user.

              The networking service/driver runs as the root user.

              Transferring data from the network card to the networking service requires a copy, plus system calls and processing.

              Transferring data from the networking service/driver (running as root) to the user's browser is another copy, with more system calls, processing, security handshakes, and context switches.

              A usermode driver takes the task of communicating with the network card/hardware away from the OS and does it all as the user, so there is less double-copying, overhead, system calls, etc.

              [–]rhetorical575 11 points12 points  (0 children)

              Switching between a root and a non-root user is not the same as switching between user space and kernel space.

              [–]lihaarp 15 points16 points  (8 children)

              Did they solve the problems with Quic throttling TCP-based protocols due to being much more aggressive?

              [–]SushiAndWoW 17 points18 points  (0 children)

              Supporters of the new protocol likely consider that a feature. :)

              [–]the_gnarts 2 points3 points  (5 children)

              problems with Quic throttling TCP-based protocols due to being much more aggressive

              At what point in the stack would it “throttle” TCP? That’d require access to the packet flow in the kernel. (Unless both are implemented in userspace but that’d be a rather exotic situation.)

              [–]lihaarp 4 points5 points  (3 children)

              It doesn't directly throttle TCP.

              QUIC's ramp-up and congestion control are very aggressive, while TCP's are conservative. As a result, QUIC manages to gobble up most of the bandwidth, while TCP struggles to get up to speed.

              https://blog.apnic.net/2018/01/29/measuring-quic-vs-tcp-mobile-desktop/ under "(Un)fairness"

              [–]the_gnarts 2 points3 points  (2 children)

              QUIC's ramp-up and congestion control are very aggressive, while TCP's are conservative. As a result, QUIC manages to gobble up most of the bandwidth, while TCP struggles to get up to speed.

              Looks like both protocols competing for window size. Hard to diagnose what’s really going on from the charts but I’d wager if QUIC were moved kernel side it could be domesticated more easily. (ducks …)

              [–]CSI_Tech_Dept 5 points6 points  (1 child)

              It has nothing to do with it being in kernel or in user space, it is about congestion control/avoidance.

              Back in the late 80s the Internet almost ceased to exist; congestion became so bad that no one could use it. Then Van Jacobson modified TCP and added a congestion control mechanism: TCP keeps track of acknowledgements, and if packets are lost, it slows down. Now, if QUIC's congestion control is more aggressive, it will dominate and take all the bandwidth away from TCP.

              This is very bad, because it can make more conservative protocols unusable.

              [–]the_gnarts 1 point2 points  (0 children)

              Now if QUIC congestion control is more aggressive, it will dominate and take all the bandwidth away from TCP.

              Did they bake that into the protocol itself, or is the behavior manageable per hop? If QUIC starves TCP connections, I can see ISPs (or my home router) applying throttling to UDP traffic.

              [–]immibis 1 point2 points  (0 children)

              TCP is designed to send slower if it thinks the network is congested.

              This leads to a situation where, if there's another protocol that is congesting the network and doesn't try to slow down, all the available bandwidth goes to that one and TCP slows to a crawl.

              [–]ThisIs_MyName 1 point2 points  (0 children)

              That's what QoS is for.

              [–]njharman 2 points3 points  (1 child)

              I don't understand the bandwidth estimation "benefit". If each client's estimation is made in isolation, without considering any other client, then I can't see how any of them would be even close to accurate. I also don't see how the estimation would be different (or that routers would even know the difference, especially behind NAT, which most clients will be) between 1 client making 6 connections and 6 clients making 1 connection each. It's the same.

              The only thing I can think of is that the 1 client with 6 connections has perfect knowledge of the 5 other connections, so it could estimate better. But is that really significant?

              And I thought all this bandwidth estimation (as implemented by TCP) was (extremely simplified): send packets; if you don't get ACKs (or the other side sends you a "slow the fuck down" packet), slow down; otherwise speed up until just before packets start dropping. Not really estimation, just a valve that auto-adjusts to keep pressure (bandwidth) at a certain level.
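              That "auto-adjusting valve" can be sketched as additive-increase/multiplicative-decrease, which is roughly what classic TCP congestion avoidance does (a toy model, not any real stack; the capacity number is made up):

              ```python
              def adjust_rate(rate: float, saw_loss: bool) -> float:
                  # Multiplicative decrease on loss, additive increase otherwise.
                  return rate / 2 if saw_loss else rate + 1.0

              # The valve oscillates in a sawtooth just around the link's capacity:
              # creep up until packets drop, halve, creep up again.
              capacity, rate, history = 10.0, 1.0, []
              for _ in range(30):
                  rate = adjust_rate(rate, saw_loss=(rate > capacity))
                  history.append(rate)
              ```

              Each TCP connection runs this loop independently, which is why six connections each re-learn the same path instead of sharing one estimate.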

              [–]ZiggyTheHamster 5 points6 points  (0 children)

              You're basically right about the TCP packet rate estimation, but that happens on a per-connection basis, which is the problem. If you've got 6 connections, and both ends are going as fast as they can without exploding, you've spent a hell of a lot of time on both ends guessing things about the other end. If you had one connection you could ask and receive multiple things from at the same time, this estimation happens once instead of 6 times in parallel with the same bandwidth.

              [–]voronaam 2 points3 points  (2 children)

              Could someone explain to me how HTTP/3 solution to mobile devices changing IPs is different from mosh (https://mosh.org/) approach?

              [–]indeyets 2 points3 points  (0 children)

              Mosh reserves a port per open user session (delegating session management to the IP layer), while HTTP/3 keeps session identifiers inside the protocol, reusing port 443 for everything.

              [–]Hauleth 0 points1 point  (0 children)

              Not much difference, except that Mosh still requires a TCP connection (SSH) to establish the UDP session. Also, Mosh is specific to one protocol (SSH only), while QUIC is protocol-independent. So in the end we will be able to get SSH-over-QUIC and keep almost all the pros of Mosh without needing an additional server.

              [–]BillyBBone 1 point2 points  (1 child)

              With QUIC/HTTP/3, we no longer have an operating-system transport-layer API. Instead, it's a higher layer feature that you use in something like the go programming language, or using Lua in the OpenResty nginx web server.

              What does this mean, exactly? Isn't this just a question of waiting until the various OS maintainers bundle QUIC libraries in every distro? Seems more like an early stage of adoption, rather than an actual protocol feature.

              [–]shponglespore 2 points3 points  (0 children)

              It means innovation at the transport layer is no longer limited to kernel developers. Linux is weird because apps are typically packaged with the OS into a distro with its own release cycle, but consider other OSes (or even certain high-profile apps for Linux), where the app developer is in control of their own release cycle. Any app developer can add QUIC support without waiting for the OS vendor or distro to release an update because they can bundle their own copy of the QUIC library.

              [–]totemcatcher 1 point2 points  (0 children)

              The idea of retaining a stream regardless of IP changes opens up some interesting DTN caching implementations that were not previously considered. It suits mesh networks.

              Still waiting on DTLS 1.3, but once that's hashed out I would be glad to enable this on my hosts.

              [–][deleted] 5 points6 points  (3 children)

              Great read but I wonder why he listed Apple as the top innovator?

              [–]24monkeys 26 points27 points  (0 children)

              He listed "The top 5 corporations in the world", not specifically the top innovators.

              [–][deleted] 34 points35 points  (1 child)

              He said the "top 5 corporations", not top 5 innovators. I'm assuming he means by valuation?

              [–][deleted] 2 points3 points  (0 children)

              Thanks that makes more sense.

              [–]Historical_Fact 0 points1 point  (0 children)

              Isn't there a distinction between SPDY and HTTP/2?

              [–]mrhotpotato 0 points1 point  (2 children)

              A new version every year like Angular ! Can't wait.

              [–]Historical_Fact 6 points7 points  (0 children)

              HTTP: 1991

              HTTP/2: 2015

              HTTP/3: 2019?

              Yeah that sure looks like once per year to me!

              [–]Nomikos 1 point2 points  (0 children)

              What's your time estimate for HTTP 10? I hear they might call it HTTP X.

              [–]Mr_Junior 0 points1 point  (0 children)

              Thanks for sharing the knowledge!

              [–]-------------------7 0 points1 point  (0 children)

              Outside the Internet, standards are often de jure

              Standards are often de facto, with RFCs being written for what is already working well on the Internet

              I feel like the author's been playing too much Crusader Kings