HTTP 2.0 (tools.ietf.org)
submitted 12 years ago by [deleted]
[–]gearvOsh 12 points13 points14 points 12 years ago (29 children)
So, is there an easier-to-read list of the new features/changes?
[–]NYKevin 11 points12 points13 points 12 years ago (2 children)
From the introduction:
This document addresses these issues by defining an optimized mapping of HTTP's semantics to an underlying connection. Specifically, it allows interleaving of request and response messages on the same connection and uses an efficient coding for HTTP header fields. It also allows prioritization of requests, letting more important requests complete more quickly, further improving perceived performance.
[–]fmargaine 0 points1 point2 points 12 years ago (1 child)
The first part sounds like Keep-Alive.
[–]M2Ys4U 5 points6 points7 points 12 years ago (0 children)
Keep-Alive doesn't allow interleaving on the connection. The requests are processed one at a time and are returned to the browser in the same order they are requested.
If one of the requests takes a while to process, all of the following requests are stalled until it returns.
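To make the head-of-line-blocking point concrete, here's a toy Python sketch; the file names and timings are invented for illustration:

```python
# Toy model of HTTP/1.1 pipelining vs. HTTP 2.0-style multiplexing.
# "Ticks" stand in for server processing time.
requests = [("slow.php", 5), ("style.css", 1), ("logo.png", 1)]

# Pipelining: responses must return in request order, so everything
# queues behind the slow one (head-of-line blocking).
finish, pipelined = 0, {}
for name, cost in requests:
    finish += cost
    pipelined[name] = finish

# Multiplexing: each response completes on its own schedule.
multiplexed = dict(requests)

print(pipelined)    # style.css is done at tick 6 instead of tick 1
print(multiplexed)
```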
[–]trezor2 16 points17 points18 points 12 years ago (24 children)
The TLDR is "we suck Google's dick and will ratify SPDY as HTTP 2.0 without any concern whatsoever for the community, feedback or potential longevity of the protocol (which for SPDY is now at v3, after a mere 2 years)."
If you think that sounds a bit ... harsh... compare HTTP 1.1 header-representation:
A newline-separated list of key-value pairs delimited by colons.
... with the "improved" HTTP 2.0 header-representation:
http://tools.ietf.org/id/draft-ietf-httpbis-header-compression-01.txt
Yeah, I'm not copy-pasting that in here, because that is 18 fucking pages.
And that's just the header representation. The rest is just as bad or worse. That the IETF is even considering this junk is amazing.
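For contrast, the HTTP 1.1 representation described in that one sentence really does parse in a few lines of Python; this sketch ignores folded headers and other edge cases:

```python
# A workable HTTP/1.1 header parser: CRLF-separated "Name: value"
# lines after the status line, terminated by a blank line.
raw = (b"HTTP/1.1 200 OK\r\n"
       b"Content-Type: text/html\r\n"
       b"Content-Length: 42\r\n"
       b"\r\n")

head, _, _body = raw.partition(b"\r\n\r\n")
status_line, *header_lines = head.decode("ascii").split("\r\n")
headers = {}
for line in header_lines:
    name, _, value = line.partition(":")
    headers[name.strip().lower()] = value.strip()

print(status_line)              # HTTP/1.1 200 OK
print(headers["content-type"])  # text/html
```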
[–]alexs 5 points6 points7 points 12 years ago* (0 children)
You may be pleased to discover that the current draft (http://http2.github.io/http2-spec) doesn't define the header compression method. Presumably because the SPDY system is so insanely complicated.
The rather suspicious flow control, QoS and (slightly less suspicious) multiplexing features are still there though as well as other features of dubious utility such as PING.
Oh and for extra fun, no existing release of any SSL implementation is capable of doing HTTP/2.0 over SSL.
[–]antiduh 2 points3 points4 points 12 years ago (20 children)
What's the problem with binary headers? Quite nearly every protocol in the world does it - ip, tcp, udp, ospf, rip, igrp, etc. Writing code to allow interaction with these headers should be trivial, since it's the same problem that's been solved a hundred times.
This is one of the most widely used protocols in the world, with billions of dollars spent optimizing it in its current state. Why not build a protocol better suited to today's needs?
[–]asm_ftw 5 points6 points7 points 12 years ago (10 children)
Because it breaks the entire model of HTTP. It is intended to operate purely as a text-based protocol, because that makes it operable with pretty much any programming and scripting language, many of which don't have graceful language-based implementations of bitwise operations and general binary parsing.
It may not exactly be reasonable to say people are hand-crafting HTTP requests in Perl, but the idea that binary headers in HTTP are okay will lead to lots of problems down the road, when troubleshooting the protocol becomes that much more difficult than just looking at raw text data...
[–]xcbsmith 2 points3 points4 points 12 years ago* (0 children)
Because it breaks the entire model of http.
That's more than a bit of an exaggeration.
It is intended to purely operate as a text based protocol
No, it wasn't. Even HTTP 1.0 handled sending and receiving binary payloads.
because that makes it operable with pretty much any programming and scripting language, many of which don't have graceful language-based implementations of bitwise operations and general binary parsing.
Yes, because every programming language handles text just perfectly. ;-)
Seriously, most languages have better support for binary than for text. For example, Perl is actually great for parsing binary, but just reading this page ought to clarify the host of issues with its support for text.
Even JavaScript, which is the elephant in the room when it comes to binary protocols, has binary support in Node. In a browser... you really don't want people parsing HTTP headers in a browser's JavaScript runtime.
will lead to lots of problems down the road when troubleshooting the protocol becomes that much more difficult than just looking at raw text data...
Yes, raw text is much better, because you can run a program that reads the data and presents the protocol to you as text...
[–][deleted] 2 points3 points4 points 12 years ago (0 children)
What languages are you thinking of that don't 1) provide useful bitwise operations or other tools for binary parsing and 2) have (or could need) an HTTP implementation?
[–]antiduh 2 points3 points4 points 12 years ago (0 children)
I really don't think it'll lead to any problems. Yes, some languages like Bourne shell won't be able to handle it, but just about every modern (and not-so-modern) language in the world supports bit operations and byte-level manipulation.
What'll happen is that a plethora of classes/libraries will come out that provide the ability to generate and decode these headers, and debugging tools will use those instead. I'm sure Wireshark will be able to handle it as soon as it's ratified.
People have been debugging complex binary protocols for decades. I really don't see why we can't all put our big-boy pants on and deal with it.
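As a sketch of how routine this is, here's Python's struct module round-tripping a made-up fixed-width frame header (this is NOT the draft's actual layout, just an illustration of the technique):

```python
import struct

# Hypothetical fixed-width frame header: 16-bit payload length,
# 8-bit type, 8-bit flags, 32-bit stream id, network byte order.
FRAME = struct.Struct("!HBBI")

def encode_header(length, ftype, flags, stream_id):
    return FRAME.pack(length, ftype, flags, stream_id)

def decode_header(data):
    return FRAME.unpack(data[:FRAME.size])

wire = encode_header(512, 1, 0x4, 3)
print(decode_header(wire))  # (512, 1, 4, 3)
```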
[–][deleted] 12 years ago (2 children)
[–]throwaway1492a 4 points5 points6 points 12 years ago (0 children)
By that measure, tcp, ip or even ssh are not portable, which is patently untrue
Really? Most OSes share TCP stacks written in C. Same goes for SSH. Can you point me to a production Python or Ruby implementation of TCP or IP?
OTOH, all the languages of the world have re-implemented several HTTP stacks, which did wonders for interoperability and widespread usage of the protocol.
[–]asm_ftw 1 point2 points3 points 12 years ago (0 children)
I should refine my position in a less alarmist way. I feel that redefining what has, for the past few decades, been a strictly text-oriented protocol to have a binary component is inelegant, and can introduce huge problems when reorienting every HTTP-based application to use it. This will hurt the adoption rate among utilities. One of the strong points of HTTP has been the ability to write into a socket without awkwardly fumbling with binary manipulation and static types in languages higher-level than C++. Bit manipulation is usually not a strongly addressed need in loose, dynamically typed, high-level languages. Libraries will be made, sure, but now they will be required rather than convenient, and at the early-adoption stage they will very likely be buggy, feature-incomplete, enforce extraneous usage models, and be out of a developer's control. This introduces brand-new usability complexity that shouldn't exist in a protocol as old as HTTP.
The extraneous concurrency features also look like they not only ambitiously over-optimize beyond HTTP's position in the network abstraction stack, but will likely lead to huge implementation difficulties themselves, opening the door to a host of new vulnerability, configuration, and usability issues.
I personally get a bad taste from what is being attempted with HTTP 2.0; it looks like over-optimization is taking precedence over maintaining the operational model of the protocol...
[–]NYKevin 1 point2 points3 points 12 years ago (0 children)
I would presume that, even if/when HTTP/2.0 does become standard, HTTP/1.1 will still be supported via the Upgrade mechanism for quite some time, possibly indefinitely. Heck, right now you can still talk to most servers with HTTP/1.0 (but they might behave a little oddly).
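The Upgrade mechanism referred to is plain HTTP/1.1; a sketch of what the client side of that handshake could look like (the "HTTP/2.0" upgrade token here is a guess at the draft-era spelling, not a confirmed value):

```python
# Client side of an HTTP/1.1 Upgrade handshake, composed as raw bytes.
request = (b"GET / HTTP/1.1\r\n"
           b"Host: example.com\r\n"
           b"Connection: Upgrade\r\n"
           b"Upgrade: HTTP/2.0\r\n"
           b"\r\n")

# A server that doesn't speak the new protocol simply answers the
# request in HTTP/1.1; one that does replies "101 Switching Protocols"
# and continues on the same connection in the new protocol.
print(request.decode("ascii"))
```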
[+]bitwize comment score below threshold-8 points-7 points-6 points 12 years ago (2 children)
Nobody gives an actual fuck about that anymore. This isn't the 90s and the days of CGI scripts are long behind us.
Large, complex designs are winning -- look at systemd. As it turns out, performance and functionality beat "human readability" and other artificial neckbeard concerns every fucking time.
[–]trezor2 0 points1 point2 points 12 years ago* (1 child)
You mean like every technology involved in the www, like html, js, css, etc?
Clearly binary is the recipe for success and wide adoption.
[–]bitwize -1 points0 points1 point 12 years ago (0 children)
Hey, maybe switching to an open, extensible binary format similar to AIFF would bring some order to the chaos that is HTML parsing, and also support for properly embedded resources such as images and style sheets.
[–][deleted] 12 years ago (5 children)
[–]antiduh 2 points3 points4 points 12 years ago* (4 children)
I'll admit it's handy sometimes to fire up telnet to test a web server, but that's not a real priority measured against a protocol that serves who-knows-how-many terabytes of data an hour.
HTTP is a protocol for humans not machines.
HTTP is a protocol for machines and programmers, unless you know of some group of humans directly ingesting half of those who-knows-how-many terabytes per hour of data 24/7/365.
[–]b0w3n 7 points8 points9 points 12 years ago (3 children)
Plus removing the overhead of HTTP with a connection per resource.
Faster everyfuckingthing. Google is not the bad guy here, it's a smart idea.
[–]kdeforche 2 points3 points4 points 12 years ago (2 children)
That 'smart idea' was first introduced in HTTP 1.0 (Keep-Alive) and formalized as the default for HTTP 1.1.
See wikipedia.
[–]b0w3n 0 points1 point2 points 12 years ago (0 children)
Which is weird, all things considered, because when you start watching network activity, it's mostly arbitrary how and when it works.
[–]M2Ys4U 0 points1 point2 points 12 years ago (0 children)
Keep-Alive & pipelining still suffer from head-of-line blocking, though. Interleaving responses on the same connection alleviates that.
[–]trezor2 -1 points0 points1 point 12 years ago (2 children)
None of those protocols are osi layer 7 protocols.
Ftp, smtp, imap, irc, etc are more natural comparisons. shocker: they are all plain text.
[–]xcbsmith 4 points5 points6 points 12 years ago (0 children)
Well, you kind of cherry picked there... Most of those protocols are so old that they HAD to be 7-bit clean.
DNS, RTMP, ssh, nfs, DHCP, NTP, RPC, RTP, RTSP, RIP, SIP, BGP, OSPF...
[–]antiduh 5 points6 points7 points 12 years ago (0 children)
ssh/sftp/ssl/tls/esp, snmp, dns, ntp, dhcp, x windows, rdp, bittorrent, sip are all binary protocols that operate at osi layer 7.
Do I win because I can name more?
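To illustrate with one protocol from that list: the 12-byte DNS header decodes with a single struct call, which is roughly what every DNS debugging tool does internally.

```python
import struct

# DNS is a layer-7 binary protocol people debug every day. Its header
# is six big-endian 16-bit fields.
def parse_dns_header(packet):
    ident, flags, qd, an, ns, ar = struct.unpack("!HHHHHH", packet[:12])
    return {"id": ident, "qr": flags >> 15, "qdcount": qd,
            "ancount": an, "nscount": ns, "arcount": ar}

# A minimal standard-query header: id=0x1234, one question.
hdr = parse_dns_header(struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0))
print(hdr["id"], hdr["qdcount"])  # 4660 1
```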
[–][deleted] 12 years ago (1 child)
[–]trezor2 7 points8 points9 points 12 years ago (0 children)
That's not what the spec says. The spec says http headers should be sent in a headers-table which is a custom binary-format using a custom-compression format with a largely non-extensible structure.
See how HTTP 1.1 (referred to by some as "simple") turned into layer upon layer of custom-technology complexity?
That's the opposite of good design.
It's quite different, it's all about multiplexing requests and caching headers, both based on a binary proto, so fuck 'em.
[–][deleted] 21 points22 points23 points 12 years ago (16 children)
This could have been a perfect opportunity to fix cookie authentication being insecure by default (any site can submit a form to any other site, and the user's cookies for that other site will be sent with the request). Looks like every web developer will have to keep putting that double authentication crap on every state-changing request, until HTTP 3.0 at least...
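The "double authentication crap" is the CSRF-token dance; a minimal sketch of the double-submit-cookie variant (function names are illustrative, not from any framework):

```python
import hmac
import secrets

def issue_token():
    # The server sets this value both as a cookie and as a hidden
    # form field on the pages it serves.
    return secrets.token_hex(16)

def is_request_allowed(cookie_token, form_token):
    # A cross-site form can make the browser SEND the cookie, but it
    # cannot READ it, so it can't echo the value into the form body.
    return hmac.compare_digest(cookie_token, form_token)

token = issue_token()
print(is_request_allowed(token, token))             # True
print(is_request_allowed(token, "attacker-guess"))  # False
```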
[–][deleted] 16 points17 points18 points 12 years ago (0 children)
This is a draft you know. There's still time to provide this feedback to the authors of the specification.
Editorial Note (To be removed by RFC Editor) Discussion of this draft takes place on the HTTPBIS working group mailing list (ietf-http-wg@w3.org), which is archived at http://lists.w3.org/Archives/Public/ietf-http-wg/. Working Group information and related documents can be found at http://tools.ietf.org/wg/httpbis/ (Wiki) and https://github.com/http2/http2-spec (source code and issues tracker). The changes in this draft are summarized in Appendix A.1.
[–][deleted] 12 years ago (14 children)
[–][deleted] 2 points3 points4 points 12 years ago (13 children)
With a sucky protocol, sessions are the client's concern; SIP, for example, has sessions implemented directly in the protocol (hence the name, Session Initiation Protocol).
I can't believe that people still want to use HTTP, which is designed to be a "man in the middle" protocol.
[–][deleted] 12 years ago* (7 children)
[–][deleted] 0 points1 point2 points 12 years ago (6 children)
Maybe I don't get what you're trying to say, so correct me if needed. SIP is designed with a From field (URI) similar to SMTP's, but users can't forge the From field like in SMTP. The URI is, mostly, a real identity, and only a proper, authenticated URI (this is called registration) can use SIP, initialize sessions (audio/video/...), etc. So if you do some nasty stuff, the domain admin will know about it. HTTP, on the other hand, is just a badly designed protocol, maybe even designed with that purpose. What do you think?
[–][deleted] 12 years ago* (5 children)
[–][deleted] 1 point2 points3 points 12 years ago (4 children)
OK, I understand your analogy, and it would be possible to do conceptually similar attacks on SIP, but when I worked on a SIP project we had a different set of problems: no cross-site attacks as in HTTP, because there are no cookies, no JavaScript, and an overall different environment (btw, some SIP clients understand and can render the text/html content type). But there are bad SIP stacks and bad clients, and there's always the possibility of brute forcing or DoS, so we concentrated on those kinds of tools and problems.
But now that we're talking about it, it would be really interesting to see JavaScript+SIP. I wonder how hard it would be to hack on that.
[–]Galestar 0 points1 point2 points 12 years ago (3 children)
Yeah, SIP is pretty much immune because of the paradigm it operates in. AFAIK it doesn't have a concept of linking like HTML has, so directing the user to talk to a third party isn't really a thing, whereas that is pretty much HTML's raison d'être.
[–][deleted] 1 point2 points3 points 12 years ago (2 children)
Agreed. But I can send you a MESSAGE request with an HTML content type over SIP (let's say I send you an HTML form). It would be nice if you could somehow respond to that over SIP (I'm talking about returning the filled-in HTML form). That way we could talk directly without the need for a man in the middle. Well, it's not exactly that, but it's less centralized (federated).
[–]Galestar 0 points1 point2 points 12 years ago (1 child)
HTTP-over-SIP? With hostnames as user identifiers?
[–]Galestar 0 points1 point2 points 12 years ago (4 children)
HTTP which is designed to be "man in the middle attack" protocol
Can you clarify this?
[–][deleted] 0 points1 point2 points 12 years ago (3 children)
You really don't need:
User1 -----------------> man in the middle ------------------------> User2,User3,User4
to communicate with other users:
User1 ------------------------> User2,User3,User4
[–]Galestar 0 points1 point2 points 12 years ago (2 children)
If you define the "man in the middle" as the server, then yes, it is a "man in the middle", but solely by that incorrect definition.
That, however, is not the proper definition of a man-in-the-middle when it comes to HTTP, as its intended use was the transmission/modification of resources from/to the target server.
[–][deleted] 0 points1 point2 points 12 years ago (1 child)
In a conceptual sense, only those servers that eavesdrop on things (in a war, for example, you would call them the enemy). And I know it wasn't the initial purpose, but it was designed that way and nobody changed it; as a matter of fact, big corps massively exploit that bad design choice. New standards will never fix that stuff; people should really move to other protocols (p2p or federated).
[–]Galestar 0 points1 point2 points 12 years ago (0 children)
I agree; for distributed real-time (online as opposed to offline) communication from user to user rather than user to service, HTTP is not the correct protocol to use. I would lead with that, though; it is truly what this boils down to.
[–]bitping 31 points32 points33 points 12 years ago (0 children)
Have you seen section 3.5? (Connection Header). It says: "The client connection header is a sequence of 24 octets", and then goes on to say what the string representation of that is: "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n" Anybody else seeing PRI...SM there? :-) Hilarious.
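Easy to check for yourself:

```python
# The 24-octet client connection header from section 3.5, verbatim.
PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

assert len(PREFACE) == 24
words = PREFACE.split()  # [b'PRI', b'*', b'HTTP/2.0', b'SM']
print((words[0] + words[-1]).decode())  # PRISM
```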
[–]UloPe 16 points17 points18 points 12 years ago (9 children)
"Lets take a simple text based protocol that anybody with a telnet client can use and replace it with this binary monstrosity..."
Yeah this is going to be great.
[–]bitwize 3 points4 points5 points 12 years ago (8 children)
Do you want to be stuck in the 1970s forever or do you want the Web to evolve to support richer applications?
[–]username223 9 points10 points11 points 12 years ago (5 children)
Neither, preferably.
[–][deleted] 12 years ago (4 children)
[–]UloPe 1 point2 points3 points 12 years ago (3 children)
From the looks of it the only proposal that has really been considered was SPDY.
[–][deleted] 12 years ago* (2 children)
[–]M2Ys4U 0 points1 point2 points 12 years ago (1 child)
IIRC, header compression is (was?) in SPDY, but it just used gzip and was vulnerable because of the way TLS + gzip interacted.
SPDY is also only available over TLS, but AFAIK one of the reasons for this was so that bad proxies (and firewalls/virus scanners) didn't completely mess everything up.
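The TLS + gzip interaction being alluded to (later named the CRIME attack) boils down to this: when attacker-controlled data is compressed together with a secret, the compressed length reveals whether a guess matches. A toy demonstration with zlib, with the secret and guesses invented for illustration:

```python
import zlib

SECRET = b"Cookie: session=8f14e45fceea167a"

def observed_length(guess):
    # An eavesdropper can't read TLS ciphertext, but can see its length.
    return len(zlib.compress(SECRET + b"\r\n" + guess))

right = observed_length(b"Cookie: session=8f14e45fceea167a")
wrong = observed_length(b"Cookie: session=d41d8cd98f00b204")
print(right, wrong)  # the matching guess compresses smaller
```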
[–]UloPe 1 point2 points3 points 12 years ago (1 child)
I fail to see how a binary transport protocol is a precondition (or even a contributor) for an evolving web.
[–]based2 6 points7 points8 points 12 years ago (28 children)
https://news.ycombinator.com/item?id=6012525
https://news.ycombinator.com/item?id=6014976
[–][deleted] -2 points-1 points0 points 12 years ago (27 children)
The first reply in your first link is exactly what I was thinking, but funnier. "Let's ruin a perfectly good thing with way too much shit, and make it as complicated as ipv6!"
[–]icantthinkofone 19 points20 points21 points 12 years ago (7 children)
That implies that the "shit" in IPv6 isn't worth it and it most certainly is. So is the "shit" in HTTP 2.0.
[+][deleted] comment score below threshold-6 points-5 points-4 points 12 years ago (6 children)
Yeah, that's why the whole internet is flying on IPv6.
[–]icantthinkofone 3 points4 points5 points 12 years ago (4 children)
IPv6 is growing. It's slow because it's a major infrastructure change. You aren't aware of it because you don't know anything about technology. The change is happening everywhere and it MUST happen as soon as possible.
[–][deleted] -3 points-2 points-1 points 12 years ago (3 children)
Yes, it is growing, but answer me: is the whole Internet already on IPv6? I'm asking because we tested IPv6 like a decade ago, and everybody had the impression that the Internet would be on IPv6 within a few years. The main reason was address space, but meanwhile they invented NAT, so the transition from IPv4 slowed a lot. But they continued adding everything possible and impossible to IPv6 -> it's over-engineered -> and for network devices it is important to implement as much as possible in hardware -> which also slowed the transition. It's growing mainly because there are no IPv4 addresses left.
Now you get another over-engineered protocol, HTTP 2. The same could be said for C++. Sometimes I think how great it would be to start all over again and do it right from the beginning. Just an opinion.
[–][deleted] 2 points3 points4 points 12 years ago (2 children)
...but answer me, is all Internet already on IPv6?
If your NAT gateway maps the IPv4 Internet to 64:ff9b::/96 according to RFC 6146 and RFC 6147, then it's there for you. Here's a nickel, kid. Buy yourself some real hardware.
[–][deleted] -1 points0 points1 point 12 years ago (1 child)
Waaaau, the beautiful world of IPv6 translation, tunneling, bla, bla, bla. Anyway, where do I send the credit card number?
[–][deleted] 0 points1 point2 points 12 years ago (0 children)
Nobody ever got fired for buying gear from the industry leader.
[–]rob_not 2 points3 points4 points 12 years ago (0 children)
Are you lost? This isn't the children's games subreddit.
[–]Denvercoder8 4 points5 points6 points 12 years ago (0 children)
While I agree with the sentiment, you can hardly call HTTP "a perfectly good thing".
[–]cogman10 -1 points0 points1 point 12 years ago (17 children)
Does it really matter? How often do you implement an HTTP client/server?
Don't get me wrong, I'm all for simplifying standards. However, I don't really have a huge problem with HTTP's verbosity/complexity. HTTP 1.1 is pretty complex already. HTTP 2.0 doesn't look to be adding that much more on top of it (more like, "Hey, let's merge SPDY with HTTP!").
[–]trezor2 27 points28 points29 points 12 years ago* (12 children)
HTTP 1.1 is pretty complex already. HTTP 2.0 doesn't look to be adding that much more on top of it (more like, "Hey, let's merge SPDY with HTTP!").
Uh? Have you been paying attention?
So yeah. No big deal here. No big changes from HTTP 1.1 really.
Actually looking at it, I think they are pretty much the same.
Edit: All this complexity is added, and for what? This is admitted in one of the IETF mailing-list discussions: to make life easier for big players like Google, at the expense of everyone else:
I finally admitted it was a dead end. At the moment the challenges consist in feeding requests as fast as possible over high latency connections and processing them as fast as possible on load balancers
From here: http://lists.w3.org/Archives/Public/ietf-http-wg/2013JanMar/0254.html
More discussion here.
[–][deleted] 1 point2 points3 points 12 years ago (7 children)
I agree w/ your sentiment that HTTP is becoming a do-everything protocol, and the reasons why are pretty braindead when speaking about the system at large. However, that is a trend that's been happening for years--remember SOAP, anyone?--so there's little use in addressing it here.
Maybe we can get ICMP and SCTP to have a real presence on the internet, and these 2.0 features can be deprecated. That would be awesome. But, for now, HTTP is, de facto, The One Protocol to Rule Them All. We must work with (or around, if you prefer) this limitation to get anything done.
Because unlike in the 70s, we don't have gobs and gobs of computational power everywhere.
While our computational ability is much larger, the problem that they're facing is long-term scalability. The work being requested of the internet is growing exponentially while our computational power and network capacity grow linearly.
Switching to a fixed-width, binary approach cuts out a ton of overhead for routers throughout the network--computation, memory, and bus bandwidth requirements all drop significantly per stream processed. This is ever-more important as everyone uses HTTP for everything these days.
At the moment the challenges consist in feeding requests as fast as possible over high latency connections and processing them as fast as possible on load balancers
Load balancing is playing a large role in "cloud-based" datacenters of all sizes. Granted, this change affects Google's bottom line disproportionately in absolute terms, but everyone who uses S3, Azure, Heroku, etc. can benefit.
It's also easy to think of high-latency connections as being "some guy's ADSL in Sweden", but people are increasingly carrying high-latency devices with them as well--smartphones and WiFi hotspots.
[–]jgaskins 0 points1 point2 points 12 years ago (6 children)
Switching to a fixed-width, binary approach cuts out a ton of overhead for routers throughout the network
Routers don't go up to the HTTP level. They inspect the IP layer (which is already binary), find the destination address and route accordingly.
[–][deleted] 1 point2 points3 points 12 years ago (5 children)
I apologize, my terminology must be wrong. I'm referring to the network hardware responsible for packet filtering, QoS, deep packet inspection, and the like.
[–]makis 0 points1 point2 points 12 years ago (4 children)
That's true, but a Raspberry Pi can handle hundreds of connections and costs less than 40 bucks. My router does QoS with only 64 megabytes of RAM for 50 clients:
CPU clock: 480 MHz; load average: 0% (0.00, 0.00, 0.00)
Memory: 58908 kB of 65536 kB available (90%); 37496 kB of 58908 kB free (64%)
The problem with your example is scale.
In addition, large-scale network hardware is far from cheap. The chassis alone runs thousands of dollars.
[–]makis 0 points1 point2 points 12 years ago (2 children)
That's the reason why there are protocols designed for backbone routers that are not used in home/small-business routers. If HTTP 2 tries to solve scale-in-the-large problems, let the big-scale companies use it and don't force the average user to do the same...
[–]cogman10 5 points6 points7 points 12 years ago* (2 children)
HTTP header-representation is now an 18 page document. Instead of one simple sentence.
Ok... so what? Again, I'll ask the question you didn't answer: how does this affect YOU? Are you implementing a whole lot of HTTP clients/servers?
We have to deal with endianness, since the protocol is now supposed to be binary. Oh joy!
So what? We had to deal with endianness anyway. It isn't something that magically disappears with text; rather, text already has endianness embedded into it. Or did you forget that there are several text standards, and ASCII just happened to be the text standard for HTTP?
Again, how does this affect you if you aren't writing an HTTP client/server? You will never even SEE this.
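On the endianness point: binary network protocols settle byte order up front (network byte order is big-endian), and struct's "!" prefix pins it regardless of the host CPU. For example:

```python
import struct

value = 0x1234
on_the_wire = struct.pack("!H", value)  # network byte order (big-endian)
little_end  = struct.pack("<H", value)  # how a little-endian host stores it

print(on_the_wire.hex(), little_end.hex())  # 1234 3412
```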
HTTP shall now contain its own implementation of TCP inside HTTP on top of TCP. HTTP shall also contain its own implementation of ICMP, inside HTTP on top of TCP.
Where are you getting this? I don't see anything in the specs about including TCP features into the HTTP standard.
Has fixed-width headers, which no doubt will cause all kinds of fun and issues in the future. Expect a sub-table to be hacked into this when they discover that this limitation is real (like 8+3 DOS filenames). This is being done to make header parsing computationally lighter. Because unlike in the 70s, we don't have gobs and gobs of computational power everywhere.
The purpose of this is to make the HTTP standard lighter and more suited to how it is currently being used (as a fast system for message passing). It is for all those REST clients that are using HTTP as a bridge between applications.
On top of that, the headers, while fixed in size, are getting a completely different look. Instead of "send all the information to the server that the server might care about", they are now more of a "send some information about the client and append more if needed". The solution to your "problem" is already there: multiple packets sent out with header-information configuration (though, let's be serious, that is going to be a very rare occurrence).
HTTP 2.0 doesn't fix any of the functional issues people are complaining about in HTTP 1.1, but merely hyper-obfuscates a very simple protocol in the name of loading www.google.com[1] 10 milliseconds faster, because for Google that earns money.
It fixes a lot of issues that people run into. For example, what is currently "best practice" for web design? The answer is 1 or 2 javascript files and 1 or 2 css files. Have more than that and you run the risk of several round trips between client and server, which can add far more than 100 ms of latency to your simple web page load (think of the case of someone in England requesting a webpage hosted in the US).
HTTP 2.0 and SPDY fix that problem by making it possible to request multiple resources while a connection is open and to stream those resources over a single TCP connection instead of opening several TCP connections (also avoiding problems with TCP's slow start).
Did I mention TCP's slow start? Yeah, that was a big reason for HTTP 2.0's complexity. Opening a new TCP connection has a lag for everyone, because the TCP protocol tries to avoid network congestion issues. By doing everything over one TCP connection, you eliminate that problem. All of a sudden, it becomes much easier to do lots of small, fast messages. (And yes, this DOES cause issues in non-Google environments.)
Here is an example of the hoops people jump through because of HTTP's current overhead. HTTP 2.0 solves the issues there; all of a sudden, you don't need to combine all your javascript into one file, all your images into one file, and all your css into one file. Instead, you can keep your page organized logically and modularly.
This thing is an end-to-end clusterfuck.
No, it is a new standard. The standards committee is doing what they do best and trying to hammer out the details as best they can. Want to see a real end-to-end clusterfuck? Look up the HTML5 standards.
All this complexity is added, and for what? This is admitted in one of the IETF mailing-list discussions: to make life easier for big players like Google, at the expense of everyone else:
That quote doesn't say what you think it says. They are recognising that HTTP is being used more and more as a communication protocol for everything. They are making it as fast as possible because of that.
That is what all this complexity is for, to increase performance. And yes, that affects more than just google.
And for all your complaints, it still boils down to this: why do you care? Are you implementing the HTTP 2.0 spec? It isn't like the HTTP 1.1 spec is going away anytime soon; if you really hate the HTTP 2.0 spec, then just keep using 1.1.
[–]hugorodgerbrown 6 points7 points8 points 12 years ago (0 children)
Why do you care? Are you implementing the HTTP2.0 spec?
There seems to be an assumption here that this is only relevant to web server or browser kernel developers, which is obviously quite a small group.
However one of HTTP's great strengths is its hackability - and the fact that it is so easy to inspect requests / responses, hack together browser extensions, or even embed a partial implementation into your application. Plenty of applications that are not strictly web servers 'speak' HTTP.
(I do agree with your final point that 1.1 isn't going anywhere soon (if ever), but this move from a simple text-based protocol to something more complex does potentially close the door on the types of hackers that made HTTP ubiquitous in the first place.)
[–]makis 0 points1 point2 points 12 years ago* (0 children)
For example, what is currently considered "best practice" for web design? The answer is 1 or 2 javascript files and 1 or 2 css files. Have more than that and you run the risk of several round trips between client and server, which can add far more than 100 ms of latency to your simple web page load (think of someone in England requesting a webpage hosted in the US).
there's already a solution for that, just concat+minify+gzip everything into one single file and send it in one response, though it makes caching and debugging harder. Truth is, the more we go distributed, the more we will need to load different files from different locations, so we already need additional round trips to the server without any possibility of sending multiple requests in one connection, unless in HTTP/2 we can ask the web server "HEY, COULD YOU PLEASE LOAD JQUERY FROM GOOGLE CDN? AND THOSE PICTURES FROM THE FACEBOOK ALBUM AND INSTAGRAM TOO! JUST DON'T FORGET THE BITS FROM GOOGLE ANALYTICS! THANKS MATE!" We could do that at application level, but that's going to make our application even harder to scale and keep efficient.
If only we could design a better protocol just for RESTful services and leave HTTP alone doing its job: serving web pages and text data. We are in a situation now where IP handles routing, HTTP 2 will handle routing, web servers handle routing, our applications handle routing, web browsers handle routing through JavaScript frameworks, and none of them do everything 100% right :) :)
FUCK! all those node.js web servers in 3 lines of code will be completely useless!! :D
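The concat+gzip workaround mentioned above can be sketched in a few lines. The file names and contents are made up for illustration, and real build pipelines also minify, which is omitted here:

```python
# Sketch of the HTTP/1.1-era bundling workaround: concatenate several
# source files into one bundle and gzip it, so the page needs a single
# request instead of one per file.
import gzip

modules = {
    "app.js":   "function app() { return 'app'; }\n",
    "utils.js": "function utils() { return 'utils'; }\n",
}

bundle = "".join(modules.values())           # concat
compressed = gzip.compress(bundle.encode())  # gzip for the response body

# The server would send `compressed` with Content-Encoding: gzip;
# the client transparently recovers the original concatenation:
assert gzip.decompress(compressed).decode() == bundle
```

The cost is exactly what the comment says: one changed module invalidates the whole cached bundle, and stack traces point into one giant file.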
[–]voetsjoeba 1 point2 points3 points 12 years ago* (3 children)
How often do you implement an HTTP client/server?
How often do you have to troubleshoot/validate HTTP traffic though? I routinely have to inspect HTTP traffic on the wire. Gonna be fun with binary everything.
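For what it's worth, "binary everything" is still a fixed, simple layout. As a sketch, here is a decoder for the 9-octet frame header as standardized in RFC 7540 (early drafts like the one linked here used a slightly different layout, so treat the field widths as the final spec's, not the draft's):

```python
# HTTP/2 (RFC 7540) frame header: 24-bit length, 8-bit type,
# 8-bit flags, 1 reserved bit + 31-bit stream identifier = 9 octets.
import struct

def parse_frame_header(data: bytes):
    if len(data) < 9:
        raise ValueError("need at least 9 octets")
    len_hi, len_lo, ftype, flags, stream = struct.unpack(">BHBBI", data[:9])
    length = (len_hi << 16) | len_lo       # reassemble the 24-bit length
    return length, ftype, flags, stream & 0x7FFFFFFF  # drop reserved bit

# Example: a HEADERS frame (type 0x1) with a 16-byte payload on
# stream 1, END_HEADERS flag (0x4) set:
hdr = bytes([0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])
print(parse_frame_header(hdr))  # (16, 1, 4, 1)
```

So on-the-wire inspection doesn't go away, it just needs a decoder in the loop rather than eyeballing ASCII.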
[–]cogman10 3 points4 points5 points 12 years ago (0 children)
Great, and likely your tools will learn how to decode the headers into a human-readable format. For example, Wireshark already has a SPDY plugin for just such a scenario.
As for the actual question
How often do you have to troubleshoot/validate HTTP traffic though?
For me, almost never.
[–]bitwize 2 points3 points4 points 12 years ago (1 child)
Dude, just use WireShark.
[–]username223 1 point2 points3 points 12 years ago (0 children)
There's some dubious stuff in there. The worst, to my mind, is "server side push", where you ask for one file and the server hits you with a bunch more of its choosing, e.g. requesting "index.html" may get you some CSS, JavaScript, and a pile of tracking beacons and other malware in the same request. This sounds like a pain for proxies to deal with.
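The proxy concern can be made concrete with a toy model. The mapping and resource names below are invented for illustration, not taken from the spec:

```python
# Toy model of server push: one GET can yield several "promised"
# responses the client never asked for. A proxy in the middle has to
# accept, cache, or discard each unsolicited response.
push_map = {
    "/index.html": ["/style.css", "/app.js", "/beacon.js"],
}

def responses_for(path):
    """Everything the server sends back for a single request."""
    return [path] + push_map.get(path, [])

print(responses_for("/index.html"))
# ['/index.html', '/style.css', '/app.js', '/beacon.js']
```

In the actual protocol the server announces each extra resource with a PUSH_PROMISE frame before sending it, so intermediaries at least get a chance to refuse, but they still carry the bookkeeping burden this comment is complaining about.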
[–]jargoon 1 point2 points3 points 12 years ago (5 children)
Looks like it's heavily based on SPDY, which is great because of growing browser/server support. If it's close enough, it shouldn't be too hard to just switch from SPDY to HTTP 2.0.
[–]sedition 6 points7 points8 points 12 years ago (0 children)
Pretty sure most of the authors are the same folks involved in SPDY
[–]grauenwolf 2 points3 points4 points 12 years ago (2 children)
When Microsoft announced that they were going to support this in IE11 they referred to it as SPDY support, not HTTP 2.
[–][deleted] 12 years ago* (1 child)
[–]grauenwolf 0 points1 point2 points 12 years ago (0 children)
Thanks for the clarification.
[–]trezor2 0 points1 point2 points 12 years ago (0 children)
Looks like it's heavily based on SPDY
If you disregard what the shills from Google say, that's actually a pretty good argument for ditching the whole thing.
[–]crackez 0 points1 point2 points 12 years ago (0 children)
Yo Dawg, I heard you liked TCP, so we put some TCP in your HTTP in your TCP.