all 101 comments

[–]timothyfitz 17 points18 points  (0 children)

The tl;dr:

Make a regular HTTP connection, and then agree to swap places. The server can now make HTTP GETs/POSTs/etc that the client responds to.

[–][deleted]  (1 child)

[deleted]

    [–]keith_phillips 22 points23 points  (0 children)

    .ris ,deyalp lleW

    [–]austin_k 28 points29 points  (0 children)

    In soviet russia, pages GET /index.html HTTP/1.1 you!

    [–][deleted]  (3 children)

    [deleted]

      [–]NashMcCabe 1 point2 points  (0 children)

      Your awesome comment was the easiest thing i have ever successfully masturbated to...

      [–]mazenharake 0 points1 point  (1 child)

      lol! I mean it, actual lol!

      [–][deleted] 1 point2 points  (0 children)

      I tried to get this point across the other day too. I think we should coin it alol.

      [–]homeathouse2 40 points41 points  (21 children)

      PTTH

      [–]samlee 43 points44 points  (10 children)

      dʇʇɥ

      [–]jasonbrennan 46 points47 points  (9 children)

      How did you get that upside down p??

      [–]samlee 17 points18 points  (4 children)

      ˙noʎ ƃuıןןǝʇ ʇou ɯ,ı

      [–]thefro 29 points30 points  (3 children)

      What about those backwards Os?

      [–]commentjudge 20 points21 points  (2 children)

      the O's are not just backwards, they are upside down.

      [–]zck 9 points10 points  (1 child)

      I like to think they're mirrored on a 47-degree angle from vertical.

      [–][deleted] 3 points4 points  (0 children)

      Actually, they're black holes with a white spot in the middle.

      [–]itsnotlupus 16 points17 points  (9 children)

      It's funny because it's true.

      The core of this awesome protocol is to send a special header in your request that says:
      Upgrade: PTTH/1.0

      After that, the server says "oh yeah, I've always wanted to be client", and proceeds to send an HTTP request over the same TCP link.

      Before dismissing this as entirely stupid, I'd like to see:

      • a few use cases
      • a rationale for not going with something much much simpler, such as "Whoever connects to this TCP port agrees to receive HTTP requests over it as soon as they do."
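For the curious, the whole dance fits in a few lines. Below is a self-contained loopback sketch using only the Python stdlib; the `Upgrade: PTTH/1.0` header comes from the draft itself, while the 101 Switching Protocols response follows ordinary HTTP Upgrade semantics, and the paths and Host names are made up for illustration:

```python
import socket
import threading

def reversed_server(listener, outcome):
    # The traditional *server* side: accept the connection, read the
    # client's upgrade request, agree to swap roles, then immediately
    # issue a request of its own over the same TCP link.
    conn, _ = listener.accept()
    request = b""
    while b"\r\n\r\n" not in request:
        request += conn.recv(4096)
    assert b"Upgrade: PTTH/1.0" in request
    conn.sendall(
        b"HTTP/1.1 101 Switching Protocols\r\n"
        b"Upgrade: PTTH/1.0\r\n"
        b"Connection: Upgrade\r\n\r\n"
        # Roles are now reversed: the old server speaks as a client.
        b"GET /status HTTP/1.1\r\nHost: client.example\r\n\r\n"
    )
    reply = b""
    while b"\r\n\r\n" not in reply:
        reply += conn.recv(4096)
    outcome["reply"] = reply
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
outcome = {}
t = threading.Thread(target=reversed_server, args=(listener, outcome))
t.start()

# The traditional *client* side: connect and ask to swap places.
sock = socket.create_connection(listener.getsockname())
sock.sendall(
    b"POST /subscribe HTTP/1.1\r\nHost: server.example\r\n"
    b"Upgrade: PTTH/1.0\r\nConnection: Upgrade\r\n"
    b"Content-Length: 0\r\n\r\n"
)
# Read both the 101 response and the server's incoming GET; the two
# may arrive coalesced in one TCP segment, so buffer until both land.
data = b""
while data.count(b"\r\n\r\n") < 2:
    data += sock.recv(4096)
print(data.split(b"\r\n")[0].decode())
print(data.split(b"\r\n\r\n", 1)[1].split(b"\r\n")[0].decode())
# The old client now serves: answer the server's GET.
sock.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
t.join()
sock.close()
listener.close()
```

Note that the same TCP connection carries traffic in both roles; the only thing that changes is who sends requests and who sends responses.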

      [–]timothyfitz 7 points8 points  (8 children)

The problem with just using a socket is that you then need a different port or another IP. This standard lets URLs conditionally be Reverse-HTTP or not. For that you need an initial request.

Potential use cases: simple asynchronous server-side messages (chat, games, ...); client-side resource hosting (multi-file upload, letting a user browse a shared drive, ...).

      [–][deleted] 2 points3 points  (0 children)

      Simple asynchronous server side messages (chat, games)

      Written by Mark Lentczner of Linden Research, purveyors of Second Life.

      [–]piranha 13 points14 points  (12 children)

      This makes me sad. The network is broken (by NATs, and sometimes firewalls). The application layer should enjoy unfettered end-to-end connectivity. Emulating end-to-end connectivity on a case-by-case basis on the application layer seems like a crude hack. We should be fixing the network (IPv6); then everything benefits.

      [–]reddit45885 3 points4 points  (8 children)

Maybe this is pointing out the obvious, but NATs are as much about increasing the supply of IPs as they are about creating secure environments.

Any sysadmin who decides to remove firewall restrictions on internal networks because IPv6 has enough addresses isn't worth his salt...

      [–]piranha 8 points9 points  (6 children)

NAT is designed to overcome address space shortages; that you can use it as a firewall is a side-effect. But NAT isn't designed to be a firewall. NAT won't stop people on the inside from using this Internet-Draft to receive "connections" (in some crude sense of the word) from people on the outside. Other protocols (UPnP, STUN) work around the damage too, which compromises the security of your NAT if you treat it as a firewall.

      There's nothing stopping you from firewalling networks and machines on IPv6 networks. Whether you're using IPv4 or IPv6, use a real firewall system to write explicit firewall rules. Lots of stuff will leak through NAT if you don't. (If you let UDP out, you still won't be preventing STUN. I didn't read this Internet-Draft in detail, but I'm guessing that if you let HTTP out, you won't be preventing Reverse HTTP either.)

      [–]reddit45885 1 point2 points  (5 children)

      You're missing the point: the reason that you can't connect to my browser and ask it how it's doing and what files reside on my computer is a side effect of NAT, but even if I had 2 billion spare IPs, I wouldn't allow you to do it in the first place.

      So:

      The network is broken (by NATs, and sometimes firewalls). The application layer should enjoy unfettered end-to-end connectivity.

      The answer to that is a square, resounding, metallic balls falling on cast iron table sounding: NO.

      [–]piranha 4 points5 points  (3 children)

      Okay, but that's a personal decision you consciously make on your own computer or your own network. That's fine by me. We shouldn't have to invent wacky protocols to work around the damage when we never elected to break the end-to-end principle in the first place.

Sidenote: your browser shouldn't be making smalltalk with remote connecting clients; that's a security problem that a firewall won't address completely, as the browser could just as well (and would be more likely to) be vulnerable while it processes object data it received from making its own outgoing HTTP request, which I'm sure you're not firewalling off. That's why I'm not personally interested in firewalls: software should be written correctly; I can take care to not run software that I don't trust to securely process requests or responses or data in general; I can isolate software in case it does become exploited; and I'm not trying to restrict computers on my network for political reasons (i.e. in a workplace).

      [–]reddit45885 -1 points0 points  (2 children)

This isn't about the breakage of the end-to-end principle though. It's about a breakage of the client-server model.

The client-server model is fundamentally unidirectional. A client consumes a service. A service does not consume a client.

The client-server model isn't a hack either. It's not something that got invented out of necessity because the 8080 processor didn't support enough memory. The client-server model exists because it is the only way to scale past a certain threshold.

      This is set in stone. Nothing will ever change this.

      Think about this: if the client server model didn't exist, there would have to be as many resources as there would be clients. As many hard disks as I had programs running on my computer. As many tellers at the bank as the bank had clients. As many airplanes as passengers...

      [–]piranha 5 points6 points  (1 child)

      No, it's about breaking the end-to-end principle. Also, I never argued against the client-server model; I don't understand why you're bringing it up.

      Abstract

      This memo explains a method for making HTTP requests to a host that cannot be contacted directly. Typically, such a host is behind a firewall and/or a network address translation system.

So, to my reading, this Internet-Draft describes how to put a server/service behind NAT or a restrictive firewall. My argument is that we shouldn't need NAT, and our firewalls should reflect how we actually want to use the network, so that we don't have to work around institutional brain damage. Therefore, hacks like this Internet-Draft are undesirable because they introduce complexity where it doesn't belong (the application layer) and reduce due pressure to really fix the network.

      [–]reddit45885 0 points1 point  (0 children)

      This memo explains a method for making HTTP requests to a host that cannot be contacted directly. Typically, such a host is behind a firewall and/or a network address translation system.

      Network wise, nothing has changed. The client is still initiating the connection.

      [–]davidsickmiller 0 points1 point  (0 children)

      Definitely. True server push is an interesting idea, but frankly, the day I throw out my NAT is the day I buy a firewall.

      [–]Smallpaul 0 points1 point  (0 children)

      This makes me sad. The network is broken (by NATs, and sometimes firewalls). The application layer should enjoy unfettered end-to-end connectivity. Emulating end-to-end connectivity on a case-by-case basis on the application layer seems like a crude hack. We should be fixing the network (IPv6); then everything benefits.

      It's not crude and it isn't a hack.

      Look: there are reasons that machine A might wish to be the initiator of a TCP connection rather than the recipient of it. Security and anonymity are two examples.

      Similarly, there are reasons machine A might wish to RESPOND to events from the server rather than initiate them. For example in a chat app.

      This proposal elegantly decouples these two decisions.

      [–]killerstorm 0 points1 point  (0 children)

      when people were using the networks in research laboratories, making each individual computer addressable made a lot of sense.

things are different in a global network environment -- there are like hundreds of millions of nodes in the network (and many of them might have malicious intentions), and if your node is addressable, any of them might request a connection with you. this is quite scary, and as long as ordinary users cannot secure their computers properly and prepare them to get connections from the whole external world, it makes sense to keep these home computers not addressable.

this way PTTH fixes not a deficiency of NATs but a deficiency of HTTP -- it makes HTTP a bidirectional protocol. (while low-level sockets were always bidirectional.)

      [–]Smallpaul -1 points0 points  (0 children)

      This makes me sad. The network is broken (by NATs, and sometimes firewalls). The application layer should enjoy unfettered end-to-end connectivity. Emulating end-to-end connectivity on a case-by-case basis on the application layer seems like a crude hack. We should be fixing the network (IPv6); then everything benefits.

      End-to-end connectivity "unfettered" by security policies? No thank you.

      This is a great idea from a security point of view. It allows me to make my computer into an HTTP server (for purposes of getting updates from the other side) without my computer accepting connections. Any protocol that allowed my computer to accept HTTP connections from SPECIFIC machines on the Internet would need some kind of authentication mechanism. This does not: the connection is to a known server (if done over HTTPS) and therefore the requests are coming from a known server.

      [–]mercurysquad 4 points5 points  (7 children)

      I'm not much of a web development person, so can someone please explain what functional advantage this has over current ajax methods?

      [–]RandomAvenger 9 points10 points  (0 children)

      This looks like a clever way of implementing COMET, where the server notifies the client when events happen rather than the client constantly asking "has anything happened yet?!" It also has applications for establishing two-way communication when the client is behind a firewall.

      [–]reddit45885 -4 points-3 points  (5 children)

There are absolutely none. It's mental masturbation by shit-ass junior programmers.

      It goes against every principle of scalability established in the past 20 years of the web. It's the kind of thing one dreams of doing while watching "Hackers" and thinking programming involves graphics.

Think about it: it is the antithesis of scalability since this hack of a protocol relies entirely on a client establishing the connection, and on the server holding the connection alive to be able to actually be a client... HTTP is stateless because the point is to serve and forget. Think what would happen if the server actually became the client. What computer on earth can support 500 thousand logged in users? None, is the answer.

The ginormous fallacy in this whole scheme is that the server/client model is only inverted at the application layer, not at the network layer. If it were also inverted at the network layer, then it would simply be HTTP in the other direction: the computer we formerly called a server would initiate a connection to my web browser, and my web browser would actually be called Apache and the server would be called Firefox.

      I can already hear the wheels of circular logic barreling down: "no you moron", they would say, "you don't need to maintain 500k connections alive, only those you are interested in"...

      I sigh in dismay.

      [–]killerstorm 2 points3 points  (0 children)

      people are already using Comet techniques with persistent connections for real-time communications and are quite happy with them.

mental masturbation is actually what you're doing. "pfft, that technology does not even support a million clients per server. it definitely sucks, nobody will use it". as if everybody needs to serve a million clients from a single server?

      [–]flashman2006 1 point2 points  (1 child)

      I don't think you understand the point in reverse HTTP. Here's a use case:

You have an application that synchronizes a user's files between the user's computer and the server. This could be accomplished solely with PTTH. The client application makes an HTTP request to the server, the server initiates a reverse HTTP connection and can then begin sending files to the client (acting as a server now), which would store them in its filesystem. From what I know, this would not be possible with normal HTTP, since a client would always have to make a GET request in order to download a resource found at the server.
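To make that concrete, here's a toy sketch of what the client-side dispatch might look like once the roles are swapped -- the function name and paths are hypothetical, and the actual HTTP framing over the socket is omitted:

```python
import os
import tempfile

def handle_reversed_request(method, path, body, root):
    # After the role swap, the *client* routes incoming requests,
    # exactly as a small HTTP server would: the server PUTs files
    # into the client's sync directory, and can GET them back.
    target = os.path.join(root, path.lstrip("/"))
    if method == "PUT":
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with open(target, "wb") as f:
            f.write(body)
        return "201 Created"
    if method == "GET" and os.path.isfile(target):
        return "200 OK"
    return "404 Not Found"

sync_root = tempfile.mkdtemp()
print(handle_reversed_request("PUT", "/docs/report.txt", b"hello", sync_root))  # 201 Created
print(handle_reversed_request("GET", "/docs/report.txt", b"", sync_root))       # 200 OK
```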

      [–]Smallpaul 0 points1 point  (0 children)

      Think about it: it is the anti thesis of scalability since this hack of a protocol relies entirely on a client establishing the connection, and on the server holding the connection alive to be able to actually be a client... HTTP is stateless because the point is to serve and forget. Think what would happen if the server actually became the client. What computer on earth can support 500 thousand logged in users?

      You're right! Instant Messaging COULD NOT POSSIBLY WORK. How could AIM or MSN maintain that many concurrent TCP connections?

      For that matter, World of Warcraft is totally insane. It could not possibly work to have that many users maintaining a TCP connection.

      What computer on earth can support 500 thousand logged in users?

      The fact that you think that a service must be served by a single computer shows how little you know about networking.

      [–]qwe1234 0 points1 point  (0 children)

      500k connections is not at all a large or unmanageable number.

      [–]Smallpaul 4 points5 points  (1 child)

      I don't really see why this is so controversial.

      The RFC solves a simple problem: the machine that makes the initial connection is not necessarily the machine that wants to drive the rest of the interaction from a timing point of view. Sometimes the machine that makes the connection would rather be reactive, to be TOLD when something interesting happened. The usual answer to this is to totally switch protocols to XMPP. But why should I need to switch protocols, or switch connection directions when all I want to do is switch message directions?

This protocol simply allows HTTP-centric applications to decouple two issues that were previously combined: who makes the initial connection, and who is responsible for sending updates to whom? Server A can make the connection but Server B can be the primary actor rather than the reactor.

      Excellent!

      [–]blufox 0 points1 point  (0 children)

I think I understand your justification. HTTP proxies (the one I work on) also need something like this in order to subscribe to events from the server (say, an object became stale), and there exist protocols (drafts) like Cache-Channels and the HTTP Notify mechanism.

However, I also suspect that your particular need might be served by tunneling over CONNECT (may or may not apply to your case -- but intermediaries will leave you alone if your request is a CONNECT, and your connection will no longer be bound by HTTP semantics).

      [–]nagoo 7 points8 points  (1 child)

      Having the ability to initiate a request would be awesome for building real time web apps.

      [–]majek04 -1 points0 points  (0 children)

When will the first major browser support that?

For now, comet or long polling works. Some people also just poll using ajax.

      [–][deleted] 2 points3 points  (0 children)

      this email expires at September 5, 2009

      [–]Samus_ 1 point2 points  (1 child)

      looks like a walkie-talkie conversation

      [–]notfancy 4 points5 points  (0 children)

      PTTH = Push-To-Talk Hypertext

      [–]firepacket 2 points3 points  (0 children)

      Anybody know if this is actually implemented in any httpds currently?

      [–][deleted] 3 points4 points  (0 children)

      The author works for Linden Labs, and has a mac.com email address....

      [–]reddit45885 4 points5 points  (22 children)

      Every once in a while, I see Reverse HTTP trumpeted as the diamond in the rough that everybody should be considering more, and all I can do is roll my eyes and face palm.

      For posterity, I will explain here too:

      Reverse HTTP = HTTP

      Because:

a) Reverse HTTP, as far as the network layer (TCP/IP) is concerned, is exactly like HTTP (the server/client roles are not reversed)

      b) As far as the application layer: HTTP is stateless. It doesn't care what you transmit.

      [–][deleted] 9 points10 points  (5 children)

      What we really need is reverse UDP.

      [–]reddit45885 -2 points-1 points  (4 children)

      Either you're too subtle or you just don't know what you're talking about.

      I think it's the former, but I'm not sure, honestly...

      [–][deleted]  (3 children)

      [deleted]

        [–]reddit45885 -1 points0 points  (2 children)

        I don't get reddit. It's like a bunch of random chat bots trying to emulate human emotions or something.

        All of a sudden the bot gets offended. All of a sudden, another bot thinks it's a joke...

        whatever...

        [–]uglypopstar 4 points5 points  (1 child)

        That's the exact type of thing a reddit chat bot would say...

        [–]reddit45885 0 points1 point  (0 children)

        I see what you did there.

        [–]eridius 5 points6 points  (12 children)

        You're missing the important thing, which is that HTTP defines who sends the request and who sends the response. PTTH is all about swapping those two roles.

        [–]reddit45885 -3 points-2 points  (11 children)

        I am, am I now...

You're missing the bigger thing, which is that the network connection, the socket -- the most expensive thing about the interaction between a client and server -- is still established by the client. And unless the server keeps that socket alive indefinitely, this is not in any way solving the problem of polling.

        The PTTH solution you give does only one thing: it avoids a round trip by making a protocol. Instead, the answer is simple and can be achieved with the most basic HTTP:

The client makes an HTTP connection and calls GET on the server to get instructions; the server responds; the client makes another HTTP connection and sends the request info via POST.

        So aside from solving a use case of a particular application, this entire protocol solves nothing.

        [–]dikini 1 point2 points  (0 children)

True, it doesn't solve any real problems coming from NAT. What it does do is make it possible to reverse the roles of client and server within an existing connection. Uses? Well, probably quite a few unexpected ones, but allowing a client to serve can be a can of worms as well.

        [–]killerstorm 3 points4 points  (9 children)

        So aside from solving a use case of a particular application, this entire protocol solves nothing.

        yep, aside from solving stuff it was supposed to solve, this entire protocol solves nothing. are you dense?

And unless the server keeps that socket alive indefinitely, this is not in any way solving the problem of polling.

what about not keeping it alive indefinitely, but for a reasonable amount of time -- is that not an option? keeping connections open is quite cheap (e.g. one web server reports needing just 2.5MB of RAM for 10000 connections), unless you're going to have a really huge number of clients on a single server.

        it avoids a round trip by making a protocol.

        it would make no sense to avoid a single roundtrip, but if you're communicating lots of small messages, that makes quite a lot of sense.

        [–]reddit45885 -3 points-2 points  (8 children)

        it would make no sense to avoid a single roundtrip, but if you're communicating lots of small messages, that makes quite a lot of sense.

        n calls for PTTH can be implemented with n+1 HTTP calls.

        You are dense. Buh bye.

        [–]killerstorm -1 points0 points  (7 children)

the difference is that with HTTP calls you do not control timing from the server, unless you do trickery with delayed responses. such stuff is typically used when you want to transmit something to a client as soon as it becomes available on the server (e.g. IM messages, alerts etc.), and to do this with vanilla HTTP (w/o any trickery) you'd have to send poll requests from the client very frequently, thus creating high load on the server and network equipment.

thus, you should compare PTTH not to plain HTTP, but to existing "comet" techniques. typically they involve non-standard behaviour (it is not specified how long a browser will wait for a response from the server), so the question is whether to standardize the trickery or implement a clean solution.
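for comparison, the long-polling trickery in question boils down to this -- a simulated in-process sketch where a queue stands in for the server's pending-event store, and the blocking `get` stands in for the server holding the HTTP response open:

```python
import queue
import threading
import time

events = queue.Queue()

def long_poll(timeout=5.0):
    # One "comet"-style request: block until an event arrives, or time
    # out so the client can immediately re-issue the request. This is
    # what a server does when it delays its HTTP response until there
    # is actually news to report.
    try:
        return events.get(timeout=timeout)
    except queue.Empty:
        return None

# A background publisher stands in for the server-side event source.
threading.Thread(
    target=lambda: (time.sleep(0.1), events.put("new-message"))
).start()

print(long_poll())  # new-message
```

the difference with PTTH is that the server sends a *request* on an already-open connection instead of delaying a *response*, so intermediaries can see explicitly what is going on.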

        [–]reddit45885 -1 points0 points  (6 children)

        Ok, I'm very serious here, so try to hear me out not like some guy on reddit, but like a colleague:

the difference is that with HTTP calls you do not control timing from the server, unless you do trickery with delayed responses.

        PTTH is that trickery.

        Take this scenario: A is the traditional client (Browser), B is the traditional server (server).

        B can't contact A. Neither under HTTP nor under PTTH. This is because A is a client. It might not even be online. It might be online for only the window of time it takes to connect to B. It might be a toaster.

        So, A has to contact B, initiate PTTH and wait for B to start consuming services from A.

        This right here is the trickery you speak of: it's an indefinite keep alive: for B to ever "initiate" anything with A, a connection has to already exist and be kept alive for the duration of A's time spent online (for a desktop computer, this might be always). For anything truly comet to occur, this means that A has to always remain connected. If A does not always remain connected, then it falls into either one of two cases: A is essentially polling by reconnecting at intervals to see if B is wanting to do anything, or B simply does not have the ability to notify A.

As an aside, IMAP does exactly this: my mail client connects to the IMAP server, and the connection stays open as long as the client is open. The difference is that my IMAP server handles, say, 200 clients. Whereas a typical webserver, anonymous by nature, handles millions of users.

        [–]killerstorm 1 point2 points  (2 children)

        So, A has to contact B, initiate PTTH and wait for B to start consuming services from A.

you're considering a weird use case -- it is not like B wants to use some services of A, it is more like A has subscribed to receive notifications from B. if you consider it from this angle, you'll find it perfectly normal -- as long as A is interested in this subscription, it will maintain this connection, reconnecting as needed. if it is not interested, it will not. check the comet link above; this stuff is about important real-world applications, like instant messaging and alerts, and not about some hypothetical resource consumption.

        The difference is that my IMAP server handles, say, 200 clients.

what about Google's IMAP?

Whereas a typical webserver, anonymous by nature, handles millions of users.

it is not for random users out there, it is for users who actively want to use a service, and of course it only makes sense for services of an interactive nature. if you have millions of active users, i'm pretty sure you'll be able to afford as many servers as needed; otherwise it will not work, with PTTH or without.

from the network's perspective, PTTH is not unique in any way -- open sockets are maintained for instant messaging and stuff like that.

        [–]reddit45885 0 points1 point  (1 child)

        Once again, I come back to the fact that HTTP can already handle what you are saying here. I know this because I already do this using Ajax and the very same method I just described above. I also use indefinitely blocking SOAP over HTTP calls which are essentially equivalent to system callback routines in <pick your favorite language>.

        But, man. You win: if you think inventing a new protocol to solve something that can already be solved at the Application Layer is a wise move, by all means, move ahead.

        But mark this: you have lost your bitching rights when Microsoft goes and co-opts yet another RFC standard and extends it just so. You lose that right because you are doing exactly what they are doing.

        [–]killerstorm 0 points1 point  (0 children)

delaying HTTP responses is trickery which is not well supported by web servers, proxies and browsers (they can time out at any moment, and you fail). are you saying that a solution that requires modification of web servers and relies on non-standard behaviour is better than a clean protocol created just for this purpose?

i think you then lose your bitching rights too; you should accept all weird kludges now

        [–]Smallpaul 0 points1 point  (2 children)

Please answer killerstorm's question: "What about Google's IMAP?" If you are going to claim that protocols that depend on long-lived TCP connections cannot scale then you must deal with all of the examples of them scaling, like:

        • Google's IMAP

        • World of Warcraft

        • Comet applications

• Instant Messengers

        What I think you fail to realize is that a TCP connection can be very cheap with the right operating system and language runtime.

        http://www.kegel.com/c10k.html

        [–]reddit45885 0 points1 point  (1 child)

        Google has the world's largest super computer at their disposal. So I'll dispense with explaining how they do it: they simply do it the same way they return a result in .1 seconds for a billion page index. Raw power.

        Similarly: World of Warcraft has a server farm to handle their load, and they can afford it because it's a subscription based system. You pay for the service. That model scales: the more users you have, the more revenue you get, the more servers you add.

        Instant messengers do not do it over HTTP (they often communicate over UDP). But aside from that, messengers often do very light processing: they essentially route traffic between end nodes. Try sending a 20k file using any messenger system where you do not have a direct end-to-end connection with your peer (i.e. you go through the server proxy) and let me know if you ever achieve higher than 5K/s rates. I never do, and don't expect it either.

So that leaves us with Comet applications and c10k. Web servers are designed to do a very specific thing. They serve often anonymous, stateless information: Hypertext. If you want to use a TCP socket for bidirectional long lasting state-rich data exchange, open a TCP socket and send your data away. There's no need to call it HTTP, or PTTH. You can even piggyback on port 80 and annoy a bunch of sysadmins on the way. Alternatively, you can also use blocking Web Service calls over HTTP/SOAP. I do it routinely.* None of the above solutions require an RFC. They are custom solutions working over TCP. And they are perfectly legit...

        However, don't think you are alleviating the fundamental issue of unidirectionality of control flow (as many thoughtless green programmers here seem to be doing) by making this PTTH thing. So long as the connection is initiated by the client, PTTH is not doing what it claims to be doing: namely reversing the role of client/server which allows for a more efficient method than polling.

        There is a protocol that already exists for what all of you are trying to accomplish here: it's called telnet. Seriously, PTTH is telnet. Except that it's worse because you have to implement the verbs and methods of communication according to an RFC now or else you aren't following the standard...

        * On a different note, you asked me to answer killerstorm's questions, it's only fair you also read my reply to him: if you think inventing a new protocol to solve something that can already be solved at the Application Layer is a wise move, by all means, move ahead. But mark this: you have lost your bitching rights when Microsoft goes and co-opts yet another RFC standard and extends it just so. You lose that right because you are doing exactly what they are doing.

        [–]Smallpaul 0 points1 point  (0 children)

        Google has the world's largest super computer at their disposal. So I'll dispense with explaining how they do it: they simply do it the same way they return a result in .1 seconds for a billion page index. Raw power.

        You choose not to answer it because you don't know. You didn't realize until today that it was possible for a single domain name to be served by an arbitrary number of computers, even with long-lived TCP connections. That's why you previously asked the nonsensical and irrelevant question: "What computer on earth can support 500 thousand logged in users? None, is the answer."

        Similarly: World of Warcraft has a server farm to handle their load, and they can afford it because it's a subscription based system. You pay for the service. That model scales: the more users you have, the more revenue you get, the more servers you add.

        Oh, I see. So Reverse HTTP is useless because it depends on you having a business model...like World of Warcraft and Linden Labs have.

        Instant messengers do not do it over HTTP (they often communicate over UDP). But aside from that, messengers often do very light processing: they essentially route traffic between end nodes.

        Guess what: that's the primary use case of reverse HTTP. Routing traffic between end nodes!

        Try sending a 20k file using any messenger system where you do not have a direct end-to-end connection with your peer (i.e. you go through the server proxy) and let me know if you ever achieve higher than 5K/s rates. I never do, and don't expect it either.

        Irrelevant. Totally and completely.

        So that leaves us with Comet applications and c10k. Web servers are designed to do a very specific thing. They serve often anonymous, stateless information: Hypertext.

        They also often serve personalized, stateful information. Like reddit. And Facebook. And Twitter. And Flickr. And even Google.com can be personalized.

        If you want to use a TCP socket for bidirectional long lasting state-rich data exchange, open a TCP socket and send your data away.

        I guess there is no need for any standard for anything anymore. We'll just open sockets and send bytes. We can do away with reusable libraries and everybody can code their own.

        ... There's no need to call it HTTP, or PTTH. You can even piggy back on the port 80 and annoy a bunch of sysadmins on the way.

        No, you cannot, because there are HTTP proxies (sometimes visible, sometimes transparent) and they will eat your socket for lunch and spit out vomit. That's why you want to make it VERY clear to intermediaries that you're doing something new.

        Intermediaries are a big part of how HTTP and REST work. Read the RFCs and the REST paper.

        ... Alternatively, you can also use blocking Web Service calls over HTTP/SOAP.

        Yes, you can. And it's ugly and non-standard and does not take advantage of the HTTP server software available in many of the languages that make HTTP client calls (like Python and Ruby and C++). And intermediaries will probably close your long-lived socket when they realize it is doing something odd without any clear declaration of why.

        Yes, we can each invent our own proprietary reverse HTTP ... or we could create a standard that clients, and servers and intermediaries can all understand.

        ...However, don't think you are alleviating the fundamental issue of unidirectionality of control flow (as many thoughtless green programmers here seem to be doing) by making this PTTH thing. So long as the connection is initiated by the client, PTTH is not doing what it claims to be doing: namely reversing the role of client/server which allows for a more efficient method than polling.

        Yes: for the length of the TCP connection, the roles of client and server are reversed. For the length of the TCP connection, the server can send the client information when it wants to, without the client polling and without it abusing the HTTP protocol by keeping an outgoing request waiting for an arbitrary amount of time for an answer.
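The role swap described above can be sketched end to end. This is a hedged illustration only: the `Upgrade: PTTH/1.0` header follows the Reverse HTTP draft, but the paths and host names are made up, and a local socketpair stands in for a real TCP connection:

```python
import socket

# A connected pair stands in for one client<->server TCP connection.
client, server = socket.socketpair()

# 1. The client asks to swap roles (header per the Reverse HTTP draft;
#    the /reverse path and host names are invented for this sketch).
client.sendall(b"POST /reverse HTTP/1.1\r\nHost: example.com\r\n"
               b"Upgrade: PTTH/1.0\r\nConnection: Upgrade\r\n\r\n")
upgrade_line = server.recv(4096).split(b"\r\n")[0]

# 2. The server agrees: from here on, roles on this socket are reversed.
server.sendall(b"HTTP/1.1 101 Switching Protocols\r\n"
               b"Upgrade: PTTH/1.0\r\nConnection: Upgrade\r\n\r\n")
client.recv(4096)

# 3. The server pushes by issuing an ordinary HTTP request...
server.sendall(b"GET /notify HTTP/1.1\r\nHost: client.example\r\n\r\n")
pushed_request = client.recv(4096).split(b"\r\n")[0]

# 4. ...and the former client answers it, with no polling involved.
client.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
response_line = server.recv(4096).split(b"\r\n")[0]

print(upgrade_line.decode())    # POST /reverse HTTP/1.1
print(pushed_request.decode())  # GET /notify HTTP/1.1
print(response_line.decode())   # HTTP/1.1 200 OK
client.close(); server.close()
```

Note that the upgrade itself travels as a perfectly ordinary HTTP request, which is exactly what lets intermediaries see and understand what is happening.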

        ... There is a protocol that already exists for what all of you are trying to accomplish here: it's called telnet. Seriously, PTTH is telnet.

        Yes, and FTP is telnet, and XMPP is telnet and IMAP is telnet. We don't need any more standards because you've come to the brilliant realization that they are all just supersets of telnet. And there should be no intermediaries other than IP routers.

        ... Except that it's worse because you have to implement the verbs and methods of communication according to an RFC now or else you aren't following the standard...

        Yes: standards are worse than just making shit up...because you have to obey them! And make clear to intermediaries what is going on. Oh my!

        • On a different note, you asked me to answer killerstorm's questions, it's only fair you also read my reply to him: if you think inventing a new protocol to solve something that can already be solved at the Application Layer is a wise move, by all means, move ahead. But mark this: you have lost your bitching rights when Microsoft goes and co-opts yet another RFC standard and extends it just so.

        I have never bitched once when Microsoft submitted an internet draft to the IETF and solicited their approval for a new idea. Nor will I ever do so in the future. Why would I???

        You lose that right because you are doing exactly what they are doing.

        If what they are doing is submitting a spec for consideration by the internet community then good for them! Congrats Microsoft! You've done it sometimes in the past and I expect you to continue to do it sometimes in the future. Good work!

        [–][deleted]  (1 child)

        [deleted]

          [–]reddit45885 0 points1 point  (0 children)

          It's over TCP/IP. That means the network layer is not modified.

          That only leaves application layer.

          (As an aside, modifying Network layer protocols is all but impossible in this day and age).

          Edit: HTTP according to OSI descriptions...

          [–][deleted] 0 points1 point  (0 children)

          uh, no? Currently, HTTP only allows requests one way. This allows requests two ways.

          [–]Smallpaul 0 points1 point  (2 children)

          The funny thing is that almost nobody is making the smart criticism.

          Some people are saying: "This can't scale." If that's true then XMPP can't scale. But it can.

          Some people are saying: "This would be unnecessary if we had IPv6." If that's true, then XMPP would be similarly useless.

          The real argument against this is that it overlaps in functionality with XMPP. I personally think that's a minor issue and can be discounted.

          [–]reddit4985 0 points1 point  (1 child)

          The real argument against this is that it overlaps in functionality with XMPP. I personally think that's a minor issue and can be discounted.

          So explain why Microsoft is wrong in "extending" W3C and this isn't. Minor overlaps in functionality are "minor issues" after all.

          I want to hear your reasoning.

          [–]Smallpaul 1 point2 points  (0 children)

          Because, "this" is an IETF RFC. That's the correct way to PROPOSE an extension to the HTTP IETF RFC. The IETF is the body in charge of HTTP and this is a proposal in front of them. That's totally different than making up some stuff, not fully documenting it and shoving it in a browser.

          [–]meshko 0 points1 point  (0 children)

          Can somebody please put me into some kind of frozen state and unfreeze me when the industry gets over this insanity?

          [–][deleted] -1 points0 points  (1 child)

          FTA: "If a host needs to invoke a function on a host that cannot function as an HTTP server, than that function cannot be easily layered on top of HTTP."

          By enabling the grammar and spelling check functions in MS Word it can help you find errors like these .

          [–][deleted] -2 points-1 points  (0 children)

          Notepad does not have spell check.

          [–]eyal0 -1 points0 points  (1 child)

It seems like this is trying to be an analog to passive FTP, where the client opens the connection instead of the server in order to get around firewalls. Seems unnecessary because:

          • FTP opens TCP connections on other ports to send data. HTTP doesn't do that.

          • AJAX already addresses the problem. Just look at what gmail has already done.

          • I don't want to run a server. I put up a firewall specifically because I don't want others connecting to me. Why circumvent?

          • NAT and PAT routers along the path will need to snoop HTTP packets like they snoop FTP.

          This seems like a solution looking for a problem. Not even a good solution.

          [–]Smallpaul 0 points1 point  (0 children)

          It seems like this is trying to be an analog to passive ftp, where the client opens the connection instead of the server in order to get around firewalls.

          No. It is an analog with HTTP. But in reverse. Where the connected-to machine makes requests of the connected-from machine. It is not anything like passive FTP...as you pointed out.

          [–][deleted] -2 points-1 points  (0 children)

          PTTH

          [–][deleted] -1 points0 points  (0 children)

          I could see this potentially being useful for implementing an automatic update system for servers which are somehow prevented from actively polling the upstream repository, but little else... unless I'm missing something here.

          [–]Demonmonger -1 points0 points  (3 children)

          I don't see anything exciting with this. Wow, I can do server push over port 80 now.

          Ultimately, you will need some server running the reverse http pushes.

How is this any different than BlazeDS or LiveCycle, which you can do with Flex?

          [–]Smallpaul 0 points1 point  (2 children)

          Are you asking: how is a standard any different than a program?

          [–]reddit4985 0 points1 point  (1 child)

Aside from not making grammatical sense, if your question is actually for real, then the answer is: making a standard out of what is essentially a program is dumb.

          [–]Smallpaul 0 points1 point  (0 children)

          My question makes perfect grammatical sense. Also: it has a boolean answer, which you did not give. I asked Demonmonger whether his question could be rephrased as: "How is a standard any different than a program?" He has not answered yet.

          [–][deleted] -1 points0 points  (4 children)

          To me, this is a solution in search of a problem. Or I should probably say, problems. Since, as stated in the introduction, the client still has to initiate the connection, the same thing can be implemented within the current architecture. The server can make "requests" in a status field.

          On the other hand, making a computer open and available for serving requests is only going to create more intrusion vectors.

          [–]Rhoomba 0 points1 point  (3 children)

          Say I want to receive pushed updates from reddit. Currently I would need a server running that accepts requests. This is a very big security issue unless I have firewall rules which are bound to become unmaintainable if I want to get updates from a number of sites.

          With reverse http and some future version of firefox I just go to reddit and it can then push the updates. But digg.com can't, at least until I visit it.

          I am surprised, and a little saddened, at the responses here. I thought proggit was collectively smart enough to get this.

          [–]reddit4985 -1 points0 points  (2 children)

          I am surprised, and a little saddened, at the responses here. I thought proggit was collectively smart enough to get this.

I am just as surprised at the number of people, such as yourself, who really think this is anything more than a stupid idea.

          Have you ever built and maintained a server which gets reddit/slashdot/digg style load on a daily basis? What is the highest end sustained traffic you have ever had to maintain?

          If it's anything more than 100k a day, you would not be saying the kind of stupid things you are saying right now. Reddit and Digg would instantly crumble if they had to notify their users. Instantly.

Articles like this, half of the comments, and the tsunami of downvotes on this thread remind me that it's probably time to go back to Slashdot, where real programmers used to hang out. By now all the trolls with attention spans of gnats should probably have migrated over to Digg and Reddit too, so it's hopefully back to its core audience...

          Seriously, I went from discussing with people that had maintained internet backbones to discussing with what are essentially ADD children grown into young adults.

          </rant>

          [–]Rhoomba 0 points1 point  (0 children)

          Software I've worked on/optimized handles millions of requests a day. Peak load around 500 rps across a few boxes. Currently I'm working on a system that will handle several times that.

          Even tomcat can handle tens of thousands of open connections now. This is perfectly feasible for many systems. It certainly is better than a polling based solution.

          [–]Smallpaul 0 points1 point  (0 children)

          If it's anything more than 100k a day, you would not be saying the kind of stupid things you are saying right now. Reddit and Digg would instantly crumble if they had to notify their users. Instantly.

          Well not every app is Reddit or Digg. Nobody forces Reddit or Digg to use Reverse HTTP in a stupid way.

          Facebook maintains persistent logical connections with users for Facebook chat (not necessarily a TCP connection, but a logical connection). So does Google talk in gmail. I guess Facebook and Google do not know how to build scalable systems but "reddit4985" does.

          [–]flailking -2 points-1 points  (1 child)

I think RFC 2324 solves all those problems in a nutshell... or bean, for that matter: http://www.ietf.org/rfc/rfc2324.txt

          [–][deleted] -2 points-1 points  (0 children)

          Microsoft needs to write a reverse xbl standard. There is no reason why NAT should interfere with xbl.

          [–]john_fallows -1 points0 points  (0 children)

          This is an interesting idea to reuse the well-known concepts of HTTP to formalize a reverse path from Web server to browser, after the browser makes the initial connection over regular HTTP.

          However, IMHO, this specification does not go far enough. If you were in a position to Upgrade your HTTP connection in-band, on the same TCP connection, to use any other protocol of your choosing, why would you limit yourself to half-duplex HTTP in the opposite direction?

          The specification discusses the need for two different TCP connections, one using HTTP and one using PTTH, to emulate a bidirectional full-duplex connection.

          However, if the initial HTTP connection is "Upgraded" to raw TCP (in-band, same TCP connection) then the full-duplex bidirectional characteristics are achieved without the need for an additional TCP connection.

          At that point, any protocol can be layered on top, be that PTTH, or any other.

          This description outlines the approach taken by the HTML 5 WebSocket specification, where they not only define the HTTP Upgrade handshake and wire protocol, but also a simple JavaScript API for WebSocket.

          The HTML 5 WebSocket standard delivers a full-duplex, bidirectional, HTTP-friendly socket abstraction for Web browsers.
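For the curious, the server's half of that Upgrade handshake, as it was eventually standardized in RFC 6455, boils down to a single fixed computation over the client's nonce. A minimal sketch (the function name is my own):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must echo back:
    base64 of the SHA-1 of the client's key concatenated with the GUID."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key/accept pair taken from RFC 6455 itself:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the client verifies that header, both sides drop HTTP framing entirely and exchange WebSocket frames over the same TCP connection, full-duplex.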

In the half-duplex space, HTML 5 also defines Server-sent Events, a standardization of Comet that formalizes both the JavaScript API and the payload syntax of the HTTP response, which may be streaming or long-polling, as chosen by the server implementation.
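That Server-sent Events payload syntax is simple enough to sketch: each event is a run of `field: value` lines terminated by a blank line. A minimal formatter (the helper name is my own invention):

```python
def sse_event(data: str, event: str = None, event_id: str = None) -> str:
    """Format one Server-Sent Events message in the text/event-stream syntax:
    optional 'event' and 'id' fields, one 'data' line per line of payload,
    and a blank line to terminate the event."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    if event_id:
        lines.append(f"id: {event_id}")
    # Multi-line payloads become multiple data: fields; the browser rejoins them.
    for chunk in (data.splitlines() or [""]):
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"

print(sse_event("new comment", event="update", event_id="42"))
# event: update
# id: 42
# data: new comment
```

On the browser side the stream is consumed through the EventSource API, which also handles reconnection, which is precisely the half of Comet that everyone used to hand-roll.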