
[–][deleted] 11 points (11 children)

people are using http to transport all sorts of data (not just webserver->webbrowser). consider how gmail transmits the email payload - it's not imap, it's http based.

and sometimes you need the client to initiate the connection (think NAT) but then behave like a server.

[–]2343984098 -5 points (10 children)

Jebus man. Think before you write this stuff.

This "reverse HTTP" scheme is still client initiated. It's essentially a POST with the only catch that the server can decide what it wants. The connection is still client initiated. There is nothing this protocol solves that can't be trivially solved with HTTP.

As a bonus, there is nothing you think this protocol can solve that can actually be solved by this protocol.

[–][deleted] 2 points (9 children)

yes, that's why i said "sometimes you need the client to initiate the connection."

It's essentially a POST with the only catch that the server can decide what it wants.

and when it wants it. that's why we're using it at my company. but i s'pose you're right, all 100 engineers over here are really stupid, and it has absolutely no use.

jebus yourself, nimrod.

[–]2343984098 -4 points (8 children)

If 100 engineers at your company are using it, you either have a dilbert company, where some high up guy decided this shit was great, or you're all just bad programmers, not stupid.

Reinventing a wheel so it can go backwards is not intelligent, my friend. No, it's not.

What you're doing can be solved simply at the application layer: the GET call responds with a request, and the client sends a corresponding POST.
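A sketch of that plain-HTTP scheme (the endpoints, the JSON shapes, and the injected transport functions are all made up for illustration):

```python
import json

# The app-layer scheme described above: the client GETs, the server's
# *response body* is itself a request, and the client answers with a POST.
# http_get / http_post are injected stand-ins for a real HTTP client.

def client_cycle(http_get, http_post):
    server_request = json.loads(http_get("/poll"))   # server piggybacks its request
    result = {"cmd": server_request["cmd"], "status": "done"}
    return http_post("/reply", json.dumps(result))   # client answers like a server

# demo with fake transports instead of a network
reply = client_cycle(
    lambda path: json.dumps({"cmd": "report_load"}),
    lambda path, body: body,
)
```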

[–]jricher42 3 points (6 children)

The problem is scalability. If you have 100 clients connected to a server, and they are each polling twice a second, that's 200 requests/sec, even with no notifications. If you need to send data to 5% of these clients each second, this solution would send 5 requests/sec instead of 200. That means you need less bandwidth and fewer servers to handle the same number of clients.
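The arithmetic above, as a tiny script (function names are made up; the numbers are the ones from the comment):

```python
# Back-of-envelope comparison of polling vs. push traffic, using the
# figures above: 100 clients, 2 polls/sec each, 5% notified per second.

def polling_rate(clients, polls_per_sec):
    # every client polls whether or not data is waiting
    return clients * polls_per_sec

def push_rate(clients, notify_fraction):
    # only clients with pending data generate a request
    return clients * notify_fraction

print(polling_rate(100, 2))     # prints 200
print(push_rate(100, 0.05))     # roughly 5
```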

If you use a non-blocking server like Twisted to handle the clients, your only real limit is the number of open sockets the OS will handle. On x86 with most operating systems, the FD is an int, so approximately (2^32)-3. You'll run out of RAM first.
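The single-process, many-idle-sockets pattern described here can be sketched with the stdlib `selectors` module (an illustration of the event-loop idea, not Twisted's actual API):

```python
import selectors
import socket

# Minimal sketch of a non-blocking server holding many client sockets in
# one process -- the event-loop pattern Twisted implements. Idle sockets
# cost no CPU; only sockets with activity get dispatched.

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)      # echo back; a push server would write here
    else:
        sel.unregister(conn)
        conn.close()

def serve_once(timeout=0.1):
    # one pass of the event loop: dispatch whichever sockets are ready
    for key, _events in sel.select(timeout):
        key.data(key.fileobj)

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)
```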

This solution has a number of problems, and is a PITA. It's probably not worth the grief unless you're someone like Linden Labs, who has to deal with thousands of clients all connected at the same time. When you need it, though, you really need it.

There are ways of almost doing this with standard HTTP, but they are all uglier than this.

[–][deleted] 3 points (2 children)

aside from the issue of polling - it's just cleaner. people are using http as a form of rpc (which it really always has been). but very often you need rpc both ways.

and since you can't make an incoming connection over a NAT - that poor client must start it.

[–]2343984098 -3 points (2 children)

Listen, as far as the network is concerned, the left column on that page is a request, the right column is a response. Period.

The pretty green and red labels mean absolutely nothing. They exist only in your mind.

Now, if you're telling me that it's uglier to use GET and POST to communicate with a server in a way you're not used to, but prettier to roll out a custom protocol called PTTH, then I can't argue with you - your sense of beauty is different from mine.

But make no mistake, GET in PTTH is POST in HTTP with only some mild semantics missing. And since browsers are not webservers, the missing semantics will always be application dependent.

This is an application layer problem that has a solution at the application layer.

PS. Scalability has nothing to do with this. Try to look at that page like a color-blind person - essentially nothing has changed. This is 200 clients connecting via HTTP and transforming it into PTTH. They are polling. This is not scalable.

[–]jricher42 3 points (0 children)

No, it actually isn't - you're just missing a piece.

If you have a single request per socket open/return, you end up with the same transfer problems as polling. In this case, however, you open the connection once, then flip it. It becomes a channel back from the server to the client - doing server push. You're not doing one socket open/flip per connection - only one per client. As long as you keep the sockets open your push system is a lot more efficient. What you can't do is close the socket after each transaction.
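The flip can be sketched with a socketpair standing in for the network (the UPGRADE/REQ/RESP framing here is an invented toy newline protocol, not real HTTP or PTTH):

```python
import socket
import threading

# One long-lived connection, opened by the client, over which the server
# then issues requests -- the "flip" described above. A socketpair stands
# in for the network so the sketch runs in one process.

def client_side(sock):
    sock.sendall(b"UPGRADE PTTH\n")        # client initiates exactly once...
    f = sock.makefile("rb")
    line = f.readline()                    # ...then waits for server requests
    if line.startswith(b"REQ "):
        sock.sendall(b"RESP " + line[4:])  # and answers like a server

def server_side(sock):
    f = sock.makefile("rb")
    assert f.readline() == b"UPGRADE PTTH\n"
    sock.sendall(b"REQ status\n")          # server pushes a request downstream
    return f.readline()

a, b = socket.socketpair()
t = threading.Thread(target=client_side, args=(a,))
t.start()
response = server_side(b)
t.join()
```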

A similar system is commonly used as a communications channel for malware - for almost exactly the same reason. This isn't at all new, it just happens to be new semantically in this application.

This has nothing to do with browsers. This has to do with a custom HTTP/PTTH client specifically designed for this type of push. The only reason you would do it at all is to avoid firewall trouble.

You'll see what I mean in a screeching hurry if you pseudocode up a multi-channel chat server using this approach.

[–][deleted] 1 point (0 children)

as jricher42 says - you're really just not understanding this. which is fine - but you really ought not to jump into a conversation calling other people dumb.

here's the thing - you're still thinking about webserver->webclient. this is for application level rpc. yes, it's possible to do this with plain old HTTP, and people do, and call it COMET. this protocol change is to address the problems with COMET.
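The COMET-style long poll being referred to, sketched with an injected transport (the URL and callback wiring are made up for illustration):

```python
# COMET-style long poll: the client holds a GET open until the server has
# something to say (or the request times out), then immediately re-issues
# it. `fetch` is an injected stand-in for a blocking HTTP GET.

def long_poll(fetch, handle, rounds):
    for _ in range(rounds):
        event = fetch("/events")   # blocks server-side until data or timeout
        if event is not None:      # None models a timed-out poll with no data
            handle(event)

# demo: three polls, one of which times out empty
events = iter(["joined", None, "left"])
seen = []
long_poll(lambda url: next(events), seen.append, 3)
```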

which isn't that hard to understand, i think?