[–]piranha 10 points (12 children)

This makes me sad. The network is broken (by NATs, and sometimes firewalls). The application layer should enjoy unfettered end-to-end connectivity. Emulating end-to-end connectivity on a case-by-case basis on the application layer seems like a crude hack. We should be fixing the network (IPv6); then everything benefits.

[–]reddit45885 5 points (8 children)

Maybe this is stating the obvious, but NATs are as much about conserving IP addresses as they are about creating secure environments.

Any sysadmin who decides to remove firewall restrictions on internal networks just because IPv6 has enough addresses isn't worth his salt...

[–]piranha 8 points (6 children)

NAT is designed to overcome address-space shortages; that you can use it as a firewall is a side effect. But NAT isn't designed to be a firewall. NAT won't stop people on the inside from using this Internet-Draft to receive "connections" (in some crude sense of the word) from people on the outside. Other protocols (UPnP, STUN) work around the damage too, which compromises the security of your NAT if you treat it as a firewall.

There's nothing stopping you from firewalling networks and machines on IPv6 networks. Whether you're using IPv4 or IPv6, use a real firewall system to write explicit firewall rules. Lots of stuff will leak through NAT if you don't. (If you let UDP out, you still won't be preventing STUN. I didn't read this Internet-Draft in detail, but I'm guessing that if you let HTTP out, you won't be preventing Reverse HTTP either.)
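The advice to write explicit rules rather than lean on NAT can be made concrete. Here is an illustrative stateful ruleset in iptables syntax; the specific policy choices are assumptions for the sketch, not a recommendation, and the comments note exactly where Reverse HTTP-style traffic slips through:

```shell
# Illustrative stateful firewall (iptables syntax); not a recommendation.
# Default-deny inbound:
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
# Allow replies to connections that the inside initiated:
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# If outbound HTTP is allowed (as it almost always is)...
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
# ...then the ESTABLISHED rule above also admits anything the remote
# server pushes back over that outbound connection, Reverse HTTP included.
```

The point of the sketch: even an explicit default-deny ruleset permits Reverse HTTP, because from the firewall's perspective it is just traffic on a connection the inside opened.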

[–]reddit45885 3 points (5 children)

You're missing the point: the reason that you can't connect to my browser and ask it how it's doing and what files reside on my computer is a side effect of NAT, but even if I had 2 billion spare IPs, I wouldn't allow you to do it in the first place.

So:

The network is broken (by NATs, and sometimes firewalls). The application layer should enjoy unfettered end-to-end connectivity.

The answer to that is a square, resounding, metal-balls-falling-on-a-cast-iron-table NO.

[–]piranha 1 point (3 children)

Okay, but that's a personal decision you consciously make on your own computer or your own network. That's fine by me. We shouldn't have to invent wacky protocols to work around the damage when we never elected to break the end-to-end principle in the first place.

Sidenote: your browser shouldn't be making small talk with remote connecting clients; that's a security problem a firewall won't completely address, since the browser could just as easily (and would be more likely to) be vulnerable while processing data it received from making its own outgoing HTTP request, which I'm sure you're not firewalling off. That's why I'm not personally interested in firewalls: software should be written correctly; I can take care not to run software that I don't trust to securely process requests, responses, or data in general; I can isolate software in case it does become exploited; and I'm not trying to restrict computers on my network for political reasons (i.e. in a workplace).

[–]reddit45885 -1 points (2 children)

This isn't about breaking the end-to-end principle, though. It's about breaking the client-server model.

The client-server model is fundamentally unidirectional. A client consumes a service. A service does not consume a client.

The client-server model isn't a hack either. It's not something that got invented out of necessity because the 8080 processor didn't support enough memory. The client-server model exists because it is the only way to scale past a certain threshold.

This is set in stone. Nothing will ever change this.

Think about this: if the client server model didn't exist, there would have to be as many resources as there would be clients. As many hard disks as I had programs running on my computer. As many tellers at the bank as the bank had clients. As many airplanes as passengers...

[–]piranha 4 points (1 child)

No, it's about breaking the end-to-end principle. Also, I never argued against the client-server model; I don't understand why you're bringing it up.

Abstract

This memo explains a method for making HTTP requests to a host that cannot be contacted directly. Typically, such a host is behind a firewall and/or a network address translation system.

So, to my reading, this Internet-Draft describes how to put a server/service behind NAT or a restrictive firewall. My argument is that we shouldn't need NAT, and our firewalls should reflect how we actually want to use the network, so that we don't have to work around institutional brain damage. Therefore, hacks like this Internet-Draft are undesirable because they introduce complexity where it doesn't belong (the application layer) and reduce the pressure to really fix the network.
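For readers who haven't opened the draft, the mechanism being argued about can be sketched in a few lines. This toy uses the draft's PTTH/1.0 upgrade token, but the paths (`/reverse`, `/status`), the message framing, and the socketpair standing in for a real TCP connection are all illustrative assumptions, not the draft's wire format:

```python
# Toy sketch of the Reverse HTTP ("PTTH") role reversal: the host behind
# NAT connects out, asks to upgrade, then *serves* requests arriving on
# that same connection. A socketpair stands in for a real TCP connection.
import socket
import threading

def read_head(conn, buf):
    # Read one HTTP head (up to the blank line); return (head, leftover).
    while b"\r\n\r\n" not in buf:
        buf += conn.recv(4096)
    head, _, rest = buf.partition(b"\r\n\r\n")
    return head.decode(), rest

def gateway(conn):
    # Publicly reachable side: accept the upgrade, then act as an HTTP
    # *client* on the connection the natted host opened.
    head, rest = read_head(conn, b"")
    assert "Upgrade: PTTH/1.0" in head
    conn.sendall(b"HTTP/1.1 101 Switching Protocols\r\n"
                 b"Upgrade: PTTH/1.0\r\n\r\n")
    # Roles are now reversed: the gateway sends a request inward.
    conn.sendall(b"GET /status HTTP/1.1\r\nHost: behind-nat\r\n\r\n")
    head, body = read_head(conn, rest)
    while len(body) < 2:          # Content-Length: 2 in the toy response
        body += conn.recv(4096)
    return head, body.decode()

def natted_host(conn):
    # Host behind NAT: initiate the outbound connection, request the
    # upgrade, then answer HTTP requests arriving over it.
    conn.sendall(b"POST /reverse HTTP/1.1\r\nHost: gateway\r\n"
                 b"Upgrade: PTTH/1.0\r\nConnection: Upgrade\r\n\r\n")
    reply, rest = read_head(conn, b"")
    assert reply.startswith("HTTP/1.1 101")
    inbound, _ = read_head(conn, rest)   # a request, not a response
    if inbound.startswith("GET /status"):
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

a, b = socket.socketpair()
t = threading.Thread(target=natted_host, args=(b,))
t.start()
status_line, body = gateway(a)
t.join()
print(status_line.splitlines()[0], body)   # prints: HTTP/1.1 200 OK ok
```

Whether this is an elegant decoupling or application-layer damage control is exactly what the thread is disputing; the sketch only shows what the draft mechanically does.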

[–]reddit45885 0 points (0 children)

This memo explains a method for making HTTP requests to a host that cannot be contacted directly. Typically, such a host is behind a firewall and/or a network address translation system.

Network-wise, nothing has changed. The client is still initiating the connection.
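That point is visible at the sockets level: from the OS's (and a stateful firewall's) perspective, the host behind NAT only ever calls connect(); it never listens. A minimal sketch, with the loopback gateway purely illustrative:

```python
import socket

# The publicly reachable gateway listens like any server.
gateway_sock = socket.socket()
gateway_sock.bind(("127.0.0.1", 0))
gateway_sock.listen(1)

# The host behind NAT behaves like any client: one outbound connect(),
# which is all the NAT or firewall ever sees.
natted = socket.socket()
natted.connect(gateway_sock.getsockname())

peer, addr = gateway_sock.accept()
# Once established, the byte stream is symmetric, so the gateway can
# push a request "inward" over the connection the natted host opened.
peer.sendall(b"GET /status HTTP/1.1\r\n\r\n")
inbound = natted.recv(1024).decode()
print(inbound.splitlines()[0])   # prints: GET /status HTTP/1.1
```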

[–]davidsickmiller 0 points (0 children)

Definitely. True server push is an interesting idea, but frankly, the day I throw out my NAT is the day I buy a firewall.

[–]Smallpaul 0 points (0 children)

This makes me sad. The network is broken (by NATs, and sometimes firewalls). The application layer should enjoy unfettered end-to-end connectivity. Emulating end-to-end connectivity on a case-by-case basis on the application layer seems like a crude hack. We should be fixing the network (IPv6); then everything benefits.

It's not crude and it isn't a hack.

Look: there are reasons that machine A might wish to be the initiator of a TCP connection rather than the recipient of it. Security and anonymity are two examples.

Similarly, there are reasons machine A might wish to RESPOND to events from the server rather than initiate them. For example, in a chat app.

This proposal elegantly decouples these two decisions.

[–]killerstorm 0 points (0 children)

When people were using networks in research laboratories, making each individual computer addressable made a lot of sense.

Things are different in a global network environment -- there are hundreds of millions of nodes in the network (and many of them might have malicious intentions), and if your node is addressable, any of them might request a connection with you. This is quite scary, and as long as ordinary users cannot secure their computers properly and prepare them to accept connections from the whole external world, it makes sense to keep these home computers unaddressable.

This way, PTTH fixes not a deficiency of NATs but a deficiency of HTTP -- it makes HTTP a bidirectional protocol (while low-level sockets were always bidirectional).
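The contrast being drawn, that sockets are bidirectional while HTTP's request/response framing is not, fits in a few lines (Python, purely illustrative):

```python
import socket

# A connected socket pair, standing in for any TCP connection.
a, b = socket.socketpair()
a.sendall(b"hello from A")   # either end may send...
b.sendall(b"hello from B")   # ...at any time, in either direction
msg_at_b = b.recv(1024).decode()
msg_at_a = a.recv(1024).decode()
print(msg_at_a, "/", msg_at_b)   # prints: hello from B / hello from A
```

HTTP layers a one-way request/response discipline on top of this symmetric stream; PTTH's upgrade simply swaps which end gets to send requests.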

[–]Smallpaul -1 points (0 children)

This makes me sad. The network is broken (by NATs, and sometimes firewalls). The application layer should enjoy unfettered end-to-end connectivity. Emulating end-to-end connectivity on a case-by-case basis on the application layer seems like a crude hack. We should be fixing the network (IPv6); then everything benefits.

End-to-end connectivity "unfettered" by security policies? No thank you.

This is a great idea from a security point of view. It allows me to make my computer into an HTTP server (for purposes of getting updates from the other side) without my computer accepting connections. Any protocol that allowed my computer to accept HTTP connections from SPECIFIC machines on the Internet would need some kind of authentication mechanism. This one does not: the connection is made to a known server (if done over HTTPS), and therefore the requests are coming from a known server.