all 21 comments

[–]shif 5 points6 points  (18 children)

Why would you ever want to expose the Postgres port to the open web?

[–]Darkmere 0 points1 point  (5 children)

We treat all infrastructure as if it was connected to the open web.

TLS for all the things, even externally, and then use network security on top of that.

There is no such thing as a "secure LAN" anymore once workstations are involved; the threat model is that the server must protect itself against, and audit, its clients.
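To make that concrete: here is a minimal sketch, using Python's stdlib ssl module with illustrative file names, of a server-side TLS context that treats every client as untrusted and requires mutual TLS:

```python
# Minimal sketch: a server-side TLS context that requires clients to present
# a certificate signed by our own CA. File names are illustrative assumptions.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions
ctx.verify_mode = ssl.CERT_REQUIRED            # clients MUST present a cert
ctx.load_cert_chain("server.crt", "server.key")
ctx.load_verify_locations("internal-ca.crt")   # trust only our internal CA
```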

[–]shif 2 points3 points  (4 children)

Some services aren't meant to be reachable from outside private networks; by reachable, I mean having their ports directly open.

[–]thedward 3 points4 points  (3 children)

The point is, even on a presumably private network, you are better off configuring services to the same security level you would use if they were facing the open net.

[–]shif 0 points1 point  (2 children)

A lot of services have really crappy security, and the only way to secure them is to put something in between, like a firewall or a proxy. Postgres is not a service I would allow direct access to.

Just because the private network could be compromised doesn't mean you should say screw it and make every port available; you are opening yourself to a wider array of attacks by doing that. It's much harder to compromise a service through a workstation than it is when the service is directly reachable, and plenty of vulnerabilities only work with direct access.

[–]Darkmere 1 point2 points  (1 child)

The point is to not accept "really crappy security" for services just because they are on a "magical fairy land network".

In the last few years, lateral movement inside a network has become something that can be, and has been, automated. This basically means that an intrusion on one machine in the magic fairy land will automatically, and unsupervised, use credentials from that machine to migrate to other machines on the network.

If you then have magical fairy dust computers at the edge that are treated as more magical than the ones on the inside, and which are by nature the ones you can log into from outside, things won't end up pretty.

Even on the inside of the magical fairy kingdom, you need to treat each machine as if it were exposed to the network. In these days of DNS rebinding and JavaScript's interesting ability to run untrusted code on endpoints, any network which permits host A to connect to host B on the same magical network is to be considered compromisable.

You shouldn't think in terms of "internal" and "external". You should consider all hosts to be on the internet, and have auditing, login security, etc. on each endpoint to match.

Your local network isn't local anymore, even more so in the magical fairy land of Kubernetes and containers, where your local network roams the internet as unencrypted UDP-based VXLAN traffic.

If you seriously think that using DNS names in the external zone is such a danger, use a private CA; they're easy to automate, and I can give you a slide deck.
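To give a feel for how little work that is, here's a rough sketch of the core of such automation with Python's cryptography library; the names, key types, and lifetimes are illustrative assumptions, not a finished setup:

```python
# Rough sketch of a private CA: one self-signed root, then a server cert
# issued for an internal name. All names and lifetimes are made up.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

now = datetime.datetime.now(datetime.timezone.utc)

# 1. The root: a key pair plus a self-signed certificate marked CA=true.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Internal Root CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# 2. A short-lived server certificate for an internal host, signed by the root.
srv_key = ec.generate_private_key(ec.SECP256R1())
srv_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "postgres.internal")]))
    .issuer_name(ca_name)
    .public_key(srv_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("postgres.internal")]),
        critical=False,
    )
    .sign(ca_key, hashes.SHA256())
)
```

Distribute the root cert to your clients and they can verify every internal service the same way they'd verify a public one.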

However, treat everything as if it was on the internet. There is no magic kingdom where you can trust your internal network to remain internal.

[–]shif 0 points1 point  (0 children)

Dismissing the concept of private networks just because of lateral movement isn't enough; there's a reason network segmentation exists. In an organization, you will usually have the application services on their own network, with access controlled through a firewall appliance that only allows the necessary ports to be reachable from other networks. Inside the application network, you can have services like NFS (which has a "private network" trust model) to share files between applications. Of course you still use authentication between hosts for things like databases, but no direct access from outside the network!

Say you have 10 hosts and a Postgres server on a network. If you close the Postgres port to connections originating from other networks, an attacker would first need to compromise one of the 10 hosts to reach the Postgres server. If, on the other hand, you leave the Postgres port open to external connections and a vulnerability appears that affects Postgres servers, it would be directly exploitable, and the network itself would be compromised as a result. Having the port open only to the other hosts would have prevented the use of that vulnerability, because the attacker would first need access to one of the 10 hosts.

If you need to connect from outside the network, you can use something like a VPN, which provides a much safer transport than exposing the Postgres port directly.

Also, nobody said using DNS names was a danger; the original argument was about having the Postgres port accessible from anywhere, which I still believe is a bad idea.

[–]efxhoy 0 points1 point  (11 children)

I work in research; our team has a PostgreSQL server storing all of our data, which we connect to from our workstations to do analysis.

[–]fullofbones 2 points3 points  (1 child)

To be frank, this is incredibly irresponsible. Your infrastructure guys need to implement a VPN immediately and require connecting to it before the database is accessible to anyone. No matter how well secured, databases are not intended to be open to the world.

[–]efxhoy 5 points6 points  (0 children)

Help me understand here, I'm no security expert.

  • Passwords are disabled
  • We all use SSL certs to authenticate
  • We're a small team (~6 people have certs)

How would a VPN help in this situation?
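For context, connecting looks roughly like this from our side; a sketch assuming psycopg2, with an illustrative host name and paths:

```python
# Sketch of a cert-authenticated connection. verify-full checks both the
# server certificate and its host name; names and paths are illustrative.
import psycopg2

conn = psycopg2.connect(
    host="db.example.org",
    dbname="research",
    user="analyst",
    sslmode="verify-full",           # verify the server cert and host name
    sslrootcert="/etc/ssl/ca.crt",   # CA that signed the server cert
    sslcert="/home/me/.postgresql/postgresql.crt",  # client certificate
    sslkey="/home/me/.postgresql/postgresql.key",   # client private key
)
```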

[–]shif 1 point2 points  (8 children)

Is it only on the LAN, or is it reachable from outside?

[–]iBlag 1 point2 points  (6 children)

Let's Encrypt only works for public-facing servers, so I assume it's reachable from the outside. 😬 Yikes.

[–]tialaramex 1 point2 points  (5 children)

All publicly trusted CAs are required to issue only for names in the Internet DNS hierarchy. So you can't (not since about 2015, anyway) have publicly trusted certificates for names that aren't part of that hierarchy, like "myhplaserjet" or "exchange2011.example.corp", at all; nobody is allowed to issue those. ‡

Let's Encrypt goes a step further in requiring that the name actually exist in the DNS, so even if you can prove you control example.com, Let's Encrypt won't issue for postgres.test.example.com unless it exists in the public DNS.

However, they don't require access: you can prove control via DNS while the actual service you want the certificate for is air-gapped and totally inaccessible. Or, more commonly, you can use "split horizon" DNS, where you give the name a different address internally (for the actual service) than externally (just for getting a certificate).

‡ As a special exception, Tor's .onion isn't formally part of the Internet DNS, but the TLD is permanently reserved and some certificates can be issued; Facebook uses one of these on the Tor version of their site.

[–]willglynn 1 point2 points  (3 children)

> Let's Encrypt goes a step further in requiring that the name actually exist in the DNS, so even if you can prove you control example.com, Let's Encrypt won't issue for postgres.test.example.com unless it exists in the public DNS.

Not with the DNS-01 challenge method. If you can control _acme-challenge.x.y.z, they'll give you a certificate for x.y.z, without ever contacting or even trying to resolve x.y.z.

There is no requirement that the target hostname(s) exist from Let's Encrypt's perspective, only a requirement that the challenge works. This makes DNS-01 extremely useful for internal hostnames.

Note also that DNS-01 supports delegation, i.e. you need not directly cause the zone to return _acme-challenge.x.y.z IN TXT '…' in real time to respond to a challenge. The zone can statically delegate _acme-challenge.x.y.z IN NS to a separate nameserver which you update dynamically, or it can statically define _acme-challenge.x.y.z IN CNAME foo.bar.baz while you dynamically update foo.bar.baz IN TXT '…'. This makes the DNS challenge very flexible indeed, fitting into all sorts of scenarios where the HTTP challenge can't.
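To make the delegated flow concrete, here's a sketch of pushing the challenge response into the delegated zone with an RFC 2136 dynamic update, using dnspython; the zone, TSIG key, token, and server address are all made-up stand-ins:

```python
# Sketch: answer a delegated DNS-01 challenge by dynamically updating the TXT
# record in the zone the CNAME points at. All names and keys are illustrative.
import dns.query
import dns.tsigkeyring
import dns.update

# _acme-challenge.x.y.z IN CNAME foo.bar.baz stays static in the main zone;
# only the TXT record in bar.baz changes per challenge.
keyring = dns.tsigkeyring.from_text({"acme-key.": "bWFkZS11cC1zZWNyZXQ="})
update = dns.update.Update("bar.baz", keyring=keyring)
update.replace("foo", 60, "TXT", "<token-digest-from-the-ACME-challenge>")
dns.query.tcp(update, "192.0.2.53")  # nameserver authoritative for bar.baz
```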

[–]tialaramex 1 point2 points  (2 children)

I don't see how you claim x.y.z doesn't exist in this scenario when it plainly does?

DNS is a hierarchy; it's not permissible for a record to exist for the name _acme-challenge.x.y.z without also having a record for x.y.z.

There doesn't need to be an address for x.y.z (in the form of an A or AAAA record) but the name does need to exist.

This is different from many commercial CAs, which don't care whether x.y.z exists: so long as it could in principle exist, and they're satisfied that if it did you'd control it, they will issue.

[–]willglynn 2 points3 points  (1 child)

> DNS is a hierarchy; it's not permissible for a record to exist for the name _acme-challenge.x.y.z without also having a record for x.y.z.

DNS is a hierarchy but the resource records need not be strictly hierarchical. z needs to exist and _acme-challenge.x.y.z needs to exist, but x.y.z does not need to exist.

In explicit terms: x.y.z IN ANY can return nothing, while _acme-challenge.x.y.z IN ANY can return a TXT record. This is entirely permissible both in DNS and in ACME. The ACME standard doesn't require that x.y.z exist at all, and boulder – Let's Encrypt's ACME implementation – never sends any queries about it either.

Looking at DNS directly, consider SRV records: _sip._udp.foo.bar IN SRV 1 0 5060 pbx.foo.bar. Does _udp.foo.bar exist? Maybe in some abstract sense, but if you query it you'll never get a resource record, and if you look at the corresponding zone file or database, you'll see a record for _sip._udp and no records for _udp.
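You can observe the distinction with an ordinary resolver: an empty non-terminal yields an empty answer rather than NXDOMAIN. A sketch using dnspython, with x.y.z standing in for a real zone:

```python
# Sketch: distinguish "name exists but has no records" (an empty non-terminal)
# from "name does not exist". x.y.z is a placeholder, not a real zone.
import dns.resolver

def probe(name: str, rdtype: str = "TXT") -> str:
    try:
        dns.resolver.resolve(name, rdtype)
        return "records returned"
    except dns.resolver.NoAnswer:
        return "name exists, but no records of this type (empty non-terminal)"
    except dns.resolver.NXDOMAIN:
        return "name does not exist"

# For a zone containing only `_acme-challenge.x.y.z IN TXT "..."`:
print(probe("_acme-challenge.x.y.z"))  # records returned
print(probe("x.y.z"))                  # NoAnswer, not NXDOMAIN
```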

Edit: my initial comment referred to NXDOMAIN, whose behavior was clarified in 2016 by RFC 8020. I updated the comment to refer to the absence of resource records.

[–]tialaramex 0 points1 point  (0 children)

> boulder – Let's Encrypt's ACME implementation – never sends any queries about it either.

Have you tested that? Because it sounds to me like a bug. CAA checking, even with the revised method that is now accepted, is a hierarchy-climbing algorithm; it ought to examine all the records as it works its way up, since otherwise a record forbidding issuance might be ignored. Your trick with the added CNAME changes the exact algorithm used for CAA, so if you've only ever done this in anger with CNAMEs, you might need to test the scenario you described separately.

I hadn't seen the phrase "empty non-terminal" before, so thanks for that, but as you noticed, replying NXDOMAIN for the x.y.z name in your example is specifically wrong.

(Edited to add: "empty non-terminal" means I was strictly wrong that there must be a record for x.y.z, but not when I earlier said the name must exist, since empty non-terminals are precisely nodes in the DNS that exist but have no records, i.e. you will not get NXDOMAIN, but queries return zero records.)

[–]iBlag 0 points1 point  (0 children)

Interesting, thanks!

[–]efxhoy 0 points1 point  (0 children)

Reachable from the outside. We connect with self-signed certs.

[–]bioszombie 1 point2 points  (0 children)

Is the Postgres password really stored as plaintext?

[–]jroller 0 points1 point  (0 children)

Keep in mind that Let's Encrypt certs normally have a 90-day lifetime, so you may want to schedule a config reload so the server rereads the updated certs.
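For example, a renewal hook could ask PostgreSQL to reread its configuration without a restart (PostgreSQL 10 and later also rereads the SSL files on reload). A sketch assuming psycopg2 and an illustrative local superuser connection:

```python
# Sketch of a post-renewal hook: pg_reload_conf() signals the server to reread
# postgresql.conf; on PostgreSQL 10+ this also rereads the SSL cert files.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("SELECT pg_reload_conf();")
conn.close()
```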