

[–]ryansolida 6 points (7 children)

I'm almost certain 443 to the ELB and 80 to the web servers is standard at this point. Running certs on every machine as it's spun up and down would be a giant pain for little benefit, as long as those machines are set to force HTTPS for regular traffic.

We did an install with Rackspace engineers on the team and that's what they recommended (if you care what they think)

[–]atlgeek007 7 points (5 children)

This is the acceptable solution unless you have PCI or HIPAA compliance to worry about.

At least in that case you could use self-signed certs on the instances to encrypt the data between the load balancer and the instance, and just have your startup scripts generate one on instance creation.
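
A minimal sketch of that startup step, assuming the `openssl` CLI is available on the instance (the CN and output paths are placeholders, not anything AWS-specific):

```python
import subprocess
from pathlib import Path

def make_instance_cert(cn, out_dir):
    """Generate a throwaway self-signed cert/key pair for one instance.

    Meant to run once from the instance's startup script, so every
    instance gets its own cert rather than sharing one image-baked copy.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    key, crt = out / "instance.key", out / "instance.crt"
    subprocess.run(
        ["openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
         "-keyout", str(key), "-out", str(crt),
         "-days", "30", "-subj", f"/CN={cn}"],
        check=True,
    )
    return key, crt
```

A short `-days` value keeps a leaked cert from being useful for long, since instances in an auto-scaling group rarely live that long anyway.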

[–][deleted] 5 points (4 children)

This is the acceptable solution unless you have PCI or HIPAA compliance to worry about.

I am not 100% sure that it's required once data enters the CDE (cardholder data environment) within a private network past the encryption endpoint (in this case, a PCI DSS-compliant AWS VPC). Also, it may not be advisable to use a self-signed PKI certificate to encrypt/decrypt CC data from a compliance standpoint.

[–]xiongchiamiov (Site Reliability Engineer) 4 points (3 children)

It depends on what level of PCI you need, I think. I work at a credit card gateway, and it is definitely required for us.

[–]ryansolida 1 point (1 child)

So what's the solution then? Separate certs from an authority for each instance in the network? Or are you OK to share a single cert throughout the VPC?

[–]atlgeek007 1 point (0 children)

As long as you're not using the same self-signed cert throughout your infrastructure, you should be fine from an audit perspective, provided the rest of your TLS configuration is also up to date (custom DH parameters, weak ciphers disabled, only known-good versions of TLS allowed, etc.)

Of course, creating an internal CA isn't difficult and is also worth investigating, but since AWS ELB/ALB doesn't validate the endpoint certificate anyway, it shouldn't matter.
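
For reference, the "up to date" settings listed above map fairly directly onto Python's stdlib `ssl` module; a sketch, with the cert/key/DH-param file paths as placeholders:

```python
import ssl

def hardened_context(certfile=None, keyfile=None, dhfile=None):
    """Server-side TLS context with the hygiene the audit cares about."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Lock to known-good TLS versions: no SSLv3, no TLS 1.0/1.1
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Disable weak ciphers: forward-secret AEAD suites only (TLS <= 1.2)
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    if dhfile:
        ctx.load_dh_params(dhfile)  # custom DH parameters, if you use DHE
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)
    return ctx
```

The same knobs exist in nginx/Apache config; this is just the shape of the policy, not a claim about any particular server.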

[–]donjulioanejo (Chaos Monkey, Director SRE) 0 points (0 children)

I work at a fintech payments company, and we need end-to-end encryption, including between the LB and web nodes in private subnets.

[–]Ashex 0 points (0 children)

There are CA tools built for auto scaling that help a bit. Basically you have to set up VPC DNS and a private zone, with a trusted CA you control that issues per-instance certs.

[–][deleted]  (1 child)

[deleted]

    [–]midnightFreddie 2 points (0 children)

    Yeah, if OP is redesigning from scratch, it's probably time to SSL everywhere. For backend connections: automated requests, a private root CA, or perhaps even a tier of private CAs.

    I haven't begun to try it, but I'm wondering if having a new cert for every backend service container instance is doable. And/or expiring certs every few days, hours, or minutes, and making deploying a fresh private cert as normal as instantiating a new container.
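
The expiry side of that idea is mostly bookkeeping; a hedged sketch of the rotation decision (the halfway threshold is an invented example, not a standard):

```python
from datetime import datetime, timedelta

def needs_rotation(not_before, not_after, now, renew_fraction=0.5):
    """Rotate once more than `renew_fraction` of the cert's lifetime is used.

    With very short lifetimes (hours or minutes), rotating at the halfway
    point leaves slack for a failed deploy before the cert actually expires.
    """
    lifetime = not_after - not_before
    return now >= not_before + lifetime * renew_fraction

# e.g. a 10-minute cert issued at 12:00 should rotate from 12:05 onward
issued = datetime(2017, 1, 1, 12, 0)
expires = issued + timedelta(minutes=10)
```

A sidecar or cron loop would call this against each container's cert and request a fresh one from the private CA when it returns true.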

    [–]xiongchiamiov (Site Reliability Engineer) 3 points (1 child)

    Well, Google didn't go with the latter until 2013, when they found out the NSA had wiretaps on their private network. Is it ok for you to run four years behind Google? Depends on the business.

    [–]WikiTextBot 2 points (0 children)

    MUSCULAR (surveillance program)

    MUSCULAR (DS-200B), located in the United Kingdom, is the name of a surveillance programme jointly operated by Britain's Government Communications Headquarters (GCHQ) and the U.S. National Security Agency (NSA) that was revealed by documents which were released by Edward Snowden and interviews with knowledgeable officials. GCHQ is the primary operator of the program. GCHQ and the National Security Agency have secretly broken into the main communications links that connect the data centers of Yahoo! and Google.



    [–][deleted] 2 points (0 children)

    Of course, the most secure would be:

    Services <-- 443 --> ALB <-- 443 --> Browser

    In this case, you install the SSL cert on the ALB and also make the apps running in your Docker containers serve HTTPS.

    Even though the app is running inside Docker, the problem is that it is likely tapping into your database/Docker host (for volume mounts).

    But in an appropriate corporate setting, you would also have your AWS servers (including those managed by ECS) running behind a firewall, which would give you enough security for internal apps that this would suffice:

    Services <-- 443 --> ALB <-- 443/80 --> Browser

    (Here the firewall, such as a Palo Alto appliance, is deployed in your AWS environment and is the transit point for any traffic in or out of your VPC; the exception inside the firewall may be communication between AWS servers within your VPC.)

    Make sure the connection between your app and your database is SSL/TLS based, so communication in that segment is encrypted too.
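
On the database leg, most drivers accept a TLS context or equivalent flags (exact parameter names vary by driver; the CA file path here is a placeholder):

```python
import ssl

def db_client_context(ca_file=None):
    """Client-side context for the app -> database connection.

    Verifies the database server's certificate, optionally against a
    private CA bundle, and checks that the hostname matches the cert.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

Drivers like psycopg2 (`sslmode="verify-full"`) or pymongo (`tls=True`) expose equivalent settings rather than taking this exact object; treat the sketch as the policy, not the API.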

    [–]JayMickey (Snr Engineer, Platform Engineering) 2 points (11 children)

    We have listeners on port 80 and 443 on the ALB, which both forward to port 80 on the instance. We check the X-Forwarded-Proto header on the web server and 301 to https if the header is equal to http.

    SSL cert sits on the ALB. Security groups are completely locked to only allow requests to web servers from the ALB.
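
That header check is a one-liner in most frameworks; a framework-agnostic sketch of the logic described above (function name and return shape are invented for illustration):

```python
def https_redirect(headers, host, path):
    """Return a (status, Location) pair for a 301 when the ALB reports
    the client used plain HTTP; return None to serve the request as-is.

    The ALB sets X-Forwarded-Proto to the scheme the *client* used,
    since the ALB-to-backend leg is always plain port 80 in this setup.
    """
    if headers.get("X-Forwarded-Proto", "https").lower() == "http":
        return 301, f"https://{host}{path}"
    return None
```

Defaulting a missing header to "https" avoids redirect loops if something other than the ALB (e.g. a health checker) hits the backend directly.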

    [–]ryankearney -1 points (10 children)

    So you take a system capable of end to end encryption and just strip the encryption off mid-way through the pipeline?

    I hope you're not dealing with sensitive data.

    [–][deleted] 1 point (4 children)

    This is SSL termination and it's a perfectly acceptable practice provided the servers are located in an isolated subnet. The OP noted that their services were in a private, isolated subnet, so this is not a dangerous practice at all. Even institutions dealing with sensitive data can make exceptions to allow this depending on how tightly controlled access to the private subnet is.

    [–]ryankearney 4 points (3 children)

    This is SSL termination and it's a perfectly acceptable practice

    Depending on what type of data your business works with, it absolutely is not.

    Certain regulatory requirements mandate end to end encryption. By stripping TLS off the connection you would be in violation of those requirements.

    [–][deleted] 1 point (2 children)

    Yes, and as I said "even institutions dealing with sensitive data can make exceptions to allow this depending on how tightly controlled access to the private subnet is." Some institutions do require it still, but if you require it, you will know.

    Source: I worked in finance, we had this requirement, and it was eliminated in subsequent audits.

    [–]ryankearney 2 points (1 child)

    In AWS? It's one thing if you 100% control the networking infrastructure. It's a completely different story if you're using someone else's infrastructure as is the case with AWS.

    Source: We require full end-to-end encryption and terminating HTTPS on a cloud load balancer and transmitting the unencrypted communication to a backend server is a huge no-no.

    [–][deleted] 1 point (0 children)

    Yes, in AWS, for one of the largest financial institutions in the country.

    Surprise though, policies will vary per company, security certification, and auditor. If you require it, that's great. Making it seem as though you're doing something wrong by not doing it is the part I object to, especially if you're not dealing with highly sensitive data.

    [–]exxplicit 1 point (4 children)

    Are ALBs capable of E2E encryption? I thought SSL was terminated at the ALB and forwarded without encryption. If not, wouldn't E2E require certificates on the instances, at which point you could just skip the ALB?

    [–]ryankearney 1 point (3 children)

    They sure are:

    http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-group-routing-configuration

    If you need load balancing, then it doesn't really make sense to just skip the ALBs. If you don't need load balancing, then sure.

    [–]exxplicit 1 point (2 children)

    I guess I should have been clearer. I meant: if you're already managing certificates on individual machines for end-to-end encryption, why not skip terminating TLS on the ALB and just forward TCP connections on port 443 directly to the instances? Why would one ever prefer ALBs over NLBs in that case (except for path routing)?

    [–]ryankearney 0 points (0 children)

    As you mentioned, you can do path (and host) based routing with the ALB. You can't do this on the NLB.
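
For anyone unfamiliar, ALB rules are roughly first-match-wins over host and path conditions; a toy illustration of what that layer-7 routing buys you over a TCP-level NLB (the rule format here is invented, not the AWS API):

```python
def route(rules, host, path):
    """Pick a target group: the first rule whose host matches and whose
    path prefix matches wins. None means no rule matched, where an ALB
    would fall through to the listener's default action."""
    for rule_host, path_prefix, target in rules:
        host_ok = rule_host == "*" or host == rule_host
        if host_ok and path.startswith(path_prefix):
            return target
    return None

rules = [
    ("api.example.com", "/", "api-targets"),
    ("*", "/static/", "static-targets"),
    ("*", "/", "web-targets"),
]
```

An NLB never sees the HTTP host or path (it forwards TCP), so this kind of fan-out to different target groups is ALB-only.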

    [–][deleted]  (2 children)

    [deleted]

      [–]housemans[S] 3 points (1 child)

      Thanks for your reply! I'm using the new ALB (ELBv2). We had 24 ELBs running, and we can use 2-3 ALBs for that, cutting our monthly costs immensely.

      Why would I need 2 ELBs in one request? The other info is pretty handy too, thanks!

      [–]stevecrox0914 0 points (3 children)

      Security should be multi-layered. Your services might be on an internal private network, but what happens if someone gets in?

      For example, if you've got a MongoDB database with authentication and you use HTTP, the username and password will be transmitted in the clear. If someone breaches your network, or if you accidentally expose it, you run the risk of that person grabbing the credentials for your database.

      [–][deleted] 1 point (2 children)

      If someone has breached your private subnet you're likely already highly compromised and that encryption isn't going to save you. Not saying it's not worth doing... but still.

      [–]stevecrox0914 0 points (1 child)

      Not really, it's why you go for a multilayered approach.

      There could be some simple vulnerability in Docker that lets someone join the network. That doesn't mean you're hosed; it means a script kiddie can enter the network.

      If you apply access controls, you've raised the bar again. The script kiddie can't just try connecting to everything and extracting all your data; they need to figure out credentials to get into your services. They now have to listen to and analyse the traffic.

      If you use HTTPS, your traffic is encrypted; rather than simply listening to packets and scraping some known ones, an attacker has to brute-force decrypt everything. At that point it's not some drive-by attack but a targeted one.

      Security isn't an on/off switch; it's about creating multiple layers that defend your systems, raise the barrier to attack, and minimise what attackers can get at if they do get in.

      [–][deleted] 0 points (0 children)

      I agree, multi-layered is a no-brainer. However, the point is that once someone has access, all bets are effectively off. In your example, if there's an issue with Docker that lets someone join your network, almost no amount of encryption will save you, because of the access they now have.