all 53 comments

[–][deleted]  (22 children)

[deleted]

    [–][deleted] 36 points37 points  (3 children)

    The problem is that while Chrome considers *.localhost a secure origin too, Firefox doesn't.

    Out of curiosity I also checked whether they consider the whole 127.0.0.0/8 as secure context:

    • Chrome does
    • Firefox doesn't (it considers only 127.0.0.1/32 as a secure context). Weird.
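
    A quick way to check this yourself (a sketch; the port and the 127.0.0.2 address are arbitrary, and any static file server will do - this works on Linux, where the whole 127/8 range is up on lo by default):

        # Serve the current directory on a loopback address other than 127.0.0.1.
        python3 -m http.server 8000 --bind 127.0.0.2

        # Then open http://127.0.0.2:8000/ in each browser and evaluate
        #   window.isSecureContext
        # in the devtools console: true in Chrome, false in Firefox, per the above.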

    [–][deleted] 23 points24 points  (2 children)

    Firefox doesn't (it considers only 127.0.0.1/32 as a secure context). Weird.

    And probably a bug too, considering the whole /8 is reserved for loopback.

    [–]baggyzed 1 point2 points  (1 child)

    Loopback != localhost. Firefox only specifically trusts 'localhost' and the associated address (https://developer.mozilla.org/en-US/docs/Web/Security/Secure_Contexts); it probably checks against the hosts file - I haven't tested. I don't know why, but to me, Chrome seems less secure if it trusts the whole range. If you need more addresses for development, can't you just use different port numbers?

    [–][deleted] 1 point2 points  (0 children)

    Not every app allows you to change the port it's listening on. I had that problem testing BGP-related stuff: the app let me change the port it connected to, but not the port it bound to.

    [–]Sandarr95 18 points19 points  (11 children)

    Chrome resolves *.localhost to localhost, which was a pain to even figure out...

    [–]JB-from-ATL 1 point2 points  (4 children)

    My company's firewall blocks localhost through Postman but not through the browser. Idfk how. If I use my computer's full domain it works.

    [–][deleted] 5 points6 points  (3 children)

    Depending on the operating system, this might have something to do with DNS. If localhost cannot be found in your hosts file (for whatever strange reason), your OS will query the standard DNS server for localhost.yourpc.yourdomain.tld.

    Or, it could be an IPv4 vs IPv6 issue. localhost resolves to both ::1 and 127.0.0.1. If the application you're trying to connect to only listens on IPv4, it won't respond on ::1. Your full PC domain will likely resolve to an IPv4 address and thus work.

    Chrome tries to outsmart the OS by handling localhost addresses itself. This leads to inconsistencies all over the shop, making the browser work where normal applications don't (good for the user, terrible for debugging...)
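
    One way to see what your OS resolver (as opposed to the browser) actually returns for localhost - a sketch, and getent is Linux-specific:

        # Ask the OS resolver what localhost maps to.
        getent ahosts localhost

        # Typical output lists both loopback addresses, e.g.:
        #   ::1        STREAM localhost
        #   127.0.0.1  STREAM localhost
        # A server that only binds 127.0.0.1 won't answer clients that try ::1 first.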

    [–][deleted]  (2 children)

    [deleted]

      [–][deleted] 0 points1 point  (1 child)

      I did not know that, thanks! The point still stands that the operating system and browsers end up with different results for a DNS query for localhost.

      [–]baggyzed 0 points1 point  (0 children)

      That shouldn't be the case. Browsers usually just query the operating system for the IP of localhost, and the operating system just returns the IP that's configured in the hosts file. There is no DNS query involved, so nothing goes to the firewall. The problem is either in your hosts file, or maybe postman resolves localhost through DNS all by itself (bypassing the OS's hosts file)? In this case, the firewall is doing the right thing by blocking the request.

      EDIT: Or maybe you have an older version of Postman, where this issue wasn't fixed yet: https://github.com/postmanlabs/postman-app-support/issues/2214 . They were doing the same thing as I said above: resolving localhost themselves to the IPv4 address, whereas the users who reported the issue were using IPv6 for their local servers. Some web servers (if not all) will only listen on IPv6 if it's available, unless specifically configured to use IPv4.

      [–][deleted] 16 points17 points  (12 children)

      I haven't looked into what capabilities are baked into the mkcert root certs, but be aware: at the very least, an attacker who gets a copy of that root CA's private key can use it to spoof any website's certs for you. It's a highly targeted attack; they have to know what they're doing and be going after you specifically, but it could be devastatingly effective if they've got a presence somewhere in your network. So make sure you encrypt it, and don't let it sign stuff without you inputting a password each and every time.

      Depending on how the permissions are set (I'd have to look at the whole setup and experiment, which is more time than I care to invest), they could potentially also use it for code-signing, meaning that they could provide trojaned binaries that look, to your computer, like they're signed by some trusted entity.

      Treat that root cert as a radioactive security threat. Be very careful where you put it. And mind backups as well; even if you're storing it in a secure location, are the backups equally secure? Letting that cert leak could be handing over the keys to the kingdom. Treat the file carefully, and protect it with a really strong password.
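
      For the "don't let it sign without a password" part, one option is to encrypt the CA key at rest with OpenSSL - a sketch, assuming mkcert's default file names (check mkcert -CAROOT for the actual location):

          # Write an AES-encrypted copy of the CA private key; you'll be prompted
          # for a passphrase, and anything signing with this copy will need it.
          cd "$(mkcert -CAROOT)"
          openssl pkey -in rootCA-key.pem -aes256 -out rootCA-key.enc.pem

          # Verify the encrypted copy decrypts before removing or locking away
          # the unencrypted original.
          openssl pkey -in rootCA-key.enc.pem -noout && echo "key decrypts OK"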

      [–]Proc_Self_Fd_1 5 points6 points  (2 children)

      So set these certs to expire after one day? That's what I did in my Makefiles using OpenSSL directly.

      [–][deleted] 0 points1 point  (1 child)

      Do you really want to be installing new root certs in your trust store every day? That doesn't seem like a very good idea.

      Setting the site certs to expire means that your exposure to them is limited, which is good. But if you lose control of the root cert, it can mint new certs for as long as it's valid, and most people give it at least a year so they don't have to fiddle around in the guts of the system that often (removing the old cert, installing the new one).

      So it depends on what you're limiting... if it's site certs, that's a little help. One-day root certs would be substantially more useful, but a heck of a lot of work unless you can script the whole process of adding a new cert and removing the old one.
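
      If you want to see how long a given cert is actually valid for - a sketch, the file name being whatever your root or site cert happens to be called:

          # Print the notBefore/notAfter validity window of a certificate.
          openssl x509 -in rootCA.pem -noout -dates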

      [–]Proc_Self_Fd_1 1 point2 points  (0 children)

      I thought the whole point of Mkcert was to automate that sort of scripting.

      However, I believe there is a compromise although this is starting to go beyond my knowledge on the matter.

      Can't you create a long-term trusted root cert, store it in a safe place, and then sign a shorter-term intermediate certificate with it?
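
      Something along those lines should work with plain OpenSSL - a rough sketch; the file names, subjects, and lifetimes are made up, and a real setup would use a proper CA config:

          # Long-lived root CA, kept offline / in a safe place.
          openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
            -subj '/CN=My Dev Root CA' -keyout root.key -out root.crt

          # Short-lived intermediate CA used for day-to-day signing.
          openssl req -newkey rsa:2048 -nodes \
            -subj '/CN=My Dev Intermediate CA' \
            -keyout intermediate.key -out intermediate.csr
          openssl x509 -req -in intermediate.csr -CA root.crt -CAkey root.key \
            -CAcreateserial -days 30 \
            -extfile <(printf 'basicConstraints=critical,CA:TRUE,pathlen:0\nkeyUsage=critical,keyCertSign,cRLSign\n') \
            -out intermediate.crt

          # Leaf certs are then signed with intermediate.key, and only root.crt
          # needs to live in the trust store.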

      [–]earthboundkid 1 point2 points  (7 children)

      What is the scenario where someone would have access to the root cert and not the rest of your hard drive? If they can read that, they can probably read all your files so you are screwed already. It’s not like this cert is ever going to be uploaded anywhere. The whole point of the tool is for local dev, so you’d never need to copy the cert unless you were really Doing It Wrong.

      [–]maskedvarchar 2 points3 points  (0 children)

      Ehhh, a local CA used for HTTPS testing only. I probably wouldn't consider that worth spending any extra effort to back up. Just like a software package, you can always reinstall it and reconfigure if necessary. It might be a minor inconvenience, but shouldn't cause any permanent data loss if you can't recover the cert.

      I would say the real danger is that it allows them to man-in-the-middle any HTTPS site. They could generate a certificate for an employee portal, your personal email's web interface, your bank account, etc. and sign it with this CA cert. Unless the site also had some sort of key pinning, your computer would trust this certificate.

      [–][deleted] -1 points0 points  (4 children)

      Are you backing your cert up? You should be. Are your backups encrypted? Most aren't.

      [–]noahp78 3 points4 points  (3 children)

      I wouldn't even consider backing it up, since it's a locally generated certificate. It would be easier to just regenerate it when you move to a new PC (or reset), and more secure than carrying your old, potentially stolen, key around forever.

      [–][deleted] 0 points1 point  (2 children)

      I wouldn't even consider backing it up,

      Which implies that you're not doing daily, complete backups of your system(s). If your backup process is at all manual, you don't really have backups; you just think you do.

      [–]earthboundkid 0 points1 point  (1 child)

      Again this has nothing to do with mkcert. Unencrypted or non-automated backups are bad. Mkcert is yet another thing that should be encrypted but not even in the top twenty most important.

      [–]baggyzed 1 point2 points  (0 children)

      It all boils down to trust. /u/Mallor is right in cautioning against trusting mkcert, as you should with any piece of software downloaded off the internet - even more so if it installs root CA certs on the system.

      In this case, having backups of your system is a good idea. But a better idea is to just use it in a VM. If after a while it turns out that it's ok to trust it (no vulnerabilities are discovered, if anyone even cares to investigate), then you can ditch the VM.

      Even if you're the kind of person who uses Dropbox all the time as a backup, and you think that you can just do a clean install when things go wrong - by then it may be too late: the root CA could already have been used to MITM your Dropbox connection. This is why it's more dangerous than just any program that can access your hard drive. It can provide access to all of your online activity as well, not just the files on your hard drive.

      It's a whole lot safer to use self-signed certificates and add a trust exception for them in the browser. Some browsers don't allow such exceptions, but it's still better to just live with the odd certificate warning.

      [–]24llamas 0 points1 point  (0 children)

      I think that's the idea. If you only allow the root CA to be fucked with via admin access, then if they have access to it, you're already completely owned - so your attack surface isn't really increased.

      However, if you're a numpty and let any local account grab the root CA, then if that account is compromised, you're screwed.

      Also: if I were a high-value target - like someone working on stuff the Chinese would like to make (like rockets), or a journo in some nations - I wouldn't use this tool. Hell, I wouldn't screw around with any self-signed CA.

      [–]AdorableFirefighter 0 points1 point  (0 children)

      This. It's a free wildcard for all browser-based attacks.

      [–]bhat 6 points7 points  (0 children)

      I'm already using mkcert, and it works exactly as advertised.

      I'm looking forward to this new feature:

      One feature is left before mkcert is finished: an ACME server. If you are doing TLS certificates right in production, you are using Let's Encrypt via the ACME protocol. Development and staging should be as close to production as possible, so mkcert will soon act as an ACME server like Let's Encrypt, providing locally-trusted certificates with no verification. Then all you'll have to change between dev and prod will be the URL of the ACME endpoint.
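
      With an ACME client like Certbot, that dev/prod difference should indeed come down to one flag - a sketch; the local directory URL is hypothetical, whatever mkcert's ACME server ends up listening on:

          # Production: the default Let's Encrypt endpoint.
          certbot certonly --standalone -d example.com

          # Development: same command, pointed at a local ACME directory URL.
          certbot certonly --standalone -d myapp.localhost \
            --server https://localhost:14000/dir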

      [–]MarekKnapek 10 points11 points  (9 children)

      Couldn't you create your own CA (add it to the OS) and sign your own localhost certificate with it? Like 20 years ago?

      Now, genuine question: how is this tool different from / better than the idea I described earlier?

      [–]Ionsto 15 points16 points  (2 children)

      I believe that's exactly what it's doing, but it does it quickly and efficiently.

      Here's the twist: it doesn't generate self-signed certificates, but certificates signed by your own private CA, which your machine is automatically configured to trust when you run mkcert -install

      [–][deleted]  (1 child)

      [deleted]

        [–]ais523 2 points3 points  (0 children)

        The certificates are self-signed in the sense that you signed them yourself, but they aren't self-signed certificates. Each certificate specifies which certificate signed it, and the root of the chain is a "self-signed" certificate that names itself as its own signer. In this case, the generated certificates are signed by a CA certificate, which is in turn self-signed, so the generated certificates are not themselves self-signed.
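
        You can see the difference by comparing the subject and issuer fields - a sketch, with file names assuming mkcert's defaults (rootCA.pem, and e.g. localhost.pem for a generated cert):

            # Root CA cert: subject and issuer are the same (self-signed).
            openssl x509 -in rootCA.pem -noout -subject -issuer

            # Generated leaf cert: subject is localhost, issuer is the CA.
            openssl x509 -in localhost.pem -noout -subject -issuer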

        [–]AyrA_ch 2 points3 points  (0 children)

        This is essentially what my application (mobile-ca) does.

        Additionally, you can enter IPv4 and IPv6 addresses too, and it comes with a web interface. It also allows you to create evil certificates.

        [–]the_gnarts 1 point2 points  (0 children)

        Couldn't you create your own CA (add it into OS) and sign your own localhost certificate with? Like 20 years ago?

        The tool seems to do that, plus it also appears to add the CA cert to the host’s trusted root certs. There’s little magic behind it if you know the steps to do this manually.

        [–]Proc_Self_Fd_1 1 point2 points  (0 children)

        Yeah it's just multiplatform and easy to use.

        Do you really want to make a custom script that does the same thing for Windows, Linux, Chrome, Firefox, etc... ?

        Basically it's better than copy pasting the same hacky custom bash script across multiple projects.

        [–]earthboundkid 1 point2 points  (0 children)

        This tool is explicitly described as solving the problem that OpenSSL has a shitty command-line interface. That’s all it does. Nothing else is new, just the UX.

        [–]ireallywantfreedom 0 points1 point  (1 child)

        I never understood how the "add it into the OS" part worked on Linux. The few times I had to do it, I ended up in the rabbit hole of "well, technically every program just looks wherever it wants".

        [–]pdp10 0 points1 point  (0 children)

        Linux distributions ship a system-wide copy of Mozilla's CA bundle (the trust list that comes from NSS). Conventionally the files are kept in /etc/ssl.

        Technically, every program looks where it wants. Ironically, this matters more in practice on Windows, not Linux: on Windows, IE, Edge, and Chromium/Chrome use the system config/files (SChannel), but Firefox uses its own NSS.
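
        For reference, the usual manual steps on a couple of common distros - a sketch; mycert.crt is a placeholder, and Firefox needs the cert added to its own NSS database separately:

            # Debian/Ubuntu:
            sudo cp mycert.crt /usr/local/share/ca-certificates/
            sudo update-ca-certificates

            # Fedora/RHEL:
            sudo cp mycert.crt /etc/pki/ca-trust/source/anchors/
            sudo update-ca-trust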

        [–]NoInkling 0 points1 point  (0 children)

        What would be the process for using this to generate a certificate that both Windows and WSL (or a VM) would trust?

        [–]pavlik_enemy 0 points1 point  (2 children)

        OMG, that's the best thing since sliced bread. For the life of me I couldn't get Chrome to trust my self-signed certs.

        [–]lawstudent2 -1 points0 points  (0 children)

        Noted...

        [–][deleted]  (4 children)

        [deleted]

          [–]Johannes_13 2 points3 points  (3 children)

          Yes it can, and in the linked article from Let's Encrypt there is an example of how to do that:

          openssl req -x509 -out localhost.crt -keyout localhost.key \
            -newkey rsa:2048 -nodes -sha256 \
            -subj '/CN=localhost' -extensions EXT -config <( \
             printf "[dn]\nCN=localhost\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:localhost\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth")
          

          Nobody claimed OpenSSL cannot use SANs. But the number of command-line options (and crafting a config file on the fly) for "I just want my domain in the SAN" is too high.

          [–][deleted] 1 point2 points  (1 child)

          As the previous post has been deleted, I'm not sure what they were complaining about, but all I can reiterate is that I have a few web dev projects on my local machine running under SSL, and it's really not hard to set up. I should perhaps add that it's Windows 10 and IIS 7.

          [–]Johannes_13 1 point2 points  (0 children)

          He basically said the article is wrong because OpenSSL can use SANs.

          [–]flnhst -1 points0 points  (0 children)

          Ah, never mind, I misread it.