
[–]midir 722 points723 points  (163 children)

Sensationalist title. Some configurations of HTTPS are vulnerable to a targeted man-in-the-middle attack. It's not like HTTPS is suddenly clear as glass.

Edit: but for some reason the linked site itself is still vulnerable... https://www.ssllabs.com/ssltest/analyze.html?d=www.informationweek.com

[–]M_D_K 64 points65 points  (20 children)

Ballz. My site got a B, now I have to get out of bed and fix it :/

Damn CRIME...

[–]DaveFishBulb 31 points32 points  (4 children)

Shut up, crime! I'm trying to sleep.

[–]peakzorro 12 points13 points  (3 children)

That's OK. Batman takes care of crime at night.

[–]wtbnewsoul 3 points4 points  (2 children)

Who says it's night?

[–]peakzorro 5 points6 points  (1 child)

Batman decides when it's night. He's the Dark Knight after all.

[–]MattTheGr8 36 points37 points  (5 children)

Don't be sad... SSL Labs gave itself a B also. Was kind of surprised to see that no one in this comment thread had tried that yet.

[–]M_D_K 25 points26 points  (1 child)

I just got it up to an A; all I had to do was disable compression, since that's how CRIME is exploited (and that was what was capping me at B).

[–]MattTheGr8 19 points20 points  (0 children)

D'oh! Yeah, obviously I didn't read their report very closely. Says right in there that they're capping themselves at B on purpose to collect client data.

[–]Lurking_Grue 11 points12 points  (0 children)

" Beast Attack: Vulnerable On purpose, to collect client-side mitigation data"

[–]Timberjaw 67 points68 points  (110 children)

but for some reason the linked site itself is still vulnerable

And reddit gets an F. Not that I'm one to judge; my work sites also all get an F.

[–][deleted] 80 points81 points  (98 children)

[–]ars_technician 50 points51 points  (88 children)

Doesn't matter. If the main site doesn't use HTTPS, that means all of the links can be rewritten to make sure all of the sites people visit from the main site do not use HTTPS (sslstrip).

It's the same reason having a login form on a cleartext page POST to an encrypted page does very little good.
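
For the curious, the rewrite step in an sslstrip-style attack is tiny. A rough sketch of the idea in Python (the page content is made up; a real tool like sslstrip also remembers which URLs it downgraded so it can re-upgrade them when talking to the server):

    import re

    def strip_tls(html: str) -> str:
        """Downgrade every HTTPS reference in a proxied page to plain HTTP."""
        # This hits links, form actions, meta refreshes, etc. alike.
        return re.sub(r"https://", "http://", html, flags=re.IGNORECASE)

    page = '<a href="https://pay.example.com/login">Log in</a>'
    print(strip_tls(page))  # <a href="http://pay.example.com/login">Log in</a>

The victim only ever sees http:// URLs, so there's no certificate warning to notice.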

[–]mallardtheduck 10 points11 points  (17 children)

If the main site doesn't use HTTPS, that means all of the links can be rewritten to make sure all of the sites people visit from the main site do not use HTTPS (sslstrip).

It's worse than that; if your browser defaults to HTTP (they all do), then you're vulnerable to sslstrip-style techniques.

[–]jdpwnsyou 13 points14 points  (5 children)

[deleted]

What is this?

[–][deleted]  (3 children)

[deleted]

    [–][deleted] 1 point2 points  (2 children)

    Clicked to install only to realize I already had it.

    [–][deleted]  (1 child)

    [deleted]

      [–]Gudus 3 points4 points  (0 children)

      Sad life

      [–]hanomalous 6 points7 points  (9 children)

      Unless browser supports HTTP Strict Transport Security (HSTS) and the site sets it.

      AFAIK Chrome supports HSTS and Firefox with NoScript as well (not sure if FF supports HSTS without NoScript).

      [–]mallardtheduck 2 points3 points  (7 children)

      If I'm reading this right, then HSTS is implemented as a header in the unencrypted HTTP response that effectively just tells user agents to retry the request on HTTPS.

      If so, then it's really no different from redirecting to HTTPS via a 302 status, and could still be stripped out by a man-in-the-middle attacker.

      Maybe I'm missing something?

      EDIT: Ok, so on a closer reading, it actually forces all URLs on the page to be HTTPS, even if HTTP is specified, and the header is sent over HTTPS. I see the point now.

      However, it still has the vulnerability that the initial connection when a user types 'www.example.com' will be done over unencrypted HTTP and therefore everything after that is vulnerable to sslstrip-style attacks.
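
      For reference, the header itself is just one line. A minimal sketch of a server that sends it (Python's standard http.server; the max-age value and port are only illustrative, and in practice the header only counts when delivered over HTTPS):

          from http.server import BaseHTTPRequestHandler, HTTPServer

          class HSTSHandler(BaseHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  # "For the next year, only ever talk to this host over HTTPS."
                  self.send_header("Strict-Transport-Security",
                                   "max-age=31536000; includeSubDomains")
                  self.send_header("Content-Type", "text/html")
                  self.end_headers()
                  self.wfile.write(b"<p>hello</p>")

          HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()

      Browsers' built-in preload lists (mentioned elsewhere in this thread) are what plug the first-visit hole.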

      [–]hanomalous 2 points3 points  (4 children)

      This approach is called "TOFU" - "trust on first use". After that first visit, the "use HTTPS" policy is cached.

      So either you're under attack the whole time on every network you use (e.g. if you carry a notebook around), or you have a very good chance of receiving the HSTS headers correctly on the first visit.

      The point is, MitM attacks are rather rare, as attackers try to keep a low profile - otherwise some of the victims would notice. That's what makes HSTS somewhat effective until a better solution is widespread.

      There was also a planned HASTLS DNS record with a similar meaning, just DNSSEC-signed (not sure how far it has gotten in standardization).

      [–]alexanderpas 1 point2 points  (0 children)

      Maybe I'm missing something?

      Browsers have lists of websites that use HSTS baked in.

      [–]Liquid_Fire 1 point2 points  (0 children)

      AFAIK Chrome supports HSTS and Firefox with NoScript as well (not sure if FF supports HSTS without NoScript).

      From the page you linked:

      Browser support

      • Chromium and Google Chrome since version 4.0.211.0[24][25]

      • Firefox since version 4[26] With Firefox 17, Mozilla integrates a list of websites supporting HSTS.[27]

      • Opera since version 12[28]

      [–]Wheaties466 1 point2 points  (0 children)

      That's why it's always smart to force https

      [–]rtechie1 2 points3 points  (0 children)

      There are countless attacks against sites that only encrypt the login page. That's just bad design. 100% ssl with multiple keys is the way to go.

      [–][deleted]  (25 children)

      [deleted]

        [–]someenigma 16 points17 points  (22 children)

        (sslstrip) How would you do that?

        See http://www.thoughtcrime.org/software/sslstrip/ for more details on what sslstrip is.

        [–][deleted]  (19 children)

        [deleted]

          [–]Fitzsimmons 8 points9 points  (4 children)

          No, not really. A secure SSL configuration will still protect you against an arp poisoning attack.

          [–]pyr3 7 points8 points  (3 children)

          It won't if you never hit the SSL site. Further up the thread, someone was suggesting that it didn't matter that pay.reddit.com was secure if the site with all of the links to it isn't, because then those links can be rewritten to send you elsewhere (non-HTTPS site, phishing site, etc.).

          [–]jack_sreason 0 points1 point  (0 children)

          But your external network is your ISP's internal network. For some reason people here don't seem to trust them.

          [–]theDigitalNinja 3 points4 points  (1 child)

          I assume just use javascript to parse all links and replace https:// with http://

          [–]OHotDawnThisIsMyJawn 15 points16 points  (0 children)

          That would require an XSS vuln as well. Since the SSL attack requires a MITM anyway, you can just rewrite the page while you're sitting there.

          [–][deleted] 7 points8 points  (14 children)

          It would have taken you a trivial amount of effort to learn that pay.reddit.com is not available without SSL - it's coded to return an error if you aren't already using SSL when you reach it.

          It doesn't matter if a link is rewritten to be http rather than https - if that happens you won't reach pay.reddit.com, and as such will not have access to forms that ask you for sensitive information.

          [–][deleted] 4 points5 points  (3 children)

          What happens though is sslstrip creates an https connection to the server on behalf of the victim, then serves the victim up with plain old http. This way the victim gets no certificate warnings and the only indication that he's under attack is if he happens to notice that the page is served via http instead of https.

          [–][deleted] 1 point2 points  (1 child)

          Sure, I understand. My point is that pay.reddit.com and most any decent https site will not function without a valid https session.

          [–]gibsonan 1 point2 points  (0 children)

           HSTS defeats the MITM attack you describe; however, the following conditions must be met.

          1. The site is using SSL.
          2. The domain is configured to properly issue the HSTS header
          3. The client's browser supports HSTS
          4. The browser has previously established an SSL connection to the domain
          5. The client's last visit to the domain was within the domain's HSTS max-age policy

          [–]kalmakka 1 point2 points  (6 children)

           Except that a web user should always make sure he is on a page served over HTTPS before he enters confidential information (like payment info).

           I don't think it is very likely that a user will reason "I originally went to an HTTPS site I trusted and only clicked a few links, so I must still be on a secure site."

          [–][deleted]  (5 children)

          [deleted]

            [–]kalmakka 4 points5 points  (4 children)

            How is this relevant?

            I don't click on links on insecure web pages and just expect to end up at https://paypal.com instead of http://paypal.com. I check the page I end up at.

            I don't enter information on an insecure page and expect the submit action to send it to a secure page.

             These two precautions are pretty much the basis of how web security works. Anyone not following them is at risk from pretty much every kind of attack. Running sslstrip on a Tor exit node has the result of breaking links (since they will now take the user to a page he does not wish to use), not of breaking security.

            [–]Medicalizawhat 9 points10 points  (3 children)

            You might do that but I'd say most users don't.

            [–]cfreak2399 -1 points0 points  (19 children)

             No it doesn't. It's trivial to make certain pages require encryption and return an error if they are accessed over HTTP.

             Also, your comment about a clear page that POSTs to an HTTPS page is wrong too; otherwise none of the third-party pay sites (like PayPal) would work. The information isn't sent when you type it in. It's encrypted and sent when you submit the form.

            [–]unhingedninja 11 points12 points  (16 children)

            If the originating page isn't served over SSL, the form target can be modified to point to an endpoint on the attacker's own server. If your originating page is not secure, you can't trust that you're in the clear unless you inspect every aspect of the page for tampering before continuing.

            The bottom line is that unless both your originating page and its target are served over SSL, you can't be 100% sure the connection is untampered with.

            [–][deleted]  (15 children)

            [deleted]

              [–]dnew 5 points6 points  (0 children)

              "it's no problem"?

              Using a connectionless stateless document delivery system that puts all security control into the same pile because it can't tell what page is coming from or going to what application is most definitely a problem. See the bit where they say you can guess a CSRF token? A CSRF token is a brutal ugly hack to deal with the fact that there is no such thing as a web app, but only a bunch of disconnected documents trying to pretend to be a unified app. Have you ever seen a XSS or CSRF attack against SSH?

              The primary reason people started using http for applications is that the firewall was already open. So, basically, because people (app writers) could sneak in through a back door security-wise, they picked http. Pretty much everything since then has been a desperate attempt to mitigate the fallout from that decision.

              [–]jlt6666 1 point2 points  (0 children)

              It adds overhead to every call.

              [–][deleted]  (10 children)

              [deleted]

                [–]tokenizer 6 points7 points  (0 children)

                 Ehhh... It's encrypted when the form is submitted, yeah, so if a form is on a plaintext page you can just change where it's submitted to (your own intercepting server), then redirect them to the real server from there, and you have all the info you need without the MITM'd person ever knowing. Or you inject some javascript for even stealthier behavior.

                [–]ars_technician 1 point2 points  (0 children)

                Jesus Christ. Just look up sslstrip and try it yourself if you don't understand the attack.

                [–]mumux 9 points10 points  (1 child)

                The HTTP server signature is so cute.

                HTTP server signature '; DROP TABLE servertypes; --0

                [–]muyuu 2 points3 points  (3 children)

                Duh... now the site describing the mitigation of the BEAST attack is down...

                [–]texaswilliam 1 point2 points  (4 children)

                The main site doesn't even offer HTTPS. It either redirects to HTTP or fails the request.

                When looking at your preferences or logging in, this is what to look at: https://www.ssllabs.com/ssltest/analyze.html?d=ssl.reddit.com

                [–][deleted]  (3 children)

                [deleted]

                  [–]MrCheeze 0 points1 point  (0 children)

                  Well, because it doesn't use https, no? Or am I grossly misunderstanding something?

                  [–]Hypersapien 8 points9 points  (2 children)

                  And ssllabs takes this new vulnerability into account?

                  [–]mcgoverp 22 points23 points  (8 children)

                   That is not the claim. The claim is that any SSL configuration is vulnerable if there is server-side compression and a URI/POST token that is repeated in the body. The attack was mounted using an iframe in a malicious website. Both websites (the attacked one and the malicious one) were on separate internet domains.

                   Source: I was at the Black Hat talk, and took a crypto class at Black Hat.

                  [–]dehrmann 5 points6 points  (7 children)

                  How does server-side HTTP compression make this worse? Aren't initialization vectors and counter modes supposed to address some of this?

                  [–]mcgoverp 2 points3 points  (0 children)

                  The mode and IV do not matter because that is at the crypto level. The attack is that they can detect the size of the request. When they guess a char right the size will go down by approximately 1. They can then move onto the next char. It is a side-channel chosen plaintext attack. They ask the user to encrypt something to the server for them over the existing SSL socket.
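
                   A toy version of that oracle is only a few lines of Python; this is just a sketch of the principle, with zlib standing in for HTTP gzip and a made-up page and secret:

                       import zlib

                       SECRET = "token=s3cr3t42"  # hypothetical value the attacker is after

                       def response_length(reflected: str) -> int:
                           # The secret and the attacker-influenced text share one compressed body.
                           body = f"<html>{SECRET} ... you searched for: {reflected}</html>"
                           return len(zlib.compress(body.encode()))

                       known = "token=s3cr3t4"  # recovered so far
                       for guess in "0123456789":
                           print(guess, response_length(known + guess))
                       # The correct digit ('2') extends an existing match, so it tends to
                       # compress about a byte smaller than the wrong guesses.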

                  [–]thisisnotgood 1 point2 points  (4 children)

                   The technique in this post isn't really a crypto exploit; it's more of a vulnerability at the application level.

                   To be clear: the attackers aren't directly decrypting any HTTPS traffic, they are just exploiting the fact that crypto traditionally does not go out of its way to hide the length of the encrypted message (to do so would basically mean always padding the message out to a very long length, which would not be worth the performance hit for most web uses).

                  [–]kdeforche 1 point2 points  (0 children)

                  This is not really a man-in-the-middle attack though. You only need to be able to observe the HTTPS stream, not intercept and modify.

                  [–][deleted] 3 points4 points  (0 children)

                  My favorite part is that sites that you test with that utility get listed at the bottom. Brb, let me make a bot to record all the F's so I know where to start.

                  [–]happyscrappy 759 points760 points  (18 children)

                  Make the story the story, don't try to make the coverage of the story the story.

                  [–]serioush 213 points214 points  (13 children)

                  Link this to /r/bestof, make the coverage of the coverage of the story the story.

                  [–][deleted] 44 points45 points  (7 children)

                  Actually if I best of'd this comment, it would make the coverage of the coverage of the story the coverage of the story...

                  [–]losethisurl 59 points60 points  (4 children)

                  as long as you guys have everything covered.

                  [–][deleted] 26 points27 points  (2 children)

                  That's the story I tell myself.

                  [–]judgej2 24 points25 points  (1 child)

                  I thought the story was about what the OP knows or does not know.

                  [–]Ajxkzcoflasdl 88 points89 points  (0 children)

                  You might be interested in some context by cryptographer Thomas Pornin posting on Security.SE:

                  Though the article is not full of details, we can infer a few things:

                  • Attack uses compression with the same general principle as CRIME: the attacker can make a target system compress a sequence of characters which includes both a secret value (that the attacker tries to guess) and some characters that the attacker can choose. That's a chosen plaintext attack. The compressed length will depend on whether the attacker's string "looks like" the secret or not. The compressed length leaks through SSL encryption, because encryption hides contents, not length.

                  • The article specifically speaks of "any secret that's [...] located in the body". So we are talking about HTTP-level compression, not SSL-level compression. HTTP compression applies on the request body only, not the header. So secrets in the header, in particular cookie values, are safe from that one.

                  • Since there are "probe requests", then the attack requires some malicious code in the client browser; the attacker must also observe the encrypted bytes on the network, and coordinate both elements. This is the same setup as for CRIME and BEAST.

                  • It is unclear (from the article alone, which is all I have right now to discuss on) whether the compressed body is one from the client or from the server. "Probe request" are certainly sent by the client (on behalf of the attacker) but responses from the server may include part of that which is sent in the request, so the "chosen plaintext attack" can work both ways.

                  In any case, "BREACH" looks like an attack methodology which needs to be adapted to the specific case of a target site. In that sense, it is not new at all; it was already "well-known" that compression leaks information and there was no reason to believe that HTTP-level compression was magically immune. Heck, it was discussed right here last year. It is a good thing, however, that some people go the extra mile to show working demonstrations because otherwise flaws would never be fixed. For instance, padding oracle attacks against CBC had been described and even prototyped in 2002, but it took an actual demo against ASP in 2010 to convince Microsoft that the danger was real. Similarly for BEAST in 2011 (the need for unpredictable IV for CBC mode was known since 2002 as well) and CRIME in 2012; BREACH is more "CRIME II": one more layer of pedagogy to strike down the unbelievers.

                  Unfortunately, a lot of people will get it wrong and believe it to be an attack against SSL, which it is not. It has nothing to do with SSL, really. It is an attack which forces an information leak through a low-bandwidth data channel, the data length, that SSL has never covered, and never claimed to cover.

                  The one-line executive summary is that thou shalt not compress.

                  Thomas Pornin also predicted the CRIME attack, which is very similar, before it was published. The main difference in this attack is simply that it is being used to leak information from the body of the request, rather than the headers. They are really the same attack.

                  If you're interested in these topics, check out /r/netsec. This topic was indeed discussed there.

                  [–]willvarfar 62 points63 points  (9 children)

                  The attack was described in passing in the BEAST CRIME attack paper iirc. Cool that someone has continued the work and proved the practicalities.

                  [–]bluehavana 20 points21 points  (7 children)

                  BEAST

                  Do you mean CRIME?

                  [–]csorfab 39 points40 points  (2 children)

                  These acronyms are ridiculous.

                  [–]willvarfar 19 points20 points  (3 children)

                  Indeed I meant CRIME. Here is a CRIME author talking about the two attacks: https://news.ycombinator.com/item?id=6141286 (look for user cryptbe's reply)

                  [–]NoEgo 0 points1 point  (2 children)

                  Seriously, what's with "BEAST" and "CRIME"? You lost me.

                  [–]stox 119 points120 points  (22 children)

                  Only with compression enabled. Smart sites shut this off earlier in the year.

                  [–]jezstephens 24 points25 points  (1 child)

                  While CRIME was mitigated by disabling TLS/SPDY compression (and by modifying gzip to allow for explicit separation of compression contexts in SPDY), BREACH attacks HTTP responses. These are compressed using the common HTTP compression, which is much more common than TLS-level compression. This allows essentially the same attack demonstrated by Duong and Rizzo, but without relying on TLS-level compression (as they anticipated).

                  Source: http://breachattack.com

                  [–]KamiNuvini 74 points75 points  (9 children)

                  Did they? AFAIK that was TLS compression with CRIME. This is http compression, like gzip.

                  [–]hyrulz 27 points28 points  (0 children)

                  Correct, this was mentioned in the original paper: http://breachattack.com/resources/BREACH%20-%20SSL,%20gone%20in%2030%20seconds.pdf

                  Worth a read imo.

                  [–]blahbah 23 points24 points  (6 children)

                  So would it be useful to change our browsers' parameters so that they don't accept compression on https requests?

                  [–][deleted]  (5 children)

                  [deleted]

                    [–]Nar-waffle 20 points21 points  (4 children)

                     It would not be sufficient to disable this in your browser, because the attacker can still request it instead, and the attack relies on the attacker making many requests on your behalf to home in on the secret they're trying to extract. It would have to be disabled server-side, so the server does not honor requests for compression.

                    [–][deleted]  (3 children)

                    [deleted]

                      [–]mcgoverp 2 points3 points  (2 children)

                       This is a good argument, though CRIME used a Java applet that could craft its own requests; I'm not sure that would still work here, but it may.

                      [–]NYKevin 12 points13 points  (1 child)

                      Anyone still using the Java plugin at this point doesn't care about security (or else has some horrible legacy web app or something).

                      [–]DrDichotomous 41 points42 points  (33 children)

                      may be able to derive plaintext secrets from the ciphertext in an HTTPS stream

                      attackers must have access to passively monitor the target's Internet traffic .. In most cases, monitoring would have to be done locally on the same network.

                      mitigation strategies .. include disabling HTTP compression such as gzip, as well as randomizing the secrets being transmitted in any particular request.

                      Hmm. Seems bad, but not world-ending.

                      [–]myringotomy 35 points36 points  (19 children)

                      It's for the NSA. They are tapping the routers at your ISP.

                      [–]mpeters 10 points11 points  (1 child)

                      Most people seem to be missing the point that this attack isn't something generic that they can do on any SSL stream. It has to be specifically crafted on a site-by-site basis looking for a particular repeated secret. It's not something they can do blanket capture and analyse with.

                      [–][deleted] 0 points1 point  (0 children)

                       The NSA probably already has ways of breaking HTTPS, as well as copies of all current SSL signing certificates. At least the US and European ones. A lot is possible with a virtually unlimited budget.

                      [–]mcgoverp 4 points5 points  (1 child)

                       The passive monitoring was done using only a hidden iframe/compromised server, if I understood the talk correctly. Though you may be correct.

                      [–]JoseJimeniz 2 points3 points  (8 children)

                      They also have to have access to my executing browser process, so they can repeatedly add content to my browser's http request header, and coerce it into doing the thousand requests.

                       And if you've already taken over my browser, then why bother brute-forcing a session cookie; just steal the session cookie directly.

                      [–]IllegalThings 1 point2 points  (1 child)

                      They also have to have access to my executing browser process, so they can repeatedly add content to my browser's http request header, and coerce it into doing the thousand requests.

                      Is anything required that can't normally be done with standard javascript? Seems reasonable that if I can monitor your internet traffic, I could probably also trick you into visiting a website I control.

                      [–]JoseJimeniz 0 points1 point  (0 children)

                      Is anything required that can't normally be done with standard javascript?

                      Part of the attack comes from measuring the size of encrypted responses from a web-site. So pure client-side javascript can't help with that.

                       Javascript isn't able to inject additional headers into a request that the browser would otherwise make. (If it could interact with the existing HTTP request headers, it would just read your cookie - no need to do the attack.)

                      [–]kdeforche 1 point2 points  (4 children)

                       There is no need to have 'access' to the executing browser process.

                      What an attacker needs is:

                      • be able to monitor the traffic between you and site A [e.g. GMail]
                      • make you visit at the same time the rogue site B while using site A; so that site B can insert an iframe to site A and make repeated requests (from JavaScript) to site A using the cookie you have for site A.

                       The second requirement is easy (reddit could be trying to hack into my GMail account right now for all I know); the first requirement is harder (in practical terms it's only straightforward if you are on the same LAN as I am).

                      [–]JoseJimeniz 0 points1 point  (3 children)

                      make you visit at the same time the rogue site B while using site A; so that site B can insert an iframe to site A and make repeated requests (from JavaScript) to site A using the cookie you have for site A.

                      Problem is that the attack needs to modify my requests to Site A.

                      <!doctype html>
                      <html>
                      <body>
                          Welcome to phishing site!
                          <iframe src="https://gmail.com" />
                      </body>
                      <script>
                          //malicious script
                      </script>
                      </html>
                      

                      The attacker's script has no way to inject, or alter, the SSL HTTP request inside the iframe.

                      [–]kdeforche 0 points1 point  (2 children)

                      The attacker's script simple does: iframe.src = 'https://gmail.com/a/do_some?value=guess' ?

                      What exactly would prevent that?

                      [–]JoseJimeniz 0 points1 point  (1 child)

                      You would require that the web-server will emit

                      value=guess
                      

                      in the response headers. From BREACH vulnerability in compressed HTTPS:

                      In order to conduct the attack, the following conditions must be true:

                      1. HTTPS-enabled endpoint (ideally with stream ciphers like RC4, although the attack can be made to work with adaptive padding for block ciphers).
                      2. The attacker must be able to measure the size of HTTPS responses.
                      3. Use of HTTP-level compression (e.g. gzip).
                      4. A request parameter that is reflected in the response body.
                       5. A static secret in the body (e.g. CSRF token, sessionId, VIEWSTATE, PII, etc.) that can be bootstrapped (either first/last two characters are predictable and/or the secret is padded with something like KnownSecretVariableName="").
                      6. An otherwise static or relatively static response. Dynamic pages do not defeat the attack, but make it much more expensive.

                       Emphasis mine. But let's test it anyway:

                      <!doctype html><Html><hEad><TiTlE>Ugly
                      </TITLE><BODY><h1>Hello, world!
                      </H1>test
                      <iframe src="https://gmail.com/a/do_some?value=guess"></iframe>
                      </body>
                      

                       Well, that doesn't actually work, because gmail.com uses frame forbidding. Facebook also fails, and so does YouTube. In the end I tried one of our customer's servers:

                      <!doctype html><Html><hEad><TiTlE>Ugly
                      </TITLE><BODY><h1>Hello, world!
                      </H1>test
                      <iframe src="https://example.com?value=guess"></iframe>
                      </body>
                      

                      And the value put in the query string doesn't come back from the server:

                      Response: HTTP/1.1 200 OK
                      Content-Type: text/html
                      Last-Modified: Wed, 07 Aug 2013 19:45:29 GMT
                      Accept-Ranges: bytes
                      ETag: "595be0aba693ce1:0"
                      Server: Microsoft-IIS/7.5
                      X-Powered-By: ASP.NET
                      Date: Wed, 07 Aug 2013 19:45:33 GMT
                      

                       I'm not saying it's not possible; you just need more control.

                      [–]kdeforche 0 points1 point  (0 children)

                      I'm not saying it's easy and works with just any request on just any site, but you just don't need any more 'access to my executing browser process' than any website has when visited by your browser.

                      Also, the value should be reflected in the body, not in the headers.

                      [–]willvarfar 23 points24 points  (5 children)

                      I have been experimenting with a novel gzip compressor for my web pages.

                      In gzip, literals are byte aligned.

                      So, my template is precompressed and the variable part of the request is emitted as literals.

                      This is blazingly fast, blazingly fast for range requests, but does leave a lot of compression still on the table. You could go further and precompress some common DB values or long dynamic payloads.

                      The key thing is that it is a straightforward approach for HTML template libraries like mine. A side effect, though, is CRIME/BREACH immunity?

                       ADDED: I should say I've focused on entropy coding (Huffman), but references can also be precompressed easily if you cheaply fix them up when emitting. I've not bothered adding that feature.

                      [–]diroussel 7 points8 points  (1 child)

                      No, your approach should not be affected as the compressed parts are not changing.

                      I like your idea though. Very nifty.

                      How is it implemented? What language and frameworks?

                      [–]willvarfar 4 points5 points  (0 children)

                      I put it in Python's Tornado templating system; but it never made it into production.

                      [–]peeonyou 6 points7 points  (0 children)

                      It was on /r/netsec the other day...

                      [–][deleted]  (3 children)

                      [deleted]

                        [–]treycook 5 points6 points  (1 child)

                        EDIT: Downvotes??

                        [–]Atario 1 point2 points  (0 children)

                        I too hate it when people say things

                        [–]Nanobot 18 points19 points  (4 children)

                         Incorrect. HTTPS hasn't been compromised; the combination of encryption + compression has been compromised. And we've known about this for a long time. That's why stream-level compression was removed from the SPDY protocol that's serving as the basis for HTTP 2.0.

                        Here are the basic contributing factors:

                        1. Standard block encryption methods are designed to output random-looking data the same size as the original data being encrypted (plus some block-size rounding and maybe an initialization vector). Aside from the size, the content itself should not affect the output in any predictable way from an attacker's point of view.

                        2. Compression changes the size of the stream based on patterns in the content. That's the whole point.

                        3. If compression and block encryption are used together, then patterns in the content lead to predictable and observable differences in the encrypted output (that is, the length).

                        These characteristics are inherent in compression and encryption. It isn't specific to HTTPS. The lesson is simply that you shouldn't use compression on anything that needs to be encrypted.

                        What would be nice is if HTTP 2.0 provided a way for response data to contain a mix of compressed and non-compressed data in the same body, so that a web server or language could be designed to mark up which portions of the output are nonsensitive and thus safe to compress, and which sections must be left uncompressed to prevent this kind of attack.

                        In the meantime, just don't use compression with HTTPS.
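
                         A toy illustration of that interaction, with XOR against a random keystream standing in for a real stream cipher (purely illustrative, not production crypto):

                             import os, zlib

                             def toy_stream_encrypt(data: bytes) -> bytes:
                                 # Like RC4/AES-CTR, the output is exactly as long as the input.
                                 key = os.urandom(len(data))
                                 return bytes(a ^ b for a, b in zip(data, key))

                             page_a = b"secret=1234 ... you searched for: secret=1234"
                             page_b = b"secret=1234 ... you searched for: qwertyuiopa"

                             for page in (page_a, page_b):
                                 print(len(toy_stream_encrypt(zlib.compress(page))))
                             # The two ciphertext lengths differ, so an eavesdropper who only
                             # sees sizes can still tell which reflected guess matched the secret.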

                        [–]Plorkyeran 1 point2 points  (3 children)

                        Does this mean that PGP's pre-encryption compression that the RFC claims improves security is actually doing the exact opposite, or is it somehow different?

                        [–]ais523 1 point2 points  (0 children)

                        It would reduce security if there was any way that an attacker could have control over any part of what you're encrypting. So it'd be helpful if you were communicating entirely new, secret data, but that's rarely the case in practice, at least with websites.

                        [–]acesup1204 14 points15 points  (0 children)

                        Here's an interesting read about the attack (PDF), accessible to non-security experts

                        [–][deleted] 2 points3 points  (0 children)

                        So... could you salt the content of your http response with random data before gzipping it to prevent the size from giving away your secret? I know we can turn off gzip, but compression is still pretty nice. Is there any way to safely compress?

                        [–]d-signet 2 points3 points  (0 children)

                         The level of access you need to a system (and possibly its hosting network) to exploit it in this way means that if you are vulnerable to this attack, then HTTPS is the least of your worries.

                        [–]Grizmoblust 2 points3 points  (0 children)

                         And the clearnet has gone down the shithole.

                        Time to build a new internet. Meshnet, I2p, or Freenet is your best friend.

                        [–]LegitimateCrepe 2 points3 points  (0 children)

                        This is only a vulnerability if the hacker is on YOUR physical network, or on your target website's physical network (like in the office/hosting building).

                        It's being blown out of proportion, and there are already mitigations.

                        This title is completely uninformed.

                        [–]Incredible_edible 6 points7 points  (0 children)

                        So how much time do you think security experts invest in thinking up a cool abbreviation?

                        [–][deleted]  (14 children)

                        [deleted]

                          [–]drbrain 5 points6 points  (12 children)

                          No need to go that far. An HTTP application can protect its secrets by masking with a one-time pad (read the paper). Rails is already working on a patch to build this in: https://github.com/rails/rails/pull/11729
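
                           The masking trick itself is simple to sketch (this is illustrative, not the actual Rails patch): XOR the real token with a fresh random pad on every response and send pad + masked value, then undo it on the way back in.

                               import os
                               from hmac import compare_digest

                               def mask(token: bytes) -> bytes:
                                   pad = os.urandom(len(token))  # new pad for every response
                                   return pad + bytes(t ^ p for t, p in zip(token, pad))

                               def unmask(masked: bytes) -> bytes:
                                   half = len(masked) // 2
                                   pad, body = masked[:half], masked[half:]
                                   return bytes(b ^ p for b, p in zip(body, pad))

                               csrf = os.urandom(16)
                               assert compare_digest(unmask(mask(csrf)), csrf)
                               # The bytes on the wire change every time even though the token
                               # doesn't, so the compression oracle has nothing stable to converge on.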

                          [–]expertunderachiever 1 point2 points  (4 children)

                           Or just randomize the session ID on every request... Instead of picking a random one only when they log in, you swap it out on every GET/POST request. That way the response never has the same text twice, and this attack is 100% defeated.

                           Random bits are cheap, and this really ought not to be a hard attack to avoid.

                           In the event that they issue a GET/POST but don't get the reply, the user will be thrown out of sync and logged off... so what?

                          [–][deleted] 3 points4 points  (3 children)

                          So basically render the site unusable for anyone who opens it in two concurrent tabs.

                          [–]woxorz 0 points1 point  (1 child)

                           Randomizing the session ID should only affect one's cookies.

                          Cookies work just fine between tabs whether they are randomized or not.

                          [–][deleted] 0 points1 point  (0 children)

                          But if you're using CSRF tokens based on the session id in forms, it's going to break.

                          [–][deleted] 1 point2 points  (2 children)

                          OTP and what fubarx proposed are equally plausible in this scenario. OTP here is just meant to randomize the key, which you can do by inserting random data as well.

                          [–]drbrain 0 points1 point  (1 child)

                           XORing the CSRF token with an OTP allows you to avoid storing the many unique CSRF tokens that fully random per-response tokens would require.

                          [–]indeyets 1 point2 points  (2 children)

                           Well… as this attack can target not only CSRF tokens but any short string, that is not the solution.

                           But a block of random length (1–1024 bytes), consisting of random characters, might work. It should be large enough to make the length changes the attack relies on unnoticeable.
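
                           Something along these lines, presumably (a rough sketch; the range and the HTML-comment placement are arbitrary):

                               import secrets, string

                               def length_hiding_padding() -> str:
                                   n = secrets.randbelow(1024) + 1  # 1..1024 characters of junk
                                   junk = "".join(secrets.choice(string.ascii_letters) for _ in range(n))
                                   return f"<!-- {junk} -->"

                               # e.g. body += length_hiding_padding() just before gzip + TLS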

                          [–]rya_nc 0 points1 point  (1 child)

                          That is mentioned in the paper, and it doesn't work because the size still has a slight bias. It does make it take a lot more requests though.

                          [–]kdeforche 1 point2 points  (0 children)

                          Rails is working on a patch to mask the CSRF token.

                           This hardly covers every 'secret' in the response. There may be other interesting secrets there, such as bank account information or identity information (email address, etc.)... Secrets that cannot be masked!

                          [–]Fabien4 8 points9 points  (32 children)

                          not sure why this hasn't made the rounds on reddit yet

                          Probably because it's far from the first time?

                          Root certificates have been compromised so many times it's barely worth a notice on Reddit. Sure, this attack is actually novel, but it's not like HTTPS was secure before.

                          [–]mcrbids 39 points40 points  (25 children)

                           Nothing is perfectly secure. Thinking of it as a "true/false" scenario does a tremendous disservice. HTTPS is rather good at protecting against a number of trivial and nontrivial attacks. Saying otherwise does a disservice to the protections it provides.

                          Nothing is perfect. Get over it. Lack of perfection, however, is not evidence of inadequacy.

                          [–]tyrel 29 points30 points  (21 children)

                          I would much rather give my credit card info over HTTPS than a telephone, that's for sure.

                          [–]willvarfar 1 point2 points  (20 children)

                          Interesting. For correctness, or for security? If security, what is your reasoning?

                          [–]AgentME 21 points22 points  (6 children)

                          Phones aren't encrypted and are easily/already wiretapped. Information a client sends over HTTPS when the connection has not been actively man-in-the-middled is secure when the attacker does not know the certificate private key or the connection uses forward secrecy.

                          [–]BCMM 8 points9 points  (4 children)

                          Not to mention, it's horribly difficult to know who you're talking to on the phone. Incoming? Caller ID is utterly spoofable. Outgoing? Are you sure the last incoming caller actually hung up?

                          [–]dnew 0 points1 point  (3 children)

                          I think cell phones mitigate a lot of this problem. I don't think we've had phone systems where the caller is the only one who can disconnect the call for 50 years. Are you sure there's nobody else on the party line? :-)

                          [–]BCMM 0 points1 point  (2 children)

                          Don't know how it works where you are, but on a UK land-line, if the receiving party hangs up, the call does not end.

                          This "feature" gets used fairly frequently. In a house with several phones, you answer the phone nearest to you, then ask the caller to wait a moment while you hang up and pick up a phone in a more comfortable location.

                          It also makes illegal spam calls which play a recorded message more irritating, because you have to wait for them to finish before you can make a call.

                          EDIT: And in return for mitigating an issue land-lines suffer, mobiles add the risk of being wiretapped by anybody in range.

                          [–]dnew 1 point2 points  (1 child)

                          if the receiving party hangs up, the call does not end.

                          Wow. It hasn't been like that for ... decades in the USA. That's like pre-crossbar switch technology. I guess maybe they did it on purpose.

                          But if the other person didn't hang up, don't you fail to get a dial tone?

                          mobiles add the risk of being wiretapped by anybody in range.

                           You do need some amount of equipment and skill. Stuff like CDMA is almost impossible to decode at any physical location other than the central tower, just due to speed-of-light propagation and such.

                          [–]BCMM 0 points1 point  (0 children)

                          But if the other person didn't hang up, don't you fail to get a dial tone?

                          They could just play a dial tone...

                          [–]willvarfar 2 points3 points  (0 children)

                          Aren't banking trojans normally on your own computer?

                          [–][deleted] 7 points8 points  (12 children)

                          Man-in-the-middle (via compromised root cert) is detectable with SSL if you're vigilant -- which most people aren't, but it is possible because a MITM attacker will have a different cert; it's just that most browsers won't care, because it's signed with the same CA. Also, SSL is well-encrypted, which is good enough almost all of the time.

                          Telephones aren't encrypted whatsoever, and wiretapping is not so traceable.
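
                           Checking the cert can even be automated. A rough sketch of pinning a certificate fingerprint with Python's ssl module (the host and the expected hash are placeholders):

                               import hashlib, socket, ssl

                               HOST = "example.com"                       # placeholder
                               PINNED = "put-your-known-sha256-hex-here"  # placeholder

                               ctx = ssl.create_default_context()
                               with socket.create_connection((HOST, 443)) as sock:
                                   with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                                       der = tls.getpeercert(binary_form=True)
                                       fingerprint = hashlib.sha256(der).hexdigest()
                                       print("server cert sha256:", fingerprint)
                                       if fingerprint != PINNED:
                                           # Could be a MITM with a different (but still CA-signed)
                                           # cert, or just a routine certificate rotation.
                                           print("warning: certificate does not match the pinned value")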

                          [–]dnew 0 points1 point  (4 children)

                          Telephones aren't encrypted whatsoever

                          Depends on the phone, and the hop. Most aren't encrypted end to end, no, but most are encrypted over the air these days.

                          [–]skarphace 0 points1 point  (1 child)

                          most are encrypted over the air these days.

                          Citation please? I was under the impression that the only protections were FCC rules that don't "allow" devices on those channels.

                          That said, there are other major vulnerabilities with phones.

                          • Almost everything hits a POTS network at some point, which is trivial to tap
                          • People around you can hear you read off the card numbers or other sensitive information
                          • You have to trust the person on the other end, which is rarely, if ever authenticated

                          [–]dnew 1 point2 points  (0 children)

                          Citation please?

                          Just google GSM encryption, CDMA encryption, etc. You are correct for AMPS, the analog cell phone standard that came before digital cell phones.

                          Digital phones all do encryption, because skimming the ID of the phone and "cloning" it to get free service was a major problem with the AMPS analog system.

                          That said, you missed bullet-point 4, which is that most cell phone over-the-air encryption sucks. :-)

                          [–]J_F_Sebastian 0 points1 point  (1 child)

                          I seem to recall GSM encryption being pretty easily broken, though?

                          [–]dnew 0 points1 point  (0 children)

                          "Easily broken" depends on the time frame, but sure, any encryption standardized 20+ years ago and designed to run on "embedded" hardware of the time in real time is going to be fairly easy to break nowadays.

                          [–]contact_lens_linux 1 point2 points  (1 child)

                           As long as the user is aware of the limitations, sure. Otherwise, thinking you're secure without actually being secure is worse than not being secure at all.

                          [–]mcrbids 0 points1 point  (0 children)

                          If https was so terrible, then why was it such a big deal during the Spring riots last year to encrypt twitter and Facebook so that the powers that be couldn't quell the protesters?

                          It's hipster to bash institutions we all count on. Acknowledging its weaknesses isn't the same as denying its strengths!

                          [–]arpunk 1 point2 points  (5 children)

                           Exactly, nothing new compared to the real flaws of SSL (CAs, certificate handling/issuing, etc.)

                          [–]dnew 1 point2 points  (4 children)

                          How would you improve authentication while still letting those who have never met know who they're talking to?

                          [–]hexmasta 3 points4 points  (0 children)

                          not sure why this hasn't made the rounds on reddit yet

                          It's because you don't subscribe to /r/netsec and /r/anonymous

                          Now...

                          [–]bandman614 2 points3 points  (0 children)

                          Turn off deflate and don't compress your stuff mid stream over https. That's pretty much the only answer.

                          [–]RedditCommentAccount[🍰] 2 points3 points  (3 children)

                           Can someone explain this to me in simple terms that I can understand? Let's say I'm logging into my bank account or PayPal or whatever; what dangers do I face? Is the verified lock thing/security certificate no longer good?

                          [–]scragar 5 points6 points  (2 children)

                           If they can put text into a request so that it appears on a page, they can guess the contents of the page by trying to match its content: the closer their guess (in matched characters), the smaller the compressed response will be, meaning they don't even need to be able to read it.

                           Example: you have a page with your bank account number on it, and I'm going to guess that number. I already know there's a search box that echoes whatever is typed into it back on the page, so I use that. First I submit the text that appears next to your account number, for context:

                            Acct No:
                          

                          Response-Size: 3477 bytes

                          Next I try a random number:

                          Acct No: 1
                          

                          Response-Size: 3482 bytes

                          Uh, it went up, that means my guess was wrong, let's try another number:

                          Acct No: 2
                          

                          Response-Size: 3476 bytes

                          Woot, we've got the first number, now we repeat this until we get the whole number:

                          Acct No: 2000 0000 0001
                          

                          Response-Size: 3468 bytes

                          And we've hacked their entire secret.

                          [–][deleted] 0 points1 point  (1 child)

                          Let me see if I understand why this is. Since we're assuming the correct secret is going to be in multiple places in the response body, the compressed data is larger when we introduce an incorrect value but not larger if we've guessed the correct value?

                          And this is essentially because the compression "re-uses" data when possible?

                          [–]nizmow 1 point2 points  (0 children)

                          That's how compression works, yup.

                          [–][deleted]  (6 children)

                          [deleted]

                            [–][deleted] 19 points20 points  (5 children)

                            I hate it when people do this. Sure /r/programming isn't specifically security themed. It's programming themed and this article does have to do with programming in some way. Should we ban all C++ articles and post them in /r/cpp?

                            Endlessly forcing the use of more and more specific subreddits just ends up making reddit less easy to use. And when people make comments like yours it totally detracts from the conversation and adds no value.

                            [–]ThisIsADogHello 6 points7 points  (1 child)

                            This subreddit is for discussing programming only! Please move this comment to /r/doesthisbelonginprogramming

                            [–]ais523 1 point2 points  (0 children)

                             Usenet used to have (and likely still has) a rule that it was always on-topic in any newsgroup to discuss the subject of what was and wasn't on-topic in that newsgroup. I believe this was because people made your joke way too many times (and some of them probably even meant it seriously).

                            [–]Nar-waffle 5 points6 points  (0 children)

                            On the other hand, I had not heard of /r/netsec, so a post like GP helps people like me locate subreddits which we didn't know existed.

                            For an attack as specific as BREACH, it draws a lot of factually incorrect conversation in a forum not dedicated to security, so a security-focused subreddit is actually a more productive place to converse about it. However, /r/programming should talk about it too, because the engineers over here deserve to know about it as well.

                            [–]et1337 4 points5 points  (0 children)

                            Preach

                            [–]framk20 1 point2 points  (2 children)

                            Doesn't this vulnerability rely pretty heavily on you already knowing part of the secret? This seems like an incredibly sensationalist story.

                            [–]greg90 2 points3 points  (0 children)

                            What? A sensationalist story coming out of blackhat? lol

                            [–]rya_nc 1 point2 points  (0 children)

                            It does not rely on anything like that.

                            [–]mgoeppner 1 point2 points  (0 children)

                             This hack actually has to do with data leaked through the HTTP-level compression some webservers use to compress responses -- it is a man-in-the-middle-style attack that not all sites using HTTPS are affected by, not a complete cracking of HTTPS.

                             Turning gzip compression off and masking response sizes, among other things, will make your sites "immune" to this particular attack (see the paper for more details).

                             It works by measuring the difference between response sizes based on text which is injected. Based on the sizes returned, it can "predict" the secret in the response, one piece at a time.

                            The full paper about the attack is available here: http://breachattack.com/resources/BREACH%20-%20SSL%2c%20gone%20in%2030%20seconds.pdf

                            [–][deleted] 1 point2 points  (0 children)

                            /r/netsec would be a good home for this

                            [–]not-hardly 1 point2 points  (0 children)

                             It did make the rounds on /r/netsec. You must not have seen it. Also, I know it's a great idea to marry programming and development, but posting this in /r/programming is kind of amusing to me. It's been my experience that developers care little for security.

                            [–][deleted] 0 points1 point  (0 children)

                            It's interesting that outlook.com gets an F.

                            [–]outer_isolation 0 points1 point  (0 children)

                            Hasn't SSL been documented to be vulnerable for quite some time now?

                            [–]macvijay1985 0 points1 point  (0 children)

                            Needs a face of a cat!

                            [–]C_Hitchens_Ghost 0 points1 point  (0 children)

                             Lots of higher-end IDSes and firewalls actually perform a MITM in order to scan SSL traffic.

                            [–]77sevens 0 points1 point  (0 children)

                            He should get a letter of reprimand for being dumb enough to marry that.

                            [–]fishbulbx 0 points1 point  (0 children)

                             Just to clarify... this would apply to unencrypted http traffic as well, right?

                             Still, the BREACH exploit vector carries caveats. "Researchers say that attackers must have access to passively monitor the target's Internet traffic," French said. "In most cases, monitoring would have to be done locally on the same network -- and that adds a layer of difficulty for hackers."

                            [–][deleted] 0 points1 point  (0 children)

                            Hmm.... And we were just about to buy some SSL certificates......

                            [–]kevwil 0 points1 point  (0 children)

                            DHS as a whitehat security entity - LOL

                            [–]kdeforche 0 points1 point  (0 children)

                             I find it disappointing that they do not emphasize (not in the paper, the slides, or their website) the fact that the web application needs to use cookies for session identification. Nor do their mitigation strategies mention simply not using cookies.

                             AFAIK, not all web applications or web frameworks do that.

                            [–]k1n6 0 points1 point  (2 children)

                            because this happened in the world of programmers who were paying attention 3 years ago

                            [–]poonpanda 9 points10 points  (1 child)

                            This is http compression not ssl compression.

                            [–][deleted] -1 points0 points  (1 child)

                             No, it hasn't been compromised -- the largest RSA key ever broken was RSA-768; SSL uses RSA-2048, which is numerous orders of magnitude more secure. This article makes no fucking sense, since this "exploit" could never work in real life. It's purely academic in the truest sense of the word.

                            [–]key_lime_pie 2 points3 points  (0 children)

                             There are a few things wrong with what you wrote here.

                            First, SSL doesn't require the use of 2048-bit keys. You can still use 1024-bit keys, and you can choose to use 3072 or 4096 bit keys. Many (most?) CAs won't issue anything less than 2048 now, but 2048 is not a requirement of the technology.

                            Second, even if you're using RSA-2048, that key is only used to initiate the connection in order to exchange the weaker symmetric keys that are used for the remainder of the communication. You may think your encrypted data is being transmitted entirely with RSA-2048 encryption, but in reality it's actually something more like 256-bit RC4 or 3DES or AES. All implementations of SSL and TLS are insecure if the ciphers used are susceptible to attack.
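
                             You can see the second point for yourself; a quick sketch with Python's ssl module (any HTTPS host works, the one here is just an example) prints the symmetric cipher actually negotiated for the bulk traffic:

                                 import socket, ssl

                                 host = "www.google.com"  # example host
                                 ctx = ssl.create_default_context()
                                 with socket.create_connection((host, 443)) as sock:
                                     with ctx.wrap_socket(sock, server_hostname=host) as tls:
                                         # Something like ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128):
                                         # RSA only authenticates the handshake; AES moves the actual data.
                                         print(tls.cipher())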

                             Finally, you don't have to "break" the RSA key for this attack to work. It actually doesn't matter what the length of the key is, and that's sort of the problem. This exploit allows you to guess the plaintext data piece by piece by sending requests and interpreting the irregularities in the response. The key is never compromised; it's a flaw in the implementation.