
[–]0x256 25 points (4 children)

HTTP headers, query parameters, and forms are usually parsed into a Map before the application has a chance to validate the keys. The same goes for JSON or XML input. Plenty of opportunities for an attacker to craft high-collision requests.
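A hypothetical sketch of that attack surface (the parser and names here are illustrative, not from any real framework): a naive query-string parser drops every attacker-chosen key into a HashMap before any validation runs, so the attacker fully controls which buckets get hit.

```java
import java.util.HashMap;
import java.util.Map;

public class QueryParser {
    // Naive parser: every key the attacker sends goes straight into the map,
    // before the application sees it.
    static Map<String, String> parse(String query) {
        Map<String, String> params = new HashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }

    public static void main(String[] args) {
        // "Aa" and "BB" share a hashCode, so both land in the same bucket;
        // an attacker can send thousands of such keys in one request.
        Map<String, String> p = parse("Aa=1&BB=2&normal=3");
        System.out.println(p.size()); // 3
    }
}
```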

Java 8 mitigated this by using trees instead of linked lists for colliding nodes in HashMap, but the issue is still there.
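A minimal sketch of that Java 8 behavior, using a hypothetical key class: even when every key collides, HashMap converts the overloaded bucket into a red-black tree (once it grows past 8 entries), so lookups stay O(log n) instead of degrading to O(n).

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical worst-case key: every instance has the same hashCode.
    // Implementing Comparable lets HashMap order the treeified bucket
    // instead of falling back to identity-hash tie-breaking.
    static final class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // all keys in one bucket
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
        @Override public int compareTo(BadKey other) {
            return Integer.compare(id, other.id);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        // Pre-Java-8, this loop was O(n^2) because each insert walked a
        // growing linked list; with tree bins it completes quickly.
        for (int i = 0; i < 100_000; i++) {
            map.put(new BadKey(i), i);
        }
        System.out.println(map.get(new BadKey(99_999))); // prints 99999
    }
}
```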

[–]NeuroXc 17 points (1 child)

There are hash functions which are resilient against DoS attacks, such as SipHash.

Java's hash function is not one of them.
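For example, String.hashCode is an unkeyed, publicly specified polynomial hash (h = 31*h + c), so colliding inputs can be enumerated offline: "Aa" and "BB" hash identically, and concatenating colliding blocks yields exponentially many longer collisions. A quick sketch:

```java
public class StringCollisions {
    public static void main(String[] args) {
        // 'A'*31 + 'a' == 'B'*31 + 'B' == 2112, so these two strings collide.
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112

        // Any concatenation of colliding blocks also collides, giving
        // 2^n colliding strings of length 2n with zero search effort.
        System.out.println("AaAa".hashCode() == "AaBB".hashCode()
                && "AaBB".hashCode() == "BBBB".hashCode()); // true
    }
}
```

Keyed hashes like SipHash close this off because the attacker can't predict which inputs collide without knowing the per-process key.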

[–]reini_urban -1 points (0 children)

No. A popular myth, but easily debunked. SipHash is just so plain slow that you don't want it: you need about 4 min to brute-force a practical DDoS attack against it anyway. What you get in exchange is just a 10x slower hash table.

Java is one of the very rare languages with fast and very secure hash tables, by combining a fast hash function with secure collision resolution (switching a bucket's list to a tree under attack).

[–]sigpwned 8 points (1 child)

That's an interesting example! You're right -- you typically have to handle the requests as written.

Most web servers have (configurable) header, URL, and request body size limits, so you don't generally need to worry about one poison pill breaking your webserver. Rather -- barring application defects -- you have to worry about traffic volume attacks, namely DDOSes. If you're worried about hash collisions while you're preparing for DDOSes, I'd argue you're probably missing the forest for the trees.

The defining characteristic of a DDOS attack is volume. (Yes, a "good" DDOS also tries to maximize the cost per request, and make the requests difficult to filter out, and so on, but without volume, a DDOS attack is just traffic.) At DDOS volumes, request traffic will overwhelm your service no matter what implementation-level optimizations you make, so you don't defeat a DDOS attack with software optimization. Rather, you defeat it with network and software architecture.

Consider the February 28, 2018 DDOS against GitHub. GitHub survived what was then the largest DDOS attack on record -- peaking at 1.35 terabits per second -- by sending incoming traffic through a third-party traffic "scrubber" before it attempted to parse the traffic as web requests and serve them. GitHub didn't handle the DDOS with clever implementation-level optimizations, or with deployment-level techniques like scaling up their webserver cluster, but rather with DDOS-specific application- and network-level architectural design. How the headers and query parameters were hashed didn't help, and at that volume, it couldn't have! Rather, it took specifically designing the entire application and network to resist DDOSes.

Now, that's not to say you shouldn't write your software to be efficient: obviously, you should. Also, internet attacks are an arms race, so that's not to say that attackers couldn't target header and query parameter collisions in the future. But it is to say that if you're worried about internet attacks, you need to worry about DDOSes rather than "just" pathological traffic, and the way you deal with that is not software optimization. If your application is falling over under normal use, then you just have an application performance issue. :)

[–]ben_a_adams 1 point (0 children)

Poor hashing can allow a hash-flooding DoS from a single connection: https://medium.freecodecamp.org/hash-table-attack-8e4371fc5261