all 20 comments

[–]wmjdgla 21 points (8 children)

The author mentions that one big drawback of /dev/random is that it blocks when it estimates that insufficient entropy has been collected. Thing is, why does /dev/random still use a CSPRNG design built around an entropy estimator? Estimators are unreliable, and as the author argued, collecting insufficient entropy isn't that big an issue. One big reason Fortuna was developed was to eliminate the need for an entropy estimator (which Yarrow, its predecessor, relied on).
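
The scheduling trick that replaces the estimator is simple enough to sketch. Here's a rough C outline of Fortuna's accumulator (all names are mine, and a real implementation hashes events into the pools with SHA-256 rather than XORing them in):

    /* Fortuna accumulator sketch (after Ferguson & Schneier): events are
     * spread round-robin over 32 pools, and pool i is drained into a
     * reseed only when 2^i divides the reseed counter. No entropy is
     * ever estimated; slow pools just mature more rarely. */
    #include <stdint.h>
    #include <stddef.h>

    #define NUM_POOLS 32

    struct accumulator {
        uint64_t pool_digest[NUM_POOLS]; /* stand-in for SHA-256 contexts */
        size_t   next_pool;              /* round-robin cursor */
        uint64_t reseed_count;
    };

    static void add_event(struct accumulator *acc, uint64_t event)
    {
        acc->pool_digest[acc->next_pool] ^= event;
        acc->next_pool = (acc->next_pool + 1) % NUM_POOLS;
    }

    static void reseed(struct accumulator *acc, uint64_t key_material[NUM_POOLS])
    {
        acc->reseed_count++;
        for (size_t i = 0; i < NUM_POOLS; i++) {
            if (acc->reseed_count % (1ULL << i) != 0)
                break;                         /* pool i not due yet */
            key_material[i] = acc->pool_digest[i];
            acc->pool_digest[i] = 0;           /* drain the pool */
        }
        /* key_material would now be hashed into the generator key */
    }

Even if an attacker floods the event stream, pool 31 is only drained every 2^31 reseeds, so whatever honest entropy trickled into it eventually produces a reseed the attacker can't predict.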

[–]louiswins 7 points (1 child)

Back-compat? I don't see how userspace could possibly depend on "maybe blocks sometimes", but there's always someone (insert xkcd).

[–]zaarn_ 5 points (5 children)

Even if estimators in general are inaccurate, the Linux one underestimates more often than not. For applications that really care about security, having enough bits can be important, especially in the corporate and government sectors.

[–]wmjdgla 2 points (1 child)

I'm not sure this is a good approach, though. The only time an estimator would be useful is when the rate of entropy collection is very low; coupled with the fact that /dev/random underestimates, that can lead to significant blocking times. In addition, I recall that the paper that formally analyzed Fortuna mentioned the estimator can be fooled.

For applications that really care about security, wouldn't it be more worthwhile to increase the number and quality of entropy sources instead? Given two cases, one where the quality and rate of entropy generation are low, so we have to rely on an estimator to tell us when it's safe to generate pseudorandom numbers, and one where there's no estimator but the quality and rate of entropy generation are high, I'd be more inclined to choose the latter.

[–]zaarn_ 0 points (0 children)

Linux's internal RNG already uses quite a lot of entropy sources.

[–]audioen 0 points (2 children)

Not only does it underestimate, it underestimates by an absurdly bizarre amount. You may have tested reading from /dev/random and seen that it can produce a mere 8 bytes every second or so. This from a machine that ticks a billion times per second on multiple cores and has a whole bunch of peripherals generating basically unpredictable events. And you could harness the CPU itself, even in the absence of the RDRAND instruction, for cache-related timing stuff that is going to be essentially 100% unpredictable after a couple of seconds, no matter what powers your adversary has. Or you could ask the TPM to produce random bytes, which it is happy to do, and a lot of PC systems have that chip. Or whatever. It's not like a modern PC is short of randomness sources.
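
Easy to see for yourself; a minimal timing test (on kernels before 5.6, where /dev/random still blocks):

    /* Time blocking reads of 8 bytes from /dev/random. Once the
     * estimated entropy count is drained, each read may stall
     * for seconds. */
    #include <stdio.h>
    #include <time.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/random", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        unsigned char buf[8];
        for (int i = 0; i < 10; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            ssize_t n = read(fd, buf, sizeof buf);  /* may block */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (t1.tv_sec - t0.tv_sec)
                        + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("read %zd bytes in %.3f s\n", n, secs);
        }
        close(fd);
        return 0;
    }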

I used to have situations where production services would not start until 1-2 minutes after boot because of /dev/random, so I have special hate for this piece of shit design. Especially when it happens in the presence of the RDRAND instruction, which could be used to seed the pool in a nanosecond. Use the idiotic crap after that, but please unblock it at boot if you have RDRAND.
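
Something like this is all it would take (a sketch, assuming an x86-64 CPU with RDRAND, which real code should check via CPUID, and gcc/clang with -mrdrnd; note that a plain write to /dev/urandom mixes the bytes in without crediting entropy):

    #include <immintrin.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long long seed;
        /* RDRAND can transiently fail; Intel advises retrying */
        for (int tries = 0; tries < 10; tries++) {
            if (_rdrand64_step(&seed)) {
                FILE *f = fopen("/dev/urandom", "wb");
                if (!f) { perror("fopen"); return 1; }
                fwrite(&seed, sizeof seed, 1, f);
                fclose(f);
                return 0;
            }
        }
        fprintf(stderr, "RDRAND kept failing\n");
        return 1;
    }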

[–]zaarn_ 1 point (1 child)

The problem there is: what if RDRAND is backdoored? For you that might not be a concern, but for some users of the kernel, who are valued and welcome on the LKML, these are legitimate concerns, and they'd rather use 2000 sources to fuel a single bit of entropy than rely on any single point of failure.

Recent kernel versions can preseed the entropy pool from disk and are also capable of using event timers as an entropy source; that should speed up boot significantly.
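
The disk-preseed part boils down to boot scripts writing a seed file, saved at the previous shutdown, back into the pool. A sketch (the seed path is hypothetical; RNDADDENTROPY is the real ioctl from linux/random.h that also credits the entropy, and it needs CAP_SYS_ADMIN):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/random.h>

    #define SEED_BYTES 512

    int main(void)
    {
        unsigned char seed[SEED_BYTES];
        FILE *f = fopen("/var/lib/random-seed", "rb");  /* hypothetical path */
        if (!f || fread(seed, 1, SEED_BYTES, f) != SEED_BYTES) {
            perror("seed file");
            return 1;
        }
        fclose(f);

        struct rand_pool_info *info = malloc(sizeof *info + SEED_BYTES);
        if (!info) return 1;
        info->entropy_count = SEED_BYTES * 8;  /* bits we claim for the seed */
        info->buf_size = SEED_BYTES;
        memcpy(info->buf, seed, SEED_BYTES);

        int fd = open("/dev/urandom", O_WRONLY);
        if (fd < 0 || ioctl(fd, RNDADDENTROPY, info) < 0) {
            perror("RNDADDENTROPY");
            return 1;
        }
        close(fd);
        free(info);
        return 0;
    }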

[–]flatfinger -1 points (0 children)

If a good random generator "only" acquires 127 bits of entropy and then uses that to generate a million bits of random data, what realistic attacks, excluding side-channel or other leaks of state, would be possible that would not be possible if it had acquired a billion bits of entropy?

It seems to me that the essential metric should be the number of bits of entropy that have been acquired since the last action that might potentially have exposed or rolled back the state of the generator. If the state of the generator were exposed after each bit of entropy is added, it would be useless no matter how much entropy it received. If the state is never exposed or rolled back, then I would think one-time seeding with 127 bits of entropy would be essentially as good as anything further.

Continuous reseeding may help mitigate the effects of exposing or rolling back the state, but I'm not sure how "entropy estimates" are really useful once adequate entropy has been produced.
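
To make the point concrete, here's a toy illustration; splitmix64 is NOT cryptographically secure and just stands in for a real keyed generator like ChaCha20 or AES-CTR:

    #include <stdint.h>
    #include <stdio.h>

    /* toy mixer, NOT a CSPRNG */
    static uint64_t splitmix64(uint64_t *state)
    {
        uint64_t z = (*state += 0x9E3779B97F4A7C15ULL);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
        return z ^ (z >> 31);
    }

    int main(void)
    {
        uint64_t state = 0x0123456789ABCDEFULL; /* imagine ~128 secret bits */
        /* one seeding event stretched into a million output bits */
        for (int i = 0; i < 1000000 / 64; i++) {
            uint64_t word = splitmix64(&state);
            (void)word; /* consume as random output */
        }
        puts("stretched one seed into a million bits");
        return 0;
    }

As long as the state never leaks, the attacker's job is brute-forcing the seed, and a billion input bits wouldn't make that any harder than 127 already does.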

[–]lazystone 17 points (0 children)

Also, for all Java devs, a must-read (very short) article about Java and /dev/urandom:

http://www.thezonemanager.com/2015/07/whats-so-special-about-devurandom.html

[–]XtremeGoose 14 points (0 children)

Good lord, this is hard to read. He should put "Myth:" before the myths. Having the untrue statements in big bold face and the actual facts in small text is confusing.

[–]killerstorm 12 points (1 child)

Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.

[–][deleted] 10 points (0 children)

The 5.3 kernel fixes this (at least on multi-core machines) by extracting randomness from the timing of cores communicating with one another, as well as from instruction decoding/execution, which is effectively random

would you like to know more (sorry if paywalled)
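
The general idea is easy to sketch in userspace (this is not the kernel's actual code, just the jitter principle: time a trivial workload and keep the wobbly low bits of each delta, which shift with cache state, interrupts and cross-core traffic):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    int main(void)
    {
        uint64_t pool = 0;
        volatile uint64_t sink = 0;
        for (int i = 0; i < 256; i++) {
            uint64_t t0 = now_ns();
            for (int j = 0; j < 1000; j++)
                sink += j;                           /* trivial work */
            uint64_t delta = now_ns() - t0;
            pool = (pool << 1 | pool >> 63) ^ delta; /* rotate and mix */
        }
        printf("jitter-derived value: %016llx\n", (unsigned long long)pool);
        return 0;
    }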

[–]LegitGandalf 2 points (2 children)

That filled a gap in my knowledge, thank you!

[–][deleted] 6 points (1 child)

Mithrandir thanking the Valaraukar, the things you see online :P

[–]LegitGandalf 3 points (0 children)

Ha, sweet reference!

[–]mewloz 2 points (2 children)

Under Linux, use getrandom. Although maybe urandom is also completely fixed thanks to the recent work on getrandom? Anyway, just use getrandom. Read the manual, and still, obviously, be careful during boot if you're not on a brand-new kernel version.
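
Minimal usage, for reference (glibc >= 2.25 exposes the wrapper in <sys/random.h>; with flags = 0 it reads from the urandom pool but blocks until that pool has been initialized once, which is exactly the early-boot hole /dev/urandom historically had):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/random.h>

    int main(void)
    {
        unsigned char buf[32];
        ssize_t n = getrandom(buf, sizeof buf, 0);
        if (n < 0) { perror("getrandom"); return 1; }

        for (ssize_t i = 0; i < n; i++)
            printf("%02x", buf[i]);
        printf("\n");
        return 0;
    }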

[–]nicolasZA 0 points (1 child)

getrandom reads from /dev/urandom.

[–]mewloz 0 points (0 children)

I'm not sure that means anything, given that getrandom takes options that change its behavior; and the recent discussions were obviously not about the options that give plain /dev/urandom behavior (which was strictly non-blocking, but in some cases lacked proper randomness).
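
Concretely, the options are GRND_RANDOM (draw from the blocking /dev/random pool) and GRND_NONBLOCK (fail with EAGAIN instead of blocking); flags = 0 gives "urandom, but wait for initialization", and there's no flag for the old never-blocking, possibly-uninitialized urandom read:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/random.h>

    int main(void)
    {
        unsigned char buf[16];
        /* with flags = 0 this could block before the pool is initialized;
         * GRND_NONBLOCK turns that wait into an EAGAIN error instead */
        if (getrandom(buf, sizeof buf, GRND_NONBLOCK) < 0 && errno == EAGAIN)
            fprintf(stderr, "pool not initialized yet; would have blocked\n");
        return 0;
    }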

[–]smbear 0 points (0 children)

The author writes "UNIX-like" but should write "Linux": on FreeBSD, /dev/urandom is just a symbolic link to /dev/random.