Comment More than Debian and Fedora/Red Hat (Score 1) 110

Debian is definitely a popular root, but I'd argue it isn't Fedora that's the other major root, rather Red Hat/RHEL. Even then, there are large numbers of popular distros not derived from those sources. From the GNU/Linux Distribution Timeline:
  1. Slackware has spawned lots of distros (including SUSE).
  2. Enoch spawned the Gentoo line of distros (and Gentoo is the current base of ChromeOS).
  3. The Arch family started independently.
  4. The on-the-rise Alpine Linux was independently started.

So by lineage alone I'd argue there are more than two major categories.

Comment Forced to click through (Score 4, Informative) 47

My experience of these changes is that you'll be forced to click through a warning in your browser even if you've installed the certificate (or the root CA that signed it). The Microsoft page about no longer trusting SHA-1 certs is confusing in this respect because it mixes in information about signing Windows binaries, but it does say

Windows [...] will no longer trust any code that is signed with a SHA-1 code signing certificate and that contains a timestamp value greater than January 1, 2016

That document also says it only applies to certs that are in the Microsoft Root Certificate Program so ones you've manually installed might not be affected.

This is slightly different to Mozilla's SHA-1 deprecation information:

After January 1, 2017, we plan to show the “Untrusted Connection” error whenever a SHA-1 certificate is encountered in Firefox.

Perhaps this isn't the override you were thinking of but it doesn't sound like a total block.
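
(As an aside, if you want to check what a given certificate is signed with, something like "openssl x509 -in cert.pem -noout -text | grep 'Signature Algorithm'" should print e.g. sha1WithRSAEncryption; cert.pem here is just a placeholder for whatever certificate you're curious about.)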

Comment Corporate deployments? (Score 1) 182

I think it could be possible for Chromebooks to be successful without having a significant home market share. If businesses with all their software online start finding them acceptable, the fact that they don't run all possible software locally could be seen as an advantage (corporates are in a position to make things like Chrome's remote desktop support work). I could see Chromebooks working well for telesales, or even places like libraries, which are typical homes for existing thin clients...

Comment The chrony web page has some nice comparisons (Score 3, Informative) 157

The Chrony comparison page compares ntpd, Chrony and OpenNTPd. Another, yet-to-be-finished, alternative is ntimed (which currently seems to be around 6000 LoC). On some Linux systems, if you don't care about accuracy or about trying to weed out false time, you can always use a simple client such as systemd-timesyncd.
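
For what it's worth, on a systemd-based distro "timedatectl set-ntp true" turns the simple client on, and "timedatectl status" should (if I'm remembering the output correctly) tell you whether NTP is enabled and whether the clock is currently synchronised.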

Comment Linux Foundation trying to work out who to give to (Score 1) 157

The Linux Foundation has already given funding to a few open source projects it considers "core" (which includes the original NTP project) and has been trying to assess which other core projects are most at risk. From looking at the members page, at least two of the companies you mentioned (Google, Facebook) are part of the Linux Foundation, so the giving back has at least started...

Comment Re:This is FUD (Score 1) 111

Doesn't it essentially let you find out whether you've had (since this boot?) up to 256 bits of entropy? You can ask whether it has had a given amount, so long as that's less than 256 bits, and you can force it to return failure if you ask for an amount it hasn't yet reached. It's not as generic as what you're asking for ("tell me how much you've ever had") but it does still sound close (albeit in a limited 256-bit form).

Comment Re:This is FUD (Score 1) 111

The solution is to use /dev/urandom, but only after verifying that the pool has some entropy. Ideally, it would be nice to have an API that allows you to find out how many total bits of entropy have been gathered by the system, regardless of how many remain in the pool at any given point in time. If the system has managed to accumulate a few hundred bits, just use /dev/urandom and get on with life. If it hasn't, use /dev/random and block.

You could build what you are asking for with the new (since kernel v3.17) getrandom() syscall. See the part of its man page about emulating getentropy(), which covers determining whether you've ever had up to 256 bits of entropy, for a way to implement your API suggestion...
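
As a rough sketch of that idea (my own code, not the man page's; it assumes a >= 3.17 kernel and a glibc new enough to provide the sys/random.h wrapper, and pool_initialised is just a name I made up):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/random.h>  /* getrandom(); glibc >= 2.25, kernel >= 3.17 */

    /* Returns 1 if the kernel's urandom pool has ever been initialised
     * (i.e. the system has gathered its initial entropy since boot),
     * 0 if not yet, -1 on some other error. */
    static int pool_initialised(void)
    {
        unsigned char b;

        /* GRND_NONBLOCK: fail with EAGAIN instead of blocking when
         * the pool has not yet been initialised. */
        if (getrandom(&b, 1, GRND_NONBLOCK) == 1)
            return 1;
        return (errno == EAGAIN) ? 0 : -1;
    }

    int main(void)
    {
        int r = pool_initialised();
        if (r == 1)
            puts("initialised: /dev/urandom output is safe to use");
        else if (r == 0)
            puts("not yet initialised: /dev/urandom would be guessable");
        else
            perror("getrandom");
        return 0;
    }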

Comment Your link explains the problem (Score 2) 111

This isn't so much about entropy "drying up" a few days after the system has booted - it's more about generating random numbers just after a system has booted, before "enough" entropy has been gathered in the first place. From your link:

Not everything is perfect
[...]
Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.

but also from your link

FreeBSD does the right thing[...]. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
[...]
On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read next time the machine is booting.
[...]
And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
[...]
Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved check point, this seed file doesn't help you.

So it's not great, but not (always) a disaster, and modern Linux lets programs counter this, if they wish, by using getrandom().

Comment Not just virtualization (Score 3) 111

Virtualization is a strong candidate because everything can be so samey, but it can happen on real hardware too - imagine trying to generate randomness on a basic MIPS-based home router with flash storage, no hardware RNG and no hardware clock to keep time while powered off, which typically boots from a fixed RAM disk image yet generates its SSH host keys early during its first boot...

Comment When is not enough entropy a problem? (Score 4, Informative) 111

For the interested: the Black Hat whitepaper Understanding-And-Managing-Entropy-Usage.

So it seems this is the classic problem: (Linux) programmers are told to use /dev/urandom (which never blocks), and some programs are doing so at system startup, so there's the opportunity for "insufficient" randomness because not enough entropy has been gathered at that point in time. In short: using /dev/urandom is OK, but if you are using it for security purposes you should only do so after /dev/random would have stopped blocking for a given amount of data for the first time since system startup (and there's no easy way to determine this on Linux). Or is there? Since the v3.17 kernel there is the getrandom() syscall, which has the behaviour that if /dev/urandom has never been "initialised" it will block (or can be made to fail right away by using flags). More about the introduction of the Linux getrandom() syscall can be read on the always good LWN. And yes, the BSDs had defences against this type of situation first :-)
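
As a rough illustration of that blocking behaviour (my own sketch, not code from the whitepaper; it assumes a >= 3.17 kernel and glibc >= 2.25 for sys/random.h):

    #include <stdio.h>
    #include <sys/random.h>  /* getrandom(); glibc >= 2.25, kernel >= 3.17 */

    int main(void)
    {
        unsigned char key[32];  /* 256 bits of key material */

        /* With flags == 0, getrandom() reads from the urandom source
         * but blocks until the kernel considers the pool initialised,
         * so (unlike reading /dev/urandom directly) it can't hand out
         * "not-so-random" bytes during early boot. Requests of up to
         * 256 bytes are filled in one go once the pool is ready. */
        if (getrandom(key, sizeof(key), 0) != sizeof(key)) {
            perror("getrandom");
            return 1;
        }

        printf("read %zu safe random bytes\n", sizeof(key));
        return 0;
    }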

So this is bad for Linux systems that make security-related "things" that depend on randomness early in startup, but there may be mild mitigations in real life. If the security material is regenerated at a later point after boot, there may be enough entropy around by then. If the system is rebooted but preserves entropy from the last boot, the problem may be mitigated for random material generated in subsequent boots (so long as the material was generated after the randomness was reseeded). If entropy preservation never takes place then regeneration won't help early-boot programs, and if the material based on the randomness is never regenerated then again this doesn't help. And if you clone a VM image without resetting the entropy seed then you've stymied yourself, as the system believes it has entropy that it really doesn't.
