Comment The chrony web page has some nice comparisons (Score 3, Informative) 157

The Chrony comparison page compares ntpd, Chrony and OpenNTPd. Yet another alternative, still unfinished, is ntimed (which currently seems to be around 6000 LoC). On some Linux distributions, if you don't care about accuracy or about weeding out false time, you can always use a simple client such as systemd-timesyncd.

Comment Linux Foundation trying to work out who to give to (Score 1) 157

The Linux Foundation has already given funding to a few open source projects it considers "core" (which includes the original NTP project) and has been trying to assess which other core projects are most at risk. Looking at the members page, at least two of the companies you mentioned (Google, Facebook) are part of the Linux Foundation, so the giving back has at least started...

Comment Re:This is FUD (Score 1) 111

Doesn't it essentially let you find out whether you've had (since this boot?) up to 256 bits of entropy? You can ask it whether it has had a given amount, so long as that amount is less than 256 bits, and you can force it to return failure if you ask for an amount it hasn't yet reached. It's not as generic as what you're asking for ("tell me how much you've ever had"), but it does still sound close (albeit in a limited 256 bit form).

Comment Re:This is FUD (Score 1) 111

The solution is to use /dev/urandom, but only after verifying that the pool has some entropy. Ideally, it would be nice to have an API that allows you to find out how many total bits of entropy have been gathered by the system, regardless of how many remain in the pool at any given point in time. If the system has managed to accumulate a few hundred bits, just use /dev/urandom and get on with life. If it hasn't, use /dev/random and block.

You could build what you are asking for by using the new (since the v3.17 kernel) getrandom() syscall. See the part of its man page about emulating getentropy(), which shows how to determine whether you've ever had up to 256 bits of entropy, for implementing your API suggestion...
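
As a minimal sketch (not the man page's exact code): ask for 256 bits with GRND_NONBLOCK and see whether the call fails. This uses syscall(2) directly since glibc doesn't currently ship a getrandom() wrapper; treat it as an illustration rather than production code.

    /* Probe whether the kernel's urandom pool has ever been initialised
     * (i.e. gathered "enough" entropy since boot) on a v3.17+ kernel. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef GRND_NONBLOCK
    #define GRND_NONBLOCK 0x0001
    #endif

    int main(void)
    {
        unsigned char buf[32]; /* 256 bits */
        long n = syscall(SYS_getrandom, buf, sizeof(buf), GRND_NONBLOCK);

        if (n == (long)sizeof(buf))
            puts("pool initialised: enough entropy has been gathered");
        else if (n == -1 && errno == EAGAIN)
            puts("pool not yet initialised since boot");
        else
            perror("getrandom"); /* e.g. ENOSYS on a pre-3.17 kernel */
        return 0;
    }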

Comment Your link explains the problem (Score 2) 111

This isn't so much about entropy "drying up" a few days after the system has booted - it's more about generating random numbers just after a system has booted, before "enough" entropy has been gathered in the first place. From your link:

Not everything is perfect
[...]
Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.

But also from your link:

FreeBSD does the right thing[...]. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
[...]
On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read next time the machine is booting.
[...]
And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
[...]
Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved check point, this seed file doesn't help you.

So it's not great, but not (always) a disaster, and modern Linux allows programs to counter this, if they wish, by using getrandom().

Comment Not just virtualization (Score 3) 111

Virtualization is a strong candidate because everything can be so samey, but it can happen on real hardware too - imagine trying to generate randomness on a basic MIPS-based home router with flash storage, no hardware RNG and no hardware clock to keep time while powered off, which typically boots from a fixed, pre-extracted RAM disk install yet generates its ssh certs early during its first boot...

Comment When is not enough entropy a problem? (Score 4, Informative) 111

For the interested: the Understanding and Managing Entropy Usage whitepaper from Black Hat.

So it seems this is the classic problem: (Linux) programmers are told to use /dev/urandom (which never blocks), and some programs are doing so at system startup, so there's the opportunity for "insufficient" randomness because not enough entropy has been gathered at that point in time. In short: using /dev/urandom is OK, but if you are using it for security purposes you should only do so after /dev/random would have stopped blocking for a given amount of data for the first time since system startup (but there's no easy way to determine this on Linux). Or is there? Since the v3.17 kernel there is the getrandom() syscall, which has the behaviour that if /dev/urandom has never been "initialised" it will block (or can be made to fail right away by using flags). More about the introduction of the Linux getrandom syscall can be read on the always good LWN. And yes, the BSDs had defences against this type of situation first :-)
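
As a minimal sketch of the blocking behaviour (again via syscall(2), because glibc doesn't currently wrap it), a program generating key material early in boot could do something like:

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char key[32];
        /* flags == 0: draws from the urandom source but, unlike reading
         * /dev/urandom, blocks until the pool has been initialised once
         * since boot (and never blocks again after that). Passing
         * GRND_NONBLOCK instead makes it fail immediately with EAGAIN. */
        long n = syscall(SYS_getrandom, key, sizeof(key), 0);
        if (n != (long)sizeof(key)) {
            perror("getrandom");
            return 1;
        }
        printf("got %ld bytes that are safe for key generation\n", n);
        return 0;
    }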

So this is bad for Linux systems that make security related "things" that depend on randomness early in startup, but there may be mild mitigations in real life. If the security material is regenerated at a later point after boot, there may be enough entropy around by then. If the system is rebooted but preserves entropy from the last boot, this may be mitigated for random material generated in subsequent boots (so long as the material was generated after the randomness was reseeded). If entropy preservation never takes place then regeneration won't help early boot programs, and if the material based on the randomness is never regenerated then again this doesn't help. If you take a VM image and the entropy seed isn't reset, you've stymied yourself, as the system believes it has entropy that it really doesn't.
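
To illustrate the preservation mechanism, here's a rough sketch of the save/restore dance distro init scripts perform (the seed file path and size are made up for the example):

    #include <stdio.h>

    #define SEED_FILE  "/var/lib/random-seed" /* illustrative path */
    #define SEED_BYTES 512

    /* At shutdown (and after install): stash pool output in a seed file. */
    static int save_seed(void)
    {
        unsigned char buf[SEED_BYTES];
        FILE *in  = fopen("/dev/urandom", "rb");
        FILE *out = fopen(SEED_FILE, "wb");
        int ok = in && out
                 && fread(buf, 1, SEED_BYTES, in) == SEED_BYTES
                 && fwrite(buf, 1, SEED_BYTES, out) == SEED_BYTES;
        if (in)  fclose(in);
        if (out) fclose(out);
        return ok ? 0 : -1;
    }

    /* At boot: mix the saved bytes back into the pool. Note that writing
     * to /dev/urandom stirs the pool but does not credit any entropy
     * (that needs the RNDADDENTROPY ioctl), and two VM clones restored
     * from the same image will of course mix in identical bytes. */
    static int load_seed(void)
    {
        unsigned char buf[SEED_BYTES];
        FILE *in  = fopen(SEED_FILE, "rb");
        FILE *out = fopen("/dev/urandom", "wb");
        int ok = 0;
        if (in && out) {
            size_t n = fread(buf, 1, SEED_BYTES, in);
            ok = n > 0 && fwrite(buf, 1, n, out) == n;
        }
        if (in)  fclose(in);
        if (out) fclose(out);
        return ok ? 0 : -1;
    }

    int main(int argc, char **argv)
    {
        (void)argv;
        /* any argument means "save", no argument means "load" */
        return (argc > 1 ? save_seed() : load_seed()) == 0 ? 0 : 1;
    }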

Comment Checking the wrong thing in a not great place? (Score 1) 136

First up, lkml.org is a third-party site that hosts Linux kernel mailing list archives on the web. Regular Linux kernel mail isn't actually sent from it (I believe that's done by vger), so we're looking up the email reputation of the wrong IP...

Secondly, UCEPROTECT is a very aggressive blacklist which states upfront that it will block people who it believes are in the vicinity of people it judges to be sending it spam. It's not the be-all and end-all though, and on one server I looked at some time ago, its effectiveness was surpassed by other blacklists (here's someone else's old DNS blacklist comparison from 2014). In general I prefer more conservative tools like senderbase when trying to work out an IP's mail reputation.
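
For the curious, the lookup mechanism behind these lists is simple; here's a rough sketch (the zone below is UCEPROTECT's level 1 zone as I understand it, but treat the specifics as illustrative):

    #include <netdb.h>
    #include <stdio.h>

    int main(void)
    {
        const char *ip = "192.0.2.1"; /* the address to check (an example) */
        unsigned a, b, c, d;
        char query[128];

        if (sscanf(ip, "%u.%u.%u.%u", &a, &b, &c, &d) != 4)
            return 1;
        /* DNSBLs are queried with the octets reversed, reverse-DNS style,
         * under the list's zone; any answer at all means "listed". */
        snprintf(query, sizeof(query),
                 "%u.%u.%u.%u.dnsbl-1.uceprotect.net", d, c, b, a);

        struct addrinfo *res = NULL;
        if (getaddrinfo(query, NULL, NULL, &res) == 0) {
            printf("%s is listed (%s resolved)\n", ip, query);
            freeaddrinfo(res);
        } else {
            printf("%s is not listed\n", ip);
        }
        return 0;
    }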

For what it's worth, I've also seen GMail incorrectly mark mails sent to the fio mailing list (which is also managed by vger) as spam, and in that case it was purely down to the mail being proxied through the list, which was a place that didn't match the sender's DMARC records. Most of the time GMail was getting the spam marking right though (even for mailing list mails)...

Comment systemd was in native mode for the first two (Score 1) 442

Anecdotes 1 + 2 were running in native mode because it was initially a fresh install that was then switched to sysvinit-core. Anecdote 3 I don't know about (most likely compatibility mode, as it was several years ago). Even if all the anecdotes were only running in compatibility mode, the results show systemd finishing quicker...

Does the above make things any better, and how fast do you expect things to go when you're bottlenecked on I/O throughput? Is 15 seconds for a hard disk, or 4(!) seconds for a RAM disk, so bad?

Comment Re:Systemd vs sysvinit boot speed anecdote (Score 1) 442

The NFS mounts were always mounted at boot, not on demand, both before and after systemd. The difference was that, because there were multiple NFS mounts, they could only proceed in serial before the switch. After the switch, the waiting on the mounts happened in parallel, not just with each other but with other services too. Yes, it is entirely pathological, but it really happened (and presumably other parallel inits would have solved it this way too).

For what it's worth, I have an EeePC too. The default Xandros distro was a classic case of being fast because it was highly custom and didn't run very much (e.g. no printing, no mDNS, a kernel that doesn't have to probe for all different kinds of hardware or hotplug USB devices, the I/O scheduler set to deadline, etc.) - I would always expect it to outperform a generic "full fat" Linux distribution. I'd expect your FreeBSD setup to beat my current XFCE Ubuntu setup too, because I bet you can get away with starting less. Even with lots of hand tweaks, the boot speed to an XFCE desktop on my EeePC 900 is still 17 seconds...

"You need tender loving care once a week - so that I can slap you into shape." - Ellyn Mustard

Working...