Oh, and it's not once a year, it's once a year on average, over four years. So if you work on a big project for 2-3 years and then get a flurry of papers out at the end, then that's fine too.
Now you've cost us a good book!
Wait, I thought TFA was about Charles Stross?
- The new FreeBSD randomness framework allows whitening algorithms (Yarrow, Fortuna, whatever) to be plugged in easily, along with multiple sources (a sketch of the idea follows this list).
- Linux initially trusted RDRAND unconditionally to provide streams of random numbers, then backtracked to only using it as an input to whitening. FreeBSD only ever used it as an input to the PRNG and now has a more generic framework for doing so.
- Neither the new nor the old FreeBSD random number generation framework is vulnerable to the attack on the Linux random number generator published in October (and covered on Slashdot).
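To give a feel for what 'plugged in easily' means, here's a rough sketch in C. All of the names here (entropy_source, whitening_alg, harvest_all) are made up for illustration and are not the actual FreeBSD interfaces; the shape is the point: sources hand raw bytes to the framework, and whichever whitening algorithm is currently registered absorbs them.

    #include <stddef.h>

    /* Hypothetical sketch, not the real kernel API: an entropy source
     * provides raw (unwhitened) bytes on demand. */
    struct entropy_source {
        const char *name;
        size_t (*harvest)(void *buf, size_t len); /* returns bytes written */
    };

    /* A pluggable whitening algorithm (Yarrow, Fortuna, ...) absorbs raw
     * entropy and produces the stream handed to consumers. */
    struct whitening_alg {
        const char *name;
        void (*reseed)(const void *entropy, size_t len); /* stir the pool */
        void (*generate)(void *out, size_t len);         /* produce output */
    };

    /* The framework's job: feed every registered source into whichever
     * algorithm is plugged in, so sources and algorithms stay decoupled. */
    static void harvest_all(struct entropy_source **srcs, size_t nsrcs,
                            struct whitening_alg *alg)
    {
        unsigned char buf[64];
        for (size_t i = 0; i < nsrcs; i++) {
            size_t n = srcs[i]->harvest(buf, sizeof(buf));
            if (n > 0)
                alg->reseed(buf, n);
        }
    }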
The big problem is that it's very hard to get good entropy early in the boot process, which is when things like TCP sequence numbers are needed and, sometimes, when SSH server keys are initially generated. You can use a hash of the kernel, but that's shared with other machines running the same kernel. You can use the time, but that's likely known to the attacker (and in some embedded systems it will be the same on every boot, until the system queries an external source and corrects it). You can use interrupt times, but the ones from the disk / flash are likely to be similar, if not identical, across boots of the same kernel, and the early network ones are susceptible to attack by people on the local network.
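To make that concrete, here's a sketch of what an early-boot seed ends up being built from (sha256() is a stand-in for whatever hash you like; the names are mine, not FreeBSD's). The hash itself is fine; the problem is that an attacker can often enumerate every plausible value of the inputs:

    #include <stdint.h>
    #include <string.h>

    /* Stand-in for any cryptographic hash implementation. */
    extern void sha256(const void *in, size_t len, uint8_t out[32]);

    /* Everything available this early is shared or guessable: */
    extern uint8_t  kernel_hash[32];     /* identical on every machine with this kernel */
    extern uint64_t boot_time;           /* often known; constant on some embedded systems */
    extern uint64_t early_irq_times[8];  /* disk/flash timings repeat across boots */

    static void early_seed(uint8_t seed[32])
    {
        uint8_t buf[32 + 8 + 64];
        memcpy(buf, kernel_hash, 32);
        memcpy(buf + 32, &boot_time, 8);
        memcpy(buf + 40, early_irq_times, 64);
        /* Hashing guessable inputs yields a guessable seed: the entropy
         * of the output can't exceed the entropy of the inputs. */
        sha256(buf, sizeof(buf), seed);
    }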
The hardware RNG definitely gives you some entropy, and so using it to stir the pool for Yarrow helps a lot here. Later on, there is a lot more entropy available: once you start to get disk access patterns based on system use and network connections from a variety of sources, interrupt times provide quite a lot of entropy. It still helps to mix in the hardware RNG, however.
As I said in another post, it's quite unlikely that the hardware is intentionally compromised (although it's a nice attack, so I wouldn't guarantee that future versions won't be), but it's very likely that it provides less entropy than advertised. This makes it fine as input to a PRNG like Yarrow or Fortuna (I think Fortuna made it into FreeBSD 10; if not, it should be in 10.1), but not adequate for general use. The point of a PRNG algorithm like Yarrow is to generate an unpredictable sequence of numbers from some source entropy seed, which can change over time. As long as you have enough entropy, you get a cryptographically secure sequence of pseudo-random numbers. All this work is doing is saying 'we trust the hardware to give us some entropy, but we don't trust it to give us all of the entropy that we need'.
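As a toy illustration of that principle (nothing like production Yarrow or Fortuna, and sha256() is again just a stand-in): output is generated from a hash over the pool, and mixing always folds the old pool back in, so a weak source can only ever add uncertainty, never remove what other sources already contributed.

    #include <stdint.h>
    #include <string.h>

    extern void sha256(const void *in, size_t len, uint8_t out[32]);

    static uint8_t  pool[32];  /* accumulated entropy from all sources */
    static uint64_t counter;   /* distinguishes successive outputs */

    /* Fold new entropy into the pool. Because the old pool is hashed in
     * too, even a suspect hardware RNG can't make the pool *worse*. */
    static void stir(const void *entropy, size_t len)
    {
        uint8_t buf[32 + 256];
        size_t n = len > 256 ? 256 : len;
        memcpy(buf, pool, 32);
        memcpy(buf + 32, entropy, n);
        sha256(buf, 32 + n, pool);
    }

    /* Generate output from a hash of pool || counter, never the raw pool,
     * so consumers of the stream learn nothing about the pool state. */
    static void generate(uint8_t out[32])
    {
        uint8_t buf[32 + 8];
        memcpy(buf, pool, 32);
        memcpy(buf + 32, &counter, 8);
        counter++;
        sha256(buf, sizeof(buf), out);
    }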
This work has been ongoing for about a year, since long before the NSA stuff came out. The consensus has been for a while that some hardware random number generators give very good entropy, but some are very poor, and it's difficult to tell which you have without querying them a few million times and plotting the distribution. Add to that, some of them appear to be influenced by temperature, and as Steven Murdoch's attack on Tor showed, influencing the temperature of someone else's server is not always as difficult as you'd think.
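Querying it a few million times is easy enough to do yourself. A quick sketch for x86 (uses the RDRAND intrinsic; build with something like gcc -mrdrnd) that histograms the output bytes; a good source gives roughly 31,250 counts per bucket here, and big deviations, or drift as the machine warms up, are the warning sign:

    #include <stdio.h>
    #include <stdint.h>
    #include <immintrin.h>  /* _rdrand64_step; compile with -mrdrnd */

    int main(void)
    {
        unsigned long long r;
        static uint64_t hist[256];

        for (int i = 0; i < 1000000; i++) {
            while (!_rdrand64_step(&r))  /* retry the occasional failure */
                ;
            for (int b = 0; b < 8; b++)
                hist[(r >> (b * 8)) & 0xff]++;  /* count each output byte */
        }
        /* 8,000,000 bytes over 256 buckets: expect ~31250 each. Note that
         * a flat histogram proves nothing about *entropy*; whitened output
         * from a bad source looks flat too, which is part of the problem. */
        for (int i = 0; i < 256; i++)
            printf("%3d %llu\n", i, (unsigned long long)hist[i]);
        return 0;
    }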
It seems quite unlikely that the hardware RNGs are tampered with (although it would be a very neat hypothetical attack if you could influence a specific RNG so that it reduced the entropy to, say, 16 bits within a larger space, with only you able to determine what the real space was), but it's very likely that some of them are quite bad. Adding Yarrow makes you a bit safer, because other entropy sources are mixed in, so even a relatively poor RNG helps stir the pool.
Or some other whitening algorithm - Yarrow is the default, but there are newer ones that are better, at the cost of a footprint that is undesirable for embedded devices, and FreeBSD 10 now includes a framework to make it easy to plug in the one you want.
The Research Excellence Framework (REF) is the exercise that ranks research at UK universities. It replaces the older Research Assessment Exercise (RAE), which happened every four years. The last RAE was 4 years ago, and the current REF is just finishing. Established academics have to submit 4 research outputs produced since the last RAE / REF. These are usually papers, but can be other things (systems you've built, and so on).
The REF is a really big deal in UK universities, because it directly affects the availability of research grants. The CVs of individual researchers are taken into account, but the REF / RAE score of the department is the biggest factor. If you have 4 papers in top-tier publications (conferences or journals, depending on your field), then it's very easy to get hired in the run-up to the REF, because a lot of second-tier universities are looking for people who will bump them up the rankings.
Conversely, if you don't have the 4 publications (or other impressive things), then it's very hard to get a tenured position. But if you're not averaging one good paper a year, there's probably something wrong with you as a researcher: part of the point of publicly funded research is that the results are communicated to the public, and if you're not doing that then you're not keeping up your end of the deal.
int class = 42; /* valid C, but a syntax error in C++, where class is a keyword */
There are numerous other examples. The interesting behaviour of sizeof() when you have a class and a variable of the same name is one of my favourites.
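For anyone who hasn't seen it, it goes something like this (a minimal sketch; compile the same file as C and as C++ and compare):

    #include <stdio.h>

    int T = 1;

    int main(void)
    {
        struct T { char c[100]; };
        /* C keeps struct tags in a separate namespace, so T here is the
         * global int: this prints sizeof(int). In C++, the local class T
         * is found first, so the same line prints 100. */
        printf("%zu\n", sizeof(T));
        return 0;
    }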
On the other hand, crowdfunding platforms like Kickstarter make patronage a lot easier. You don't need to be able to afford to hire an orchestra yourself, you just need to find enough other people who are willing to chip in. There was an article a few months ago about an effort to do exactly this and produce high-quality public domain recordings of a large set of classical pieces.
We're in a world now where a band can produce an okay recording of a few songs in their living room, distribute it for free, and ask for funding for doing a studio recording of the whole album. They can then distribute the album for free and ask for funding for the next one (and bookings for gigs and so on). They're free to set the threshold cost for the next album to whatever they want, and if they have enough fans that think it's worth chipping in for, then it gets made and they get paid.
VMS managed to get the idea of the platform ABI specifying procedure call conventions right very early on. It had quite an easy job, though: C, BASIC and Fortran are all structured programming languages with basically the same set of primitive types. None of them have (or, in the VMS days, had) classes, late binding, or real garbage collection. BASIC is kind-of GC'd, but it doesn't have pointers, so everything passed across the language barrier from BASIC went by value and the GC never had to do anything clever.
It's worth remembering that when VMS was introduced, other platforms were still having problems getting C and Pascal to play nicely together (Pascal pushing arguments onto the stack in the opposite order to C), so this isn't to belittle the achievement of VMS. But it's a very different world now that we have the Simula and Smalltalk families of object orientation, various branches of functional languages, languages like Go and Erlang with (very different) first-class parallelism, and so on.