serviscope_minor writes: If you think exponentials, factorials, or even Ackermann's function grow fast, then you're thinking too small. For truly huge, but well-defined, numbers, you need to enter the realm of non-computability.
The Busy Beaver function BB(n) is the largest number of steps that an n-state Turing machine will run for when fed a blank tape, excluding non-halting machines. It grows faster than any computable function, but starts off as the rather pedestrian 1, 6, 21, 107. By BB(7) it reaches at least 10^10^10^10^10^7, and at some point its values become uncomputable. It must be uncomputable, because if it weren't you could run any program for BB(N + the extra states needed to encode the initial tape contents) + 1 steps, and if it ran that long you'd know it never halts, so you'd have solved the Halting Problem. So at some point it must transition from values that can be computed to ones that can't be.
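The argument can be made concrete for the tiny cases where BB(n) is actually known. This is a toy sketch, not anything from the paper: a minimal Turing-machine simulator plus the known BB values for n ≤ 4, used as a step bound to decide halting on a blank tape (exactly the decision procedure that a computable BB would give us for all n).

```python
# Toy sketch: with known BB values (only n <= 4 are known), halting on a
# blank tape is decidable for tiny machines by running up to BB(n) steps.
BB = {1: 1, 2: 6, 3: 21, 4: 107}  # known Busy Beaver step counts

def run(delta, max_steps):
    """Simulate a Turing machine given by delta[(state, symbol)] =
    (write, move, next_state); next_state None means halt.
    Return the step count if it halts within max_steps, else None."""
    tape, pos, state = {}, 0, 0
    for step in range(1, max_steps + 1):
        write, move, nxt = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt is None:
            return step
        state = nxt
    return None

def halts_on_blank(delta, n_states):
    """Decide halting using BB(n) as a step bound; a machine still
    running after BB(n) steps can never halt."""
    return run(delta, BB[n_states]) is not None

# A 2-state machine that halts immediately, and one that bounces forever:
halter = {(0, 0): (1, 1, None), (0, 1): (1, 1, None),
          (1, 0): (1, 1, None), (1, 1): (1, 1, None)}
looper = {(0, 0): (0, 1, 1), (0, 1): (0, 1, 1),
          (1, 0): (0, -1, 0), (1, 1): (0, -1, 0)}
```

`halts_on_blank(halter, 2)` is True; `halts_on_blank(looper, 2)` is False, and no amount of further simulation could change that, since BB(2) = 6 steps have already elapsed.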
And now there's some new and rather interesting insight into that, which essentially reduces the problem to code golf, or the International Obfuscated Turing Code Contest (as if there is any other sort). Imagine you have an axiomatic system, say ZFC (which underlies almost all of modern maths), and you know you can't prove it consistent from within itself (you can't, by Gödel's second incompleteness theorem). If you write a program that systematically derives and tests hypotheses from the axioms, halting if it ever finds a contradiction, then you can't prove whether it halts, since that's equivalent to proving consistency.
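A toy analogue of that enumerator idea, nothing like real ZFC machinery: forward-chain over a set of Horn-clause "axioms" and halt as soon as the contradiction symbol is derived. For ZFC the analogous loop's halting is exactly what can't be settled; for this made-up finite system it terminates either way.

```python
# Toy analogue of the ZFC proof enumerator: saturate a fact set under
# inference rules and stop the moment "False" (a contradiction) appears.
# The rules here are invented for illustration, not real set theory.

def derives_contradiction(facts, rules):
    """Forward-chain rules [(premises, conclusion)] to a fixed point;
    return True if 'False' is ever derived, else False."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
                if conclusion == "False":
                    return True   # the "axioms" are inconsistent
    return False                  # saturated without contradiction

consistent_rules = [(["A"], "B")]
inconsistent_rules = [(["A"], "B"), (["B"], "False")]
```

The whole difficulty the paper addresses is that real proof enumeration for ZFC, encoded directly as Turing-machine states, is enormously more complicated than this loop.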
This insight, and the first concrete bound (a program showing that the value of BB(7918) cannot be determined in ZFC), comes from this new paper. It turns out that writing a ZFC proof enumerator directly as a Turing machine is rather tricky and long-winded, so the authors wrote a small interpreter for a higher-level language, then wrote the enumerator in that. Now, pushing the bound lower is a question of writing ever-smaller programs that attempt to compute undecidable things. Think you can do better? A good starting point would probably be the existing code on GitHub.
(I hope I've got the explanation at least halfway right!)
serviscope_minor writes: The OpenBSD people forked and heavily cleaned up OpenSSL to create LibreSSL, due to dissatisfaction with the maintenance of OpenSSL, culminating in the Heartbleed bug. The emphasis has been on cleaning up the code and improving security, which includes removing things such as SSLv2, which has fundamental security flaws. As a result, LibreSSL is not affected by the DROWN bug.
A friend of mine has recently started a research group. As usual with these things, she is on a shoestring budget but has significant computational demands. The task is highly parallel (though implementing it on GPUs is an open research problem, and not the topic of her research) and very CPU-bound.
Can Slashdotters advise on a practical way of getting really high bang for the buck? The budget is about 4,000 GBP (excluding VAT/sales tax), though the system will likely be expanded later.
The computers will probably end up running a boring Linux distro, with Sun Grid Engine to manage batch processing (and home directories shared over NFS?).
serviscope_minor writes: What filesystem do you use for portable disks, especially large ones, under Linux? FAT is simply not very good. Using a proper filesystem (e.g. ext3) preserves the read/write permissions of the original machine, which is rather annoying when the disk is moved to a different machine with different user IDs. So is there a way to have a good filesystem that supports all the Unixy things, such as symlinks and an execute bit, but does not require lots of chown'ing as root when moved to a different machine?