The point of a parallel file system is that you do not need RAID.
Really? Why has virtually every production parallel file system implementation I've ever seen (using GPFS, Lustre, and PVFS) been done on top of hardware RAID controllers?
One area where I disagree with TFA is the claimed paucity of programming models and tools. Virtually every OS out there supports some kind of concurrent programming model, and often more than one depending on the language -- pthreads, Win32 threads, Java threads, OpenMP, and MPI or Global Arrays on the high end. Most debuggers (even gdb) also support debugging threaded programs, and if those don't have enough heft, there's always TotalView. The problem is that most ISVs have studiously avoided using any of these except when given no other choice.
No one uses Linux for anything important.
Other than every supercomputer on the planet worth talking about, that is...
Why is simulating nuclear explosions always the primary use cited for this type of computer?
Uh, because it's paid for out of the NNSA budget?
I mean, it has a SourceForge page whose mailing list archives go back to 2001, fer cryin' out loud.
Now some of the "OpenHPC" stuff appears to be new, but not all of it appears to originate from IBM. For instance, part of it appears to be a repackaging of the SLURM batch system from LLNL. The one thing that looks like a genuine contribution from IBM is the "Advance Toolchain" stuff, but even that appears to draw heavily from existing open source code bases like valgrind.
The gent who wakes up and finds himself a success hasn't been asleep.