Because that way they don't have to pay Red Hat anything.
I think they are going to find it tough to keep enterprise-level SLAs using CentOS instead of Red Hat. Anytime there is a major security vulnerability, rather than waiting on Red Hat to release an erratum, they are going to have to wait on Red Hat to release it AND then wait on the CentOS folks (who have no financial motivation to do things with any urgency) to take what Red Hat released and rebuild it for CentOS.
I typically get 1-2 robocalls PER DAY during the 2-3 weeks leading up to an election. If the election is for a federal office (senate, etc.) or state office in addition to local positions (judges, etc.) that number can be as high as 4-5 per day. Any other time of year I get 0-1 per month (my number is on the do-not-call list).
Candidates for judge and Senate (both federal and state) seem to be the worst offenders.
It is extremely frustrating that lawmakers felt the need to exempt themselves from do-not-call legislation.
According to the research paper the goal is a million *processor* computer, not a million *node* computer. Each node described in the paper is made up of 20 ARM processors, so it would technically be a 50,000 node computer.
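The arithmetic, for anyone who wants to check it (figures taken straight from the paper):

```python
# Node count implied by the paper's figures:
# a million processors, 20 ARM processors per node.
total_processors = 1_000_000
processors_per_node = 20

nodes = total_processors // processors_per_node
print(nodes)  # 50000
```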
So 10 years ago everyone was talking about how the phones of tomorrow would have this neat technology called "bluetooth" that would let us use our phones like an ATM card. Obviously that never happened. So what does NFC give us that bluetooth didn't that will actually allow mobile payments to work?
I heard through the grapevine that a cable at ARL was cut. I can't find anything to substantiate this other than a slightly related "unscheduled network maintenance" notice here
I wonder if their kernel is patched against CVE-2010-3081 already? Otherwise, so much for that whole "unbreakable" claim.
Supercomputers are big. Even when idle they still require lots of power and cooling, so ideally you want your supercomputer to be 100% utilized all of the time. That's why most supercomputers are "over-subscribed" and have batch schedulers (moab/torque, PBS, LSF, etc.). Users submit jobs, and the scheduler goes about placing those jobs on the supercomputer in a way that keeps utilization as close to 100% as possible. This means that typically when you submit a job it will not run immediately.
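The queueing behavior described above can be sketched in a few lines. This is a toy FIFO scheduler, not how moab/torque, PBS, or LSF actually work internally (real schedulers add priorities, backfill, fair-share, etc.), and the node counts and job sizes are made up for illustration:

```python
# Toy batch scheduler: jobs wait in a FIFO queue and only start once
# enough nodes are free, so on an over-subscribed machine a submitted
# job usually does NOT run immediately.
from collections import deque

TOTAL_NODES = 100  # hypothetical machine size

def simulate(jobs):
    """jobs: list of (name, nodes_needed, runtime).
    Returns a dict mapping job name -> start time."""
    queue = deque(jobs)
    running = []            # list of (end_time, nodes_held)
    free = TOTAL_NODES
    t = 0
    starts = {}
    while queue:
        # Release nodes from any jobs that have finished by time t.
        still_running = []
        for end, n in running:
            if end <= t:
                free += n
            else:
                still_running.append((end, n))
        running = still_running
        name, need, runtime = queue[0]
        if need <= free:
            # Enough free nodes: start the job at the head of the queue.
            queue.popleft()
            free -= need
            running.append((t + runtime, need))
            starts[name] = t
        else:
            # Not enough free nodes: advance time to the next completion.
            t = min(end for end, _ in running)
    return starts

# Three 60-node jobs on a 100-node machine: only one fits at a time,
# so the second and third sit in the queue even though the machine
# is never fully busy.
print(simulate([("a", 60, 10), ("b", 60, 10), ("c", 60, 10)]))
```

Note the utilization tradeoff: each job leaves 40 nodes idle, which is exactly the kind of gap that backfill scheduling in real batch systems tries to fill.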
If your cellphone "out in the field" is relying on a supercomputer to do calculations, you probably aren't going to want to sit there waiting the minutes/hours/days it might take for your job to make its way through the queue. So you have a few choices: Make some sort of system reservation and only use your phone during the reservation time (probably not practical when you are "out in the field"), configure your scheduler to pre-empt currently running jobs in favor of the "cell phone" jobs (this might piss off non-cellphone users), or dedicate some or all of the system to doing nothing but being available for cell phone jobs....and the portion you dedicate will have to be enough to cover all of your cell phone users.
The last option is probably the best in terms of making sure that there are always supercomputing resources available for the cell phone users, but this undersubscription will cause your supercomputer to sit idle when field work isn't being done. So suddenly you are paying to power and cool a supercomputer that is sitting there waiting on the user to do something.
Supercomputer companies are slowly working on making supercomputers "greener", i.e. requiring less power/cooling, the ability to power off cpus/nodes/frames when not in use, etc. But until this green technology is perfected, pairing supercomputers with cell phones seems like a very inefficient way to do things.
Some might argue they are doing better than Oracle.
At the time of this posting RHT is $31.08/share, while ORCL is only $26.00/share.
There are plenty of reasons why supercomputers have to be shut down....besides the fact that even with generators and UPSes, facilities outages are still a fact of life. What if there is a kernel vulnerability (insert 50 million ksplice replies here...yeah yeah yeah)? What if firmware needs to be updated to fix a problem? You can't just depend on RAM for storage. HPC jobs use input files that are tens of gigabytes and produce output files that can be multiple terabytes. The jobs can run for weeks at a time. In some cases it takes longer to transfer the data to another machine than it takes to generate/process the data. You can't just assume that the machine will stay up to protect that data.
I am a Linux administrator at a DoD site. I have never seen anything that says that you must run kernel 2.6.30 or anything like that. Can you please provide a link to where you read this? (links to CAC-authenticated websites are ok)
DoDI 8500.2 requires you to run an OS that is EAL-certified at a certain level depending on your classification. The only Linux distributions I know of that have EAL certification are SLES (9 and 10) and RHEL (4 and 5). I keep hearing about people who run things like Fedora, CentOS, and Ubuntu on DoD networks, but I have no idea how they get away with that.
As far as software versions go, the versions you must be at are dictated by IAV-A, IAV-B, and IAV-T notices. The IAV-A may say that there is a vulnerability that affects kernel versions <= 2.6.30 and that you must go to a version newer than 2.6.30 to be compliant, but as long as your vendor's kernel version addresses the CVEs that the IAV-A references then you are covered.
I'm not sure I understand why they constructed their own water treatment plant. I would think that it would be more energy efficient on the whole to use the already constructed municipal system in the area.
According to last month's Top500 list of supercomputers, BOINC's performance is now beating that of the fastest supercomputer, RoadRunner, by more than a factor of two (with the caveat that BOINC has not been benchmarked on Linpack)
Sigh...why do these projects (BOINC, *@home, etc.) insist on comparing their performance to supercomputers on the TOP500 list? Of course BOINC has not been benchmarked on Linpack. If it were, the performance wouldn't come close to anything at the top of the TOP500 list. A bunch of workstations running a grid client and talking to each other over the internet is never going to have the same kind of message-passing bandwidth as a supercomputer using something like locally connected InfiniBand.
"Open the pod bay doors, HAL." -- Dave Bowman, 2001