

Comment: Re:Hardly A New Problem (Score 1) 112

by Bill Barth (#42074439) Attached to: Supercomputers' Growing Resilience Problems

Suppose that your job is computing along using about 3/4ths of the memory on node 146 (and every other node in your job) when that node's 4th DIMM dies and the whole node hangs or powers off. Where should the data that was in the memory of node 146 come from in order to be migrated to node 187?

There are a couple of usual options: 1) a checkpoint file written to disk earlier, 2) a hot spare node that held the same data but wasn't fully participating in the process.

Option 2 basically means that you throw away half of your compute resources just in case there's a failure. Essentially no scientific computation being done today is valuable enough to warrant that approach. Some version of Option 1 (periodic checkpointing and restarting) is always more cost effective. These systems, in the US at least, are generally over-requested by the science community by a factor of 2 to 10, so taking half the machine away to seamlessly prevent an occasional job death just isn't worth the lost opportunity to utilize the resources more fully.
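There's a standard back-of-the-envelope version of this cost-effectiveness argument: Young's approximation for the optimal interval between checkpoints, sqrt(2 * checkpoint_cost * MTBF). The numbers below are hypothetical, just to show the shape of the tradeoff:

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    # Young's approximation: the optimal time between checkpoints is
    # sqrt(2 * (cost of writing one checkpoint) * (mean time between failures)).
    # Cheaper checkpoints or flakier machines -> checkpoint more often.
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Hypothetical numbers: a 10-minute checkpoint on a job whose effective
# MTBF across all its nodes works out to one failure per day.
tau = young_interval(600, 86400)  # roughly 10,000 s, i.e. every ~2.8 hours
```

Note how the interval grows only with the square root of the failure rate: even a fairly unreliable machine doesn't force you to checkpoint constantly, which is part of why burning half the machine on hot spares doesn't pay.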

Option 1 implies taking some time away from your job to do the checkpointing. The vast majority of the time, OS-level automated checkpointing would be overkill as well. The author of the code knows best when it's a good time to checkpoint and when it's a bad idea. For example, you might checkpoint at a phase of the calculation when the data volume required to restore the state is at a minimum, even if that means redoing some part of a future calculation. Generally the calculation is cheap to redo, whereas checkpointing large volumes of data is expensive.
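As a toy sketch of what application-level checkpoint/restart looks like (the file name, the "work" being done, and the interval are all made up for illustration; a real simulation would write its actual state, usually to a parallel filesystem):

```python
import os
import pickle
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "demo_checkpoint.pkl")

def save_checkpoint(step, state):
    # Write to a temp file and rename, so a crash mid-write never
    # corrupts the previous good checkpoint.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, CHECKPOINT)

def load_checkpoint():
    # On restart, resume from the last checkpoint if one exists.
    if not os.path.exists(CHECKPOINT):
        return 0, 0  # fresh start
    with open(CHECKPOINT, "rb") as f:
        data = pickle.load(f)
    return data["step"], data["state"]

def run(total_steps=100, checkpoint_every=10):
    step, acc = load_checkpoint()
    while step < total_steps:
        acc += step          # stand-in for one timestep of real work
        step += 1
        if step % checkpoint_every == 0:
            save_checkpoint(step, acc)
    return acc
```

The application picks the checkpoint points, so it can choose moments when the restorable state is small -- exactly the judgment call an OS-level checkpointer can't make.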

In addition, OS-level checkpointing is a hard problem. For example, if there are messages in flight on the network, do you try to log them so they can be replayed on restart, or do you only checkpoint when the network is quiet? If the network is never quiet on all the nodes in the job, do you throw in needless synchronization, which could ruin the parallel efficiency of the job, just to find a place to do your automated checkpoint? If you decide to log instead, where do you write the log data so that it survives the failure of any node, and what does that cost?
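The "throw in synchronization to quiet the network" option is essentially a coordinated checkpoint. Here's a minimal simulation of the idea using threads as stand-in ranks (barrier, step counts, and the per-rank "work" are all invented for the sketch; a real MPI code would drain its communication before the collective):

```python
import threading

N = 4
barrier = threading.Barrier(N)
checkpoints = {}
lock = threading.Lock()

def worker(rank, steps=20, interval=5):
    local = 0
    for step in range(1, steps + 1):
        local += rank  # stand-in for compute plus asynchronous messaging
        if step % interval == 0:
            # Coordinated checkpoint: every rank must reach this point,
            # which quiesces the "network" -- but also serializes all
            # ranks, which is exactly the parallel-efficiency cost of
            # imposing synchronization the algorithm didn't need.
            barrier.wait()
            with lock:
                checkpoints[rank] = (step, local)
            barrier.wait()  # nobody resumes until the checkpoint is done

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If one rank is slow to reach the barrier, everyone else idles -- multiply that by tens of thousands of nodes and the overhead of "automated" checkpointing stops being free.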

If these were just a bunch of VMs running a LAMP stack, this wouldn't be a hard problem. That's basically solved already. Migrating tasks for HPC jobs is truly a hard problem with tradeoffs to be considered.

Comment: Re:Similar work exists (Score 1) 25

by Bill Barth (#40522227) Attached to: UK Universities Launch Cloud Supercomputer For Hire
It's nothing like TG. TG systems basically gave all their cycles away for free through the work of the Resource Allocation Committee--a peer-review body that met quarterly to review proposals and give out allocations of time. This work continues through the XD program under the auspices of XSEDE.

Comment: Re:Clarification, as I live here and study there. (Score 2) 386

by Bill Barth (#40274603) Attached to: RMS Robbed of Passport and Other Belongings In Argentina
Nobody checks your ID when you go to class in the US either, though there's much less of a culture of people just showing up and listening in. It would often, but not always, be easier to spot a stranger in a class here, though there are plenty of 500-person freshman biology lectures, too. Typical classes have ~30 people in them.

Comment: Re:Typical (Score 1) 76

by Bill Barth (#37032530) Attached to: NCSA and IBM Part Ways Over Blue Waters
It appears to be the latter. The spec is available here. NCSA negotiated a system with IBM, proposed it to NSF under the above-linked RFP, went through a peer-reviewed awards process, negotiated an award with NSF, and started working on delivery and other aspects with IBM and NCSA's other partners. Something went wrong in the last several months, and IBM's pullout was the result. I doubt that there is any more money to be found, and all parties knew what was asked of them in order for the project to be successful.
