Comment Not doable (Score 1) 264

Assuming a 1.5-to-1 correspondence with the USD, you're either getting a decent CPU box and no storage, or a reasonable amount of storage and no CPU. I build and run supercomputing clusters for molecular dynamics simulations at a university in upstate New York, and I wouldn't even consider attempting a cluster for less than $25,000.

Since the OP didn't specify whether this workload is massively parallel, I'm going to assume it is, so I can spec AMD chips for cheapness.

First off, storage. Computational output adds up quickly. You're looking at $7,000 USD for 24TB of raw storage from the likes of IBM, HP, or Dell. Yes, you can whitebox it for cheaper, but considering that if you lose this box, nothing else matters (and I doubt you have the funds for proper backups), it pays to get hardware that's been tested and comes from a vendor you can scream at when it breaks.

Second, interconnect. A cheap Netgear will work, but reasonable internode communication is not cheap, especially if you're moving largish amounts of data. This could run $1,000 to $3,000.

Finally, the compute hardware itself. A decent node will run $3000 to $5000 depending on the core count, socket count, GHz, and to a lesser extent RAM.

Assuming you want 128 cores, you're looking at 8 machines for compute ($32,000 right there, assuming $4K/node and dual 8-core chips), plus another $7K for the file server/landing pad, and finally add $1,500 for a decent switch that can let those nodes talk to each other at line speed and leave room for future growth. Total cost: $40,500 USD, or 27,000 pounds at the assumed 1 pound to 1.5 USD ratio.
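A quick sanity check of that arithmetic as a sketch — the prices are the estimates above, and the exchange rate is the assumed 1.5 USD per pound:

```python
USD_PER_GBP = 1.5  # assumed exchange rate from the post

def cluster_cost_usd(cores=128, cores_per_node=16, node_cost=4000,
                     storage_cost=7000, switch_cost=1500):
    """Estimated total for the build above: compute nodes with dual
    8-core chips (16 cores/node), a file server, and a switch."""
    nodes = cores // cores_per_node  # 128 / 16 = 8 machines
    return nodes * node_cost + storage_cost + switch_cost

total = cluster_cost_usd()
print(total, total / USD_PER_GBP)  # 40500 27000.0
```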

Comment Break out the calculator and spreadsheet (Score 3, Informative) 260

It's time to break out the calculator and do some math. There are two main factors at work here: UPS load capacity and battery run time. I run a series of research clusters at a university, so only the core systems (landing pads, schedulers, auth, disk arrays) are on UPS, and all the compute nodes just die at a power hit.

Retrofitting a datacenter for whole-center UPS is a daunting and expensive task, so odds are good you'll be replacing the current rack mounts with beefier units, either pedestal-sized units next to their racks or rack-mounted units.

When buying UPS gear for work, I aim for either 67% capacity with the planned load, or the smallest VA rating that takes 208V single phase, as long as it's at least 1/3 underutilized to leave room for future expansion. That covers the VA rating. As for battery run time, most of the larger units accept external battery packs to extend it. I've never used them, since a 5kVA unit with my load gives me 20 minutes of run time, and if the power isn't back on by then, odds are good it's not coming back any time soon.
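The sizing rule above can be sketched as a tiny helper — the 67% figure is the target from the post, and it's a rule of thumb, not a spec:

```python
def min_ups_va(load_va, target_utilization=0.67):
    """Smallest UPS VA rating that keeps the planned load at or below
    the target utilization (roughly 1/3 headroom for future expansion)."""
    return load_va / target_utilization

# e.g. a 3,350 VA planned load wants roughly a 5 kVA unit
print(round(min_ups_va(3350)))  # 5000
```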

Another option for extending UPS run time is to prioritize services/VMs. With the appropriate monitoring software on each host, you can configure each host to shut down when the UPS estimates X minutes of battery time remaining, or after Y minutes on battery, or both. Less load means more run time for the really important stuff. Almost every UPS I've used (APC, Tripp Lite, Powerware) comes with off-the-shelf software, or there are open-source solutions (apcupsd, NUT) for monitoring the UPS over serial, USB, or SNMP (options vary with manufacturer and model). My shutdown schedule is: after 5 minutes on battery, power down the compute cluster landing pads. With 10 minutes remaining, power down the file servers with the archival data on them. With 6 minutes remaining, power down the primary file servers. With 2 minutes remaining, power down the auth box/network monitor/ILOM control host (the only one that can't be powered on or monitored remotely).
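That schedule boils down to a small decision rule. Here it is as a Python sketch — the group names are mine, and a real setup would hook this logic to apcupsd or NUT events rather than call it directly:

```python
def hosts_to_power_down(minutes_on_battery, minutes_remaining):
    """Which host groups should shut down, per the staged schedule above."""
    down = []
    if minutes_on_battery >= 5:       # 5 minutes on battery
        down.append("compute landing pads")
    if minutes_remaining <= 10:       # 10 minutes of battery left
        down.append("archival file servers")
    if minutes_remaining <= 6:        # 6 minutes left
        down.append("primary file servers")
    if minutes_remaining <= 2:        # last resort
        down.append("auth/monitor/ILOM host")
    return down

print(hosts_to_power_down(minutes_on_battery=12, minutes_remaining=6))
# ['compute landing pads', 'archival file servers', 'primary file servers']
```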

Comment Check with the university (Score 5, Insightful) 272

Does your university have a backup solution you can make use of? The one I work at lets researchers onto its Tivoli system for the cost of the tapes. I think I've got somewhere in the neighborhood of 100TB on the system, and I ended up being the driving force behind a migration from LTO-2 to LTO-4 this summer. If you are going to roll your own and use disks, I'd recommend something with ZFS: you can take a snapshot after every backup, which gives you point-in-time restores.
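A minimal sketch of the snapshot-per-backup idea, assuming a hypothetical `tank/backup` dataset — the helper just builds the dated snapshot name you'd pass to `zfs snapshot` after each run:

```python
from datetime import date

def backup_snapshot_name(dataset="tank/backup", day=None):
    """Dated snapshot name for a post-backup `zfs snapshot <name>`.
    One snapshot per backup run gives cheap point-in-time restores."""
    day = day or date.today()
    return f"{dataset}@{day.isoformat()}"

print(backup_snapshot_name(day=date(2009, 7, 1)))  # tank/backup@2009-07-01
```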

Also, I'd recommend more capacity on the backup system than you have now, to allow versioning. I was the admin for a university film production recently (it's currently off at, I believe, Technicolor, being transferred to IMAX), and I've lost track of the number of times I had to dig yesterday's or last week's version off of tape because someone made an uncorrectable mistake.

Comment Re:Hardware RAID becoming less relevant every day. (Score 1) 171

I have data sets spanning multiple terabytes. One recent PhD graduate in the lab I support accumulated 20 TB of results during his time here. Even if I had highly reliable SSDs that never failed, I'd still toss the SSDs together in a zpool to get the capacities I need to accommodate a single data set. RAID is not just about redundancy. With SSDs, I'd probably use RAID5 instead of RAID6 just in case I had a freak bad drive, but RAID in some form is here to stay.
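The redundancy-versus-capacity tradeoff mentioned above is easy to put in numbers. A sketch, with hypothetical drive counts (single parity is RAID5/raidz-style, double parity is RAID6/raidz2-style):

```python
def usable_capacity_tb(drives, drive_tb, parity_drives):
    """Approximate usable space of a parity array, ignoring filesystem overhead."""
    return (drives - parity_drives) * drive_tb

# Twelve 2 TB SSDs: single parity (RAID5/raidz) vs. double (RAID6/raidz2)
print(usable_capacity_tb(12, 2, 1), usable_capacity_tb(12, 2, 2))  # 22 20
```

With drives you trust more (like the hypothetical reliable SSDs above), dropping from double to single parity buys back one drive's worth of capacity per vdev.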
Silicon Graphics

Submission + - SGI Files for Chapter 11, plans to sell off assets

darkjedi521 writes: According to Bloomberg, SGI filed for Chapter 11 bankruptcy on April 1st, with plans to sell its assets to Rackable Systems unless another buyer comes forward and pays a higher per-share price than Rackable. According to the Mercury News, the sale is for $25 million, though the Chapter 11 proceedings leave open the possibility of a sale to another entity.

Comment Re:I saw LEDs used as colored stage lights (Score 1) 685

The last time I looked at LED stage lights, about a year ago, the LED PAR64 can seemed to be a drop-in replacement for 300W PAR56 lamps. Unfortunately, until their intensity catches up to their higher-wattage cousins, most of the stages I've worked on are going to keep dropping in 750W HPL, 1kW BVT, and 1kW PAR64 lamps. The biggest advantage is that it's easier to get a blue out of an LED than out of a halogen, for obvious reasons, but having the light get lost among the brighter fixtures isn't really desirable all the time.
Desktops (Apple)

Submission + - when Macs break

cyber-dragon.net writes: "I have long been a staunch supporter of Apple and Macs, bordering on but not quite a fanboy. My recent experience trying to bring them into my department at work has been disappointing. We had a Mac Pro (the big quad-processor monster) die after four days. OK, it happens; everything else has worked flawlessly. I even dealt with the inevitable teasing about the shiny new Mac being a lemon.
Well, after almost four hours dealing with AppleCare, and three hours dropping off and picking up my computer at different stores per their instructions while trying to get this done quickly... I am beginning to wonder if Apple really wants business customers to rely on these machines. Much as I may dislike Dell like the rest of you, when my Linux box died it was fixed in four hours, and I spent maybe 20 minutes of my time setting up the repair. I have spent seven hours of my time so far on this Mac and it still will not power up. Is this just me, or have other people lost critical business machines to the depths of AppleCare inefficiency and the lack of business-level support?"

Submission + - Whatever happened to superconductors?

AltGrendel writes: "Jonathan Fildes of the BBC wrote that 'In 1987, Ronald Reagan declared that the US was about to enter an incredible new era of technology. Levitating high-speed trains, super-efficient power generators and ultra-powerful supercomputers would become commonplace thanks to a new breed of materials known as high temperature superconductors (HTSC). "The breakthroughs in superconductivity bring us to the threshold of a new age," said the president. "It's our task to herald in that new age with a rush."

But 20 years on, the new world does not seem to have arrived. So what happened?'

He shares what he found in this article."
