3 Terabytes, 80 Watts 219
legoburner writes "The Enquirer is reporting that Capricorn has released a mini-ITX-based 1U storage computer featuring four 750-GB hard drives and a 1-GHz controller system with a typical power usage of an astounding 80 W per machine. A full 40U rack uses only 3.2 kW, which is less than 30 kW for an entire petabyte!"
Ouch (Score:2, Funny)
Re:Ouch (Score:4, Informative)
Re:Ouch (Score:4, Funny)
3200 watts for 120 terabytes - that's like two hand-held hair dryers!
Kids these days... we used to have these things called mainframes. They had special 240 V wiring, with BIG power cables. You could hear the circuit-breaker box HUM when you walked past. This is all a pittance.
Re: (Score:2)
Re: (Score:3, Interesting)
Yeah, but most people don't run two hair dryers 24/7
Re:Ouch (Score:5, Funny)
Yeah, but most people don't run two hair dryers 24/7
You haven't met my wife.
Re:Ouch (Score:5, Funny)
You haven't met my wife.
Those aren't hair dryers. [babeland.com]
Re: (Score:2)
Re:Ouch (Score:5, Interesting)
Or 28,000 kWh per year, i.e. $2800 at $0.10 per kWh (not sure what the going rate is nowadays).
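The parent's annual figure checks out; a quick back-of-the-envelope sketch, assuming the rack really draws a constant 3.2 kW and a flat $0.10/kWh rate (both figures taken from the thread):

```python
# Annual energy and cost for a full rack, per the figures in the thread.
# Assumes a constant 3.2 kW draw and a flat $0.10/kWh rate.
RACK_POWER_KW = 3.2
HOURS_PER_YEAR = 24 * 365      # 8760
RATE_PER_KWH = 0.10

kwh_per_year = RACK_POWER_KW * HOURS_PER_YEAR   # 28,032 kWh
cost_per_year = kwh_per_year * RATE_PER_KWH     # ~$2,803

print(round(kwh_per_year), round(cost_per_year))  # 28032 2803
```

So "28,000 kWh per year, i.e. $2800" is the right order of magnitude for continuous operation.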
Re: (Score:3, Funny)
3200 watts for 120 terabytes - that's like two hand-held hair dryers!
Or 28,000 kWh per year, i.e. $2800 at $0.10 per kWh (not sure what the going rate is nowadays).
In other words, if you have the money to afford 120 terabytes of storage, let alone the need to have and use that storage, you can probably afford to pay for the electricity to run it.
And as a bonus, you could lease the exhaust from your 120 terabyte storage system to a nearby hair salon so you can kill two birds wi
Where do you live? (Score:2, Interesting)
And they would need all that storage to record their utility bills.
Where do you live that 80 watts is a big drain on financial resources?
My CPU consumes 39 watts and I consider that loverly, compared to the old CPU which sucked 70+ watts.
Re:Where do you live? (Score:4, Funny)
What's worse, the rig I built to cool my hard drives is essentially a system of Peltier devices http://en.wikipedia.org/wiki/Peltier-Seebeck_effe
The very premise of the grandparent is ridiculous. Now if it were one-point-twenty-one-jigga-watts, I'd be worried, but 80 watts? I've got more on in light bulbs right now!
Re:Where do you live? (Score:5, Informative)
Placing Peltier plates in series decreases the amount of heat they can move while increasing the temperature differential, and only if the stack is properly designed. It is very easy to put two Peltiers in series and get worse performance than from a single device. In your case, with a non-static system (i.e. the hard drives are actively PRODUCING heat that you wish to remove), heat-handling capacity seems more important than a massive temperature differential. In thermodynamics there is no free lunch: your secondary Peltier is not only moving the heat away from the drive, but has to struggle with the heat it produces itself (however much electrical power it consumes ends up as heat) as well as the heat from the first-stage cooler.
To optimize a multistage thermoelectric cooler, a rule of thumb is that each stage should receive 1/2 to 1/3 as much current as the previous one. This roughly translates to an equivalent voltage ratio, though as the temperature and temperature delta change, the silicon has different resistances, and the Seebeck effect also changes the apparent resistance.
In a PC, if you really want to do multistage Peltier plates, and assuming they are 12 V devices, you would notice an increase in performance (i.e. less heat coming off the hot side, and a colder cold side) if you were to connect the hard-drive Peltier to the +5 V rail and the heat-sink Peltier to the +12 V rail. This is a very crude system, but definitely better than running both on +12 V.
I still maintain that the coolers in parallel are preferable for nearly any computer usage. You have a metric library of congress of BTUs (slashdot measurement) to move quickly. Stacked units do this poorly.
I have some data at home to determine near-optimal steady-state stacked configurations. Google is a help too, though sorting through the deep research and crackpot FAQs is rather tedious in this realm.
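The 1/2-to-1/3 rule of thumb above can be sketched numerically. This is a toy illustration of the stated current ratio, not a thermal model; the 6 A first-stage current and the 0.5 ratio are assumed values, not figures from the post.

```python
# Toy sketch of the multistage rule of thumb from the parent post:
# each stage receives 1/2 to 1/3 of the previous stage's current.
# The 6.0 A starting current and the 0.5 ratio are assumptions
# chosen for illustration only.
def stage_currents(first_stage_amps, ratio, stages):
    """Return the drive current for each stage of a stacked TEC."""
    currents = [first_stage_amps]
    for _ in range(stages - 1):
        currents.append(currents[-1] * ratio)
    return currents

print(stage_currents(6.0, 0.5, 3))  # [6.0, 3.0, 1.5]
```

As the post notes, this is only a starting point: the real optimum shifts with the temperature delta and the silicon's changing resistance.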
Ouch? Obviously... (Score:2)
The article says a rack of 40 of these little babies consumes in the neighbourhood of 3.2 kW. That's roughly equivalent to two nice microwave ovens. Yes yes, I know you don't run microwave ovens 24/7. But if you didn't close your refrigerator door all the way and it ran all day it would cost about the same as this unit. It would hardly put you in the poor house, especially if you had the m
your file server structure? (Score:5, Interesting)
Re: (Score:3, Informative)
Re: (Score:2, Informative)
But other than that, there's nothing like BackupPC for a painless and effortless network-based backup system.
Re: (Score:2)
Re:your file server structure? (Score:4, Interesting)
CD-based backups would be laughable considering that the disk is almost filled with downloaded TV shows and movies. Ditto for DVDs, not to mention the impracticality. USB HDDs? Backups are meant to be _more_ secure. Internet backup? Not enough bandwidth. I never thought I'd say this, but I miss tapes.
It's going to give out. I know it, you know it, we all know it. Bloody shows aren't even that good... *grumble*....
Lies! (Score:3, Informative)
Yeah, well, 1986 called, they want their CPU back.
Your system isn't a 386, though; old PATA IDE controllers on those things couldn't address more than 4 or 8 GB (the firs
Re: (Score:2)
if you really want to be careful, you'd have a few to rotate and then you could drop one at the safe-deposit-box.
Re: (Score:2)
Re: (Score:2)
600GB RAID10 - backups (rsync snapshots or Second Copy via Samba)
Offsite backups are a set of 300GB external USB/FireWire drives, rotated periodically. Those drives pretty much only hold a snapshot at a current point in time. They also concentrate on non-replaceable data rather than system configuration.
System config is a combination of storing any edited config files in Subversion along with weekly snapshots via Dirvish / RSnapshot / RSync. The Subversion al
Re: (Score:3, Informative)
You misspelled "synchronized". RAID != backup. What happens when you accidentally garble "Doctoral Thesis.odt" and automatically overwrite your only other copy with the new version?
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
The home LAN backup is starting to be a
Can I get that petabyte in Cornflower Blue? (Score:5, Funny)
Re:Can I get that petabyte in Cornflower Blue? (Score:5, Funny)
Re: (Score:2)
You can get that in any colour you like (Score:2, Funny)
Can I get that petabyte in Cornflower Blue?...As long as it's not silver, black or grey, I'm fine with adding another petabyte to my current configuration. If only my file system could handle it...
You can get that in any colour you like, as long as it's Beige
muah ha ha ha haaaa!
Re:Can I get that petabyte in Cornflower Blue? (Score:5, Insightful)
Sounds heavy (Score:5, Funny)
If information is power, then this thing is a perpetual motion machine.
I think I'll buy the house... (Score:5, Funny)
A) Because why would you want every reality TV program at your fingertips?
B) Because we already do (See http://bittorrent.com/ [bittorrent.com])
C) Because... just because.
Re:I think I'll buy the house... (Score:5, Funny)
Re: (Score:2, Funny)
Re: (Score:2, Offtopic)
Re: (Score:3, Funny)
How many Libraries of Congress is that?
It's still not big enough! (Score:5, Funny)
Not when I keep getting a new internet every few minutes. A whole internet every few minutes! Can you imagine how many libraries of congress that is? I don't know about you, but I'd have a lot of trouble stuffing an entire library of congress into one of those tubes! And since the library of congress is obviously a lot bigger than this storage computer, there's no way you could stuff it into it!
Until they come out with one of these that's bigger than the library of congress, I'm not buying!
- Senator Ted Stevens, computer guru extraordinaire
Re: (Score:2, Insightful)
Re: (Score:3, Funny)
Culture for blood clots?
Re: (Score:2)
Can Senator Stevens?
Re: (Score:3, Insightful)
Difference is, I don't have any decision power over his parliamentary procedure, but he has decision power over my technology.
Re:It's still not big enough! (Score:4, Informative)
That's right bitch, I went to high school. (It's easiest to remember because it's kind of like "closure", and they are really calling for deliberation to come to a close and are just bad spellers.)
Re: (Score:2)
Re:It's still not big enough! (Score:5, Insightful)
I didn't know what cloture was. I had no need to. Then you posted about it, and I needed to. I googled for it and now I do. I still don't really need to know (I'm not a US Senator, or even a US resident), but because it seemed like it might be at least moderately important that I knew about it, I read about it and learned. This is what moderately intelligent people do; they spot important gaps in their knowledge and then do something about them.
What Ted Stevens did, was spout nonsense on a subject that it was very, very important that he understood well. He is in charge of a committee whose responsibilities include determining whether the US section of the Internet should be regulated. He was leading the debate on this very subject, and yet displayed a complete lack of understanding of the subject in question. From The Daily Show's coverage of the speech, it seems clear that the average American television viewer has a greater understanding of the way the Internet works than this person, who, nonetheless, felt it necessary to stand up and publicly air an opinion on the matter.
So...I read the article... (Score:5, Insightful)
"The next step up is the TB120 PetaBox, basically a rack of 40 GB3000s and an ethernet switch or two."
WOW! so far so good...then, things turn ugly
"If you need more space than that, I would say it is time to lay off the naughty pictures for a bit and seek serious help.
In any case, Capricorn is saying you can get into one of the TB120s for about $1.50 a GB, and a little math says a full rack would cost under $200K. If you think that is a lot, imagine the Tivo you could make out of one, you could have every reality TV program at your fingertips for a little less than the cost of an average house."
Yup, for a mere $200K, you too can have every reality TV program and/or naughty picture at your fingertips...and here I was thinking about mundane things like virtual libraries, genome sequencing, protein folding, etc.
I'm going to be sad now...
Re: (Score:2)
A typical computer power user may have the following:
1) 100G-200G worth of DV video from home movies
2) 10G-20G of MP3s
3) 5G-10G of digital camera pic
Re: (Score:2)
The problem with saying they can be used for genome sequencing, protein folding, etc is that the typical Enquirer reader isn't going to have a clue how much space those activities take. I consider myself an above average geek and I don't know what an educated guess
Re: (Score:2, Offtopic)
It would only be worth the $200K if that included the rights to all reality shows - so I could pull the plug on all of them...
RAID (Score:3)
Re: (Score:2)
Re: (Score:2)
Repopulating the drives takes as long in SW as in HW, limited by the HD write speed. Saturating a gigabit network is also not a function of the RAID, whether HW or SW.
You're talking about the bandwidth constraints of moving local filesystems to network storage, which is another matter. Once the network and storage HW can accommodate the app bandwidth (and latency) requirements, SW RAID on a cluster of these cheap, cool, tiny a
Re:RAID (Score:5, Informative)
Not sure what rebuild rates would be on a RAID5, probably about half of that? So 6 hours to rebuild the array?
(That's using 750GB SATA drives with Software RAID on a PCIe motherboard.)
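A rough way to sanity-check rebuild-time guesses like the ones above: a rebuild is bounded below by how fast the replacement disk can be written end to end. A sketch, using the 750 GB drive size from the thread and an assumed sustained 70 MB/s write rate (the write rate is an assumption, not a figure from the post):

```python
# Lower bound on RAID rebuild time: the replacement drive must be
# rewritten end to end. Uses the 750 GB drive size from the thread;
# the 70 MB/s sustained write rate is an assumption.
CAPACITY_GB = 750
WRITE_MB_PER_S = 70

seconds = CAPACITY_GB * 1000 / WRITE_MB_PER_S   # ~10,714 s
hours = seconds / 3600                          # ~3 hours

print(round(hours, 1))  # 3.0
```

Real rebuilds share the bus with live I/O and parity computation, so actual times (like the 6-hour guess above) typically run well past this floor.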
Re: (Score:2)
(Someone suggested ZFS, but that would require a single file server that would potentially become a performance bottleneck.)
Bah (Score:4, Funny)
Pricing (Score:3, Informative)
"The PetaBox nodes and racks are available now. Base pricing for the nodes (512K RAM, 10/100 interface, and no LCD) ranges from $1,595 (GB1000) to $3,395 (GB3000)." http://products.datamation.com/dms/sc/1156440622.
The GB1000 is the 1TB node and the GB3000 is the 3TB node. I think they might mean 512MB of RAM base, but who knows. It sounds like a Fedora Linux based product, which makes me wonder what services it provides; they don't list them. I would assume basic NFS/SMB/AFS services, but there's no mention of backup / replication services, mirroring between twin nodes, etc. that competitive products offer.
Re: (Score:2)
$704.89 including tax & shipping.
1.5 GHz C7 w/1000-base-t Mini-ITX 2 IDE & 2 SATA ports (would have liked 4, but whatever, I can get a PCI card)
4-drive 1U 15" deep case [could be dual-racked I suppose]
1GB RAM
DVD writer [dual layer]
The drives going in it are coming from my old server and others I've got lying around, which is why I wanted PATA & SATA.
Even buying a SATA PCI adapter & 4 750GB drives I think I'd come out far cheaper.
And I
Re: (Score:2, Informative)
Re: (Score:2)
http://capricorn-tech.com/images/mobo176.jpg [capricorn-tech.com]
They're barely visible at the top.
Certainly they are a good deal to a company buying a rack full, but for someone like me who's doing it as a hobby, the $3k isn't worth it given I'm only going to have one (ok, maybe 2
Still, I wonder if the Sun box that has the 48 drives (vertically) in 3U or 4U that was on slashdot not l
Re: (Score:2)
$4200 (12) 750GB drives
$0159 Thermaltake Armor VA8000BNS Black Chassis: 1.0mm SECC
$0190 Thermaltake toughpower W0117RU ATX12V/ EPS12V 750W
$0035 DVD-RW (BLACK)
$0050 misc parts (fans, cables)
$0182 MB-BA22658 AMD Athlon64 X2 4200+ AM2 (WINDSOR)
$0200 XXXXXXXXXX Asus M2N32-SLI Deluxe motherboard (ATX)
$0210 XXXXXXXXXX 1GBx2 ECC memory modules PC2 4200 DDR2 533
$0009 XXXXXXXXXX Assemble & Test
$0334 (2) INTEL PRO/1000 PT DUAL PORT EXPI9
Re: (Score:3, Informative)
2) Rebuild time - Rebuild time for a RAID5 array isn't all that great. And if a 2nd drive fails, you have a 100% chance of data loss (vs only a 50% chance with RAID10). There's the
Why power per storage capacity? (Score:4, Insightful)
WOW! But is it ready for the enterprise? (Score:5, Insightful)
Obviously, this is the kind of product that companies and perhaps even data centers will take a very long and desiring look at. No doubt that's exactly what Capricorn is hoping for. 3.2 kW is nothing compared to the power that's eaten up by a rack that's loaded with arrays and SCSI drives.
My concern is with reliability. For the most part, the general attitude is that SCSI, while much more expensive than IDE or SATA, is also more reliable, with a larger MTBF. Whether that's really true or not is up for debate, but that's the general opinion out there. Of course, there's also the general attitude that more spindles mean more throughput, and more reliability if in a proper RAID configuration. From what I've seen with other solutions, we can probably assume that the 120TB figure for this Capricorn system is RAID 0. If a 1U system only contains four drives and they're all independent RAID configurations, then say goodbye to 30 TB just to add a modicum of redundancy with RAID 5, whereas if there were more spindles, the amount of lost space would be greatly decreased, even though there would be an increased chance of a failed drive.
Looking at this system, my gut feel is that a more-spindle configuration might be a wiser move, unless the money saved in electricity goes to a better-than-average backup system. Maybe it's my bias towards SCSI/fibre channel, but I don't know that I can yet trust a low-spindle, IDE configuration to do the same thing in an enterprise environment.
Just out of curiosity, has anyone out there in Slashdotland had good luck with enterprise IDE solutions? Who knows. Perhaps some success stories might change my pro-SCSI/fibre view.
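The 30 TB lost to redundancy mentioned above is just the RAID 5 parity overhead at four drives per node; a quick sketch using the 40-node, 4 x 750 GB configuration described in the thread:

```python
# Parity overhead of per-node RAID 5 on the TB120 configuration
# described in the thread: 40 nodes, each with 4 x 750 GB drives.
NODES = 40
DRIVES_PER_NODE = 4
DRIVE_TB = 0.75

raw_tb = NODES * DRIVES_PER_NODE * DRIVE_TB   # 120 TB raw
# RAID 5 loses one drive's worth of capacity per array (per node here).
parity_tb = NODES * DRIVE_TB                  # 30 TB lost to parity
usable_tb = raw_tb - parity_tb                # 90 TB usable

print(raw_tb, parity_tb, usable_tb)  # 120.0 30.0 90.0
```

With four drives per array the parity tax is 1/4 of raw capacity; larger arrays shrink that fraction to 1/N, which is the more-spindles argument in a nutshell.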
Re: (Score:2, Informative)
Low end? (Score:2)
Difference between SCSI and PATA (Score:3, Insightful)
But for a smaller company, cost savings are significant if you dare to take a chance.
The biggest difference between SCSI and PATA configuration is throughput performance. A PATA RAID 5 is very likely to save your da
Re: (Score:2)
The PATA ones make me a little nervous, but SATA (especially the ones with command tag queueing and such) is generally just fine.
Capricorn is the system build spinoff from the Internet Archive; a friend of mine who works there was doing some of the QA work on these units. H
Re:WOW! But is it ready for the enterprise? (Score:5, Informative)
The PetaBox TB120 says 120TB of space on 40 nodes. That's 3TB a node, and given 4 drives per node, that's 750GB drives.
So basically the RAID selection is left as an exercise for the reader; they're just marketing raw disk space with a very low power consumption.
Re: (Score:3, Interesting)
They don't use RAID at all. They use RAIC (which is an acronym I just made up for a Redundant Array of Inexpensive Computers). Each individual node is a file server. Each file is distributed over a number of file servers. When a machine fails, they just swap in a new machine. It then grabs a load of files that aren't mirrored as much as th
Not enough capacity (Score:2)
Uh...I'm not so sure. It can't be expanded (and only 4 slots is pretty cheesy), 750GB drives are commanding an insane price premium (300GB drives are under $100, and 750GB drives are about twice as expensive per GB at $400), and fewer drives = slower...why one would "look long and desiring" at this is beyond me when you can get 2+3U solutions with equal density, but far
Re: (Score:2)
Are they worth the price premium for reduced heat / power requirements? Maybe... especially if it lets you pack double the density into an existing enclosure without spilling over to another enclosure. (Enclosures p
Re: (Score:3, Interesting)
While this used to be true, modern drives are the same between IDE/SATA/SCSI except for the control board the drive is strapped to. The reason SCSI is still preferred over IDE/SATA in most cases stems from this old belief: most devices for enterprise-level storage are still built mainly around it, and SCSI still offers more devices per controller (14 per cable, rather than 2 o
Re: (Score:2)
I thought the difference between SCSI/Fibre and everything else was in the quality control of the platters.
Random IDE drives are pulled from each batch for in-depth testing, while every SCSI drive receives in-depth testing.
That's why SCSI drives can claim a higher MTBF, since all the drives are measured up to a certain standard. This is also why SCSI drives are more expensive, since each drive has to go th
Re: (Score:2, Interesting)
Yeah, kinda. We've got a tray of PATA in our EMC Clariion. Don't ask it to perform with multi-threaded I/O, and it's certainly slower than the FC stuff, but it works okay for test and backups. Can't say we've seen a higher failure rate on the disks than we have with the FC trays. I hear that the SATA stuff is much better about ha
Re: (Score:3, Informative)
As for your question, enterprise IDE can only be realistically used for back-up or archiving purposes
Re: (Score:2)
As always, if you treat the disks properly, they're going to be almost as reliable as SCSI/FC. Just don't expect the same performance (you'll need roughly 2x the number of spindles... maybe more). But you get a lot of capacity for 1/3 to 1/4 the cost of SCSI.
Make sure the drives have (a) quality power feeds and (b) proper
This is not a storage array. (Score:5, Insightful)
Re: (Score:2)
Actually, it is, or can be, one big disk or "just" a JBOD. Yes, each box is just a regular server running Linux, but the setup can go like this: internal RAID 5 or so within each box, then concatenate the drives with RAID 0 over each of the RAID 5s.
It's up to you. For at least one of the previous Slashdot stories about this, take a look here: http://linux.slashdot.org/article.pl?sid=05/06/22/0418253&from=rss [slashdot.org] B
Re: (Score:2)
Too expensive for large install (Score:3)
It's not RAID, but it is a ton of storage space. Even if you back out one drive for parity and one for a spare in each enclosure, the cost per GB only goes up to about $0.72.
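A figure like that is easy to sanity-check; a sketch assuming a 12 x 750 GB enclosure along the lines of the parts list posted upthread, whose listed prices sum to roughly $5,569 (that total, and the assumption that this is the build being referenced, are mine):

```python
# Cost per GB of a homebrew 12 x 750 GB enclosure, before and after
# backing out one parity drive and one spare. The ~$5,569 build total
# is the sum of the parts list quoted upthread (an assumption).
BUILD_COST = 5569
DRIVES = 12
DRIVE_GB = 750

raw = BUILD_COST / (DRIVES * DRIVE_GB)            # ~$0.62/GB raw
usable = BUILD_COST / ((DRIVES - 2) * DRIVE_GB)   # ~$0.74/GB usable

print(round(raw, 2), round(usable, 2))  # 0.62 0.74
```

That lands in the same ballpark as the ~$0.72/GB claimed above.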
Re: (Score:2)
Re: (Score:2)
Really, person-hours wise the cheapest way to do it is likely one big farking system with fibre channel or the like.
can be nice... or a nightmare (Score:2)
now i'm moving away from this to a more SAN-like system (AoE for now, maybe iSCSI next year). i could reuse the hardware, but unfortunately the mini-itx mb had only PATA and 100BaseT. if there's a little mb with 4 SATA and GbEt
Re: (Score:2)
Watts Life? (Score:2)
The average American home consumes about 5 kW, for about 2 people, unless it's storing their life-experience data in 67 racks, 240 kW, 48x their electric bill. They might only need half the power if they add storage as they add experience at 54 TB/year. Maybe if we start now, we could get the power demands closer to human biological power consumption of about 0.12W by the time a new person i
Re: (Score:2)
If I didn't know better, I'd say these "human" storage devices were designed and built by the content industry for the specific purpose of being endless consumers of media.
Re: (Score:2)
Re: (Score:2)
if the goal is low power (Score:3, Insightful)
If all I wanted was 4 drives why would I care? Why would I want a 1U rack? Why wouldn't I just stick them in my PC?
Drives Matching Motherboard? (Score:3, Interesting)
Astounding? (Score:2)
This is astounding why? My PC-based fileserver typically idles at 60W and has two hard disks. With one disk powersaved, this drops to 45W, so if I took it up to 4 disks, I'd estimate it would only use 90W. OK, so its CPU doesn't quite run at 1GHz, but I bet it has more RAM to cache stuff with than these devices do.
And it runs Linux.
Homebrew File Server Solution? (Score:2)
JOhn
Re: (Score:2)
Performance? (Score:3)
Here's the Motherboard Info:
Motherboard/Processor:
* 1GHz VIA C3 CPU
* VIA CLE266 Northbridge
* VIA VT8237 Southbridge
* DDR266 RAM - Up to 1GB
* 2 USB 2.0 ports
* 1 Serial port
* 1 Parallel port
* 1 VGA port
* PS2 mouse & keyboard ports
Anybody have performance numbers for these units? A 1 GHz CPU can be hard-pressed to run an OS, serve disk, and support a GigE connection at full throughput. I'd be wary of looking at these for a data center without knowing how fast they can serve out the disk over a single GigE connection. In fact, I see a distinct lack of information about how this unit functions as a "storage node". Are you buying a 1U, 3.0TB node on which you need to install an OS and fileserver? It doesn't look like it would have the horsepower to run an iSCSI driver in addition to software RAID drivers and still produce any real transfer speeds.
While a rack of these sounds nice in cost/wattage terms, it appears that you would have just purchased a cluster of storage nodes, with no way to present the available 120 TB as any kind of coherent storage space. You might be able to run Lustre, PVFS or GFS on them, if that's even possible, but that's a level of complexity the price and performance don't warrant.
If you figure in the cost of a storage engineer and the lack of performance, this looks less appealing at the full-rack level. That doesn't mean some PHBs won't buy into the whole "Cheap Cluster Disk!" theme, though. I pity the sysadmins who get 120 TB of raw disk and 40 more nodes to admin.
*rolls eyes* (Score:2)
A comparison (Score:3, Informative)
Capricorn (750GB drives): 3TB per 1U
Sun Fire X4500 (500GB drives): 24TB per 4U
Capricorn TB per 42U rack: 126TB
Sun Fire X4500 TB per 42U rack: 240TB
Capricorn watts per rack (80W/unit): 3360W
Sun Fire X4500 watts per rack (1500W/unit): 15000W
Capricorn watts per PB: 26667W
Sun Fire X4500 watts per PB: 62500W
Capricorn cost per rack: ~$200,000
Sun Fire X4500 cost per rack: $470,995
Capricorn cost per PB: ~$1,560,000
Sun Fire X4500 cost per PB: ~$1,960,000
So yes, Capricorn's solution provides lower power usage, but also lower density (and less processing power and redundancy, I'd imagine). So it's a tradeoff: lower the power bills, but raise the rent bill and the risk.
It should be noted that for Sun's server, I'm using the 1500W rating of each of the redundant power supplies; the typical usage would actually be much less (just like how a PC with a 500W PSU might only use 300W under load). This also ignores processor power, as each Sun unit is a quad-Opteron. It also ignores RAID, as the Capricorn could do no more than a 3-drive RAID 5, while each Sun box could have a 48-drive RAIDZ or RAIDZ2, wasting a lot less for parity. And things might change if Sun put 750GB drives in their unit instead of 500GB drives. It's all about tradeoffs.
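The per-PB power figures in the comparison follow directly from the per-rack numbers; a quick check:

```python
# Reproduce the watts-per-PB figures from the per-rack numbers in
# the comparison: 126 TB / 3360 W for Capricorn, 240 TB / 15000 W
# for the Sun Fire X4500.
def watts_per_pb(rack_watts, rack_tb):
    """Scale a rack's power draw up to a full petabyte of capacity."""
    return rack_watts * (1000 / rack_tb)

capricorn = watts_per_pb(3360, 126)   # ~26,667 W
sun = watts_per_pb(15000, 240)        # 62,500 W

print(round(capricorn), round(sun))  # 26667 62500
```

Same caveat as above: both are nameplate-style numbers, so the Sun figure in particular overstates typical draw.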
Re: (Score:2)
Re: (Score:2)
As for hot-swap PATA - I have a hot-swap PATA system - works perfectly. I've had drive failures and been able to throw new drives in, access the admin console, and start the rebuild without powering down.
Re: (Score:2)