
Building a 10 TB Array For Around $1,000

As storage hardware costs continue to plummet, the folks over at Tom's Hardware have decided to throw together their version of the "Über RAID Array." While the array still doesn't stack up against SSDs for access time, a large array is capable of higher throughput via striping. Unfortunately, the amount of work required to assemble a setup like this seems to make it too much trouble for anything but a fun experiment. "Most people probably don't want to install more than a few hard drives into their PC, as it requires a massive case with sufficient ventilation as well as a solid power supply. We don't consider this project to be something enthusiasts should necessarily reproduce. Instead, we set out to analyze what level of storage performance you'd get if you were to spend the same money as on an enthusiast processor, such as a $1,000 Core i7-975 Extreme. For the same cost, you could assemble twelve 1 TB Samsung Spinpoint F1 hard drives. Of course, you still need a suitable multi-port controller, which is why we selected Areca's ARC-1680iX-20."
This discussion has been archived. No new comments can be posted.

  • by eldavojohn ( 898314 ) * <eldavojohn@@@gmail...com> on Monday July 13, 2009 @02:05PM (#28680841) Journal
    One: The title is a borderline lie. Yes, you can buy 12x 1TB drives for about a grand. But if I'm going to build an array and benchmark it and constantly compare it to buying a Core i7-975 Extreme, the drives alone don't do me any good! (And I love how you continually reiterate with statements like "The Idea: Massive Hard Drive Storage Within a $1,000 Budget")

    Two: Said controller does not exist. They listed the controller as ARC-1680ix-20. Areca makes no such controller [areca.com.tw]. They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.

    Three: Said controller is going to easily run you another grand [newegg.com]. And I'm certain most controllers that accomplish what you're asking are pretty damned expensive and they will have a bigger impact than the drives on your results.

    Four: You don't compare this hardware setup with any other setup. Build the "Uber RAID Array" you claim. Uber compared to what, precisely? How does a cheap Adaptec compare [amazon.com]? Are you sure there's not a better controller for less money?

    All you showed was that we increase our throughput and reduce our access times with RAID 0 & 5 compared to a single drive. So? Isn't that what's supposed to happen? Oh, and you split it across seven pages like Tom's Hardware loves to do. And I can't click print to read the article uninterrupted anymore without logging in. And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links, god I love those with all my heart.

    So feel free to correct me but we are left with a marketing advertisement for an Areca product that doesn't even exist and a notice that storage just keeps getting cheaper. Did I miss anything?
    • by jo42 ( 227475 ) on Monday July 13, 2009 @02:18PM (#28681045) Homepage

      They need to keep 'publishing' something to justify revenue from their advertisers. Us schmucks in the IT trenches know better than to take the stuff they write without a bag of road salt. A storage array of that size is going to need at least two redundant power supplies and a real RAID card with battery backup and a proven track record -- unless you want a solid guarantee to lose that amount of data at some point in the near future.

      • by Kjella ( 173770 ) on Monday July 13, 2009 @02:55PM (#28681613) Homepage

        A storage array of that size is going to need at least two redundant power supplies and a real RAID card with battery backup and a proven track record -- unless you want a solid guarantee to lose that amount of data at some point in the near future.

        Depends on what you want it for. I've got a 7TB server w/12 disks using a single power supply and JBOD. I could use RAID1 if I wanted, but I prefer the manual double copies and knowing at once when a disk has failed; the last time I messed with RAID, I lost a RAID5 set because the warnings never reached me. It works like a charm with all disks running cool, it's stable as a rock, and it's much cheaper than this. I'm also very aware of the limitations: it's not a redundant setup in any sense. If I wanted 10TB of highly available, enterprise-grade storage, then all of the following would apply:

        a) I wouldn't use my cheap gaming case
        b) I wouldn't use my single non-redundant PSU
        c) I'd get a server mobo with surveillance
        d) I'd get a real RAID card with staged boot etc.
        e) I'd get hotswap drive bays
        f) I wouldn't be using consumer SATA drives

        This sounds like a halfway house that is neither really cheap nor really reliable. What good is that?
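
        (For reference: if you do run Linux software RAID and want those failure warnings to actually reach you, a minimal monitoring setup looks roughly like this. This is a sketch, not the poster's setup; the mail address is a placeholder and it assumes a working local mail system.)

        # /etc/mdadm/mdadm.conf -- where the md monitor sends alerts
        MAILADDR you@example.com

        # run the monitor as a daemon; it mails on Fail/DegradedArray events
        mdadm --monitor --scan --daemonise --delay=300

        # quick manual health check
        cat /proc/mdstat
        mdadm --detail /dev/md0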

        • Re: (Score:3, Interesting)

          by spire3661 ( 1038968 )
          I have to ask: are you really holding so much media data that you need to run 7TB in redundant RAID? How do you back up your 7 TB of data, since we all know RAID isn't even close to a backup? I guess my point is, why give up so much storage space for redundancy, for data that you probably don't need on hand at all times and can't effectively back up without significant cost increases? My rule of thumb is, for every GB of STORAGE that is live on the network, I have to have at minimum 2x that amount for backups (
    • by T Murphy ( 1054674 ) on Monday July 13, 2009 @02:32PM (#28681227) Journal

      Two: Said controller does not exist. They listed the controller as ARC-1680ix-20. Areca makes no such controller. They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.

      He glued the 8 and the 12 together. Duh.

    • by TheMMaster ( 527904 ) <hp@tPARISmm.cx minus city> on Monday July 13, 2009 @02:37PM (#28681313)

      I actually did something similar around a year ago: 12 x 750 GB of disk space, including disks, controllers, system and everything, for around 2000 dollars. It uses Linux softraid but I still get an easy 400 MegaBYTES/s from it. I have some pictures here:

      http://www.tmm.cx/~hp/new_server [www.tmm.cx]

      Tom's Hardware's idea is very late to the party ;)

      • by Hyppy ( 74366 )
        12 SATA drives that can "still get an easy" 3.2Gbits of pure data bandwidth saturation? Only if you're doing pure sequential reads on multiple SATA buses. Even then, you would be maxing the sequential IOPS of 9 drives to do that.
        • I really am seeing very high speeds when doing sequential reads, but it's not like I put some cheap SATA controllers in that thing! I know this is hardly proof, but here goes:


          $ sudo hdparm -t /dev/md1
          /dev/md1:
           Timing buffered disk reads: 1194 MB in 3.00 seconds = 397.86 MB/sec

          This is while the box is in use doing its normal business. Random reading of 31 GB of smaller files gives me a somewhat more humble 40-70 MB/s. Please note that all of these measurements were done while t

      • And how do you back it up?
        • by dbIII ( 701233 )

          And how do you back it up?

          As my co-worker who has never thought about disk failures or the entire box failing would say: "you don't have to, it's RAID" -- right at the point where you wonder what the consequences of punching him would be.

          As for me, it really depends on what the data is and how difficult it is to generate it from the original source. If the answer is difficult enough to lose money then you can fit a lot of data in a box full of LTO4 tapes locked in a building as far away as you can easily m

      • by Dalroth ( 85450 )

        I have that case. That is an awesome case.

    • by Divebus ( 860563 )

      8x Seagate 7200.11 1.5TB Drives @ $119/ea from Microcenter
      1x Highpoint RocketRAID 2322 w/ cables @ $329.97
      1x 8 Drive SATA enclosure @ $225.00
      Plug into a Mac Pro = 600MB/sec RAID 5
      Sweet.

    • Re: (Score:3, Insightful)

      by iamhassi ( 659463 )
      "Did I miss anything?"

      You forgot reason Five, which is stated in the article: "we decided to create the ultimate RAID array, one that should be able store all of your data for years to come while providing much faster performance than any individual drive could."

      If this is supposed to be storing data for years, why am I dropping $1,000 on it today? Why am I (or anyone) buying "the next several years" of storage all at once? Did I win a huge settlement from suing myself? [slashdot.org] Did I win the lottery? Did
      • by leenks ( 906881 )

        Maybe I already have 10TB of data that I want to store for years to come?

          • So you plan on buying 30 TB in total so you can properly back it up? Otherwise you aren't storing, you are waiting for disaster to strike.
    • You cannot violate this rule:

      "Pick any two: performance, cost, availability."

      That applies to *any* cost. At $100/TB, it's "pick any one". Your average user is just looking for a place to stash his pr0n, so optimizing for cost is perfectly fine.

    • This article is stupid mainly because it spends over $1000 (something like $1200) on the RAID card while spending another $1000 on 12 drives. That's a RAID card that supports 20 drives, not just 12, and mixed SAS and SATA drives instead of just the SATA it needs. Not to mention that the RAID itself can go in software under the Linux kernel instead of spending on hardware to do it. And that single card is a single failure point, making the 12x redundancy of the drives kinda irrelevant.

      Instead, four $25 4-port SATA cards are enoug

  • ...How is this news? (Score:3, Informative)

    by Darkness404 ( 1287218 ) on Monday July 13, 2009 @02:08PM (#28680871)
    How is this news? Yes, we all know traditional HDs are cheap. Yes, we know that you can buy more storage than you could possibly need. So how is this newsworthy? It really is no faster nor more reliable than SSDs. I think this is more or less a non-story.
    • by jedidiah ( 1196 )

      This is someone publishing their recipe for "a whole lot of disk".

      Reports on just how accessible this technology is are very newsworthy.

      Although this machine is not on the simple side of things.

    • by HTH NE1 ( 675604 )

      Yes, we know that you can buy more storage than you could possibly need.

      Reasonable Limits Aren't.

  • $1000 my ass (Score:3, Insightful)

    by Anonymous Coward on Monday July 13, 2009 @02:09PM (#28680905)

    That'll buy the disks. But nothing else. "Hey, look at my 10TB array. It's sitting there on the table in those cardboard boxes."

  • *gag* (Score:5, Informative)

    by scubamage ( 727538 ) on Monday July 13, 2009 @02:09PM (#28680909)
    Sorry, I saw Areca and I threw up in my mouth a little. Their controllers are terrible, and gave our company nothing but trouble in the short amount of time we used them in the past. Those that are still out in the field (sold to customers and have service contracts) are a constant nuisance.
    • Re:*gag* (Score:4, Informative)

      by Hyppy ( 74366 ) on Monday July 13, 2009 @02:38PM (#28681315)
      Indeed. If you want to be safe with a RAID controller nowadays, go 3ware or Adaptec. Expect to spend $500 for the cheapest model.
      • Perhaps to the kids, you are right.

        Adaptec, however, has been the way to go for the last 20 years if you want the safe route with relatively good performance for a reasonable price. Hate to sound like a fanboy, but unless I'm paying out the ass for racks of disks and controllers like something from EMC or the likes, Adaptec has always been the right choice.

        3ware to me is: a wannabe RAID controller that's not really worth the effort. I realize this has changed somewhat since they first started selling control

        • You should definitely take another look at 3ware then. I felt the same way about Adaptec, and to a point I still do: they are relatively safe but tend to lack any industry leadership. 3ware has some impressive software that comes with their controllers, meant to support everything from a single RAID deployment up to centrally managing many servers. You would probably have to fall back to something like Nagios or MOM once you reach a certain threshold though.

          While I've had no issues with Adaptec or 3ware beyond batteries f

        • All of my server deployments are on Linux, and after suffering for many years with awful Linux drivers from Adaptec I just gave up on them some time ago (around 2002 I banned them altogether from my systems). It looks like they may have recently released products that work OK with that operating system, judging from things like Smartmontools and Adaptec RAID controllers [adaptec.com] where controllers that have basic SMART support are mentioned as finally available. From the perspective of a Linux admin, I w

      • Really, from my research, Adaptec and 3ware both make ok-but-not-really-enterprise cards. For the money you'd pay to actually get a controller-based chip from one of those brands, you might as well spend a little more on an LSI. The Megaraids are pretty hot, the 8708 is a good card.

        I quite like the Dell PERC ones, too. I haven't seen many problems with them at all, and they are easy to manage / poll (for monitoring, etc).

        a0 PERC 5/i Integrated bios:MT28-9 fw:1.03.50-0461 encl:1 ldrv:1 rbld:30% batt:

    • by gfody ( 514448 )
      *sigh* at modding an anecdote "5, Informative"
      I work on a particularly IO demanding application and have found Areca controllers to be a godsend. We've had dozens in production servers for many years now and they have proven to be dependable. We rigorously tested many different controllers in their highest performing configurations and nothing came close to the battery backed ARC-1680 w/4GB. This included cards from LSI, 3ware and Adaptec with their respective maximum amounts of cache and battery backup un
      • Re: (Score:3, Informative)

        by scubamage ( 727538 )
        Not a shill, just someone who is tired of being on the phone with Areca tech support at 3am while I have radiologists screaming down my neck because they can't access their purdy little pictures. We were especially bad off with Areca SATA controllers. The storage devices that came with them had a few nasty habits. First, despite Areca claiming that they supported SATA 300/NCQ, they only supported SATA 150 without NCQ. Funny part is even though the NAS units came with them set to 300/NCQ when that caused issues
    • I have decidedly mixed feelings about Areca's controllers as well. The performance has been good, but the management situation has been awful. I wrote about some of my problems that popped up after the first time I lost a drive on my blog [blogspot.com]. If you get one of the cards that uses the network management port as the UI for doing things, supposedly that's better than what I went through, but that still makes for a painful monitoring stack. Compare to the 3ware cards I've been using recently, where it only too

  • Misleading headline (Score:5, Informative)

    by supercell ( 1148577 ) on Monday July 13, 2009 @02:10PM (#28680919)
    This headline is very misleading. Sure, you can buy 12x1TB drives for just under a grand, but you won't have anything to connect them to, as the controller itself is another $1100. Another eye-catching headline to get click-throughs; that's just wrong. Sad.
    • by gweihir ( 88907 ) on Monday July 13, 2009 @02:26PM (#28681151)

      but you won't have anything to connect them to, as the controller itself is another $1100.

      You don't need that. Get a board with enough SATA ports on PCI-E and add more ports via cheap PCI-E controllers. Then use Linux software RAID. I did this for several research data servers and it is quite enough to saturate GbE unless you have a lot of small accesses.
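
      (A minimal sketch of that approach, assuming Linux mdadm and twelve drives showing up as /dev/sdb through /dev/sdm; the device names, RAID level and mount point are placeholders, not the poster's actual configuration.)

      # one array across whatever mix of onboard and add-in controller ports the drives sit on
      mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]

      # filesystem, mount, and a look at the initial resync
      mkfs.ext4 /dev/md0
      mount /dev/md0 /srv/storage
      cat /proc/mdstat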

      • Re: (Score:3, Insightful)

        by relguj9 ( 1313593 )
        Exactly... you can even set it up to automatically identify which HD has failed (with like 2 or 3 drive parity), hot-swap out the hard drive (or add more) and have it rebuild the array without a reboot. This article is st00pid. Also, the guy who says you need an $1,100 controller is st00pid.
      • Re: (Score:3, Insightful)

        GbE is 1,000 megabits/s in theory. That's no more than 125 megabytes/s. With four Intel X25-E drives you'll hit 226 MB/s random read and 127 MB/s random write [anandtech.com] throughput.

        I'm fairly certain you can settle for the four on-board SATA ports for that. And those four drives combined will more or less eat a few thousand IO/s as hors d'oeuvres.

        • by AHuxley ( 892839 )
          Deep in the story (thanks to FF and AutoPager :) )
          you find a great one-liner:
          "still cannot reach the I/O performance and access time of a single Intel X25-E flash SSD (thousands of I/O operations per second)"
    • Another eye-catching headline to get click throughs, that's just wrong. Sad.

      Then we shall give them what they ask for and bring forth the slashpocalypse.

    • My $60 Asus motherboard I bought 3 years ago came with 10 SATA ports.

  • by guruevi ( 827432 ) on Monday July 13, 2009 @02:11PM (#28680939)

    What good are 12 hard drives without anything else? Absolutely nothing. An enclosure alone to correctly power and cool these drives costs at least $800, and that's only with (e)SATA connections. No SAS, no Fibre Channel, no failover, no cache or backup batteries, no controllers, no hardware that can connect your clients to it over e.g. NFS or SMB.

    Currently I can do professional storage at ~$1000/TB if you get 10TB; including backups, cooling and power, that would probably run you $1600/TB over the lifetime of the hard drives (5 years).

    • What about backup? What good is 10 TB of data with no backup? RAID5 protects you against hard drive failure but nothing else.

      • by Hyppy ( 74366 )
        Just a clarification: RAID5 protects you against a single hard drive failure, and only if there are absolutely zero read errors while it rebuilds.

        That being said, I completely agree with your point about backups. It doesn't take much to corrupt an array. Even on-network backups are horrible, in my opinion. Any data loss due to malicious activity will likely take out connected backup systems as well.
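
        (To put a number on that: consumer SATA drives of this era are typically specced at one unrecoverable read error per 10^14 bits. Rebuilding a 12-drive RAID 5 of 1 TB disks means reading the 11 surviving drives end to end, roughly 11 x 8 x 10^12 = 8.8 x 10^13 bits, so the expected number of unrecoverable read errors during the rebuild is on the order of 0.9. Taken at face value, that's close to even odds that the rebuild trips over one.)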
      • by ewilts ( 121990 )

        RAID 5 will not protect your data. The odds are extremely high that if you lose a drive in a 12TB array, you *will* get an error during rebuild. RAID 5 on an array this large is for those people who don't do storage for a living.

        RAID 0? Let me simply repeat what that 0 is for: the percentage of the data you will get back if anything goes wrong.

        Any time I see somebody build this kind of uber-cheap setup, it reminds me of a simple formula: good, fast, cheap. Pick any two. Yeah, you've built cheap, and mayb

      • by pla ( 258480 )
        Two words: "Build another".

        Not kidding... Around 5 years ago I started considering my desktop PCs disposable and my fileserver as the first thing I'd grab if I woke up in the middle of the night with the house on fire. I beat my head against the problem of how to back up almost a terabyte (five years ago, backing up a terabyte even to horrid tape would have taken several $100+ tapes and a $10k drive) for about two years before I finally came up with a simple, elegant, even obvious solution...

        I realiz
  • We do this now (Score:5, Interesting)

    by mcrbids ( 148650 ) on Monday July 13, 2009 @02:12PM (#28680955) Journal

    We needed a solution for backups. Performance is therefore not important, just reliability, storage space, and price.

    I reviewed a number of solutions with acronyms like JBOD, with prices that weren't cheap... I ended up going to the local PC shop and getting a fairly generic MOBO with 6 SATA plugs and a SATA daughter card (for another 4 ports), running CentOS 5. The price dropped from thousands of dollars to hundreds, and it took me a full workday to get set up.

    It's currently got 8 drives in it, cost a little over the thousand quoted in TFA, and is very conveniently obtained. It has a script that backs up everything nightly, and we have some external USB HDDs that we use for archival monthly backups.

    The drives are all redundant, backups are done automatically, and it works quite well for our needs. It's near zero administration after initial setup.

    • by Dan667 ( 564390 )
      I opted for a hardware RAID card and am using a 600 MHz machine, with no noticeable performance problems. The only problem I have had was needing to add a big fan to cool the terabyte drives. I have not had any other problems and have not rebooted the machine in 9 months (Ubuntu server distro even, but would probably go back to Debian). Works great and was also cheap.
    • by JDevers ( 83155 )

      Do you know what JBOD is? Just a Bunch Of Disks; in other words, exactly what you have set up.

    • by HTH NE1 ( 675604 )

      Drobo.

      Needs only a one-time configuration for maximum reported capacity (e.g. 16 TB), then it's a JBOD that configures itself. Hot-swap the smallest drive as bigger drives become available. I got mine used from someone who works at Pixar. There's a $50 rebate on the new 4-bay models w/FW800.

      • I've tried many consumer NAS solutions and NOTHING comes close to the flexibility of a PC with a bunch of disks. It's always something: I have a Linksys NAS 200 that the PS3 hates, I have an early HP NAS (before WHS) that the Xbox 360 hates, etc. and so on. With a premade consumer NAS, you really limit your flexibility and future-proofing. Just my anecdotal note.
  • by zaibazu ( 976612 ) on Monday July 13, 2009 @02:16PM (#28681015)
    Another thing with RAID arrays that have quite a few drives is that you have no method of correcting a flipped bit. You need at least RAID6 to correct these errors. With such vast amounts of data, a flipped bit isn't that unlikely.
    • Re: (Score:3, Informative)

      by gweihir ( 88907 )

      Another thing with RAID arrays that have quite a few drives is that you have no method of correcting a flipped bit. You need at least RAID6 to correct these errors. With such vast amounts of data, a flipped bit isn't that unlikely.

      If the bit flips a bit earlier, i.e. on the bus, RAID6 is not helping there either, and that is not the task of RAID in the first place.

      If you want to be sure your data is on disk correctly, do checksums or compares. They are really non-optional once you enter the TB range. Once the da

      • Isn't this the sort of thing that ZFS is for? Admittedly, that would add a lot of cost to the array, but it should provide substantial safety.

        At that point, you'd probably be paying somewhere in the neighborhood of 2-3k or a bit more depending upon specifics, but what good is 10TB of data if it's not properly set up? Of course, that doesn't include the cost of backing it up either, but hey.
        • by tbuskey ( 135499 )

          ZFS is for doing ECC. But it's not going to add to the cost at all.

          It *might* make it cheaper, as ZFS works better with JBOD than a RAID card. The JBOD allows ZFS to do ECC all the way to the disk.
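
          (A rough sketch of that JBOD approach, assuming a platform with ZFS such as OpenSolaris or FreeBSD; the pool name and device names are placeholders. raidz2 gives double parity, roughly comparable to RAID6, and the end-to-end checksums are what catch silent bit flips.)

          # hand the raw disks straight to ZFS as a double-parity pool
          zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

          # periodically verify every block against its checksum and repair from parity
          zpool scrub tank
          zpool status -v tank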

      • Do you sha hash your md5sums to make sure they are always correct too?

        • by gweihir ( 88907 )

          Do you sha hash your md5sums to make sure they are always correct too?

          No need. You can validate them with the original data.
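
          (In practice the "checksums or compares" approach upthread is just a manifest you build once and re-check later. A minimal sketch using GNU coreutils; the paths are placeholders.)

          # build a manifest of every file's hash
          find /srv/storage -type f -print0 | xargs -0 md5sum > /root/storage.md5

          # later: re-verify the data against the manifest, printing only mismatches
          md5sum --quiet -c /root/storage.md5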

  • And even cheaper (Score:3, Informative)

    by gweihir ( 88907 ) on Monday July 13, 2009 @02:24PM (#28681113)

    I did something some years ago with 200GB (and later 500GB) drives:

    10 drives in a Chieftec big tower. 6 drives go into the two internal drive cages, 4 go into a 4-in-3 mounting with a 120mm fan. Controller: 2 SATA ports on board and 2 x Promise 4-port SATA 300 TX4 controllers (a lot cheaper than Areca, with native kernel support). Put Linux software RAID 6 on the drives, sparing 1 GB or so per drive for an n-way RAID1 system partition. Done.
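
    (A sketch of that layout, assuming ten drives at /dev/sda through /dev/sdj, each partitioned with a small first partition for the system and the rest for data; the names and sizes are illustrative, not the exact original build.)

    # n-way RAID1 across the small partitions holds the OS
    mdadm --create /dev/md0 --level=1 --raid-devices=10 /dev/sd[a-j]1

    # RAID6 across the large partitions holds the data
    mdadm --create /dev/md1 --level=6 --raid-devices=10 /dev/sd[a-j]2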

  • by HockeyPuck ( 141947 ) on Monday July 13, 2009 @02:26PM (#28681141)

    Ok, so let's say you built one of these monsters, or you rolled your own with Linux and a bunch of drives... How would a home user back this up? They've got every picture/movie/mp3/resume/recipe etc. that they've ever owned on it.

    • Blu-ray? Those have a capacity of 50 GB.
    • An old LTO-3 drive from eBay. They have a native (uncompressed) capacity of about 400 GB, so you'd still need a stack of tapes (10 TB at 400 GB native is roughly 25) for all your data. This will also cost you over a grand, plus you'll need to buy an LVD external SCSI adapter.
    • Online/internet backup? Backup and restore times would be brutal.

    Anybody got any reasonable ideas?

    • by BiggestPOS ( 139071 ) * on Monday July 13, 2009 @02:31PM (#28681207) Homepage

      Build an identical one and keep it far enough away that you feel safe? Ideally at least a few blocks away; sync them over a short-haul wireless link (encrypted, of course!) and take the same precautions as you would with anything else.

      Oh yeah, don't do a flat file store; make it an SVN repository, of course.

      • Find somewhere you can host a duplicate hardware setup--maybe a friend's place, in exchange for hosting a copy of theirs at your home. Sync them regularly via rsync-over-ssh with --bwlimit so that nobody gets cranky about their web browsing working poorly. This'll protect you against hardware failure, though you might want to do something involving revision control, as noted, to guard against other problems.
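
        (Something like the following, run nightly from cron; the host, paths and bandwidth cap are placeholders. rsync's --bwlimit is in KB/s, so 2000 is roughly 2 MB/s.)

        # push the archive to the off-site box without hogging either side's uplink
        rsync -az --delete --bwlimit=2000 -e ssh /srv/storage/ backup@offsite.example.com:/srv/mirror/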

      • Re: (Score:2, Funny)

        by DeusExMach ( 1319255 )

        Do it like Granny did with her life-savings: Bury it in a mason jar in the backyard. Only with a cat-6 cable running into it.

      • A few blocks? Really? A good 'master' offsite backup will ideally be located out of state (or at least a few hundred miles away). Such a local backup fucks you in the event of natural disaster (widespread fire, flood, earthquake, etc.). And when you say short-haul wireless link, I hope you are referring to microwave.
    • Re: (Score:3, Insightful)

      Well, I suppose you could build two of them. I still wouldn't trust important data to that setup... but I don't know of any cheaper setup in the long run if you just want to make one copy of everything. What I was just thinking is, for a home user, how would you ever collect that much data worth saving... then I remembered that my shitty Verizon DSL is the problem (only real connection where I live). I suppose if I had a fast connection I could collect that much porn or something. Seriously though, it seems
    • by asc99c ( 938635 )

      I back up to more hard discs. I've been running RAID systems at home for a few years, and fairly recently replaced a 1.6TB array made up of 5 x 400GB discs with a single 1.5TB disc. Also, I've suffered the failure of 3 500GB discs which I RMAed for replacement, but due to the timescales, went out and bought new drives before the replacements arrived.

      So currently I have 3.5 TB of unused hard discs, which have become my backup discs. In primary storage, I've got the 1.5 TB disc and a 2.5 TB array, but I'm

    • Re: (Score:3, Interesting)

      by tbuskey ( 135499 )

      Or how would a photographer archive this? So that your kids could show your pictures to your grandkids. Like you were able to go through a shoebox full of negatives with good quality.

      1st, you'll want to partition your data. This I can lose (the TV shows you recorded on your DVR), that I want to keep forever (photos & movies of the kids, 1st house), these I want to protect in case of disaster (taxes, resumes, scans of bills, current work projects).

      Don't bother with the 1st case. Archive the forever t

  • Sigh... (Score:3, Insightful)

    by PhotoGuy ( 189467 ) on Monday July 13, 2009 @02:34PM (#28681257) Homepage

    From the .com bust, I have two leftover NetApp filers, with a dozen or so shelves, about 2 TB of storage. Each unit was about $250,000 new. A half million dollars worth of gear, sitting in my shed. It's not worth the cost of shipping to even give the units away any more. I guess it'll probably just go to the recycling depot. It seems a bit sad for such a cool piece of hardware.

    On the cheerier side, it is nice to enjoy the benefits of the new densities; I have two 1 TB external drives I bought for $100 each, mirrored for redundancy, that sit in the corner of my desk, silently, drawing next to no power. (Of course the NetApp would have better throughput in a major server environment, but for most practical purposes, a small RAID of modern 1 TB drives is just fine.)

    • Re: (Score:3, Informative)

      by Hyppy ( 74366 )
      I wouldn't be so quick to poo-poo those. A 10K or 15K RPM drive from a few years ago is not all that much slower than one today. 2TB of fast (multi-spindle SCSI/SAS/FC) storage is worth a lot more than just the number of bytes it can hold. Businesses still routinely spend thousands upon thousands of dollars to get even a few really fast terabytes. Arrays full of 15K RPM 146GB drives are still being sold in quantity.
  • by btempleton ( 149110 ) on Monday July 13, 2009 @02:43PM (#28681393) Homepage

    Such a RAID is for an always-on server. Expect about 8 watts per drive after power supply inefficiencies; so 12 drives is around 100 watts, or roughly 870 kWh in a year.

    On California Tier 3 pricing at 31 cents/kWh, 12 drives cost about $270 of electricity per year, or around $800 over the 3-year lifetime of the drives.

    In other words, about the same price as the drives themselves. Do the 2TB drives draw more power than the 1TB? I have not looked. If they are similar, then 6x2TB plus 3 years of 50 watts is actually the same price as 12x1TB plus 3 years of 100 watts, but I don't think they are exactly the same power.

    My real point is that when doing the cost of a RAID like this, you do need to consider the electricity. Add 30% to the cost of the electricity for cooling if this is to have AC, at least in many areas. And add the cost of the electricity for the RAID controller, etc. These factors would also be considered in a comparison to an SSD, though of course 10TB of SSD is still too expensive.

  • I've been using FreeNAS 0.7 RC1 for a while. It works pretty well for a NAS, and does the job for my small business. However, I don't think it would be useful for a larger business that requires great performance and reliability.

  • What for? (Score:3, Interesting)

    by Seth Kriticos ( 1227934 ) on Monday July 13, 2009 @02:47PM (#28681463)
    I mean, who is the target audience for this article?

    People who just want massive amounts of data storage for private use just buy a few NAS units, plug them into gigabit Ethernet or a USB hub, and keep the more frequently needed data on their internal HDDs.

    On the other side, people who want fast, reliable and plentiful data storage buy something like an HP ProLiant, IBM or similar rack server with redundant PSUs, a RAID controller with battery packs, and SAS HDDs at 10-15K RPM (and possibly a tape drive).

    The latter setup costs more in the short run, but you spare yourself a lot of headaches (repair service, configuration, downtime, data loss) in the long run, as this hardware is designed for these kinds of tasks.

    So who is the article targeted at: wannabe computer leet folks? And why on earth is this article on the Slashdot front page?
  • 12 consumer-level SATA drives by Samsung. What'd be interesting is to see how long it takes before it fails with complete data loss due to drive failure. RAID 5 isn't going to save this turkey.
    • by slaker ( 53818 )

      In my experience I see considerably lower failure rates from Samsung hard disks than any other vendor; around .5% (half of one percent), compared to ~2% to 3% for Hitachi and Seagate units in the three year lifespan of the drives. My sample size is only about 2000 drives total in their current warranty period, but for as long as I've been tracking hard disk reliability over my sample of client systems (roughly 10 years), Samsung has consistently been better than other brands.

      My highest rates of failure in t

    • 12 consumer-level SATA drives by Samsung. What'd be interesting is to see how long it takes before it fails with complete data loss due to drive failure. RAID 5 isn't going to save this turkey.

      I think that applies to any one company. If you spread your RAID out among similar-sized disks from different manufacturers, you stand less of a chance of a bad batch of drives (Deathstars) or firmware (yeah you, Seagate) dying in a short period of time.

    • result in 11 GB net capacity in RAID 5 or 10 GB in RAID 6

    I'm pretty sure those are supposed to be TB and not GB.

  • more reliable if you're gonna make an array?
    Sure, it's got some fault tolerance, but RAID5 means only one drive can fail... and sorry, I don't think Samsung drives are known for their reliability, let alone Areca's controllers.
    Plus think of the downtime as it rebuilds the array when a drive does go out. That's definitely gonna throw a wrench in those average throughput numbers.

    I'd go for a 3Ware controller and enterprise class drives as they are meant to last longer.

  • Pictures of the setup would have been cool, but they didn't do that. This article is dry and useless, to say the least.
  • I've got half the uber setup they talked about and it works great for me. With 6 SATA ports on my mobo and another 2 on a PCI-E x1 card (I found a regular PCI card for $10 with two ports), I've got plenty of space with only an additional $30 on the card. I use mdadm in a RAID 5 with 6 x 1TB drives with one spare, one 300GB drive for the OS, and I had the rest of the parts lying around. You could assemble the setup I've got for $500 if you have any old system with a large enough case. Add a backplane

  • Some advice (Score:4, Interesting)

    by kenp2002 ( 545495 ) on Monday July 13, 2009 @03:09PM (#28681801) Homepage Journal

    For those who are concerned about backing up large amounts of data: please call your local data storage company. Yes, they do exist, but I'll skip naming names as I don't like to shill for free.

    Simply ask them about external storage devices you can use. They'll often lease you the equipment for a small fee in return for a yearly contract.

    For 3 years I simply had a $30 a month fee for a weekly backup to DLT tape (no limit on space, and I used a lot back then). They gave me a nice SCSI card and the tape drive with 10 tapes in a container that I could then drop off locally on my way to work. I did encrypted backups and had 2-month (8-week) rotations with a monthly full backup. With the lower-cost LTO drives that came out a few years later, the costs should be minimal. Can't wait till all this FiOS stuff is deployed; I'm hoping to start a data storage facility.

    If you have your own backup software and media, don't forget to check with your local bank for TEMPERATURE CONTROLLED SAFETY DEPOSIT BOXES. Yes, banks do have some locations with temperature-controlled storage. Some of those vaults can take up to 2,000 degrees for short periods of time without cooking the interior contents.

    Where I currently am, the NetOps is kind enough to provide me some shelf space in the server room for my external 1TB backup drive that I store my monthlies on. I have 3 externals, giving me 3 full monthly backups (sans the OS files, since I have original CDs/DVDs in the bank).

    For home-brewed off-site I suggest a parent or sibling with a basement, but elevated. I used a sister's unfinished basement, up in the floor joists inside an empty Coleman lunchbox (annual backups).

    Nowadays, with my friends also having sick amounts of disk space, we tend to just rsync our system backups to one another in a ring, A -> B -> C -> D -> A, with each node syncing its full backups to the next on separate days, during the day when we are not home.

    PSEUDO CODE
    ===========
    CHECK IF I AM "IT" IF SO
    SSH TO TARGET NODE
    CAT CURRENT TIME INTO STARTING.TXT
    RSYNC BACKUPS FOLDER TO TARGET
    CAT CURRENT TIME INTO FINISHED.TXT
    TELL TARGET "TAG, YOU'RE IT"

    BACKUPS\
        A_BACKUPS\
        B_BACKUPS\ ...

    Put each node's backup folders under a quota if needed to ensure no hoarding of space.

    To really crunch the space, you could try pulling off a delta save of A's backup, such that B's copy is the delta of A diffed against the subsequent nodes (might be important for full-disk backups, where a lot of the data is common between the systems).
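
    (A rough bash rendering of that pseudocode; host names, paths and the token file are placeholders, and it assumes passwordless SSH between the nodes.)

    #!/bin/bash
    # ring-backup.sh -- run from cron on every node; only the node currently
    # holding the token ("it") pushes its backups to the next node in the ring
    NEXT=nodeB.example.com        # next node in the A -> B -> C -> D -> A ring
    TOKEN=/var/backups/IT         # presence of this file means "I am it"

    [ -f "$TOKEN" ] || exit 0

    ssh "$NEXT" "date > /var/backups/STARTING.TXT"
    rsync -az /var/backups/mine/ "$NEXT":/var/backups/A_BACKUPS/
    ssh "$NEXT" "date > /var/backups/FINISHED.TXT"

    # pass the tag: the next node becomes "it", this one stops being "it"
    ssh "$NEXT" "touch /var/backups/IT" && rm -f "$TOKEN"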

  • by jriskin ( 132491 ) on Monday July 13, 2009 @03:13PM (#28681875) Homepage

    I've done this every 2-3 years, three times now for personal use and a couple of times for work. My first was 7x120 GB and used two 4-port ATA controllers and software RAID5. My second was 7x400 GB and used a Highpoint RocketRAID card. My third one is 8x750 GB and also uses a Highpoint card.

    Lessons learned:
    1. Non-RAID-type drives cause unpredictable and annoying performance issues as the RAID ages and fills with data.
      1a. The drives can potentially drop out of the RAID group (necessitating an automated rebuild) if they don't respond for too long.
      1b. A single drive with some bad sectors can drag performance down to a crawl.
    2. Software RAID is probably faster than hardware RAID for the money. A fast CPU is much cheaper than a very high performance RAID card; low-end cards like the Highpoint are likely slower for the money.
    3. Software RAID setup is usually more complicated.
    4. Compatibility issues with Highpoint cards and motherboards are no fun.
    5. For work purposes, use RAID-approved drives and 3ware cards or software.
    6. Old PCI will max out your performance: 33 MHz * 32 bit = 132 MB/sec, minus overhead, minus passing through it a couple of times == ~30 MB/sec performance.
    7. If you go with software RAID you'll need a fat power supply; if you choose a RAID card, most of them support staggered start-up and you won't really need much. Spin-up power is typically 1-2 amps per drive, but once they're running they don't take a lot of power.
    8. Really cheap cases that hold 8 drives are hard to find. Be careful to get enough mounting brackets, fans, and power Y-adapters online so you don't spend too much on them at your local Fry's.

    For my 4th personal RAID I will probably choose RAID6 and go back to software RAID, likely at least 9x1.5TB if I were to do it today. 1.5TB drives can be had for $100 on discount, so that's roughly $800 for ~10TB formatted as RAID5, or $900 for RAID6, plus case/CPU/etc...

    I'd love to hear others' feedback on similar personal-use ULTRA CHEAP RAID setups.

    • by vlm ( 69642 ) on Monday July 13, 2009 @04:49PM (#28683225)

      Lessons learned:

      9. Software raid is much easier to remotely admin online while using SSH and linux command line. Hardware raid often requires downtime and reboots.

      10. Your hardware RAID card manufacturer may go out of business, replacements may be unavailable, etc. Linux software raid is available until approximately the end of time, much lower risk.

      11. The more drives you have, the more you'll appreciate installing them all in drive caddy/shelf things. With internal drives you'll have to disconnect all the cables, haul the box out, unscrew it, open it, then unscrew all the drives, downtime measured in hours. With some spare drive caddies, you can hit the power, pull the old caddy, slide in the new caddy with the new drive, hit the power, downtime measured in seconds to minutes. Also I prefer installing new drives into caddies at my comfy workbench rather than crawling around the server case on the floor.

      • Re: (Score:3, Insightful)

        by TClevenger ( 252206 )

        9. Software raid is much easier to remotely admin online while using SSH and linux command line. Hardware raid often requires downtime and reboots.

        I would imagine it's also easier to move a software array from one system to another. If your specialty RAID card dies, at a minimum you'll have to find another card to replace it with, and at worst the configuration is stored in the controller instead of on the disks, making the RAID worthless.
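
        (With Linux software RAID that portability is fairly concrete: the array metadata lives in superblocks on the disks themselves, so on a replacement machine it is roughly the following; device names are placeholders.)

        # scan all disks for md superblocks and bring the array up on the new box
        mdadm --assemble --scan

        # or name the members explicitly if auto-detection misses them
        mdadm --assemble /dev/md0 /dev/sd[b-m]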

    • What about backups? Why bother putting an array in RAID 6 for a home environment when you have to back up the data anyway? If you DON'T necessarily care if the data goes pop (like, say, daily DVR files that you watch and erase), then why bother with (full) redundancy? I'm just curious as to the NEED for all of this, when it's REALLY hard to back it all up. RAID, for the most part, is about high availability, not data integrity/storage.
    • I'd love to hear others feedback on similar personal use ULTRA CHEAP RAID setups.

      For software, use OpenFiler [wikipedia.org].

  • Provided you have the controller to cope with it.

    But why is this even on /.? Who cares about that personal story?

    Or can I do an "article" tomorrow about the 127 $5 mice I connected to my PC, and how I got it to display 127 cursors and do choreographies with them on a projector?
    Actually I think this would be more interesting than TFA. ^^

  • From building two or three of these at home myself, my practical experience for someone wanting a monster file server for home, on the cheap, consists of these high/low points:

    1. The other poster(s) above are 100% correct about the RAID card. To get it all on one card you'll pay as much as 4-5 more HDDs, and that's on the low end for the card. Decent dedicated PCI-E RAID cards are still in the $300+ range for anything with 8 ports or more.

    2. Be careful about buying older RAID cards. I have 2 16-port and 2
