Home Network Data Storage Device

It happened again: a machine on my home network died, taking with it tons of data. It's mostly backed up, so no huge loss, but I finally think it's time to get some sort of network RAID disk: a unified place to safely store data, accessible to the numerous machines on my home LAN. So now I pose the question to Slashdot readers: what are your recommendations? I'm looking for something with RAID and SMB sharing, at least a quarter TB, probably a half, but with some room to grow. What have you used? What works? What fails?
This discussion has been archived. No new comments can be posted.
  • NAS with RAID (Score:5, Interesting)

    by sterno ( 16320 ) on Monday January 16, 2006 @05:22PM (#14485335) Homepage
    I've been looking online trying to find this sort of thing, and the only prefab system I've found with configurable RAID in a consumer NAS is the Buffalo TeraStation []. I've seen lots of NAS devices, but basically they are all just a single hard drive with a network connection.

    I have not used one of these and don't know if it's any good, but like I said, I haven't seen any other options for a prefab system. I've priced out what it would cost to roll my own system like this, and it ends up being only a tad more expensive to get a prefab device. Actually, I think the price dropped on the TeraStation, so I'm not sure that's true anymore.

    Also, if you get something like this, you should seriously consider upgrading to gigabit Ethernet if you haven't already. I have a network mounted share for most of my files and it works pretty well, but when I try to do things like synchronize my ipod against it, it totally crawls. Having a networked file server works better if it doesn't feel like your files are on a network.
  • Buy a computer (Score:4, Interesting)

    by Sloppy ( 14984 ) on Monday January 16, 2006 @05:27PM (#14485379) Homepage Journal
    You need a computer with a bunch of hard disks. Duh?

    The only non-obvious thing (i.e. a lot of people are telling you to do the wrong thing) is that you should use software RAID instead of hardware RAID. The cheapest CPU that you can buy will still be 99% idle.

    A less non-obvious thing (but some people still forget it) is that you want a well-cooled machine, because heat is what kills hard disks. Get a nice case; pretend you're building a machine that you wanna overclock like a 31337 h4xx0r, but then, of course, don't really overclock it.

    Oh yeah, and keep an eye on /proc/mdstat -- when your first disk dies, you want to know it happened, instead of finding out a year later when your second disk dies. (I use a lil' python script that displays the array status on a VFD using lcdproc. But there are lots of other ways to deal with it. Just make sure you deal with it somehow.)
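    To make that concrete, here's a minimal sketch of such a watcher (my own illustration, not the poster's lcdproc script; it assumes the usual /proc/mdstat status-line format, e.g. `[2/2] [UU]` for healthy and `[2/1] [_U]` for degraded):

```python
import re

def degraded_arrays(mdstat_text):
    """Return the names of md arrays whose member-status string
    (e.g. [UU], [U_]) shows a failed or missing disk."""
    bad = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        # status strings like [UU] or [_U] contain only U and _
        status = re.search(r"\[([U_]+)\]", line)
        if current and status and "_" in status.group(1):
            bad.append(current)
    return bad
```

    Run it against /proc/mdstat from cron and mail yourself whenever the result is non-empty, or feed the result to lcdproc as the poster does.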

  • ZFS (Score:1, Interesting)

    by Anonymous Coward on Monday January 16, 2006 @05:38PM (#14485481)
    While you're out slapping drives into beige boxes to stuff in a closet, think hard about checking out ZFS running on OpenSolaris.

    You're a smart guy who knows how to use Google to look up the fuzzy bits, but if someone dropped 250-500GB in my lap, ZFS would let me manage it in a much more dynamic way, and more simply, than most other solutions.

  • ReadyNas X6 (Score:2, Interesting)

    by phalanx ( 94532 ) on Monday January 16, 2006 @05:43PM (#14485530)
    ReadyNas X6 [] is very nice. It supports up to 4 SATA drives and can grow the RAID array if you want to start with only 2 drives. I would recommend it with the 400GB Western Digital SATA RAID drives.
  • Linksys NSLU2 (Score:5, Interesting)

    by LodCrappo ( 705968 ) on Monday January 16, 2006 @05:45PM (#14485554) Homepage
    This might not be perfect for the original poster's needs, but it works great for mine, which are somewhat similar. Basically, the Linksys NSLU2 is a little box with an Ethernet port and two USB 2.0 ports. It runs a variety of Linux distributions; mine runs Debian. You can learn about the open source side of the device here: []

    You can hook up several hard drives (or other USB toys) via a USB hub. Performance is not great, but totally fine for storing music and movies if you only have a few users on your network. It supports Samba, FTP, NFS, HTTP, and probably any other way you'd like to access the files. You could do software RAID or some other type of mirroring/backup if you'd like.

    The main reasons I really like this thing for an at home server:

    • Silent operation, no fans in the nslu2 and you can get fanless enclosures for the HDs
    • Takes very little space away from your home office
    • Very small power draw
    • Easy to add/remove drives without any reboots
    • Can power off drives that aren't used frequently, then turn them on when needed

    I was amazed at how quiet my office became after replacing my PC file server with this guy and PC firewall with a wrt54g. I could actually hear the gf talking again, which is the only downside so far.

  • Blah (Score:3, Interesting)

    by slaker ( 53818 ) on Monday January 16, 2006 @05:46PM (#14485559)
    1. Buy a large tower case. Or use an old one. Whatever. Make sure there are lots of 3.5" drive bays.
    2. Put in some kind of crappy, low-heat motherboard and CPU. Use the Celeron 300A you bought back in 1998. Whatever. Pop in 128MB RAM or so.
    3. Buy a large, name brand PSU. Enermax, Seasonic, PCP&P, something like that.
    4. Put in some kind of crappy boot drive. The 10GB drive that probably went with the Celeron 300A will be fine. Load Linux or Windows Server. Whatever makes you happy (yes, Windows Server will run on 128MB, especially if it's not doing anything but serving files).
    5. Install a multiport IDE or SATA controller. Sil, Promise, Via, whatever. They're all OK. You want to be able to handle at least four drives. I prefer SATA at this point, 'cause I like big drives.
    6. Speaking of big drives, 250GB disks are dirt cheap. Buy four of those. I prefer Samsung and Hitachi drives. We're using spanned 250GB drives 'cause 500GB drives by themselves cost four times as much.
    7. Configure a nice spanned, mirrored volume (RAID 10 or the like). Two copies of a 500GB volume will be just fine. I prefer to use software RAID, in case I have to move the disks to another machine that doesn't have the same controller, but if you have a hardware option for RAID 10, more power to you. Remember that RAID mirroring doesn't protect you from your own stupidity, and cheapo PCI disk controllers never do RAID volume management.
    8. Or don't mirror, and just use the second volume as a backup destination for the first.
    9. Stick the resulting PC in a closet someplace. Administer with VNC or SWAT or RDP or whatever makes you happy.
    Total cost for this project is probably $500 or $600, almost all due to the hard disks.

    Alternatively, you could use an NSLU2 + a 500GB drive in a USB enclosure. That would also be a $500 setup, and there's no redundancy there.
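    A quick sanity check on the capacity math in steps 6 and 7 (my own helper, not from the post): with four 250GB disks, RAID 10 nets the 500GB volume mentioned above, while RAID 5 would net 750GB at the cost of weaker redundancy.

```python
def usable_gb(disk_gb, n, level):
    """Approximate usable capacity of n identical disks at a given
    RAID level (ignores filesystem and metadata overhead)."""
    if level == 0:    # striping: all capacity, no redundancy
        return disk_gb * n
    if level == 1:    # mirroring: one disk's worth survives
        return disk_gb
    if level == 5:    # one disk's worth of parity
        return disk_gb * (n - 1)
    if level == 10:   # striped mirrors: half the raw capacity
        return disk_gb * n // 2
    raise ValueError("unsupported RAID level")

print(usable_gb(250, 4, 10))  # the 4x250GB build above -> 500
```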
  • MAID (Score:1, Interesting)

    by Anonymous Coward on Monday January 16, 2006 @05:49PM (#14485580)
    The problem for the home user, as much as for the data center, is MTBF and energy consumption. A constantly spinning disk will wear out quicker and consume plenty of electricity in a year. By the time you have 5 or 6 drives all spinning away in your RAID system, you have a hot, noisy, hungry thing. Not what most people want at home.

    Luckily, I was involved in a project last year building petabyte-scale storage. One of the things I learned about was MAID (Massive Array of Idle Disks): you switch on a disk only when you want to read from or write to it. Unfortunately, not many standard consumer drives support true full power-down, especially over USB 2.0 interfaces, which just piggyback onto a partial implementation of IDE. At the BBC they have an old-fashioned but elegant solution: they use 10-quid plastic caddies and pull the disks to a shelf in the basement.

    I decided to go one better for my home backup system and implement a real MAID array. It only has 3 disks so far, but the extra cost is a 12V relay per disk and a parallel-port controller in the host. Requests to LVM blocks in a certain range get intercepted and power up the disk, mount it, etc. You have to tweak some other stuff (including a few lines of source and recompiles) to get the timeouts all working properly, and you have to disable ext3's automatic fsck so it always powers up in a guaranteed time. The next step is to make it all boot and run from a flash device on the main controller, because at present I need at least one disk running. Anyway, it's a good idea to separate your bootable root filesystem from the actual data storage.
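    The interception step amounts to a lookup from a logical block to the one disk that has to be spun up first. A toy sketch (the block ranges and names here are made up for illustration; the real setup hooks into LVM, which this does not):

```python
# Made-up example layout: (first_block, last_block) per disk.
DISK_RANGES = [
    (0, 99_999_999),
    (100_000_000, 199_999_999),
    (200_000_000, 299_999_999),
]

def disk_for_block(block):
    """Return the index of the disk whose range covers `block`,
    i.e. the disk that must be powered up (via its relay) before
    the request can proceed."""
    for i, (lo, hi) in enumerate(DISK_RANGES):
        if lo <= block <= hi:
            return i
    raise ValueError("block outside the array")
```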
  • by Nutria ( 679911 ) on Monday January 16, 2006 @06:37PM (#14486071)
    If you're like me, you don't want to buy a bunch of identical disks at once for home use.

    Why not? A few 250GB drives in external firewire (which is my choice) or USB enclosures would give you excellent security by rotating them off-site and "in another room" on a monthly basis.
  • by dc2447 ( 920780 ) on Monday January 16, 2006 @06:54PM (#14486229)
    Total overkill. Here is how to get 250GB of RAID 10 storage free of charge: 1) get 125 Gmail [] accounts; 2) get the Google filesystem []; 3) job done.
  • by Anonymous Coward on Monday January 16, 2006 @07:03PM (#14486322)
    Haha, pretty slick, but what happens to your data when Google makes some subtle change that makes using a "gmail filesystem" no longer possible (or at least very hard)?
  • by First Person ( 51018 ) on Monday January 16, 2006 @07:04PM (#14486331)

    Once you understand that RAID is a reliability strategy and are prepared to have appropriate backup measures in place, RAID 5 becomes an attractive option for the home network. I've recently looked at several options.

    • LaCie Biggest Disk [] - Cheap but of questionable reliability. Since RAID systems should be reliable above all else, I would rule this out.
    • Buffalo TeraStation []: An interesting product but again reviews are pretty mixed.
    • FirewireDirect Vanguard V5 []: Solid offering from a company that focuses primarily on larger scale storage solutions.
    • NetApp []: A well regarded product primarily aimed at corporate users.

    In my case, a three-disk RAID 1 solution proved more appropriate than RAID 5. I value high reliability on the home system and wanted to use a rotating third disk as a backup in the event of catastrophic data loss (e.g. the house burns to the ground). FWIW, I also use a DAT drive for differential backups. For many users this may be overkill -- sacrificing three disks plus fixed hardware costs to greatly reduce potential data losses -- but for priceless coding projects and digital pictures, this might be good for you as well.

    For some users working with video or with large audio collections, much larger disk systems may be desired. First make sure that you have an appropriate mechanism for backing up a terabyte or three. Then the Vanguard V5 may be an excellent solution, if the $2-3k price is acceptable.

  • by Anonymous Coward on Monday January 16, 2006 @07:12PM (#14486395)
    Maxtor sells external RAID1 units now that come with Firewire 800, 400, and USB2 interfaces. They are a bit more money than putting something together yourself, but they are very simple. Get an Adaptec (or other) FW800 card with a couple ports and you could add two 500GB RAID1 Maxtor units to your system. If you ever need to take your data with you -- emergency or not -- just unplug and go. Everyone has USB2, if not FW400. Make sure your system and your external drives are plugged into a good UPS.

    Messing with software RAID (which most posters have recommended) is simply not worth it. Too many bugs, too many issues. It works for some, but becomes a nightmare quickly for others. If you go for RAID inside your PC, get a 3ware card, say the 9550SX-4LPK for $325 or so, and then add two Western Digital 400GB RAID Edition drives and run RAID 1. This gives you hardware RAID 1 vs. cheaper software RAID 1. You make the call depending on how much your data is worth.

    Of course, back up your data in two places. I had a fancy RAID server fall off of a moving truck. Not anticipated, and it cost me 2TB of lost data. This is one reason that I suggest the smaller Maxtor external RAID 1 units. They are not as fast or as fancy as a RAID server or PC RAID, but they can be easily replicated and put into big cases with lots of foam padding.

    Anyway you go, you are moving forward. Good luck.
  • by jbn-o ( 555068 ) <> on Monday January 16, 2006 @07:30PM (#14486540) Homepage

    Why choose a card (and the requisite set of drivers and/or other software) instead of a box that manages the RAID for you and presents a single drive to the host (like Raidweb [] boxes)? I don't work for Raidweb, but I know some of their customers and the people I know are satisfied with the devices.

    If a home media jukebox drive fails, who will be at home to replace a drive with a cold spare? Do people normally build their card-based systems with fallback power supplies and a hot spare?

  • Been working on that (Score:5, Interesting)

    by Qzukk ( 229616 ) on Monday January 16, 2006 @07:38PM (#14486601) Journal
    I've been putting together the specs for such a beast. I decided to go with SATA for cheap drives and "SATA-II" (or whatever you want to call it, since there isn't a standard name for NCQ and 3.0Gbps support) for future-proofing.

    1) The natural first choice was 3ware: the 12-port SATA-II controller [] (9550SX-12), for about $800. 3ware products are very well supported on Linux. The only downside is that it's a PCI-X device (this is NOT "PCI Express"!), and PCI-X buses are generally only found on very high-end motherboards for servers and workstations. Any Athlon motherboard or single-processor Opteron board claiming to have PCI-X is lying; they really mean PCI Express. (AMD chipsets did not support PCI-X at all until around the time dual-Opteron motherboards were being created.)

    So, since I didn't want to spend $500 on a motherboard with built-in SCSI RAID, support for 16GB of RAM, and dual Opteron processors just to use that $800 card, I looked around some more...

    2) And found a serious contender: the 12-port Areca 8x PCIe ARC-1230 [] (also about $800). While most low-end motherboards don't provide an 8x PCI Express slot, they DO provide a 16x slot, which will work just fine for this card (after all, this will be the fileserver, so a motherboard with crappy built-in video will do; we're not playing Doom 3 here). Linux drivers are provided as source, even including a kernel tree patch that builds the driver into the kernel rather than as a module, making booting directly from the RAID controller easy.

    Slap the Areca into Tom's Hardware's 37-watt computer [] (the motherboard has built-in GigE, but Pentium Ms are 32-bit processors, making giant files/filesystems a pain; an Athlon 64 plus a cheap mini-ATX board can be had cheaper, but uses more power), add in a stack of 10-watt 400GB WD Caviar RAID Edition 2 [] drives, and you're set for a very low-power fileserver with a lot of storage.

    Now, my turn to "ask slashdot":

    Where do I get a 250-300 watt power supply with 12 SATA power connectors?

    Alternatively, do the SATA drive cages (like 3ware's RDC-400-SATA [] (PDF)) have their own SATA power connectors built in, and use standard Molex connectors on the outside? Do I need special cages to support 3Gbps drives (OK, not a serious problem for now, but future-proofing)? 3ware's website says it'll work; their product PDF doesn't.
  • by multipartmixed ( 163409 ) on Monday January 16, 2006 @07:53PM (#14486716) Homepage
    ..I'm getting an old Compaq rackmount server with a boatload of disk for nothing. Apparently, it's no longer useful because it isn't a multi-gigahertz platform. LOL!

    My plan is to slap Solaris 10/x86 on it, fire up SVM, and do a RAID 10 disk set with two hot spares. Hopefully, that will last me long enough that Sun T3s will come into affordability for home users.

    Why SVM? Well, simple -- I use it all the time at work, and it will require minimal effort to make it work. Assuming, of course, that SVM on Solaris 10 x86 works the same as it does on Solaris 9/SPARC. The last time I ran Solaris x86 (version 7), I don't think it had the option to run DiskSuite (now called SVM).
  • by jotaeleemeese ( 303437 ) on Monday January 16, 2006 @08:21PM (#14486920) Homepage Journal
    You don't need 1000 CDs.
    You don't need 500 DVDs.
    You don't need hours and hours of shaking, badly focused home videos.
    You don't need 5000 bad pictures.

    (if you really do you know I am not referring to you).

    Nobody is going to watch all that crap, and unless you haven't got a life, you are included in that select group.

    Prune your digital trash.

    You will find that a moderate amount of disk space is more than enough to hold all your files.

    If you want to make a datacentre of your home, go ahead and enjoy, but the lamest excuse is to house all those GBytes of data that are never going to be seen again.

  • by tverbeek ( 457094 ) on Monday January 16, 2006 @08:55PM (#14487117) Homepage
    ...A unified place to safely store data ...

    In other words, "I want one really good basket to keep all of my eggs in." What... did they stop teaching problem-analysis in the CS dept after I graduated from Hope? {smile}

    Might I humbly suggest that you buy/build/salvage a pair of inexpensive computers, each with a fair amount of RAM, a hard drive (or RAID 0) of your desired capacity*, and the fastest NIC your switch can handle. (Forget the fancy RAID controllers, and of course anything better than a PCI VGA card is wasted.) Install the open-source OS of your choice on both. Turn on Samba on one of them: that one's your file server. On the other one, set up a nightly cron job to sync (without deletion) the shared directory on the first machine to its local copy of that data: that's your redundancy.

    This solution effectively protects you from fried electronics, accidental deletions, and even small fires if the boxes are in different parts of the house (and if the whole house is going up, you get your choice of which box to run back in and rescue), scenarios in which the really-good-basket approach will still scramble your eggs.

    *Consider getting different brands to reduce the likelihood of near-simultaneous failure. I've had multiple drives from the same lot start failing within months of each other, and you don't want to have a second drive failure while you're still browsing for a replacement for the first.
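    The nightly "sync without deletion" job is roughly what `rsync -a` (without `--delete`) from cron does, but the logic is simple enough to sketch directly (a hand-rolled illustration, not a replacement for rsync):

```python
import os
import shutil

def sync_no_delete(src, dst):
    """Copy every file under src that is missing from dst or newer
    than dst's copy; never delete anything in dst, so accidental
    deletions on the file server survive on the backup box."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves timestamps
```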

  • by Rysc ( 136391 ) * <> on Monday January 16, 2006 @09:40PM (#14487371) Homepage Journal
    I had a similar experience with the YellowMachine. It was advertised as 1TB RAID, but the fine print reads "on some models", and the ones I could actually find only gave 650GB in a RAID 5 configuration. No big deal. But it's slow. SLOW. It's got an ARM processor that runs at 100 BogoMIPS, and it has 64MB of RAM. Mounting is slow. ls is slow. I got it to store media files, but I find I can't play MP3s from it unless I tell my player to cache the whole song (and forget about crossfade). It came with telnetd running and no ssh, but fortunately it was based on Debian Woody, so fixes are easy. And boy, has it required a number of fixes.

    I'm thinking of doing something like you did--copying critical configuration info off of it and reusing its md in a faster x86 box.
  • by Andy Dodd ( 701 ) <atd7.cornell@edu> on Tuesday January 17, 2006 @12:55AM (#14488209) Homepage
    There are some great posts on this topic in a past Slashdot discussion (Taco should've done his Googling ffs, it was only 2-3 months ago that the discussion in question was on Ask /.)

    The discussion in question: 37226 []

    The basic idea:

    Split drives into small partitions, say 20-25 GB each. Since most drives available now are a multiple of 50GB, I suggest going with 25GB or 50GB per partition. Make software RAID devices out of sets of these partitions, one on each drive. e.g. md5 = sda5 + sdb5 + sdc5 + sdd5. Take all of those smaller RAID drives, and then LVM them together.

    I just set up such a system on my dad's fileserver back at home, and will be doing the same with a machine I'm building within the next week or two. So far my opinion is that this approach ROCKS.

    There are more details on neat tricks you can do with such a RAID + LVM setup in the discussion I posted the link to. Among other things, if you have a 150GB drive and three 250GB drives, you can have four-drive RAID for the first 150GB of the drive set, then 3-drive RAID with the remaining 100GB.
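    The chunking rule generalizes: partition every drive into equal-sized chunks, and chunk i can be RAIDed across every drive big enough to hold (i + 1) chunks. A small planner (my own sketch, not from the linked discussion) reproduces the 150GB-plus-three-250GB example:

```python
def plan_md_sets(drive_gb, chunk_gb):
    """For equal-sized chunk partitions across unequal drives, return
    the member count of each md set: chunk i can include every drive
    with at least (i + 1) * chunk_gb of capacity."""
    n_chunks = max(drive_gb) // chunk_gb
    return [
        sum(1 for d in drive_gb if d >= (i + 1) * chunk_gb)
        for i in range(n_chunks)
    ]

# One 150GB drive + three 250GB drives, 50GB chunks: the first three
# chunks (0-150GB) get 4-way RAID, the last two (150-250GB) get 3-way.
print(plan_md_sets([150, 250, 250, 250], 50))  # -> [4, 4, 4, 3, 3]
```

    Each resulting md device then becomes an LVM physical volume, and LVM glues them into one big volume group.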
