3.5 Terabyte NAS Reviewed

Steve Kerrison writes "Thecus' new N5200 NAS can hold five SATA drives, which with currently available drives means up to 3.5TB (or 2.75TB in RAID-5) of storage before formatting. From the review: '£600. That's roughly what this will set you back, minus hard drives. Add in five 750GB drives and you'll be forking out a number closer to two thousand. However, act a bit more modestly and you can still have a terabyte (even in RAID-5) for under a grand.'"
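
A quick sketch of where the headline figures come from, assuming the capacities are simply the drives' decimal gigabytes restated in binary terabytes (the drive size and count are taken from the summary; the unit conversion is the only assumption here):

```python
# RAID-5 usable capacity for a 5-bay box populated with 750GB (decimal) drives.
DRIVES = 5
DRIVE_GB = 750  # decimal gigabytes, as the drives are sold

def tib(decimal_gb: float) -> float:
    """Convert decimal gigabytes to binary terabytes (TiB)."""
    return decimal_gb * 1e9 / 2**40

raw_gb = DRIVES * DRIVE_GB            # 3750 GB across all spindles
raid5_gb = (DRIVES - 1) * DRIVE_GB    # RAID-5 gives up one drive's worth to parity

print(f"raw:    {tib(raw_gb):.2f} TiB")    # ~3.41 TiB -> the '3.5TB' figure
print(f"RAID-5: {tib(raid5_gb):.2f} TiB")  # ~2.73 TiB -> the '2.75TB' figure
```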
  • Build one instead? (Score:3, Interesting)

    by JeffElkins ( 977243 ) on Sunday July 16, 2006 @09:38AM (#15727780)
    I need a good NAS to hold a video collection. I wonder, though, whether I'd be better off building one instead. A cheap headless Linux box with 5 bays would work, yes?
    • by unts ( 754160 ) on Sunday July 16, 2006 @09:47AM (#15727801) Journal
      Hi there,

      The reviewer, in person, here. Yes, you certainly could build a cheaper solution and whack Linux on it (the N5200 uses Linux too, incidentally). Of course, it depends on what features you long for, how much you like fiddling, and what sort of case you fancy building it into.

      Indeed, this thing isn't for everyone, but doesn't it look lovely?
      • I'd go for a cheap whitebox that I could cram five 3.5" drives into. No LCD or other fancy stuff, just SSH for access.
      • by Storklerk ( 529418 ) on Sunday July 16, 2006 @11:55AM (#15728190)
        Yeah, the N5200 does use Linux, but I did not find any clear mention of this on the Thecus website.

        I'm also missing any documentation on how to replace the firmware with your own Linux system.

        If you want the source of their Linux, look here:
        ftp://ftp.gpl-devices.org/pub/vendors/Thecus/ [gpl-devices.org]

        They tried to hide the Linux, but without success:
        http://gnumonks.org/~laforge/weblog/2006/02/24/ [gnumonks.org]

        So until they openly say they are using Linux and offer a way to upgrade the software on the system, I will NOT buy one of these.

        I did think about getting one of these. It has really nice features, and if I could put my own Linux system on one of the hard disks I could also use it as a DSL router and proxy (Squid).

        Does anyone know of a similar device with an upgradeable Linux?
      • by Karzz1 ( 306015 ) on Sunday July 16, 2006 @12:00PM (#15728208) Homepage
        I built a 2TB storage device with another 250GB for the OS a couple years ago as a backup solution for ~30 colo servers. I used a Tyan dual Xeon motherboard (there is a lot of compressing taking place on this machine), a 3Ware hardware RAID [3ware.com] card, and a Chenbro 3U [chenbro.com] rackmount case with 12 SATA hot-swap bays and a single internal bay. I put thirteen 250GB drives in it (2 x 250GB software-mirrored for the OS, 10 x 250GB in RAID-5 = 2TB of storage, and 1 hot spare).

        At the time the cost was ~$4,000, while commercial solutions were closer to ~$8,000. I used CentOS 3 as the OS (4 was still in beta) and had to use the unsupported centosplus kernel in order to use ReiserFS on the 2TB array -- ext3 didn't work for some reason that I don't recall. The 3Ware card showed up as a SCSI controller with the stock kernel modules.

        I assume someone could build a similar system for about the same cost with much more disk space now. Also, if cost is a factor, the hardware RAID card (~$800) could be dropped in favor of software RAID and a single-processor mobo could be used. I really** like the Chenbro case though, and for the extra cost it leaves a lot of room for expansion if you were to start with only 5 drives and wanted to expand later.
        • by zuzulo ( 136299 )
          Another thing to remember when building high-density storage appliances at the moment is that the MTBF for >=750GB drives that use the new perpendicular recording tech (bits magnetized perpendicular to the platter surface rather than along it, which allows higher densities) is actually higher than that for standard drives. That is, the new larger drives that use this tech are actually more reliable than smaller drives using the old tech. Seagate is the only drive manufacturer actually using perpendicular recording tech for retail drives, alth
          • I tend to treat MTBF as "interesting information" but not something I worry overly much about. The figures are probably useful in separating the consumer-level drives rated for 40hrs/week use from the ones that are capable of running 24x7.

            Instead, I make the assumption that the drive will fail and at the worst possible time. Which means RAID + hot-spare + rotating/generational backups for anything important.

            More important to me is the warranty period on the drives. Three years is nice, but five years is nicer. W
        • Very nicely done. But if I were you I would have done it slightly differently. I would not have used the internal disk bay (faster interventions, no need to open the case when replacing a disk). I would have only put 12 disks in the hot-swap bays (11 for RAID, and 1 hot spare). I would have used software RAID (hw and sw RAID are both capable of saturating a GigE pipe -- I suppose you were doing backups over the network). Software RAID would have allowed me to create a small 100 MB RAID1 partition for bootin

        • used a Tyan dual Xeon motherboard (there is a lot of compressing taking place on this machine)

          How is that germane? We don't care about your dick size. Get to the point.
        • I really** like [...]

          (Noting that there's no double-asterisk explanation at the bottom...) Combined with your sig:

          Beware of he who denies you access to information, for in his heart he dreams himself your master.
          Hmm...
          • Ok, I have had a couple drinks, but that made me laugh out loud. I am in the habit of using "**" for emphasis :) I got it from an older programmer that I IM with regularly. Could it be a Canadian thing....?
            • Could it be a Canadian thing....?

              It must be; I've never seen emphasis at the end of a word like that. Generally I see something like *emphatic* adjustment.

              Or, to use the markup that Slashdot allows, something like <b>bold</b> or <i>italics</i>...

              Glad to help the milk come out of your nose. ;-)

      • The reviewer, in person, here. Yes, you certainly could build a cheaper solution and whack Linux on it (the N5200 uses Linux too, incidentally). Of course, it depends on what features you long for, how much you like fiddling, and what sort of case you fancy building it into.

        I did not notice any evidence of a battery-backed cache in either your article or the product website. IMO, it is important to have a battery when using cached RAID-5 writes in order to avoid the write hole. Can you confirm whether or not the N

    • by Kjella ( 173770 ) on Sunday July 16, 2006 @09:55AM (#15727820) Homepage
      For your average video collection (unless you're a serious home video guy who needs real backup) I suggest skipping RAID altogether (except maybe RAID 1 on boot or whatever). Since any disk tends to fill to capacity, I've figured I'd rather have more storage and accept losing some of it than not have the space to store things at all. Plain Linux server: if the price sweet spot is enough (300-320GB at the moment), drop in 4-6 of those (depending on mobo chipset) and you have 1.2-1.8TB of storage. If that's not enough, start dropping in 750s (they're actually better value than most 500GB disks). I run a setup like that but with older disks (2x160+120+100 = 540).
      • Comment removed (Score:4, Informative)

        by account_deleted ( 4530225 ) on Sunday July 16, 2006 @10:22AM (#15727897)
        Comment removed based on user account deletion
      • For your average video collection (unless you're a serious home video guy who need real backup) I suggest skipping RAID altogether

        That's fine with a small collection, but as it gets larger you start to value it more because of the time it takes to rip and transcode all of those movies. The process is pretty automated, but ripping and transcoding three or four hundred DVDs takes a great deal of time. I also have about 300 movies on VHS that I plan to digitize, and that's going to take a lot of time (e

        • NAS without RAID5? (Score:5, Interesting)

          by poptones ( 653660 ) on Sunday July 16, 2006 @11:44AM (#15728152) Journal
          No way would I use a machine like that without a RAID5 setup. I've lost countless hours (and access to music I no longer have, since the CDs were lost in a move or just quit playing). Whatever you spend on discs, going from 4 to 5 only adds about 20% in cost, which even at $400 is pretty damn cheap compared to the work a TB or two of storage represents.

          Old machines with ATX-type motherboards and such are far too cheap to justify shelling out $700 or more for a "dedicated" type solution. Get an old machine with a P2B-F motherboard and a decent PII CPU, throw away the old power supply and put in a shiny new $70 or so power supply, plug in a controller card if you wanna use SATA drives, and off you go - essentially for the price of the drives you want to put in it.
        • Comment removed based on user account deletion
          • Won't it be better to just copy the ISO and then mount the ISO when needed?

            I do that with some movies, but not most, for two reasons. First, disk space. It's not yet cheap enough that I can afford to rip the whole ISO. At an average of about 7GB per DVD multiplied by ~400 DVDs, that's almost 3TB for my video collection. By ripping just the main title from each DVD and by transcoding it into a high-quality DivX file, I reduce the per-movie space to about 2.5GB. The other reason is that most of the time

        • I suspect the original poster was referring to the scenario where you can fit all of your content onto a single disk (500GB or 750GB). In that particular scenario, where you can afford a few hours of downtime, why not take the 2 disks and make one a backup of the other rather than a RAID-1 mirror drive?

          That way, if the primary disk dies, you simply put a new one in and restore from the backup drive. And if the primary disk gets corrupted, the backup shouldn't be affected. Under Linux you could even ke
    • by Randolpho ( 628485 ) on Sunday July 16, 2006 @10:04AM (#15727842) Homepage Journal
      I need a good NAS to hold a video collection.
      Don't let the MPAA hear you say that.

      Er.... see you write that.

      Er..Yeah.
    • by Anonymous Coward
      sure, you could get 5 and have a NAS like everyone else does. Or you could get 24 [newegg.com]
    • I wonder though if I'd be better off to build one instead.

      I've just done that. I put together a Sempron 2800 powered rig with 4 Western Digital WD5000YS SATA RAID drives for AU$2,300. I'm using ClarkConnect for the OS, and running the drives in a RAID 5 array, which gives about 1.5TB of usable space. The box runs headless, and is hidden away in a cupboard in my office.

      • by TCM ( 130219 )
        Why is everyone always using 4 drives or 8 drives with RAID5? Considering most writes consist of 2^n bytes, you always need 2^n+1 drives in order to not waste any speed, i.e. 3, 5 or 9 drives.

        I am using a software RAID5 and the difference between optimal and non-optimal is 71MB/s vs. 8MB/s writes! Hardware controllers could overcome some of this with their buffer memory, but I still think you should be using the optimal number of drives there.
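
        A minimal sketch of the stripe-alignment arithmetic described above, assuming a hypothetical 64KB per-disk chunk and 1MB sequential writes (neither number comes from the comment; they just make the effect visible):

        ```python
        # With n drives in RAID-5, a full stripe carries (n - 1) data chunks.
        # Power-of-two writes only land as whole stripes when (n - 1) is itself a
        # power of two, which is why 3, 5 or 9 drives line up and 4 or 8 do not.
        CHUNK_KB = 64    # assumed per-disk chunk (stripe unit)
        WRITE_KB = 1024  # assumed size of a large sequential write

        for drives in (3, 4, 5, 8, 9):
            stripe_kb = (drives - 1) * CHUNK_KB
            leftover = WRITE_KB % stripe_kb
            verdict = "full-stripe writes" if leftover == 0 else "partial writes (parity read-modify-write)"
            print(f"{drives} drives: stripe = {stripe_kb:3d}KB -> {verdict}")
        ```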
        • Why is everyone always using 4 drives or 8 drives with RAID5?

          Because speed isn't always the goal. I used 4 because that gave me the space and redundancy I needed, not to get a high transfer speed.

        • Do you have a link for more information, or can you explain this more fully? I have an 8x200GB software RAID5 that sustains just over 7.5MB/sec (four of those drives are sharing the two IDE channels; the other four are Serial ATA, so that might contribute to poor performance as well).

          I wish I had known this before I moved 1 TB of data to the array.
          • by TCM ( 130219 )
            I could only find this from the developer of NetBSD's software RAID implementation, called RAIDframe: http://mail-index.netbsd.org/current-users/2002/04/19/0011.html [netbsd.org]:

            The 'problem' with 4 disks is that you have (effectively) 3 data disks.
            Since most times you're doing a 'power-of-two' write (e.g. 16K or 32K),
            it's impossible to divide that power-of-two data by 3 and have a nice
            full-stripe write. That leaves you with doing partial writes all the
            time, and those are the ones that kill RAID 5 write performance.

            In

            • Another thing: It really helps if you have each disk on a dedicated channel. Never use 2 disks as master/slave on one IDE channel. I'd rather buy additional controller cards, even if it's just standard PCI.
        • Why is everyone always using 4 drives or 8 drives with RAID5?

          Most likely because it's exceptionally uncommon to find disk controllers with odd numbers of ports.

          Most everyone building big chunks o' disk values space over performance. Particularly when they're typically going to be accessing it using PCs with pitiful bus bandwidth and/or over <10G ethernet.

          I am using a software RAID5 and the difference between optimal and non-optimal is 71MB/s vs. 8MB/s writes!

          Your problem is(/was) elsewhere. I have

    • OS (Score:4, Interesting)

      by Kadin2048 ( 468275 ) <.ten.yxox. .ta. .nidak.todhsals.> on Sunday July 16, 2006 @10:20AM (#15727891) Homepage Journal
      It would seem to me that one of the strengths of the COTS solutions is that they have fairly slick integrated interfaces for managing access.

      If you roll your own, you might well have to set up Samba/CIFS/Netatalk all separately, which could easily become a huge pain. If you want a new share, you'd have to add it manually to all three, and deal with their varying authentication schemes.

      I did some Googling around for OSes specifically designed for roll-your-own NAS boxes (which it seems must exist), and came up with some stuff. One of the neatest projects looks like it has died, which is sad: Darma NAS OS [darma.com]. It seemed to be Linux-based and had a Java web-based management GUI, used the usual SMB/NFS/AppleShare, and supported ACLs and some other neat management stuff.

      I'm curious what people who've gone the DIY route are using to ease the management hassle that I could easily see a SAN becoming if its OS is just straight Linux.
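
      To make the "add it manually to all three" pain concrete, here is a toy sketch of the kind of glue script a DIY box tends to grow; the share names, paths and the idea of regenerating config fragments are all hypothetical, not anything the poster above is actually running:

      ```python
      # Emit matching share stanzas for Samba and NFS from one definition.
      # (Netatalk, users and permissions are left out; a real setup needs them too.)
      SHARES = {"media": "/srv/media", "backups": "/srv/backups"}

      def smb_stanza(name: str, path: str) -> str:
          return f"[{name}]\n  path = {path}\n  read only = no\n"

      def nfs_line(path: str) -> str:
          return f"{path} 192.168.1.0/24(rw,sync)"

      print("# append to smb.conf")
      for name, path in SHARES.items():
          print(smb_stanza(name, path))

      print("# append to /etc/exports")
      for path in SHARES.values():
          print(nfs_line(path))
      ```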
      • by jregel ( 39009 )
        Have a look at OpenFiler. It might be what you're looking for.
        • Very nice, and based on my favorite distro as well: CentOS [centos.org].

          However, for someone posting with such a low UID, I would have expected a link. [openfiler.com] Your /. user card is in serious jeopardy of being revoked :)

      • I'm curious what people who've gone the DIY route are using to ease the management hassle that I could easily see a SAN becoming if its OS is just straight Linux.

        Done the DIY route.

        1. I use LVM2 to manage the discs and a ReiserFS partition. No need to create new mount points per disc (no new "/data2" directory to add to every configuration); just add more storage space to the LVM pool and grow the partition (which can be done while the system is live with ReiserFS). More space will automatically be available in th
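
        For reference, the grow step described above boils down to two commands; a minimal sketch, assuming a hypothetical volume group vg0 with a logical volume named media (the tools are standard lvm2/reiserfsprogs, but the names and size are not from the comment):

        ```python
        import subprocess

        def grow_media_volume(extra: str = "+200G") -> None:
            # Enlarge the logical volume by the requested amount...
            subprocess.run(["lvextend", "-L", extra, "/dev/vg0/media"], check=True)
            # ...then grow the ReiserFS filesystem to fill the new space (online resize).
            subprocess.run(["resize_reiserfs", "/dev/vg0/media"], check=True)

        if __name__ == "__main__":
            grow_media_volume()
        ```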

    • Yep. FreeNAS is also a good solution, if you can do without a full OS. I personally am building around a FlexATX motherboard (800MHz VIA) that has two PCI slots. The motherboard draws about 30W, so I am going to try to get by with an 80W PicoPSU power supply. I haven't decided if I am going to go with a simple mirror or RAID 5 yet.
    • I was doing some server research and there was some 'free as in beer' NAS software. It made it real easy, on paper. There was also a real cheap way to build a box with used SCSI2 parts off of eBay. I think it might have even been posted here on Slashdot before.

      The only real cost was an empty drive box. I wish for the life of me that I could remember the names. Ahh well.
    • How about - rip/transcode whatever, then burn onto DVD - keeps the costs down nicely. Just how many films do you own to justify a 2TB storage solution? How often do you watch them? With the money saved you could even go and watch a film at the movies (while on holiday with the rest of the saved cash).
      • "How about - rip/transcode whatever.... Then burn onto DVD - keeps the costs down nicely. Just how many films do you own to justify a 2TB storage solution? How often do you watch them? With the money saved you could even go an watch a film at the movies (while on holiday with the rest of the saved cash)."

        Won't work for me. I have about 400 transcoded movies, ripped from my DVD collection. DVDs are inconvenient when compared to central storage, and theaters are a hassle when compared to a 60" widescreen LCD.
    • by Anonymous Coward
      I've got a 2TB server that cost around $2,000 six months ago (a significant fraction of the cost went to the motherboard needing PCI-E, which was only available on 64-bit motherboards at the time, which meant an expensive Athlon 64 too -- really horrible overkill).

      I highly recommend the 3ware 9500S-8 controller; it is very well supported on Linux (3/5 of the sections in the instruction manual were for installation on Red Hat, SUSE, and some other distribution), supports RAID-5, is SATA, support

    • For your stated purpose you might find that the upcoming boxes from Yellow Machine are a good fit. Up to 3TB, with built-in streaming and mostly automatic discovery and automation. Pricing seems to be in line with hardware costs, at what I, a genuine cheapskate, would consider reasonable.
      The new models, about which I am writing, are starting to hit the market this month, with the really interesting ones coming in the fall and so forth.
      This is not an endorsement, just information, as I have not yet te
  • by cynicalmoose ( 720691 ) <giles.robertson@westminster.org.uk> on Sunday July 16, 2006 @09:39AM (#15727781) Homepage
    For those of you who don't know how much a pound is worth:

    £600 = $1100
    £2000 = $3700

    (Yes, the pound is one of the heaviest currencies in the world - in that one GBP is worth more than one unit of other currencies)
    • by Anonymous Coward
      (Yes, the pound is one of the heaviest currencies in the world - in that one GBP is worth more than one unit of other currencies)


      Yes, but it's nothing compared to kilos.

    • And if you don't know how much a USD is worth (since it is declining faster than the trust in the US government), see pound in euros [google.com] or any other currency
    • Question: what common commodity has a value close to one pound sterling / pound weight?
    • by daBass ( 56811 ) on Sunday July 16, 2006 @10:09AM (#15727857)
      one GBP is worth more than one unit of other currencies
      There are some exceptions; among others, the currencies of Bahrain, Kuwait, Cyprus, Malta and Oman are all worth more per unit than the GBP.
    • I love the fact the pound is so strong at the moment (and the dollar so weak ;-) It's far cheaper for me now to buy stuff (mostly performance car parts) from the US, pay for shipping, and even get stung for VAT than to buy it here in the UK! Hopefully the dollar will drop a bit more, then I can buy more stuff ;-)
  • Hmmm... needs a lower price tag, but can you think of ANYWHERE better to put your pr0n? :p I want one.
  • Under a grand? (Score:1, Interesting)

    by Anonymous Coward
    I guess if you don't count shipping, you might be able to pull it off.
  • Only forty twenty-four-hour days of DVD-quality porn. Less space than a Nomad. Lame.
  • Build it myself (Score:2, Interesting)

    by hak_addictk ( 988347 )
    I think I would much rather build a NAS than pay this much for one. Also, I think it could be fun to build
  • Under A Grand? (Score:2, Interesting)

    by kneppercr ( 947840 )
    A thousand dollars (pounds actually, but it is too early to convert stuff) is a ridiculous price to pay for a terabyte of space. I just got an external 500 gig from Newegg. Price? 230 real dollars. Yeah, it's USB, but you know what? I paid about 50 cents for a gig. THAT'S a good deal.
    • Re:Under A Grand? (Score:2, Interesting)

      by eebra82 ( 907996 )
      A hot tip is to check out Google's own currency conversion. Simply type in something like "500 usd in gbp" (without the quotes) and you will get the result, looking like this:

      500 U.S. dollars = 271.783443 British pounds

      Works the same way for converting Celsius to Kelvin, metric units to other systems and so forth. Calculator included!
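
      A minimal sketch of the same conversion in code, assuming the exchange rate implied by the example above (roughly 0.5436 GBP per USD at the time; substitute whatever rate is current):

      ```python
      USD_TO_GBP = 271.783443 / 500  # rate implied by the Google example above

      def usd_to_gbp(usd: float) -> float:
          return usd * USD_TO_GBP

      def gbp_to_usd(gbp: float) -> float:
          return gbp / USD_TO_GBP

      print(f"500 USD = {usd_to_gbp(500):.2f} GBP")   # 271.78 GBP
      print(f"600 GBP = {gbp_to_usd(600):.2f} USD")   # ~1104 USD, the summary's GBP 600
      ```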
    • Re:Under A Grand? (Score:4, Insightful)

      by rtaylor ( 70602 ) on Sunday July 16, 2006 @10:34AM (#15727927) Homepage
      I just got an external 500 gig from newegg. Price? 230 real dollars.
      $230 * 4 (redundancy to prevent data loss) = ~$1000 for 1TB.

      If you don't mind losing your data then this product is not for you. We can also ignore the performance difference between 4 individual USB drives and a single network attached device.
    • Huh? I just got done pricing storage, and if you get a $300 8-slot hardware RAID card, plus 250s (or 300s) at around $90 apiece (you have to look to find them, but they're out there), you wind up a shave over a thousand and you get 1.75TB in a RAID 5. Forgo the RAID, and you can get 11 drives for under a grand, which is well over a TB.
  • by Bushcat ( 615449 ) on Sunday July 16, 2006 @10:06AM (#15727851)
    I have a couple of the Buffalo Terastation Pros (name depends on market). They seem to be a no-brainer at their price point if one doesn't get the largest-capacity model. Reason for two: one can do encrypted backups to the second, so my stuff is reasonably backed up and maybe secure. The things are almost silent in use, which is a way bigger factor than I ever thought it would be. Downside is the units don't support NFS out of the box, so they're just a tad too slow to stream video from. (Unless the problem is the Tvix5000U, a Korean product which is a great hardware design totally stuffed by abysmal software.)((As was its predecessor))(((And its portable equivalent)))((((Bugger, I spot a purchasing trend here I should have fixed))))
    • too slow to stream video?

      What kind of video are you trying to stream from it? It should just be disk I/O, and a full HD video stream takes up about 5-8MB/sec.

      Hopefully, you are mistaken.
    • We have one of these. It cost GBP 500 (they were going cheap at Misco) for 1TB. It was easy to set up and seems to work fine. I don't know why the Thecus thing is 600 quid without any drives. It doesn't seem to offer much more.

      NFS isn't necessary for streaming video - CIFS doesn't have enough overhead to cause a problem in that area. It's probably your video player that has a problem.
      • Speed.
        Simple speed.
        Those Terastations have about 15MB/s maximum write speed in RAID-5 mode; this one, according to the review, manages more than 35MB/s.

        If you want to actually use it as a NAS, and not just as a media server, that's a huge difference.
  • What are people using for small office/home file servers? I'm looking for something that will hold about a terabyte in storage, and another terabyte in some sort of SAN-backed disk. I.e., I want to be able to present arbitrarily sized LUNs to machines on the network and also have standard file storage ability (CIFS/SMB, NFS, FTP). Right now I'm running Samba/NFS on Linux but have not figured out how to present LUNs to the clients. The iSCSI and Coda stuff in the kernel has not been updated in quite a while
    • Cheap storage is one thing. Having all the features a particular site wants, such as LUNs in your case, or NFS in others, or Active Directory-authenticated CIFS for others, or high-performance streaming, or single partitions over 2 terabytes for others, is another set of things altogether.

      Also beware the controllers. Good file servers have good quality RAID chipsets, like Adaptec or 3Ware. Cheap file servers have those awful Promise or other low-end chipsets, with lots of wildly touted NEW! EXCITING! FEA
      • Actually $3K sounds very reasonable, cheap even, for two terabytes. Prices I've seen are in the $5K-$7K range :D Cost per gig, just for the disks alone, is in the $2 range. Do you know of a product in the $3K range that has 2 terabytes after the RAID is built?
    • SAN seems to be very expensive (doesn't seem to matter what flavor). iSCSI might be less expensive, until you look into the pricing of the iSCSI PCI/PCIe cards. I suspect, for companies that have fewer than a dozen servers, that SAN is not the way to go.

      Unless, of course, money is no object.

  • by nxtw ( 866177 ) on Sunday July 16, 2006 @10:29AM (#15727916)
    (All prices approximate.)

    This will support 4 drives over SATA, or 7 if you use all of the IDE channels:
    $105 4U case and 400w power supply
    $165 915G Socket 479 Motherboard w/ 4 SATA, 2 IDE, and gigabit ethernet.
    $71 Celeron M 370 (Dothan) CPU
    $25 DDR2 memory (256MB)
    $25 CompactFlash OS drive (1GB)
    $15 IDE to Compact Flash adapter
    $0-25 Linux OS -- there are specialized NAS distributions available commercially for those who are afraid of setting things up themselves
    = $406-$431

    Which beats this device's $670 lowest price found on Froogle.

    Additions:
    $20 4x SATA I
    $60 4x SATA II
    $50-100 Replacement power supply
    +$60 1GB DDR2
    +$150 Pentium M CPU

    Sure, the Celeron M will use more power than a Celeron M ULV, and the included power supply may be inadequate for configurations loaded with large drives (but that's more drives than the article's product supports anyway). And this build doesn't have the USB device capability, either. But you've got the freedom to do things how you like.
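
    Summing the parts list above (prices copied from the list, all approximate and circa 2006; the optional extras are deliberately left out):

    ```python
    base_build = {
        "4U case + 400W PSU": 105,
        "915G Socket 479 motherboard": 165,
        "Celeron M 370 (Dothan)": 71,
        "256MB DDR2": 25,
        "1GB CompactFlash OS drive": 25,
        "IDE-to-CF adapter": 15,
    }
    os_cost = (0, 25)  # free distro vs. a commercial NAS distribution

    parts = sum(base_build.values())
    print(f"DIY build: ${parts + os_cost[0]}-${parts + os_cost[1]} "
          f"vs. ~$670 for the N5200 (lowest Froogle price)")
    # -> DIY build: $406-$431 vs. ~$670 for the N5200 (lowest Froogle price)
    ```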
  • NASLite (Score:2, Interesting)

    by Anonymous Coward
    NASLite from http://www.serverelements.com/ [serverelements.com] allows you to use quite ancient hardware (e.g. a Pentium 1 or 2) and get a 4 (or even 8 with the latest version) hard drive NAS up and running with SMB, NFS, FTP and HTTP access. Took me about 10 mins (not including formatting), and I only had to buy the hard drives, since people virtually throw away machines that this can run on! Worth every cent of its modest fee IMHO. (I have no affiliation with NASLite.)
  • Why spend $1100 to get 5 drives when you can spend $150 for practically any cheap PC with 4 IDE slots? That's $600 for 8 slots, 6TB raw @750GB for another $3000 or 2TB @250GB for $500. It might not be as fast, but if you distribute the data right, you're getting access times across 8 IDE switches instead of 5 - and it's just as reliable. Spending all $1100 gets you 6 hosts, 24 slots; $1500 for 6TB @250GB or $9K for 18TB. You might spend more time replacing drives and components, but that might be worth the
    • I used to tinker a lot back when my time was relatively cheaper than commodity hardware (college years).

      Not anymore. My time is much more valuable than fiddling with Linux boxes, and I am not geeky enough anyway.

      I would recommend the Thecus YES Box for video collectors. Put in two 750GB drives and set the box to RAID-1 mode. It isn't the fastest box on the net, but it is useful while staying small and quiet, and it has a physical on/off button so you can turn it on and off without logging into the box remotely. It comes wi
      • But I'm already doing sysadmin on the rest of my LAN. A few more boxes in the closet that don't host users, just a RAID, aren't going to significantly increase my workload once they're installed.

        But then, I find occasionally setting up a PC for myself relaxing. I guess because I'm both geeky and cheap. That's why the cheap PC approach seems better, especially for home systems that don't need the performance/manageability of an enterprise RAID.

        The role of commodity HW actually makes the cheap PC RAID more i
  • I have a 3-year-old Pentium 4 that I built. It currently houses 8 ATA drives for a total of 2.1 terabytes (I use combinations of RAID 10, RAID 1, and RAID 0 arrays on it). What did all of those drives cost? An average of $60 after rebates (I received them long ago). If I needed more space, I could add a bunch of 500 gig ATA drives on the cheap. However, I'd only have 4 terabytes by the end.

    So, in conclusion, SATA drives provide more space, have less need for drives and therefore save energy. (And they prev

  • No pics? (Score:2, Insightful)

    by fiendy ( 931228 )
    I guess with all the ads crammed onto the page, they don't have room for a pic of the actual piece of hardware they are reviewing.
  • I know I can't win "I remember when..." rod-length checks, but this is a banner day for someone who paid barely under $1000 for his very first hard disk: a 10MB Seagate ST-506 with controller.
  • by Digital Pizza ( 855175 ) on Sunday July 16, 2006 @03:05PM (#15728892)

    I didn't see mention of what internal software was used, but a lot of NAS devices use Samba and won't work properly with Vista. Check out this link [emailbattles.com].

    That's the problem with NAS devices; Microsoft loves to change its network protocols with each new version of Windows, breaking countless NAS devices that are past vendor support.

    There are a number of NAS devices designed to work with Windows 2000 that don't work well with Windows XP; the vendors won't provide updates and would rather you just chuck it and buy a new NAS device.

  • I'd like to experiment with ZFS [slashdot.org] on the cheap. In particular, I'd like to start off with two drives, and slowly add drives as my storage needs increase. Great, but which controller card(s) should I use to maximize my eventual capacity while spending the least on cards? I don't need hardware RAID (with cards like the 3ware 9500S-4LP that someone else mentioned starting at $315 [newegg.com]). Does it make more sense to use 4-port cards for performance reasons? Or can I use 8- or twelve-port cards? I imagine performance wil
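
    For what it's worth, ZFS makes the grow-as-you-go part easy regardless of which controller you settle on; a minimal sketch of adding capacity a mirrored pair at a time (the pool and device names are hypothetical, while the zpool commands themselves are standard ZFS administration):

    ```python
    import subprocess

    def run(*args: str) -> None:
        subprocess.run(args, check=True)

    # Start with a two-drive mirror...
    run("zpool", "create", "tank", "mirror", "/dev/sdb", "/dev/sdc")
    # ...and later widen the pool by adding another mirrored pair.
    run("zpool", "add", "tank", "mirror", "/dev/sdd", "/dev/sde")
    ```

    (ZFS is generally happiest with plain non-RAID ports, so cheap multi-port HBAs are a reasonable fit for this kind of incremental growth.)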
