Review of Seagate's 750GB Hard Drive 414

Zoxed writes "The Tech Report have a comprehensive review of Seagate's Barracuda-7200.10 'perpendicular' drive, including a primer on the technology. They ran performance tests against 10 other drives, checking the noise and power consumption levels. The Seagate fared pretty well, even on cost (per Gigabyte)." From the article: "Perpendicular recording does wonders for storage capacity, and thanks to denser platters, it can also improve drive performance. Couple those benefits with support for 300 MB/s Serial ATA transfer rates, Native Command Queuing, and up to 16 MB of cache, and the Barracuda 7200.10 starts to look pretty appealing. Throw in an industry-leading five year warranty and a cost per gigabyte that's competitive with 500 GB drives, and you may quickly find yourself scrambling to justify a need for 750 GB of storage capacity."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Re:Scrambling? (Score:5, Interesting)

    by ericdano ( 113424 ) on Tuesday May 30, 2006 @11:30AM (#15428456) Homepage
    Which brings up the question: do existing RAID controllers support this drive?

    And, do firewire enclosures support them?
  • by Anonymous Coward on Tuesday May 30, 2006 @11:52AM (#15428617)
    ...but how do these 300MB/s SATA NCQ drives actually fare against U160/U320 SCSI drives for sustained thruput in something like a database server that normally benefits from the multithreaded i/o capability of SCSI? The "300MB/s" is pretty close to the "U320" rating of peak data xfer rate, but as we all know, the absolute very best and fastest disks themselves can generally only stream a continuous ~ 80MB/s due to mechanical limits of the hard drive regardless of the electrical interface, and most commodity-grade disk drives on the market today actually do well to reach and sustain ~50-60MB/s continuous stream rate, with ~30-40MB/s being common for low-end cheap drives.

    I'm betting that in a situation where you need the utmost in high-traffic-load, direct-attached storage like on a heavily loaded transactional database server running Oracle or similar, that the U320 SCSI disks connected to a good hardware-caching raid controller card still are the unbeatable king daddy paw-paw of sustained thruput.
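    The gap between interface ratings and mechanical limits is easy to quantify: at the sustained rates quoted above, streaming an entire 750 GB drive takes hours either way, regardless of whether the bus is rated 300 MB/s or U320. A rough sketch using the comment's own estimates (not benchmarks):

```python
# Interface ratings vs. sustained mechanical throughput: the electrical
# interface rarely matters for bulk streaming. Rates below are the
# comment's own ballpark figures.
DRIVE_GB = 750

def full_read_hours(sustained_mb_s: float, drive_gb: int = DRIVE_GB) -> float:
    """Hours to stream the whole drive at a given sustained MB/s."""
    return drive_gb * 1000 / sustained_mb_s / 3600

# "best and fastest" ~80 MB/s vs. commodity ~55 MB/s vs. low-end ~35 MB/s
for rate in (80, 55, 35):
    print(f"{rate} MB/s sustained -> {full_read_hours(rate):.1f} h to read 750 GB")
```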
  • by shodanx ( 950319 ) on Tuesday May 30, 2006 @12:01PM (#15428695)
    While buying this product doesn't make much sense economically (unless you consider the cost per "bay" in your server, and you overpaid for your server), there is one reason this drive is great for the rest of us: since it came out on the retail market about 2-3 weeks ago, the prices of almost all the other drives dropped significantly. Here are the prices per GB at my favourite wholesaler (eprom.com):

    * 40GB: $1.20 CAD/GB
    * 80GB: $0.675 CAD/GB
    * 120GB: $0.608 CAD/GB
    * 160GB: $0.481 CAD/GB
    * 200GB: $0.425 CAD/GB
    * 250GB: $0.356 CAD/GB
    * 300GB: $0.386 CAD/GB
    * 400GB: $0.472 CAD/GB
    * 500GB: $0.546 CAD/GB
    * 750GB: $0.705 CAD/GB

    As you can see, the 750GB has the second-worst price per GB. Part of that price is the extended warranty, but in my experience the very high reliability of Seagate drives makes the warranty not all that valuable. So unless your server has a high cost per bay, it's not really worth it (your server would have to cost over $130 per bay, which is ludicrous considering my latest 12-bay system cost only $41.08 CAD per bay). And don't get me started on electricity usage, heat, and noise: a proper case won't let much noise escape, or will drown the drive noise with its fans, or will sit somewhere noise doesn't matter (it's a networked server; who cares if it's in the attic). The heat is only about 40 BTU per drive (a third of the heat of a fluorescent tube, not even counting its ballast), and electricity usage is just as insignificant at about $6 CAD per year of use.

    Still, this product is great, because suckers will buy it at a great markup for Seagate, which will cover part of the development cost of this technology, a technology that should last until the 5-terabyte drives. From now on Seagate will probably release higher-capacity models more regularly, meaning an even lower cost per GB in the near future. (Disk capacities more or less stalled in the past few months; this will easily put hard drives back on the "capacity schedule.")

    But I do have a question: how do you back up this much data at a low cost? Tapes are out, AFAIK, since they're way more expensive per GB than hard drives (reusability doesn't help much, since you need at least 1:1 the storage capacity of your server at all times, and probably more than that to compensate for failures and redundancy) and also a lot more cumbersome to use. DVDs are even more annoying, but that's offset by the fact that they only cost $0.0744 CAD/GB, which makes a disc-burning robot a possibly economically viable option. Next-generation discs won't drop below the cost of DVDs for a long, long time (if ever), so they're out too for now. Is there any other option? And if not, what's the market these days for disc-burning robots (and maybe disc-loading ones, like an automated DVD carousel with an integrated reader)?
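    The poster's per-GB figures can be sanity-checked mechanically; a quick sketch using the prices exactly as quoted (the wholesaler's numbers, not verified):

```python
# Ranking of the poster's quoted CAD-per-GB prices, taken at face value.
price_per_gb = {
    40: 1.200, 80: 0.675, 120: 0.608, 160: 0.481, 200: 0.425,
    250: 0.356, 300: 0.386, 400: 0.472, 500: 0.546, 750: 0.705,
}
ranked = sorted(price_per_gb, key=price_per_gb.get)  # cheapest first
print("cheapest per GB:", ranked[0], "GB")    # the 250 GB sweet spot
print("second worst:", ranked[-2], "GB")      # the 750 GB drive, as claimed
```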
  • by Yeechang Lee ( 3429 ) on Tuesday May 30, 2006 @12:20PM (#15428857)
    Some keep saying that there's no point to ever-increasing drive storage numbers. I disagree. Huge drives will always be appreciated in media PCs, where good-quality video (even if compressed) takes up a good chunk of storage space.

    As the owner of a MythTV box equipped with dual HD cable boxes (*and* fortunate enough to have a cable provider that doesn't 5C encode its HD premium movie channels) and a HD over-the-air capture card, all of which I can use simultaneously, I can testify to that.

    Here's my experience with bandwidth use:
    * Digital non-HDTV channels generate the smallest files at about 900-1000MB/hour for a movie channel and up to 1200MB/hour for a cartoon (with probably a lower-quality feed).
    * Analog channels such as TCM generate about 2900MB/hour due to the extra noise.
    * HDTV premium movie channels generate about 4400MB-4700MB/hour.
    * A high-bandwidth HDTV channel (defined as HDNet or Discovery HD Theater and most network affiliates over cable or over-the-air) generates 7400-7700MB/hour . . .
    * Except for ABC and Fox, whose 720p programs record at about 5.8GB/hour.
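    One way to put those per-hour figures in perspective is to ask how many hours of each source a single 750 GB drive would hold. A rough sketch, using midpoints of the rates quoted above (the poster's MythTV observations, taken at face value):

```python
# Hours of recording a 750 GB drive holds at the observed rates
# (MB/hour values are midpoints of the ranges quoted in the comment).
rates_mb_per_hour = {
    "digital SD movie channel": 950,
    "analog (e.g. TCM)": 2900,
    "HDTV premium movie": 4550,
    "720p network (ABC/Fox)": 5800,
    "high-bandwidth HDTV": 7550,
}
DRIVE_MB = 750 * 1000
for name, rate in rates_mb_per_hour.items():
    print(f"{name}: {DRIVE_MB / rate:.0f} hours")
```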

    On the MythTV box's dedicated NAS, I have (according to MythWeb) 176 programs, using 1.6 TB (324 hrs 32 mins) out of 1.8 TB (111 GB free). Almost all of the programs are high-definition movies. Examples:

    * The Untouchables, 125 minutes, 16GB
    * St. Elmo's Fire, 120 minutes, 15GB
    * Shakespeare in Love, 125 minutes, 16GB
    * Ben-Hur, 215 minutes, 15GB
    * The Matrix Revolutions, 135 minutes, 11GB
    * A Passage to India, 165 minutes, 21GB
    * La Bamba, 110 minutes, 14GB
    * Mona Lisa Smile, 120 minutes, 6.1GB (Commercials transencoded out)
    * Spider-Man 2, 135 minutes, 12GB
    * Batman Begins, 150 minutes, 11GB
    * Seabiscuit, 180 minutes, 10GB (Commercials transencoded out)
    * Witness, 115 minutes, 11GB
    * The Passion of the Christ, 135 minutes, 9.8GB
    * The Lord of the Rings: The Return of the King, 205 minutes, 19GB
    * Doctor Zhivago, 215 minutes, 14GB
    * Emma, 129 minutes, 12GB
    * Bye Bye Birdie, 124 minutes, 16GB
    * Giant, 204 minutes, 26GB
    * GoodFellas, 154 minutes, 12GB
    * Bullitt, 124 minutes, 16GB
    * Real Genius, 119 minutes, 11GB
    * Pulp Fiction, 164 minutes, 12GB

    . . . etc., etc. Many of the larger-sized films were recorded off of HDNet Movies, which is a special godsend for any movie lover. (I *can't wait* for the day TCM starts broadcasting in HD!) My all-time champion, now unfortunately lost in a box rebuild, was NBC's annual broadcast of The Sound of Music. Four hours, including commercials, and 28GB!
  • by Dr. Zowie ( 109983 ) <slashdotNO@SPAMdeforest.org> on Tuesday May 30, 2006 @12:23PM (#15428874)
    Solar scientific data is growing too large to handle. The SOHO [nasa.gov] data are almost small enough to ship around by internet (the whole dataset is something like 20-30 TB for 10 years of operation), though data mining and such are starting to fall back on SneakerNet as the SDAC [nasa.gov] is shipping around terabyte lunchbox drives as their preferred method of bulk data export.

    But Solar Dynamics Observatory [nasa.gov], which is currently being built, will generate about 3 TB of data per day. We're all a little worried about how to distribute, store, and use such vast quantities of data. Perpendicular-storage drives like these just might save the day...

     
  • Seems to me that eventually I will trust my backup to a transparent, automatic, secure system that puts it on the Net somewhere. I currently use FTP to back up and transport from work to home and back. The hard disk is the same one my website is on, and it is co-located. But someone will make better software to automate everything. I hope it is open source freeware.
  • How far are you distributing this data? Is it going places Internet2 doesn't go? Is it prohibitively expensive to hop on to Internet2, given the budgets of these sorts of projects?

    Seems to me that needing to distribute this kind of data is _exactly_ the sort of impetus needed to kickstart next generation internet infrastructure. Of course, this does nothing for storage problems.....

    One should be able to get ~ 1Gb/sec over fiber. Conservatively, assuming 500Mb/sec real throughput, that means 12 hours in transmission time, per day. That's faster than most sorts of not-too-expensive shipping techniques.

    Heck, 10 of Verizon's FiOS connections would be able to handle the bandwidth, assuming you didn't have to deal with Verizon's bottlenecks, or could somehow get the data on to their network.

    Keep in mind I'm not suggesting that the infrastructure exists right now to handle this sort of thing, but it seems that the technological barriers are long in the past, and the remaining barriers are fairly simple economic ones.
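    The parent's transmission-time arithmetic is easy to check; a quick sketch treating SDO's ~3 TB/day and a sustained 500 Mb/s as given (it comes out slightly above the 12-hour estimate):

```python
# Time to move ~3 TB/day over a link with an assumed sustained 500 Mb/s.
TB_PER_DAY = 3
bits = TB_PER_DAY * 1e12 * 8        # ~3 TB expressed in bits
seconds = bits / 500e6              # at 500 Mb/s sustained throughput
print(f"{seconds / 3600:.1f} hours per day in transmission")
```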
  • by Anonymous Coward on Tuesday May 30, 2006 @12:38PM (#15428984)

    For those who don't see the difference: Most boxes don't have controller capacity for more than four drives (two PATA channels and two SATA channels) and seven or eight drives will also strain your PSU and your cooling capacity. Might be hard to fit in your case, too.


    • Any modern nForce4-based motherboard handles RAID arrays on 4 SATA drives, and any additional chip is gravy (my 1-year-old DFI handles 4 from the nForce plus 4 from a Promise chip).
    • Seven or eight drives won't strain a PSU (unless it's a cheap no-brand piece of crap); even a Raptor doesn't burn more than 15W when seeking, and most 3.5" drives are at 8-12W tops. If we take 8 drives at 12W, we're talking about a top consumption of 96W. That's less than a modern CPU or graphics card alone.
    • Cooling is likewise: while hard drives don't shed heat very well (they don't have heatsinks or anything), they produce very little of it. Just put a low-speed 120mm fan (low speed as in under 1kRPM; I'm talking Papst or Nexus here) in front of your drives (one fan per 3 or 4 drives) and they'll stay well under 40C in a 25C room.

    Case room is the only real issue here (most quality cases can only fit 4-5 drives; some go up to 6, some as low as 2), and even then there are now several cases built specially for that kind of use, such as Coolermaster's Stacker.
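    The PSU arithmetic above is simple enough to check; a small sketch using the quoted ballpark of 8-12 W per 3.5" drive (rough figures, not datasheet values):

```python
# Rough power budget for a stack of 3.5" drives, using the ballpark
# per-drive wattage quoted in the comment (not datasheet values).
def stack_watts(n_drives: int, watts_each: float = 12.0) -> float:
    """Worst-case draw for a drive stack at a given per-drive wattage."""
    return n_drives * watts_each

print(stack_watts(8), "W for eight drives at 12 W each")  # the 96 W figure
print(stack_watts(8, 8.0), "W at the low end of the 8-12 W range")
```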


    My cooling solution is low-tech, loud and very effective: The side of the case is off and I have a 30-inch box fan (the kind you mount in a window to cool your house) blowing into it.


    The only effective thing here is that you're slaughtering your case's airflow. While this is often the only way to cool crappy cases, it doesn't work well in quality cases, unless you put so much brute force into the cooling that airflow doesn't actually matter anymore (which is what you're doing).


    One nifty trick I discovered is that if you slice all of your disks up into many small partitions, then create many RAID-5 arrays (using partition 1 on each disk to create the first array, etc.), then use LVM to bind all the arrays together you can add additional disks and rebuild the arrays without having to find some way to back up all of the data first.


    Some controllers are also able to extend (or even fully replace) arrays out of the box. You usually don't find them on consumer-grade motherboards though.

  • by jagilbertvt ( 447707 ) on Tuesday May 30, 2006 @01:16PM (#15429296)
    Some of the newer 7200.9 models (80GB, 120GB, and 160GB) also feature the perpendicular technology. I'd like to see a comparison between these and the older 7200.9 models that don't feature it.
  • by vallee ( 2192 ) on Tuesday May 30, 2006 @01:42PM (#15429509)
    I can tell from the tone of this review that a lot of pointy-haired purchasing managers are going to be dying to use these for enterprise database applications. I can feel the tense discussions coming on strong now.

    That's why I posted the following manifesto: 750G Disks are BAHD for DBS [pythian.com] a few weeks ago when these disks were released. Find out why huge disks are the bane of DBAs everywhere. My manifesto has been signed by the Oracle DBA industry's leading lights, please, use these disks for the purpose they were designed for, whatever that may be (home movies from your Canon S2 IS? I've got one of those and the on-board video compression is TERRIBLE!), and not for databases.

    This public service announcement has been brought to you by Pythian Remote DBA [pythian.com].

    --
    Paul Vallee
    President, The Pythian Group, Inc.
  • Find out why huge disks are the bane of DBAs everywhere.

    I read your manifesto, but still don't understand your premise. You don't adequately explain why larger sizes are inherently bad, save for the seek time issue. Given two drives with identical performance but a 2x difference in size, why is the larger worse if it's holding the exact same data?

  • by poopie ( 35416 ) on Tuesday May 30, 2006 @03:50PM (#15430702) Journal
    I remember when DBAs were screaming that they only wanted 1 and 2GB disks and as many spindles as possible, and at that time it was the 9GB SCSI drives that were BAD, because they were too big and people wanted more spindles.

    Then the DBAs wanted to hoard 9GB drives, because 36GB drives were too large and they wanted as many spindles as possible.

    Now DBAs only want the 72GB drives, because the 144s and 250s are too large and they want as many spindles as possible.

    I guarantee that a few years from now, we'll read about DBAs wanting only 750GB drives, because the 3TB drives are too large and they want as many spindles as possible.
  • by Zoxed ( 676559 ) on Tuesday May 30, 2006 @03:52PM (#15430731) Homepage
    One thing I noticed is that one of the photos shows the sticker on the drive, and it includes a warning that the warranty is voided if the drive experiences greater than 350 Gs! Can this drive really survive a 350 G impact? I am not a scientist nor a mathematician, but that sounds like a hell of a shock.

    Can any Slashdotter convert 350 Gs to real world units (eg dropped 5m onto concrete) ?
  • by kettch ( 40676 ) on Tuesday May 30, 2006 @04:14PM (#15430880) Homepage
    I have noticed that the more music I have ripped on my pc the less I listen to each song

    For me that's not entirely true. I still have music that I like to listen to. I make sure everything is tagged with the genre, and some days I just feel like one kind of music or another. My philosophy isn't that it's overload, but that it's having a song for every situation. It's being able to hit play on "Viva Las Vegas" (ZZ Top version) as you pass the welcome sign, or queueing up "Teenage Wasteland" when my friends' kids are having a "teen" moment (that didn't help the situation any, but it was funny).
  • by Anonymous Coward on Tuesday May 30, 2006 @04:33PM (#15430982)
    This isn't terribly accurate...

    The problem is not the drop, but the change in velocity (i.e., acceleration).

    Dropping a pen on your desk from your hand resting on the desk can be about 25 Gs. This is about a 1" fall.

    Dropping it on a magazine reduces that to about 5 Gs, simply because the magazine provides a cushion and extends the deceleration time.

    350Gs (depending on the MASS BEING DROPPED and what it FALLS ON) may translate into about a 3" drop.
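    The numbers above follow from a simple rule of thumb: for a drop onto a surface, the average deceleration in g is roughly the drop height divided by the stopping distance. A rough sketch (the stopping distances here are illustrative guesses, not measurements):

```python
# Average G-load of a drop ~= drop height / stopping distance.
# Stopping distances below are illustrative assumptions, not measured values.
def g_load(drop_height_mm: float, stop_distance_mm: float) -> float:
    """Average deceleration, in g, for a drop cushioned over stop_distance_mm."""
    return drop_height_mm / stop_distance_mm

print(g_load(25, 1.0))   # ~1" (25 mm) drop onto a hard desk yielding ~1 mm
print(g_load(25, 5.0))   # same drop onto a magazine yielding ~5 mm
print(g_load(76, 0.2))   # ~3" (76 mm) drop, near-rigid ~0.2 mm stop
```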
  • by evilviper ( 135110 ) on Tuesday May 30, 2006 @06:11PM (#15431517) Journal
    Analog channels such as TCM generate about 2900MB/hour due to the extra noise.

    GAH! Information... lacking... all... context...

    A high-bandwidth HDTV channel (defined as HDNet or Discovery HD Theater and most network affiliates over cable or over-the-air) generates 7400-7700MB/hour . . .

    HDTV streams have HORRIBLY poor compression. They encode with a constant bitrate, and use a very, very small GOP size (so you don't have to wait very long for the picture to appear when channel-surfing).

    Using a better codec (e.g. lavc, Xvid, x264) with a much larger keyint, variable bitrate (2-pass) encoding, etc., you can get that down to at least 1/4 the size with really no quality loss at all. Throw some good denoising into the mix (lavc's "nr" denoiser is great and takes almost no CPU time) and you'll get it significantly smaller still, and it will look *better* than the original.

    In addition, commercials are very fast, flashy, etc., and use up much more than their fair share of the bitrate. Editing them out will reduce the video length by about 1/3, and reduce the overall bitrate even more (assuming VBR re-encoding).
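    Those claims are easy to translate into bitrate terms; a rough sketch, using the midpoint of the quoted high-bandwidth rate and treating the ~4x codec gain and ~1/3 commercial cut as the poster's assumptions:

```python
# What the MB/hour figures imply in Mb/s, and the combined effect of
# re-encoding (~1/4 size, per the comment) and cutting commercials (~1/3
# of the runtime). All ratios are the poster's estimates, not measurements.
def mbps(mb_per_hour: float) -> float:
    """Convert MB/hour to megabits per second."""
    return mb_per_hour * 8 / 3600

broadcast_mb_h = 7550                    # high-bandwidth HDTV, MB/hour
print(f"off the air: ~{mbps(broadcast_mb_h):.1f} Mb/s")
reencoded_mb_h = broadcast_mb_h / 4      # ~1/4 size after VBR re-encode
movie_mb = reencoded_mb_h * 2 * (2 / 3)  # 2-hour slot, commercials cut
print(f"2-hour slot, re-encoded and ad-free: ~{movie_mb / 1000:.1f} GB")
```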

    If you don't have a very fast CPU (~3GHz/3000+) h.264/x264 is out-of-the-question. However, MPEG-4 decoding is actually FASTER than MPEG-2 decoding with a decent codec.

    *And if your system is about 2GHz/2000+ or so, hardware decoding (XVMC) will use up as much or more CPU-time than decoding in software, unless you've got an AGP2x bus/card, or DMA doesn't work on your motherboard/videocard.
