Terabyte Hard Drive Put To the Test

EconolineCrush writes "As a technical milestone, Hitachi's Deskstar 7K1000 hard drive is undeniably impressive. The drive is the first to pack a trillion bytes into a standard 3.5" form factor, and while some may argue the merits of tebi versus tera, that's still an astounding accomplishment. Hitachi also outfitted the drive with 32MB of cache—double what you get with standard desktop drives—making this latest Deskstar a leader in both cache size and total capacity. That looks like a great formula for success on paper, but how does it pan out in the real world? The Tech Report has tested the 7K1000's performance, noise levels, and power consumption against 18 other drives to find out, with surprising results."
This discussion has been archived. No new comments can be posted.

  • Test? (Score:5, Funny)

    by Anonymous Coward on Monday August 13, 2007 @04:18AM (#20209465)
    Now, my porn collection, THAT is what would put this drive to the test.
  • by BadAnalogyGuy ( 945258 ) <> on Monday August 13, 2007 @04:19AM (#20209469)
    ge ge ge kanashhk shhk shhk fzzke kek shhk shhk

    I love the sound of head crashes in the morning. Smells like... a coffee break.
  • by FF8Jake ( 929704 ) on Monday August 13, 2007 @04:23AM (#20209493)
    I'm not losing my 1.5TB of porn to a single Hitachi Deathstar.
    • RAID 6 Please (Score:4, Interesting)

      by the_doctor_23 ( 945852 ) on Monday August 13, 2007 @04:57AM (#20209705)
      Make that RAID-6. With consumer grade drives I would not want to see a second drive die during a RAID-5 rebuild.
      For example, a 3ware 9650SE-8LPML can be had for as little as $520.
    • by tibike77 ( 611880 ) <> on Monday August 13, 2007 @05:03AM (#20209725) Journal
      Only 1.5 TB of porn? That's like what, 350 DVDs worth?

      That's 85-125 USD for your entire collection in one single copy.
      Or make that a nice round $200 for two sets of copies.
      So, where can I get two 1.5 TB HDDs for $100 each?

      Sure, the "seek time" would suck, but then again who cares, it's porn, not like you'll die if you wait 15 more seconds before you start looking at it... or are you?
  • Data loss (Score:3, Interesting)

    by B5_geek ( 638928 ) on Monday August 13, 2007 @04:24AM (#20209501)
    I feel bad enough when one of my 500GB drives goes tits up; I would hate to lose that much data on one drive.

    But on the other hand, a full-tower case loaded with those in a raid5 is enough to make me drool.
    • by ipooptoomuch ( 808091 ) on Monday August 13, 2007 @04:43AM (#20209631) Journal
      500GB of data loss?!? I CRIED for a half hour over a filled 160GB drive after it got killed by an electrical storm. Even though it wasn't technically covered under warranty, the fine folks at best buy still took it back after I said a defective flux capacitor on the drive started it on fire.
    • Re:Data loss (Score:5, Insightful)

      by jimicus ( 737525 ) on Monday August 13, 2007 @04:51AM (#20209681)
      RAID 1+0 is the way to go for redundancy. Unless you're unlucky enough to lose both drives in one of the pairs making up the array, you can survive more than one drive failing.

      It's also the way to go for speed - your controller doesn't have to calculate the parity bits for every write operation (yes I know the parity sum is simple - that doesn't stop it from adding a bottleneck).

      RAID5 is most useful where:

      1. You desperately need the space.
      2. You can't afford the drives (or, for that matter, power/larger RAID controller) required to achieve the same space in RAID 1+0.
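      To put rough numbers on that trade-off, here's a quick back-of-the-envelope sketch (the 8 x 1TB array is just a made-up example; real arrays lose a bit more to metadata):

```python
# Back-of-the-envelope usable capacity for a hypothetical 8 x 1TB array.
# Illustrative numbers only -- not from the article.
def raid_usable(level, n_drives, drive_tb=1.0):
    """Usable capacity in TB for a few common RAID levels."""
    if level == "raid5":
        return (n_drives - 1) * drive_tb   # one drive's worth of parity
    if level == "raid6":
        return (n_drives - 2) * drive_tb   # two drives' worth of parity
    if level == "raid10":
        return n_drives * drive_tb / 2     # everything mirrored once
    raise ValueError(level)

for level in ("raid5", "raid6", "raid10"):
    print(level, raid_usable(level, 8), "TB usable")
```

      RAID 1+0 pays for its speed and resilience in capacity: half the spindles go to mirrors, versus one or two drives' worth of parity for RAID-5/6.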
      • Re:Data loss (Score:4, Informative)

        by Ex-MislTech ( 557759 ) on Monday August 13, 2007 @06:21AM (#20210049)
        Some problems with RAID 1+0:

        Not all hardware controllers will allow you to do a reconstruct to add more
        space and extend the partitions later on RAID 10 or 1+0.

        Recovering from a failed 1+0 is ok if it is a "simple" failure.

        I have had better luck recovering RAID5's than 10's or 1+0's.

        • Re:Data loss (Score:4, Informative)

          by zeromemory ( 742402 ) on Monday August 13, 2007 @08:25AM (#20210689) Homepage

          Not all hardware controllers will allow you to do a reconstruct to add more
          space and extend the partitions later on RAID 10 or 1+0.
          Likewise, many hardware controllers won't let you extend a RAID-5 array (unless they implement some dynamic stripe size hack, a la ZFS's RAID-Z).

          Recovering from a failed 1+0 is ok if it is a "simple" failure.
          Please explain what a !simple failure would be. Here, let me give you a 'simple' failure case where RAID-5 would be pretty difficult to recover from: a drive fails in your RAID-5 array, and you lose power or experience another hardware failure shortly afterwards, before you can replace the drive. Whoops, you just became another victim of the RAID-5 write-hole (see the section under RAID-5 performance).

          OK, here's why we use RAID-10 at my installation: it provides great performance and can survive multiple drive failures without the overhead of something like RAID-6. RAID-10 also has no 'write-hole'. Don't just take my word for it, though, check out the article from Adaptec comparing the merits of all the basic RAID levels and their nested brethren.
      • Re: (Score:3, Interesting)

        by drsmithy ( 35869 )

        It's also the way to go for speed - your controller doesn't have to calculate the parity bits for every write operation (yes I know the parity sum is simple - that doesn't stop it from adding a bottleneck).

        The "bottleneck" of parity calculations is so small as to be irrelevant. Parity-based RAID levels are bottlenecked by the much higher number of physical disk operations, not the parity calculations.

    • Nothing new, then (Score:3, Insightful)

      by Moraelin ( 679338 )
      Nothing new, then. At this point 1 TB may sound like "that much data", but then so did a 40 MB drive waaay back. Heck, at one point 1.4 MB meant a hard drive the size of a large washing machine. Nowadays that's called a floppy and already outdated.

      What I'm getting at is that it's sorta like "Moore's law" for hard drives. (And occasionally Murphy's law too;) What's "whoa, I'd hate to lose that much data" at one point, is just adequate in a couple of years, and not even enough for your system files and/or swa
    • Re: (Score:3, Interesting)

      by Jafafa Hots ( 580169 )
      I hear all of these stories of people having drives go bad, I don't understand it. I've owned hard drives since about 1981, I've gone through dozens, replacing them as they become obsolete and too small, and I have yet to have one fail on me - except the one I accidentally launched across a room. And even that one I managed to get most of the data off of.

      What are people doing with drives to make them fail?

      • by MrNaz ( 730548 ) on Monday August 13, 2007 @07:20AM (#20210347) Homepage
        You really, really need to buy a lottery ticket.
      • Re: (Score:3, Interesting)

        by MBGMorden ( 803437 )
        Must just have good luck. I've been using hard drives since 1991 or so (until then I was on Commodores with floppy only :)). I've been through dozens as well, and most of them held up just fine. Exceptions are: 1gb Western Digital. This was the first drive to fail on me, but it was 1 week after I had gotten into a car accident (rear-ended) with the computer sitting in the back seat of the car. I'm thinking that jolt may have had something to do with the failure. The replacement for that drive was a 5gb Micropoli
      • Re:Data loss (Score:4, Informative)

        by Feanturi ( 99866 ) on Monday August 13, 2007 @11:47AM (#20212823)
        What are people doing with drives to make them fail?

        I've got the same question, as I've gone through a lot of hard drives over the years but only due to upgrading, not failure. The only exception was the IBM Deskstar GXP75 that had the whole click of death thing going on. I don't count that one since it was a known issue that resulted in a class action suit, which I didn't bother to take part in. The first one failed within a month, so I replaced it at the store, and the replacement failed after a day. Replaced again. The third one failed after a week but I was tired of going back to the store by then so I tried an experiment - the click of death was kicking in somewhere near 500MB into the drive, so I repartitioned it to leave the first 500MB unpartitioned. My experience with the drive up to that point told me that wherever the click of death manifested, it would consistently happen in whatever part of the drive it first happened at. That drive has been in constant use ever since then (it's been like 5 years or so by now hasn't it?) and still works great, since it never accesses the 'bad part' anymore.
  • by Absolut187 ( 816431 ) on Monday August 13, 2007 @04:27AM (#20209513) Homepage
    Recording technology: Perpendicular.

    Ah, it's finally here.
    I remember reading about this like 4 years ago.
  • whoops (Score:4, Insightful)

    by scapermoya ( 769847 ) on Monday August 13, 2007 @04:32AM (#20209545) Homepage
    FTFA: "Gigabyte drives were only "missing" 24 bytes, and that was easy to swallow."

    I think they meant 24 megabytes, which is easy to scoff at now, but wasn't when the first gigabyte drives dropped.
    • But they'd have still been way off.

      For a decimal megabyte versus a binary one, there's 48 1/2 KB difference.

      For a gigabyte, there's about 70 megabytes difference.

      The only case where you'd only lose 24 bytes would be if you had a kilobyte drive.

  • by Don Sample ( 57699 ) on Monday August 13, 2007 @04:33AM (#20209549) Homepage
    He spends a lot of time talking about the difference between binary and SI terabytes and gigabytes, and then comes out with:

    Back in the day, the gap between decimal and binary capacity wasn't big enough to ruffle feathers. Gigabyte drives were only "missing" 24 bytes, and that was easy to swallow.
    Um, 24 bytes is the difference between kilo meaning 1000 and kilo meaning 1024. A binary gigabyte is 1,073,741,824, or 73 megabytes bigger than an SI gigabyte.
  • by _Shorty-dammit ( 555739 ) on Monday August 13, 2007 @04:37AM (#20209593)
    This marketing BS always pisses me off. For years and years and years we've used 1024 in the computer world, since it's a power of 2, and computers deal with powers of 2. A 931GB drive is NOT a 1TB drive. And we don't need new stupid labels like tebi, we just need storage manufacturers to stop being retards.
    • Re: (Score:2, Insightful)

      When the marketing department figured out they could make their drives look 5-10% bigger than what they actually were to all non-techies they took advantage of it.
      • It's worse than that actually, because as the sizes grow, the disparity grows too.
        • When you say 1KB, the difference is 2.4% or 24 bytes.
        • When you say 1MB, the difference is 4.8% or 48KB.
        • When you say 1GB, the difference is 7.4% or 74MB.
        • When you say 1TB, the difference is 10% or 100GB.
        So, the higher the capacity, the more difference there is between binary and decimal units. A 2.4% difference is significant enough, but it's not as bad as 10%. Lacking 100GB, a full tenth of the capacity, is however quite noticeable.
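        Those percentages are easy to recompute (this little snippet rounds the MB gap to 4.9% rather than the 4.8% quoted above, but the trend is the same):

```python
# Recompute the binary-vs-decimal gap for each prefix.
gaps = {}
for power, name in enumerate(("KB", "MB", "GB", "TB"), start=1):
    binary, decimal = 1024 ** power, 1000 ** power
    gaps[name] = (binary - decimal) / decimal * 100
    print(f"1 {name}: {gaps[name]:.1f}% gap")
```

        The gap compounds with each power of 1024, which is why a "terabyte" drive comes up a full tenth short while a "kilobyte" barely loses anything.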
    • by Jugalator ( 259273 ) on Monday August 13, 2007 @05:00AM (#20209717) Journal
      Tera is the SI unit for 10^12 so unless you want to introduce special cases for the computer industry alone, we need a new prefix.
      • by _Shorty-dammit ( 555739 ) on Monday August 13, 2007 @05:04AM (#20209735)
        Way to pay attention. Nobody gives a rat's ass about "the SI unit." These are computers. And we've always used kilobyte/megabyte/etc as they applied to computers. You think you're right, but you're not. A kilobyte will always be 1024 bytes. A megabyte will always be 1024 kilobytes. A gigabyte will always be 1024 megabytes. And a terabyte will always be 1024 gigabytes...
        • by Moridineas ( 213502 ) on Monday August 13, 2007 @05:14AM (#20209783) Journal
          The revisionists are everywhere unfortunately..

          Every time I see a wikipedia page with MiB or mebibyte or whatever the heck, I want to change--fix--it!


        • by Lehk228 ( 705449 )
          and the correct SI units have always been used by Hard drive manufacturers.
          • O RLY? (Score:3, Informative)

   Hard Drive: IBM WDA-L42S 40MB 3.5"/HH IDE/AT
   Cylinders: 1067
   Heads: 2
   Sectors per track: 39
   Bytes per sector: 512

   1067 * 2 * 39 * 512 = 42,611,712 bytes
   42,611,712 / 1024 = 41,613 kilobytes
   41,613 kilobytes = just over 40.6 megabytes

   This was sold as a 40MB drive. Not a 41MB, 42MB, or 43MB drive. A 40MB drive. And that's just what it was, a 40MB drive. So, I'm sorry to tell you, but lying about the drive's size was *NOT
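   The poster's arithmetic checks out; here it is redone in a few lines, using the drive geometry quoted above:

```python
# Redo the CHS geometry math for the IBM WDA-L42S "40MB" drive.
cylinders, heads, sectors, bytes_per_sector = 1067, 2, 39, 512
total_bytes = cylinders * heads * sectors * bytes_per_sector
kilobytes = total_bytes / 1024          # binary kilobytes
megabytes = kilobytes / 1024            # binary megabytes
print(total_bytes)            # 42611712
print(kilobytes)              # 41613.0
print(round(megabytes, 1))    # 40.6
```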
            • Re: (Score:3, Interesting)

              by Ed Avis ( 5917 )
              I think that the '40 megabyte' branding is just rounding to a multiple of ten... but anyway, the first commercial hard disk, the IBM 305, had a capacity of five megabytes - five million bytes, exactly - and was sold as such. Actually, it could have held more, but marketing thought that five megabytes was a nice round number. (Some of the space was taken for error correction, though.)

              (The long series of calculations you have to go through in your post are the best argument for ditching the 1024*1024*1024 n
        • by Valacosa ( 863657 ) on Monday August 13, 2007 @05:52AM (#20209913)

          Nobody gives a rat's ass about "the SI unit." These are computers.
          Yeah. Making nomenclature consistent across industries is damned inconvenient! Why bother?

          Look, I hate marketing dishonesty as much as the next guy, but borrowing the SI prefixes honestly does nothing but add confusion. Hard drives are easy, because one can safely assume that the marketing 'tards went with whatever number was bigger. But what about my phone's data plan? Aside from the whole kB vs kb thing, how do I know which definition of "kilo" my provider has gone with? Do they consider themselves with the "computer industry" or with the rest of the world? And (this is the best question), will the not-very-well-paid support grunt even know the difference?

          Would you like it if you agreed to sell a dozen POS systems to a bakery, only to be told after the contract, "Sorry sir. This is the baking industry. You agreed to give us thirteen systems." Or if you got a $30 bill from your ISP with the explanation, "This is the computer industry. Though our adverts say this plan is $30 a month, that's hex. In base-ten dollars, you owe us $48."

          You hate marketing people skewing reality. Good. It is only through fighting ambiguity that they can be stopped from getting away with this.

          Do you know the difference between a pipe and a tube? If you get into any business involving either, I hope you don't repurpose the words everyone else has settled upon.

          You think you're right, but you're not.
          It's that extra bit of humility that really makes your post shine.
        • by PMBjornerud ( 947233 ) on Monday August 13, 2007 @07:32AM (#20210419)

          Nobody gives a rat's ass about "the SI unit." These are computers. And we've always used kilobyte/megabyte/etc as they applied to computers.
          Well, maybe electrical engineers would prefer to have 992 watts on the kilowatt, grocers would like to define a kg as 977 grams. Maybe 1023 tons of TNT is what fits on a standard truck, so it would be handier than that stupid 1000 for a kiloton. And the food industry, maybe they would like to redefine kilocalories as 1005 to the kilo, just because of some weird internal molecular workings?

          But instead of going with whatever number that fits their specific field, they all went with 1000. Really, that IT people refuse to do the same makes us look utterly retarded.

          Not that it matters anyway. With 8 bits on the byte, we're doomed before we even start. There is no hope in sight until we just ditch this shit, get a clue from the network people, and start counting bits in multiples of 1000.
          • by poot_rootbeer ( 188613 ) on Monday August 13, 2007 @10:56AM (#20212179)
            Maybe 1023 tons of TNT is what fits on a standard truck, so it would be handier than that stupid 1000 for a kiloton.

            Are those "long" tons (2240lb), "short" tons (2000lb), or "metric" tons (1000kg)?

            Ambiguous terms of measurement do exist outside of the computer industry, too -- which, I should point out, is actually "the software development industry" plus "the hardware manufacturing industry" plus "the IT service industry" and so forth.

            Drive manufacturers have always used base-10 prefixes to describe the capacity of winchester drives. It's not a marketing ploy, it's historic convention.

        • Re: (Score:3, Insightful)

          by Firehed ( 942385 )
          There's absolutely no need for the power-of-two notation anymore, at least not when you're viewing the drive properties to check free space. The "mega", "giga", "tera", et al prefixes are globally defined to be powers-of-ten - 10^6, 10^9, and 10^12 respectively. If you want to keep with the old-school notation of 2^20, 2^30, and 2^40 respectively, be my guest, but don't complain when your numbers come up short.

          It's the operating systems - Windows, Mac OS and the *nixes alike that are mis-reporting drive s
      • Why use a new prefix when the suffix provides all the information you need? If we're talking bits and bytes, then we use the base 2 values. Simple.
      • Re: (Score:3, Informative)

        by clarkcox3 ( 194009 )
        So? "Byte" is not an SI unit.

        KB, MB, GB, TB, etc. have had a well-defined meaning for decades (probably over a half century by now). According to The Oxford Pocket Dictionary of Current English:

        n. Comput. a unit of memory or data equal to 1,024 (2^10) bytes.

        ... so get over it, a kilobyte is 1024 bytes.
    • by billsf ( 34378 )
      One TByte is 2^40 bytes. I wouldn't say it doesn't exist as those daring can probably low-level format this machine to that. This 'salesman talk' is deceptive, but One Trillion Bytes is metric. Don't forget every file system needs some overhead, to at least index the files and 'free space' (non-MS) to avoid fragmentation. Every modern OS needs swap space. If you get 900GB of space _you can use_ you are doing very well. Only when used as a 'tape streamer' can you expect to get all available _formatted_ space
      • Re: (Score:2, Redundant)

        Formatting has nothing to do with it. Neither does swap space or file system overhead, or anything else like that. The "lost" space isn't lost to anything like that. The 1000 vs 1024 math is the only culprit. The fact that their drive has the capability to store 1 trillion bytes doesn't make it a 1 terabyte drive. When they release a drive that can store 1,099,511,627,776 bytes, *then* they'll have a terabyte drive. A trillion bytes is only 931.3GB, period.
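        The 931.3GB figure is pure arithmetic, as a quick check shows:

```python
# A marketing "terabyte" (10^12 bytes) expressed in binary units.
trillion = 10 ** 12
binary_tb = 2 ** 40                      # 1,099,511,627,776 bytes
binary_gb = 2 ** 30
print(round(trillion / binary_gb, 1))    # 931.3 -- the familiar number
print(binary_tb - trillion)              # bytes short of a binary TB
```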
    • by Ed Avis ( 5917 )
      Yeah, and while we're at it, let's stop all that marketing bullshit in the network card industry, which sells so called 'gigabit Ethernet' cards that actually only manage 1_000_000_000 bits per second, and in processor speeds, which give you a processor several percent slower than you were expecting by using a crooked definition of gigahertz.
    • by this great guy ( 922511 ) on Monday August 13, 2007 @06:39AM (#20210143)
      Contrary to common belief, power-of-10 prefixes (as used by disk manufacturers) are much more commonly used than power-of-2 prefixes in the IT world. People claiming the contrary are wrong. Here are a few examples:
      • A 128 kbit/s audio stream is 128 * 10^3 bit/s (power of 10)
      • A 100 Mbit/s ethernet card is 100 * 10^6 bit/s (power of 10)
      • A 480 Mbit/s USB2 link is 480 * 10^6 bit/s (power of 10)
      • A 500 GByte disk is 500 * 10^9 bytes (power of 10)
      • A 56 kbaud modem is 56 * 10^3 baud/s (power of 10)
      • A 1.5 GHz processor is 1.5 * 10^9 Hz (power of 10)
      • A 6 Mbit/s DSL line is 6 * 10^6 bit/s (power of 10)
      • A 650 MByte CD is 650 * 10^6 bytes (power of 10)
      It is a total mystery to me why people think that power-of-2 prefixes should be the norm, when the only few places where they are used are to refer to the size of files and RAM sticks.

      Spread the truth. Mod me informative ;-)
      • by SoapBox17 ( 1020345 ) on Monday August 13, 2007 @06:53AM (#20210201) Homepage
        If you notice, everything listed in the parent is in powers of 10 bits (or Hz) except for disc capacities. Like everyone else said, this is because disc manufacturers want to confuse you. When talking about m/g/k bits the convention is to use powers of 10, and when talking about bytes it is to use powers of 2. Hence, as the parent said, powers of 2 are used for file sizes and RAM sizes... because those are usually in bytes.
        • Re: (Score:3, Informative)

          So you are still not convinced ? Here are some examples not based on bits or Hz:
          • A 1x 250MB/s PCI-e lane is 250 * 10^6 byte/s (power of 10)
          • A PC3200 DDR400 memory stick is 3200 * 10^6 byte/s (power of 10)
          • A 56 kbaud modem is 56 * 10^3 baud/s (power of 10)
          • A 650 MByte CD is 650 * 10^6 bytes (power of 10)
          • A 300 MB/s SATA link is 300 * 10^6 byte/s (power of 10)
          • A 4000 MB/s HT1 link is 4000 * 10^6 byte/s (power of 10)
          • And of course, a 500 GByte disk is 500 * 10^9 bytes (power of 10)
        • Re: (Score:3, Insightful)

          by MrHanky ( 141717 )
          No, when talking about RAM, where a MB is 1024 KB and a KB is 1024 bytes, you're talking about stuff connected to a memory controller that addresses it in powers of two, so that a 32 bit controller can address 4,294,967,296 bytes or 4 GiB. A disk controller works in a different way, and a disk is addressed in a different way. The only reason for demanding the same kind of numbering from a disk is when you need to know how much RAM a file will consume when you load it. Which is why a file's siz
    • Re: (Score:3, Interesting)

      by Lord Ender ( 156273 )

      For years and years and years we've used 1024

      And we were wrong to do it. Metric prefixes meant base 10 for "years and years and years" before people started trying to use them for base 2. In every industry, and part of the computer industry, metric prefixes mean base 10.

      Why fight the rest of the world over this? Now that we have binary prefixes, let's use them! This idea that metric prefixes are base 10 in networking and base 2 in storage is embarrassingly inconsistent. Let binary prefixes mean binary, and

  • Since when do we use base 1024 for counting anything but RAM? Network cards, hard disk capacity, etc. use, it seems to me, ordinary prefixes, a thousand at a time. Why the author has to go into an elaborate explanation of how you were ripped off seems pretty silly to me.

    Maybe because a few OSes decide to measure overall filesystem capacity that way, but that doesn't make it right. It really only makes sense to measure files that way when you are dealing with memory mapped files, something users are almost never awa
    • Throwback to an era of systems that had drive letters I suppose.
      You mean like, today?

      Just because you don't care about Windows doesn't mean that it isn't running on countless millions of computers right now, drive letters and all.
      • You realize that all non-removable media can be a single drive letter on Windows right? It's trivial to configure on win2k-XP. I assume Vista is the same (it might even be easier)
        • Re: (Score:3, Interesting)

          by llirik ( 1074623 )

          You realize that all non-removable media can be a single drive letter on Windows right? It's trivial to configure on win2k-XP
          Yeah, except for a small caveat that even Microsoft installers can't deal with it. I had to go back to letters once Visual Studio 2005 refused to install, claiming there was not enough space, while in fact there was plenty of space at the mount point where I wanted it to install; it stubbornly insisted on checking space at the root.
    • > Since when do we use base 1024 for counting anything but RAM?

      In the days of the Apple II, people and marketing used powers of 2 for both RAM and storage, as it's quite impractical to do otherwise when you work so close to the metal (Apple, Commodore and Spectrum users often knew the addresses of RAM and ROM blocks for their machines).

      Then some clever biz heads started using power of 10, but it was several years later.

      Unfortunately, the kilo-, mega-, etc. prefixes are only accurate for base 10.
      • The "clever" marketing company was Atari with the 520ST - they wanted to make it sound better than the Amiga with 520K of memory (it had 512K like anything else, but it was 520 in marketing terms). The same reason they has the 1040ST.

        Note that it was sometime after that point in time (don't have the exact year) that some hard drive manufacturers started to play the same games. (Only with megabytes). Back then it was common to look at a 30meg vs 32meg drive and pick the 32meg drive. So when a marketi

  • by Zebedeu ( 739988 ) on Monday August 13, 2007 @04:44AM (#20209641)

    The Tech Report has tested the 7K1000's performance, noise levels, and power consumption against 18 other drives to find out, with surprising results.

    Come on! Just tell us what the results were directly, don't make us have to break Slashdot law and RTFA!
  • Conclusion in the article: Too expensive.
  • 32 MB cache? (Score:3, Insightful)

    by dabadab ( 126782 ) on Monday August 13, 2007 @05:01AM (#20209721)
    Is there any point to these "huge" caches? My Linux system uses a few hundred MBs as disk cache, so I don't really expect another few MBs on the disk to make any noticeable difference (and, if I recall it correctly, when disks with 8 MB caches were new they did not really give any performance advantage compared to models with only 2 MB of cache).
    • Re: (Score:3, Interesting)

      by dargaud ( 518470 )

      Is there any point to these "huge" caches?
      Depends on your use... I work with a lot of images and my drive has a 16Mb cache. When I save an image that's <16Mb, it's almost instant and I can start work on the next one. If the image is >16Mb, it takes a good 5~15 seconds for the drive to thrash around until it's saved it. For me, yes, a large cache makes a difference as most of my images are in the 10~50Mb range.
      • by dabadab ( 126782 )
        That's probably because write-back caching is enabled on your HDD but disabled in your OS (that's the default with Linux). Turning on write-back caching in your OS will probably make much more difference than a larger HDD cache.

        However, write-back caching is dangerous, since in case of a power failure, it may seriously damage your filesystem.
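        A rough sketch of the OS-level half of that picture: a plain write() can sit in the OS page cache, and fsync() asks the kernel to push it to the device. Whether the drive's own write-back cache then survives a power cut is a separate, drive-level question this toy example can't test.

```python
import os
import tempfile

# A plain os.write() may only reach the OS page cache; os.fsync() asks
# the kernel to flush this file's buffers down to the device.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"precious data")
    os.fsync(fd)                  # flush OS buffers for this file
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 64)        # read back what hit the file
finally:
    os.close(fd)
    os.remove(path)
print(data)
```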
  • by emj ( 15659 ) on Monday August 13, 2007 @05:12AM (#20209777) Journal
    The problem is this will be full in 24h with a 100Mbps connection anyways, or ~6 hours if you live in Sweden.
  • by Anonymous Coward on Monday August 13, 2007 @05:19AM (#20209795)
    Yes, but does it Destroy Planets ?
  • Meaningful tests? (Score:5, Insightful)

    by mrkh ( 38362 ) on Monday August 13, 2007 @05:27AM (#20209825)
    I'm not that convinced by the testing methods here. The boot and load times page shows 20 seconds difference between the slowest and fastest drives which they barely comment on, and yet the drive with the slowest boot time is among the quickest when loading Far Cry and Doom 3? Something is not right there.

    And if they're really timing level loads with a stopwatch, why on earth are they quoting 2 decimal places (and besides, the variability in reaction time is accounting for most of the supposed differences in any case). Half of their tests don't appear to tell anybody anything significant, and the most worthwhile page in there is the conclusion. Pretty graphics though.

    • Agreed. Their testing methods are pretty weird and their results don't show anything at all. The Caviar SE16 is really fast in one test and then slow in another. Instead of benchmarking how fast Doom 3 loads a level or how fast Windows boots, it would have been much more interesting to see some low-level performance tests. How fast can the disk write 1KB to 1,000 10KB files spread out on the disk when it is full? Synchronously? Test the same thing for reads. Such tests would have tested the seek times for t
  • Real-world use (Score:5, Interesting)

    by zuki ( 845560 ) on Monday August 13, 2007 @07:31AM (#20210399) Journal
    Been using this drive as my primary music streaming audio drive while on the road, with rugged real-world everyday mission-critical use
    in front of thousands of people, where one mishap is already too much.

    So far things have been flawless, and it has made a huge difference for me due to portability compared to anything else of the same capacity,
    as previously this meant a two-drive combo with a heftier power supply.

    The weight and size make it easier to have it as a carry-on item, rather than in my checked luggage!
    As far as performance, it has been able to handle 4 simultaneous 24-bit / 96 kHz audio tracks playing back with no hiccups whatsoever.
    The drive-to-drive copying in Firewire 800 or SATA has been quite speedy and error-proof.... (copying 900 gig at a time is always a good test)
    Dream come true if you ask me.... I still carry a backup anyway, LOL!
    (ymmv(TM), batteries not included, kids don't try this at home, etc....)

  • by AbRASiON ( 589899 ) * on Monday August 13, 2007 @07:36AM (#20210433) Journal
    So this baby has 200gb platters, it sounds all impressive and all, except we've had 188gb platters for ages now.
    Seagate has announced (and released, I think?) their 1TB HDD with only 4 platters (cooler, quieter, less power, less weight, less cost to manufacture) that's 250gb a platter

    Samsung have announced the F1 using 333GB per platter! 1.6TB if they copy Hitachi and slap 5 of them in a 3.5" unit - or rather 333gb single platter, light, cheap drives, be damned if anyone can find the F1 yet though :/

  • Solid State? (Score:3, Interesting)

    by SharpFang ( 651121 ) on Monday August 13, 2007 @07:41AM (#20210459) Homepage Journal
    Interestingly, this form factor would neatly fit some 512 MicroSD cards leaving enough room for mechanics (slots, frame) and electronics. Take 512 2GB cards, you get 1 terabyte of solid state memory. Each of the cards can work independently from the others = easy RAID of 512 disks = quite insane speeds possible, and cheap replacement of failing parts (you replace a single failing card, not the whole device). Of course the price would be higher, but still the 1TB drive isn't cheap for sure, and without RAID.
  • 5.25"? (Score:3, Interesting)

    by Doc Ruby ( 173196 ) on Monday August 13, 2007 @08:40AM (#20210787) Homepage Journal

    The drive is the first to pack a trillion bytes into a standard 3.5" form factor

    Hard drives used to be physically much bigger, back when the interface tech was MFM: 5.25" diameter, and "full height" was about 3.5" tall.

    Physically smaller discs have faster access times and lower power consumption. But why not use larger discs for their higher data capacity, without wrapping each smaller chunk in the same electronics overhead for rotation and data transfer? And get the faster data transfer at the outer cylinders from their faster angular velocity?

    At a guess, I'd say that a 5.25" full height HD could have 2.5x the 3.5" capacity per platter, and probably at least 5x the platters, for about 12x the capacity. The access times across the large areas would be larger, but for large files that wouldn't matter as much (as long as they're kept defragmented).
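    The 2.5x-per-platter guess can be sanity-checked with simple geometry, since recordable area scales roughly with the square of the diameter (this toy estimate ignores the spindle hole and landing zone, so treat it as an upper bound):

```python
# Platter area scales roughly with diameter squared; the spindle hole
# and landing zone are ignored, so this is an upper-bound toy estimate.
area_ratio = (5.25 / 3.5) ** 2
print(round(area_ratio, 2))   # ~2.25x the area of a 3.5" platter
```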

    These truly "large" drives could be the best for archiving, thrown back in place after an emergency and gradually replaced with 3.5" disks (if necessary) as they continue to run.

    We could have 12TB drives with the same encoding tech as these Hitachis. And they'd cost less per TB than the 3.5" ones, because they'd have more storage per overhead hardware. Where can I get one?
    • Re:5.25"? (Score:4, Insightful)

      by Fweeky ( 41046 ) on Monday August 13, 2007 @09:59AM (#20211511) Homepage
      Except they'd have more parts, more complexity, and the larger components would need to be made to even finer tolerances since they need to remain well aligned over a much larger area (and they'd need to be stronger if you wanted to keep the same sort of RPM). They'd be much more expensive, and you'd probably still have to drop the density per platter a lot to keep it within the realm of sanity, not least because of things like thermal expansion having a much larger effect.

      File this next to the idea of disks with multiple drive head assemblies; possible, but just not worth it when you could just fit more, smaller, cheaper, independent disks in the same space.
  • by Pigeon451 ( 958201 ) on Monday August 13, 2007 @09:01AM (#20210967)
    Several years ago when IBM released their much hyped Deskstar performance series hard drives, I bought one. It was more expensive than the others, but since I was doing some video work at the time, I figured I would splurge even though I was a student.

    It died a horrible death only three years later, just outside of warranty. Despite a class action lawsuit against IBM (in the US, not Canada) I couldn't get it replaced. There was apparently a fix for it, simply by downloading a program, but really, who looks for updates to their hard drives?

    IBM further went into my bad books, after it simply sold off the business to Hitachi instead of fixing their mess. It really left a sour taste in my mouth for IBM ...

  • by Brane2 ( 608748 ) on Monday August 13, 2007 @09:11AM (#20211037)
    While Hitachi uses 5 platters for 1TB, the Spinpoint F1 manages to pack that space onto only 3 platters, so it should be faster, quieter and lower power than the Hitachi. Not to mention a good deal cheaper.
  • On another note... (Score:4, Insightful)

    by gerardrj ( 207690 ) on Monday August 13, 2007 @12:14PM (#20213123) Journal
    The article states "As the first hard drive to reach the terabyte mark, Hitachi's Deskstar 7K1000 will be remembered, too. Squeezing a trillion bytes into a 3.5" hard drive form factor is a monumental engineering achievement"

    I doubt that anyone will remember this in a year. Quick; what was the model and manufacturer of the first drive to pass 500GB, or 1GB. Both were monumental engineering achievements in their time. These milestones will not be remembered because they are all evolutionary; a 10-30% jump in capacity. When we see 10x capacity increases in one generation, THAT name might be remembered.

    That said.. good job Hitachi, but we all know that WD and Seagate will be out with their versions in a month or so.
