Changes in HDD Sector Usage After 30 Years

freitasm writes "A story on Geekzone tells us that IDEMA (the International Disk Drive Equipment and Materials Association) is planning to implement a new standard for HDD sector usage, replacing the old 512-byte sector with a new 4096-byte sector. The association says the larger sector will be more efficient. According to the article, Windows Vista will already ship with support for it."
  • Cluster size? (Score:3, Interesting)

    by dokebi ( 624663 ) on Friday March 24, 2006 @03:13AM (#14986251)
    I thought cluster sizes were already 4KB for efficiency, and that LBA handled larger drive sizes. So how does changing the sector size change things? (Especially since we don't address drives by cylinder/head/sector anymore.)
  • by AngelofDeath-02 ( 550129 ) on Friday March 24, 2006 @03:18AM (#14986270)
    The best analogy is a gym locker room.
    You have, say, 10 lockers up and 20 lockers across.
    You can only put one thing in a locker, so you can't put your gym shorts in the same one as your shoes. But if you have lots of socks, you can pile them in, and take up two or three if necessary.

    Space is wasted if you have a really big locker, but it's only holding a sock.

    Now, you've got to record where all of this stuff is, or you will take forever to find that sock. So you set aside a locker to hold the clipboard with designations.

    Now to bring this back into real life. There are a _lot_ of sectors on a disk. So keeping track of all of them starts requiring a substantial amount of resources. I imagine they are finding it easier to justify wasting space for small files in order to make it easier to keep track of them. Average file sizes are also going up, so it's not as big a problem as it used to be either. It's all relative...
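
    A minimal C sketch of the slack-space arithmetic behind the locker analogy; the file and sector sizes are just illustrative:

        #include <stdio.h>

        /* Round a file size up to a whole number of sectors. */
        static unsigned long long allocated(unsigned long long size,
                                            unsigned long long sector)
        {
            return (size + sector - 1) / sector * sector;
        }

        int main(void)
        {
            const unsigned long long files[]   = { 100, 1500, 4096, 10000 };
            const unsigned long long sectors[] = { 512, 4096 };

            for (int s = 0; s < 2; s++)
                for (int f = 0; f < 4; f++) {
                    unsigned long long a = allocated(files[f], sectors[s]);
                    printf("%5llu-byte file, %4llu-byte sector: %5llu allocated (%llu wasted)\n",
                           files[f], sectors[s], a, a - files[f]);
                }
            return 0;
        }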
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Friday March 24, 2006 @03:19AM (#14986276)
    Small devices like cellphones typically save files of several kilobytes, whether they be the phonebook database or something like camera images. Whether the data is saved in a couple large sectors or 8 times that many small sectors isn't really an issue. Either way will work fine, as far as the data is concerned. The biggest problem is the amount of battery power used to transfer those files. If you have to re-issue a read or write command (well, the filesystem would do this) for each 512-byte block, that means that you will spend 8 times more energy (give or take a bit) to read or write the same 4k block of data.

    Also, squaring away each sector after processing is a round trip back to the filesystem which can be eliminated by reading a larger sector size in the first place.

    Some Serial ATA disks already force a minimum 4096-byte sector size. It's not necessarily the best way to get the most usage out of your disks, but it is one way of speeding up the disk just a little bit more to reduce power consumption.
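
    A rough C sketch of that command-count arithmetic; the per-command energy cost is an assumed placeholder, not a measured figure:

        #include <stdio.h>

        int main(void)
        {
            const unsigned page = 4096;             /* one 4K block of data   */
            const unsigned sector_sizes[] = { 512, 4096 };
            const double cost_per_cmd = 1.0;        /* assumed unit of energy */

            for (int i = 0; i < 2; i++) {
                unsigned cmds = page / sector_sizes[i];
                printf("%4u-byte sectors: %u command(s), relative energy %.0f\n",
                       sector_sizes[i], cmds, cmds * cost_per_cmd);
            }
            return 0;
        }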
  • by danmcn ( 960457 ) on Friday March 24, 2006 @03:29AM (#14986310)
    Isn't this what Apple tried to do 5+ years ago with HFS+?
  • by dltaylor ( 7510 ) on Friday March 24, 2006 @03:29AM (#14986312)
    Competent file system handlers can use disk blocks larger or smaller than the file system block size, but there are some benefits to using the same number for both. Larger blocks may provide more data per drive and let you index larger drives with 32-bit numbers. On the other hand, the drive has to use better (larger and more complex) CRCs to ensure sector data integrity, the granularity of replacement blocks may end up wasting more space simply to provide an adequate count of replacements, and there are still some disk space management tools that insist on working in terms of "cylinders", regardless of the fact that disk drives have had variable density zones for ages. The range from 4K (a common disk block size) to 16K works as a decent compromise.

    "Back in the day" running System V on SMD drives, where you could use almost any block size from 128 Bytes to 32K (the CRCs were weak after that) and control the cylinder-to-cylinder offset of block 0 from the index, I spent a few days trying different tuning parameters and found that, due to the 4K size of the CPU pages, and of the file blocks and swap it really did give a significant improvement in performance. I tried 8K and 16K, because the file system handler could be convinced to break them up, but didn't get any better performance, so used 4k for the spares granularity.

    Perhaps I should take one of my late-model SCSI drives, which support low-level reformatting, and try the tests again. 16KByte file system blocks on 16KByte sectors might really be a win now. Have to do some research to see what I can do with CPU page sizes, too.
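
    A small C sketch of the size-mismatch bookkeeping being discussed, for the case where the file system block is the larger of the two and both sizes are powers of two (names are made up):

        #include <stdio.h>
        #include <stdint.h>

        /* Map a file system block number to the device sectors backing it. */
        static void fs_block_to_sectors(uint64_t fs_block, uint32_t fs_block_size,
                                        uint32_t sector_size,
                                        uint64_t *first, uint32_t *count)
        {
            *count = fs_block_size / sector_size;   /* sectors per FS block */
            *first = fs_block * *count;
        }

        int main(void)
        {
            uint64_t first;
            uint32_t count;

            fs_block_to_sectors(10, 4096, 512, &first, &count);
            printf("4K FS blocks on 512-byte sectors: block 10 = sectors %llu..%llu\n",
                   (unsigned long long)first, (unsigned long long)(first + count - 1));
            fs_block_to_sectors(10, 16384, 4096, &first, &count);
            printf("16K FS blocks on 4K sectors:      block 10 = sectors %llu..%llu\n",
                   (unsigned long long)first, (unsigned long long)(first + count - 1));
            return 0;
        }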
  • by Animats ( 122034 ) on Friday March 24, 2006 @04:05AM (#14986391) Homepage
    The real reason for this is that as densities go up, the number of bits affected by a bad spot goes up. So it's desirable to error correct over longer bit strings. The issue is not the size of the file allocation unit; that's up to the file system software. It's the size of the block for error correction purposes. See Reed-Solomon error correction. [wikipedia.org]
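
    A back-of-the-envelope C sketch of why correcting over longer blocks pays off; the 40- and 100-byte ECC overheads are assumed figures often quoted for the 4K transition, not numbers from the article:

        #include <stdio.h>

        int main(void)
        {
            /* assumed ECC overhead per sector */
            const double ecc_512  = 40.0;   /* bytes of ECC per 512-byte sector  */
            const double ecc_4096 = 100.0;  /* bytes of ECC per 4096-byte sector */

            double eff_512  = 512.0  / (512.0  + ecc_512);
            double eff_4096 = 4096.0 / (4096.0 + ecc_4096);

            printf("512-byte sectors:  %.1f%% of raw media holds data\n", eff_512  * 100);
            printf("4096-byte sectors: %.1f%% of raw media holds data\n", eff_4096 * 100);
            return 0;
        }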
  • by bjpirt ( 251795 ) on Friday March 24, 2006 @04:50AM (#14986499)
    I wonder if the 4096 bytes are before or after error correction. If it's after, it might make sense because (and I'm sure someone will correct me) isn't 4K a relatively common minimum size in today's filesystems? I know that the default for HFS+ on a Mac is.
  • File sizes (Score:3, Interesting)

    by payndz ( 589033 ) on Friday March 24, 2006 @04:59AM (#14986518)
    Hmm. This reminds me of the time when I bought my first external FireWire drive (120GB) and used it to back up my 10GB iMac, which had lots of small files (fonts, Word 5.1 documents, etc). Those 10GB of backups ended up occupying 90GB of drive space because the external drive had been pre-formatted with some large sector size, and even the smallest file took up half a megabyte! So I had to reformat the drive and start again...
  • by maxwell demon ( 590494 ) on Friday March 24, 2006 @05:46AM (#14986636) Journal
    This is of course only true for file systems which cannot allocate partial blocks.

    Of course one effect of the new sector size will be that old filesystem drivers, especially those which come with old OSes, will likely not be able to use those disks. Which in effect means that if you want to use such a disk, you will have to upgrade your OS.
  • by TapeCutter ( 624760 ) on Friday March 24, 2006 @06:07AM (#14986695) Journal
    "So, all they doing is pushing this abstraction layer to the hardware, thus getting rid of an unnecessary layer, if I understand it correctly?"

    Nah, nothing that significant. The operating system does/should not "know" anything about how the data is physically stored by a device. The existing O/S storage abstractions will remain. (You may have trouble running a very old O/S but that would be just one of your problems)

    Every modern O/S uses disk space as virtual memory by reading and writing chunks of RAM to the HDD when it runs out of physical RAM. The standard HDD sector size is changing to the most commonly used O/S size for memory "pages" (RAM chunks written to disk).

    The larger size will (in theory) speed things up a tiny amount. The HDD will now read/write a "page" to disk in one sector rather than eight, meaning it performs fewer administrative functions to swap RAM back and forth to the disk. Hardly anyone will notice this, but constant minor tweaking of HDD internals has evolved them very rapidly. E.g.: in 1990 I paid $200AU for a second-hand 20MB HDD (~0.2 SECOND seek time!).
  • Re:4MB (Score:4, Interesting)

    by diegocgteleline.es ( 653730 ) on Friday March 24, 2006 @08:00AM (#14986960)
    Also, 4 KB is the size of a page in the x86 architecture. Some operating systems would have problems (i.e., they'd need to rewrite something) to handle block sizes bigger than 4 KB.
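
    A quick POSIX C check of the page size in question; on x86 it prints 4096:

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            long page = sysconf(_SC_PAGESIZE);  /* CPU page size in bytes */
            printf("CPU page size: %ld bytes\n", page);
            return 0;
        }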
  • by WWWWolf ( 2428 ) <wwwwolf@iki.fi> on Friday March 24, 2006 @11:37AM (#14987993) Homepage

    Well, current Linux bootloaders probably deal with lack of space just fine. For example, GRUB installs itself as a 512-byte stub loader ("stage 1") plus the rest of the boot loader stored in an ordinary file in the filesystem ("stage 2"). I don't think GRUB's design will change much: it's designed so that stage 2 and menu.lst can be updated without touching the boot block anyway.

    And it's probably not the OS or boot loader that sets limits to the boot block size, it's probably the BIOS that loads the stuff to memory...

  • Re:LBA (Score:3, Interesting)

    by jesup ( 8690 ) * <randellslashdot&jesup,org> on Friday March 24, 2006 @01:10PM (#14988836) Homepage
    Not all operating systems use block/sector numbers at the device-driver level (and there are good arguments against it, though most OS's do it).

    The Amiga used byte-offsets and lengths for all IO's. This did eventually cause problems when disk drives (which started at 10-20MB when the Amiga was designed) got to 4GB, but a minor extension allowing 64-bit offsets solved that. 64-bit offsets shouldn't overflow very soon....

    For the device driver, it's no big deal to shift the offset if the sector size is a power of two, and it allows weird-ass devices with non-power-of-two sector sizes (like old Mac SCSI drives), devices without a sector paradigm, etc., all to use the same API. Thus you can mount a 2048-byte-block FS on a 512-byte-sector device without knowing or caring; you can (with a cooperative device driver) mount a 512-byte FS on a 2048-byte-sector device, if the device is willing to accept arbitrary-offset transfers (which they can, though it hurts speed); or you can mount a block-oriented FS on a bytestream-oriented device (like a file...).
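
    A C sketch of the offset-to-sector translation described above, with made-up names; for power-of-two sector sizes the compiler reduces the divides to shifts:

        #include <stdio.h>
        #include <stdint.h>

        /* Map an arbitrary (offset, length) request, length > 0, to the
           range of sectors it touches. */
        static void to_sectors(uint64_t offset, uint64_t length, uint32_t sector,
                               uint64_t *first, uint64_t *count)
        {
            uint64_t last_byte = offset + length - 1;
            *first = offset / sector;                  /* a shift when sector */
            *count = last_byte / sector - *first + 1;  /* is a power of two   */
        }

        int main(void)
        {
            uint64_t first, count;

            to_sectors(1000, 3000, 512, &first, &count);
            printf("512-byte sectors:  start %llu, count %llu\n",
                   (unsigned long long)first, (unsigned long long)count);
            to_sectors(1000, 3000, 2048, &first, &count);
            printf("2048-byte sectors: start %llu, count %llu\n",
                   (unsigned long long)first, (unsigned long long)count);
            return 0;
        }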

  • by InfiniteWisdom ( 530090 ) on Friday March 24, 2006 @03:17PM (#14989898) Homepage
    You could easily have a "compatibility" mode where the interface returns 512-byte blocks even though it's stored internally as 4096-byte blocks. You'd sacrifice performance, of course, but that's probably not a huge issue when you're running legacy systems on newer hardware.
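
    A compact C sketch of such a compatibility mode, with a tiny in-memory array standing in for the media; the names and sizes are illustrative. The read-modify-write in logical_write is where the performance cost comes from:

        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>

        #define LOGICAL   512u                  /* sector size the host sees */
        #define PHYSICAL 4096u                  /* sector size on the media  */
        #define NSECT       4u                  /* tiny in-memory "disk"     */

        static uint8_t disk[NSECT][PHYSICAL];

        static void phys_read(uint64_t p, uint8_t buf[PHYSICAL])
        { memcpy(buf, disk[p], PHYSICAL); }

        static void phys_write(uint64_t p, const uint8_t buf[PHYSICAL])
        { memcpy(disk[p], buf, PHYSICAL); }

        /* Write one 512-byte logical sector by rewriting its 4K container:
           a read-modify-write cycle, the price of the emulation. */
        static void logical_write(uint64_t lsect, const uint8_t data[LOGICAL])
        {
            uint8_t buf[PHYSICAL];
            uint64_t psect  = lsect / (PHYSICAL / LOGICAL);
            unsigned offset = (unsigned)(lsect % (PHYSICAL / LOGICAL)) * LOGICAL;

            phys_read(psect, buf);
            memcpy(buf + offset, data, LOGICAL);
            phys_write(psect, buf);
        }

        int main(void)
        {
            uint8_t sector[LOGICAL];
            memset(sector, 0xAB, sizeof sector);
            logical_write(5, sector);   /* logical 5 -> physical 0, offset 2560 */
            printf("disk[0][2560] = 0x%02X\n", disk[0][2560]);
            return 0;
        }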
