Automated Tiered Storage Coming to Desktops?

roj3 writes "Tiered storage has been the scourge of administrators because the vendors tell us to hold meetings with all departments and then classify data into storage tiers based on its type or relative importance. eWeek has a story about a new approach to tiered storage — sorting it all by usage patterns. Regularly used data goes on high-performance storage, idle data goes on slower/cheaper storage. Volumes and files even span several types of drives or RAID levels. Is automated tiered storage headed to desktops?"
This discussion has been archived. No new comments can be posted.


  • Networks, sure. (Score:5, Insightful)

    by celardore ( 844933 ) on Monday June 26, 2006 @01:58PM (#15607018)
    I can see the usefulness of this technology over a busy network with multiple users and masses of files and storage... I just can't see needing anything more than a mirror&stripe RAID array on a PC with only one user. Even that could be considered excessive.
    • by sseaman ( 931799 ) <sean.seaman@nOsPAm.gmail.com> on Monday June 26, 2006 @02:05PM (#15607096) Journal
      From its beginnings, the Hard Drive has leveled the playing field for all files. Everyday files can have their content read by thousands, even millions of processes.

      The Coalition of Unused Files believes that the desktop is a crucial engine for personal and economic growth. They are working together to urge System Admins to preserve IDE Neutrality, the First Amendment for the Desktop Hard Drive that ensures that the Desktop remains open to innovation and progress.
    • Re:Networks, sure. (Score:5, Interesting)

      by dsginter ( 104154 ) on Monday June 26, 2006 @02:26PM (#15607269)
      I think we'll actually see the opposite:

      With multiple PCs per household, it makes sense to get rid of the hard drives at the PC level and put them in a RAID enclosure that is secured into a wall.

      This, however, is a threat to Microsoft because you'll be able to PXE-boot any image of your choice (just think that perhaps your employer or bank supplies their own secure image in order to connect to their resources). Someone needs to get Windows to PXE boot at the hardware level (emulate IDE or something).

      This will be huge but we've got to squeeze Microsoft into it, first. Then, everyone will be free to try linux and see what we've all been jabbering about.
      • Re:Networks, sure. (Score:5, Insightful)

        by 0racle ( 667029 ) on Monday June 26, 2006 @02:50PM (#15607476)
        That only makes sense if the people in the household wish to learn how to use what you've mentioned. Since current evidence points to the fact that most people look at computers as a magical box that cannot be understood, the chances of them learning to do a fraction of what you suggest are about the same as your chances of winning the lottery.

        The XP file sharing wizard is too much for a lot of people, and you think a RAID array serving up OS images over the network via PXE makes sense?
        • It will be installed by the same people that currently set up file sharing for people: the neighbor kid.

          It doesn't really matter if the common person can install something; they won't be doing it anyway.
    • At my home I use tiered storage of a type. I have a 10K SATA drive for my main OS and for all video files being worked on (and of course my Battlefield and Oblivion home folders). I also have a couple of standard 7200 RPM SATA drives and one IDE drive for mass storage. I also have a shared folder for PCs that connect to my network on a separate server.

      Granted, I am an IT professional who can manage all this, but I think that we will definitely see the average home user get into tiered storage. Think about digit

      • Re:Networks, sure. (Score:5, Insightful)

        by jwjcmw ( 552089 ) on Monday June 26, 2006 @03:09PM (#15607651)
        "Life is changing to the digital a bit more evey day. And just as we have cardboard boxes in our attic holding the things we dont use, file cabinets in our office alphabetized, firesafes for important documents, and Safe Deposit boxes for wills. The average home user will need to know and use the digital equivalents."

        Or, if you are like many people, you have documents on your desk and in piles on the floor that you will never use, your kid's birth certificate is in a stack of papers from when you had to take it to school for registration, your file cabinets have partially labeled folders that are in chronological order...as in the order that you stuffed them in the filing cabinet, your will is in the "to be filed" folder in the bottom of said filing cabinet, and you could fill the bathtub with your old phone and electric bills.

        Hopefully the digital equivalents will be better for the organizationally challenged.
        • you have documents on your desk and in piles on the floor that you will never use

          And your wife agrees with this system?

          *dumps gf*

          QUICK WHERE DID YOU GET HER??!

    • Back in Mac OS 8 days, I used to use DiskExpress Pro [alsoft.com]. I had configured it to put the most used files at the outer cylinders (i.e. the fastest part) of the drive, and the less used files on the inner cylinders.

      The software would analyze file usage, and move them around every day. The anecdotal evidence I have that it worked on such small scale was that my girlfriend later asked me how I got the computer to start responding faster.

      I don't know how well this technology would help on newer systems. I suspect at lea
      • "but I do think that people should be at least mirroring their drives"

        But people don't seem to think the same. And it's their data, after all.

        "I have heard too many people complain about losing something important because of hard drive failure."

        Did they complain to the point of asking a hardware vendor for a RAID1 off-the-shelf (of course, they wouldn't ask "give me a RAID1", but they'd answer positively to a hardware vendor advertising "no more data loss! our patented 'doubledisk' technology secures your data
    • I think when referring to desktops, they mean user desktops in a corporate setting.

      In our case, we do not back up the desktops and constantly remind people that if they do not sync their data to the file server, they will deserve the pain when (not if) the disk crashes. Everyone gets a quota on their personal area and we tell them not to save crap like MP3s or AVIs to the server or the files will not last long. This saves backup tapes for the actual corporate data.

      Project data gets saved in project speci
    • For the average Joe, true. And at $50k, that's not for the desktop. But extended to a scaled-down version, this tech could save me time and make the entire disk subsystem more efficient.

      I'm a Windows web/database developer by day, and when I have 4 different .Net projects, 3 Visual Foxpro and a Foxpro 2.6 project open at once like I did today, even a gig of RAM gets eaten up. Windows loves to use that swapfile even if you've got a gig free, so that disk was working overtime as I switched between them. I
  • Great Idea (Score:5, Insightful)

    by Jazz-Masta ( 240659 ) on Monday June 26, 2006 @01:58PM (#15607022)
    This is exactly what everyone is looking for. People defrag their hard drives in the hope of increasing performance. There is no reason why storage that is accessed more shouldn't be on the high-performance drives. Or at least some sort of class rating that defines what storage may need high performance: for example, automatically installing and saving 3D Max to RAID 0 storage, and saving Word documents to the lesser-performing drives.

    I try to follow this idea all the time with my system. Fast stuff goes on RAID 0; slow stuff and backup stuff go on the ole' 200 GB backup drive.
    • Re:Great Idea (Score:5, Informative)

      by mollog ( 841386 ) on Monday June 26, 2006 @03:09PM (#15607654)
      Hewlett-Packard Company developed a product that did this automagically. It was an external RAID system that connected via one or two SCSI busses to a host. All incoming data was stored in RAID 0/1; striped and mirrored. (aka RAID 6 and RAID 10). As the storage filled up, unused data was automagically migrated to more space-efficient RAID 5. Data that had been accessed recently remained in RAID 0/1. You could add disk drives and it would automagically include the drives (but you would have to use LVM or other utilities in the OS to increase its file system.) You could mix two drive sizes, say, 18GB and 36GB, without trouble. If a drive failed, the array would rebuild redundancy. If another drive failed, ditto. It was fast, it was fully redundant.

      But it was a lot smarter than the admins who had to use it so it wasn't very popular.

      • RAID 0+1 [wikipedia.org] and RAID 1+0 [wikipedia.org] are subtly different. And RAID 6 [wikipedia.org] is completely different. There is no mirroring at all in RAID 6. RAID 5 is a special case of an m+n parity scheme where n=1. RAID 6 is a special case where n=2. It allows for the simultaneous failure of any 2 drives in the array without data loss. The RAID 6 algorithm is somewhat more computationally intensive than the RAID 5 algorithm, but this is typically only of practical importance to embedded systems and software RAID arrays running applications a
    • Re:Great Idea (Score:4, Insightful)

      by pla ( 258480 ) on Monday June 26, 2006 @03:46PM (#15607945) Journal
      This is exactly what everyone is looking for.

      No.

      You (and a number of other posters on this topic) have described what we look for - Geeks who want to get the most out of their systems with the least expense. If I could get killer performance with a RAID0 of tiny but fast drives (think Raptors, or even Cheetahs if you don't mind dealing with SCSI), while still having the capacity of a cheap 400GB IDE drive - Of course I'd have such a setup (and in fact, many of us already do, we just manually transfer things to/from the big-n'-slow).

      Most people, however, do not want this. For starters, most people don't even need the huge drives they already have - If you gave them just the pair of RAID0 36GBs, they'd never use even half that capacity, so no need for ever moving files to the slow storage. Then failing that, the members of the Sixpack family that manage to store hundreds of GB only fill it with downloaded porn, music, and movies - Uses that really don't need fast drives, just tons of space.


      So while it sounds useful in theory - in practice, such a setup would just add cost and complexity without providing any tangible benefit to most users. I suspect even most Geek users would rarely notice the difference (aside from OS load times), and would only make such a setup for bragging rights.
  • by Kaenneth ( 82978 ) on Monday June 26, 2006 @01:58PM (#15607029) Journal
    Registers, CPU cache, on-chip cache, RAM, local disk, Network/Removable Media, Paper/Human memory...

    It's all about feeding that data hungry CPU, as quickly as possible.
  • Not so new... (Score:5, Interesting)

    by Duncan3 ( 10537 ) on Monday June 26, 2006 @01:59PM (#15607032) Homepage
    I was using systems that did this 10 years ago. Granted, back then it was disk+tape not different speed disks, but it's the exact same thing.

    Looks to me like an excuse to charge 8-10x what you should be paying for storage of that size.
    • Re:Not so new... (Score:3, Informative)

      by truthsearch ( 249536 )
      Ten years ago you had something automated that determined where the files should go and moved them appropriately? It analyzed usage patterns? I'd really like to know what older systems had such features as I've never seen them.
      • Re:Not so new... (Score:3, Interesting)

        by drinkypoo ( 153816 )
        I know bugger all about them, so I can't vouch for the accuracy of this information, but the county's servers in the basement of the Santa Cruz County Courthouse - some of those big goofy IBM mainframes that require their own AC system - have been ticking away since time immemorial... and according to one of the sysops (someone who worked down there), they have tiered storage which will automatically put stuff on magtape, and then ask them for the tape again later when the records are accessed. I guess a lot of
        • That's not the same thing. At all. Read the article again.
          • Re:Not so new... (Score:3, Informative)

            by drinkypoo ( 153816 )

            That's not the same thing. At all. Read the article again.

            You know, I did read it, and what they're talking about is that data that is less used/less critical gets moved to slower/less reliable storage automatically.

            And when you have only two kinds of storage, a DASD bank and mag tape, and your system automatically writes least used data to tape and tells you to file it, and asks you for tapes when it needs them - well, I'd say the two are highly analogous. The fact that the slower storage is offline

      • Re:Not so new... (Score:5, Informative)

        by dpilot ( 134227 ) on Monday June 26, 2006 @02:53PM (#15607499) Homepage Journal
        It was called HSM (Hierarchical Storage Management); it ran on IBM's MVS mainframes, and it moved your less-used data to cheaper storage, in several stages. IIRC, the first stage was just compression on a different disk, the second stage was tapes in a jukebox-type thing, and the third stage was tapes that an operator fetched and loaded. Somewhere way back there, data never used for 5 years fell off the end of the belt, but you got warned first.

        The day after vacation, when you kept getting the message, "DFHSM is recalling dataset xyz for user jkl" as it pulled all of your storage back online was a pain, and we all thought it would be neat to get rid of, as we migrated to workstations. But in retrospect, HSM was great, never having to worry about your data quantity. That's compared with having to root through $HOME every few months to take care of quota problems.
        • Any word on something like that for Linux fileservers? I am envisioning (as a first pass thought) a cron job with find -ctime that replaced the file with a symlink to the online compressed storage, but you may need some kind of hook into Samba or a lower level hook into the FS itself that grabbed it out of "cold storage" so to speak.
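          A minimal sketch of that first-pass idea in Python (the paths, the idle threshold, and the use of atime rather than ctime are all assumptions; it only covers the migrate-out half, not the Samba or VFS hook that would recall files transparently):

            # migrate_cold.py - move files untouched for N days to compressed cold
            # storage, leaving a symlink behind. Run from cron; recall is manual
            # (follow the link, gunzip, copy back). Paths and threshold are illustrative.
            import gzip
            import os
            import shutil
            import time

            HOT_ROOT = "/srv/share"      # assumed live fileserver path
            COLD_ROOT = "/srv/cold"      # assumed slow/cheap storage mount
            MAX_IDLE_DAYS = 180

            cutoff = time.time() - MAX_IDLE_DAYS * 86400

            for dirpath, dirnames, filenames in os.walk(HOT_ROOT):
                for name in filenames:
                    src = os.path.join(dirpath, name)
                    if os.path.islink(src) or os.lstat(src).st_atime > cutoff:
                        continue                          # recently used, or already migrated
                    dst = os.path.join(COLD_ROOT, os.path.relpath(src, HOT_ROOT)) + ".gz"
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
                        shutil.copyfileobj(f_in, f_out)   # compress onto the cold tier
                    os.remove(src)
                    os.symlink(dst, src)                  # leave a pointer on the hot tier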

        • Re:Not so new... (Score:2, Informative)

          by Anonymous Coward
          Guys, guys, guys, you talk about HSM like it is old and gone. On the contrary, it is the future! We use it everyday where I work, using a program called SAMFS. We have a tape robot and a large disk cache. Data that is used often stays on the cache. Less used data goes to tapes. SAMFS sorts out when/how to do that. The system is great not only because the software figures all of this out for you, but also because it works as your backup software. We switched to this system about 4 years ago when we r
          • While it is true that getting less used data OFF of the tape takes some time, it is not that bad if it is data you don't use very often. Depending on the file, it seems to take about 8 minutes or so for us. I think that's because the tape has to find the file, but once it has, it can just copy it off.

            Something to check on - when I was actually looking at these kinds of systems 10 years ago, they helped that issue by using Magneto-Optical disks in between disk and tape. It may be that your installation is l
      • Re:Not so new... (Score:4, Informative)

        by Doctor Memory ( 6336 ) on Monday June 26, 2006 @03:05PM (#15607608)
        you had something automated that determined where the files should go and moved them appropriately? It analyzed usage patterns?


        Oh yeah. BITD, there was the archiver, a job that ran every night and moved files that hadn't been accessed in the last N time periods to tape. It left the VTOC entry (kind of like an inode), just marked it "archived" and recorded the label of the tape. Then, the next time that file was accessed, a hook in the open() call would send a message to the console operator telling them to mount tape such-and-such. When the tape was mounted, the archiver would automatically copy the file back into place, the open() call would complete normally, and life was good. Basically transparent to the user (they'd look at their directory and all their files would be there), except for the fact that the file open would take two to three minutes. Then again, since they were paying for disk storage by the block-day, they were generally pretty happy to only pay for a fifty-cent tape mount every quarter instead of keeping that 1200-block file on-line for three months when they weren't using it.
      • Re:Not so new... (Score:1, Informative)

        by Anonymous Coward
        I'd really like to know what older systems had such features as I've never seen them.

        Novell.
    • Re:Not so new... (Score:3, Insightful)

      by hotrodman ( 472382 )

      No kidding. So they find a way to put less-used data on slower disks, that still COST NEARLY AS MUCH. The entry price is still listed as $50,000. Big fuckin' deal. Let me know when you take a bunch of garden-variety servers, and do this, with the super cheap clone raid server with 40 terabytes of SATA as the 'last tier' for slowest files, where I can build 100 terabytes for $50,000.

      And yet, managers will get a woody over this buzzword compliance and want to give these guys million
    • "I was using systems that did this 10 years ago. Granted, back then it was disk+tape not different speed disks, but it's the exact same thing."

      Got you beat there. I was using a similar system nearly 30 years ago at university - again disk and tapes. The O/S was GEORGE III running on an ICL xxxx (can't remember). Very useful in the days when your disk quota was measured in kilobytes; the totally automatic migration to tape tended to keep your disk usage down.

      One problem, though. After the summer break, all
  • Put two 10k Raptors in Raid 0 for your games and other stuff you need REALLY FAST, and then have a big 250GB 7200RPM drive for everything else. People are doing that already.

    All you would need is some software for automatically moving it around. Though most people with desktop rigs like that probably would rather control what is on which drives themselves.
    • Put two 10k Raptors in Raid 0 for your games and other stuff you need REALLY FAST, and then have a big 250GB 7200RPM drive for everything else. People are doing that already.

      You just described my desktop exactly. :D
    • by COMON$ ( 806135 ) on Monday June 26, 2006 @02:59PM (#15607552) Journal
      Because 2 10K Raptors in RAID 0 isn't worth the speed increase. Last time I checked you may get a 20% increase, and reduced data integrity. I did some research into this a while ago; check out this article, very informative

      http://www.anandtech.com/printarticle.aspx?i=2101 [anandtech.com]

      • I read through the document you linked to, but couldn't see any detail on whether they aligned the sectors to the disk boundary. Specifically, I understand that Windows XP uses a 63-sector MBR offset, whereas a 64-sector offset will align I/O to disk boundaries. The disadvantage of Windows's standard configuration is that certain small I/Os will overlap two disks, forcing two hardware I/O operations for one software I/O request.

        Microsoft does a handy tool called Diskpar.exe (it's included with the Resource Kit)
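        A small arithmetic check of the alignment point above (the 63-sector MBR offset is from the comment; the 32 KiB stripe size is an assumed example):

          # Does the partition start line up with the array's stripe boundaries?
          SECTOR = 512            # bytes per sector
          STRIPE = 32 * 1024      # assumed 32 KiB stripe size

          for offset_sectors in (63, 64):
              start = offset_sectors * SECTOR
              aligned = start % STRIPE == 0
              print(f"{offset_sectors}-sector offset starts at byte {start}: "
                    f"{'aligned' if aligned else 'misaligned'} to {STRIPE}-byte stripes")
          # With the default 63-sector offset some small I/Os straddle two stripes
          # and cost two physical operations; the 64-sector offset avoids that here.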

        • Sorry I didn't reply earlier. I am not aware of how they set up the RAID in the article. But I am not a big fan of striping; too much risk of data loss unless you can afford the mirror. In the case of the mirror, then you had better be preparing for some serious disk IO from a SQL cluster or heavily used Exchange box, because you are putting a pretty penny into storage... Thanks for the article though, good review for me.
    • Why? For the price of a 36GB Raptor you can get 300GB el cheapo drives. I put 4 of those in a RAID5 on my SATA1 controller and get write speeds of 130MBytes/s (reads at 180MBytes/s according to dd). More disk space with higher reliability compared to RAID0, without the need to move stuff around.

      Sure the drives are more likely to fail, but then again so is that single 250Gb "for everything else" drive.
    • Because 2 10K Raptors in RAID 0 isn't worth the speed increase. Last time I checked you may get a 20% increase, and reduced data integrity. I did some research into this a while ago; check out this article, very informative

      --------

      Why? For the price of a 36GB Raptor you can get 300GB el cheapo drives. I put 4 of those in a RAID5 on my SATA1 controller and get write speeds of 130MBytes/s (reads at 180MBytes/s according to dd). More disk space with higher reliability compared to RAID0 without the need
      • But this idea is just plain silly for a desktop: added complexity with a diminishing speed gain the more disks you add.

        What the article says is not important since it's about expensive hardware, whereas a desktop RAID is cheap disks + some software glue.

        To flood a SATA150 bus you only need 2 high-performance disks. So your suggestion would most likely be 2 Raptors in RAID 0 and 2 low-end drives in RAID 1/0 (on a 4-port controller). When the cheap storage is idle you will get max read/write, less when it's active (hard to guess
  • Oh....good.. (Score:5, Insightful)

    by JerBear0 ( 456762 ) <jerbear0NO@SPAMhotmail.com> on Monday June 26, 2006 @02:00PM (#15607045)
    "idle data goes on slower/cheaper storage"

    So that special little something that you need once a year, but when you need it, you need it RIGHT NOW is tied to the foot of a pigeon fluttering around the warehouse somewhere. Frequency of use does NOT denote importance.
    • Apply "frequency of use = urgency" to BIGNUM pieces of data and you will have a very useful albeit sub-optimal algorithm.

      Yes, there are exceptional cases, like the President's access to the Nuclear Briefcase. It hasn't been used for real in a long time if ever but when he needs it it had better be close at hand. However, these special cases can be treated as the special cases they are.
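      A rough sketch of that heuristic (the tier names, thresholds, and the pin list for the special cases are all assumptions):

        # Classify files into tiers by how recently they were used, with an
        # explicit override list for the rare-but-urgent cases that must stay
        # on fast storage no matter how seldom they are touched.
        import os
        import time

        PINNED_FAST = {"/data/audit/corporate-audit-checklist.xls"}   # hypothetical
        DAY = 86400

        def pick_tier(path, now=None):
            now = time.time() if now is None else now
            if path in PINNED_FAST:
                return "fast"
            idle_days = (now - os.stat(path).st_atime) / DAY
            if idle_days < 7:
                return "fast"        # e.g. 10K RPM / striped+mirrored
            if idle_days < 90:
                return "medium"      # e.g. 7200 RPM SATA
            return "slow"            # e.g. RAID 5 of cheap disks, or tape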
      • by cperciva ( 102828 ) on Monday June 26, 2006 @02:30PM (#15607301) Homepage
        Yes, there are exceptional cases, like the President's access to the Nuclear Briefcase. It hasn't been used for real in a long time if ever but when he needs it it had better be close at hand.

        Oddly enough, I think most people in the world would prefer that it wasn't close at hand when Bush decides he wants it.

        A better example is fire extinguishers -- most of them will literally never be used, but there's a very good reason to ensure that they are readily available.
    • Frequency of use DOES denote importance, at the very least STATISTICALLY. Just because you want "that special little something" once a year does not mean you can degrade the speed of information which is instantly needed. This is an obvious fact
      • by Medievalist ( 16032 ) on Monday June 26, 2006 @02:31PM (#15607311)
        Decades ago, we used to laugh at the mainframers and their automated hierarchical storage systems because they'd make exactly these kinds of statements.

        Frequency of use DOES denote importance, at the very least STATISTICALLY.
        No. Absent other data, it only denotes frequency of use, period. Playboy.com gets more hits than the general ledger webapp if you unblock your company firewall, but the general ledger is more important to the company.

        Just because you want "that special little something" once a year does not mean you can degrade the speed of information which is instantly needed.
        There is actually very little correlation between what the average user wants and what s/he needs, as is empirically obvious. If the image from the "fly-fishing.com" website that they've set to come up as their background image every morning fails to load, they can still work, but if the once-a-year corporate audit checklist gets put on slow, old storage and then gets lost in a hardware failure, the company stock price may flutter and certainly heads will roll in the corporate IS department.

        This is an obvious fact
        I don't think that word means what you think it means.
    • If the slower storage is still online and accessible at, say, 1998 HD speeds, that'd still be good enough. Without reading the article, it seems like a good idea.

      Hell, if it's a text document you need only once a year, then 1970 HD speeds might be good enough (for reading the doc, not MS Word).
    • Re:Oh....good.. (Score:4, Informative)

      by Red Flayer ( 890720 ) on Monday June 26, 2006 @02:15PM (#15607180) Journal
      That's what metatagging is for. Tag files that are not to be moved to slow storage no matter how infrequently they are accessed. RTFA.
      • That's what metatagging is for. Tag files that are not to be moved to slow storage no matter how infrequently they are accessed. RTFA.
        And so much for the automated part of automatic hierarchical storage management.
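          One way such a tag could coexist with the automation: the migration pass stays fully automatic, and a per-file attribute merely vetoes the move (a sketch assuming Linux extended attributes; the attribute name is invented):

            # Skip migration for files carrying a "pin" extended attribute; everything
            # else is still placed automatically by access pattern.
            import errno
            import os

            PIN_ATTR = "user.storage_pin"     # hypothetical xattr name

            def is_pinned(path):
                try:
                    return os.getxattr(path, PIN_ATTR) == b"fast"
                except OSError as exc:
                    if exc.errno in (errno.ENODATA, errno.ENOTSUP):
                        return False          # no tag set, or filesystem lacks xattrs
                    raise

            # Inside the migration loop:
            #     if is_pinned(path):
            #         continue                # stays on fast storage regardless of atime

          Tagging a file would then be a one-liner such as os.setxattr(path, "user.storage_pin", b"fast").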
    • So that special little something that you need once a year, but when you need it, you need it RIGHT NOW is tied to the foot of a pigeon fluttering around the warehouse somewhere.

      But look at it this way, at least the pigeon won't put you on eternal hold when you need rapid tech support in your time of crisis.
    • So that special little something that you need once a year, but when you need it, you need it RIGHT NOW is tied to the foot of a pigeon fluttering around the warehouse somewhere. Frequency of use does NOT denote importance.

      It sounds like you don't pay the IT department for your storage. In my experience, once a department is charged for storage, they suddenly start requesting cheaper storage.

      • Good call. Departmental chargebacks, along with bonuses partially tied to budget variance, lead to more cost-effective methods. I've noticed more companies charging IT costs to individual departments, instead of lumping it all under admin costs, and not just for companies in the IT sphere of business.

        When expensive storage == no new Blackberries this year, sales departments take notice :)
        • I am at a contract where we are moving to a chargeback system. I can't wait until it is implemented. It will be so much fun to watch the change in attitude.
          • Hopefully your contract will be over when the first review period ends...

            VP of Marketing: "What do you mean the chargeback for that tech is more than I make per hour?"

            CIO/CFO: "Their time is more valuable to us than yours."

            VP of Marketing: "What am I, a schmuck?!"

            CIO/CFO: "Yes."

            In my experience, that's the downside of chargebacks -- all of a sudden, everyone has an idea of what "that guy in the server room" makes... and is VERY unhappy about it.
    • Re:Oh....good.. (Score:4, Insightful)

      by Kadin2048 ( 468275 ) <.ten.yxox. .ta. .nidak.todhsals.> on Monday June 26, 2006 @02:29PM (#15607289) Homepage Journal
      Frequency of use doesn't denote importance, but it might denote how quickly you need to be able to recall it. Similarly, importance doesn't imply that quick recall is necessary. If you don't use something frequently, it might be okay to store it somewhere that takes a while to recall from, even if it is "important," as long as you know where it is so that you can get it back.

      As an example, financial records for past years might be very important, but you don't need to be able to access them in a tenth of a second. As long as you can get to them if you really want to (sacrificing a few seconds), then it's all right.

      The way I see this translating to reality is that you'd keep all your old documents in slow-speed storage, but then keep an index in high-speed storage, so that you could easily search (both by name and by content) and decide when to pull stuff out of your archives.

      This is no different than what people have been doing for centuries with paper. Just because the card catalog is located in the center of the library doesn't mean its contents are inherently more valuable than the actual books (which might be in the basement, back shelves, wherever); it just means that the catalog gets accessed much more often.

      Actually, in the physical world, people often exchange speed of recall for certainty of recall. You put important documents in a safe-deposit box, rather than your kitchen counter, because even though it'll take you longer to get them out of the box, they're guaranteed to be there when you need them. Likewise, a system which traded off speed for redundancy would probably be appropriate for "important" but infrequently-accessed electronic documents.
    • Frequency of use does NOT denote importance.

      It doesn't *always* denote importance. However, if a tiered storage system improves performance a large enough percentage of the time, then I'd live with a drop in performance on the odd occasion. Similar to using spare memory for I/O and file caching.
  • Tiered storage has been around for ages. In the old days it was disk with tape as a backing store.

    I do like the idea of this product. Similar performance gains can be had by having the OS manage the data. It's a different-yet-similar concept, but some desktop OSes do this already with code libraries, putting them all in a single directory with little or no fragmentation within the file to allow for faster loading. Other OSes play similar tricks with system library metadata.

    --
    This would have been FIRST PO
  • This scheme reminded me of low-power optimization at the circuit level. The critical paths in the circuit are built from low-threshold transistors (ensuring high performance, i.e. speed). The non-critical paths are built from high-threshold transistors (ensuring low leakage in stand-by mode with no particular degradation of speed, since they sit on the non-critical, that is idle, paths). It is nice to see the core of this idea at a macro scale.
  • For a large-scale organization on the order of hundreds of employees, sure, but I doubt very much that it would be viable on the desktop (watch, as I say it, HP and Dell rubbing their hands...). This is for a number of reasons, mostly revolving around performance.

    For example, take an MP3 collection. I go to open up my old Soviet music collection (which I have), but I haven't listened to it in months, possibly even years. This would put it at the low end of the priority list and I would have to wait for the data to
    • Yes, but you're missing something here. Would you rather have to wait a relatively long period of time for infrequently used files, or wait a relatively short period of time every time you use a file that you use frequently? The theory is that the (relatively) long pause to get little-used files is shorter than the aggregate delays of loading frequently-used files. Also, I think the amount of time you'd have to wait for even your least frequently used files would be relatively low. In the worst ca
      • Add to that the fact that the word "cheap" is being used in different ways here. An extremely reliable 10K RPM drive is going to be noticeably more expensive than an extremely reliable 5400 RPM drive. When people are saying "cheap" they mean a lot less expensive, not poorly made.

        I would like to know what kind of paint you're using that dries in the time it takes to load an MP3 off of a slow 5400 RPM drive. ;)
  • This is "new"? (Score:4, Insightful)

    by Medievalist ( 16032 ) on Monday June 26, 2006 @02:06PM (#15607105)

    IBM mainframes that literally pumped water were doing this decades ago.

    What, you say water cooling is coming back too?

  • It already is (Score:4, Insightful)

    by malraid ( 592373 ) on Monday June 26, 2006 @02:10PM (#15607134)
    That's why you have HDDs with cache. That's the whole concept of "virtual memory". The next step might be hybrid HDDs (solid state / mag platters). But I don't think it will go much farther than that. Multiple RAIDs is overkill for the average desktop.
  • Just read TFA: (Score:5, Insightful)

    by Ant P. ( 974313 ) on Monday June 26, 2006 @02:11PM (#15607146)
    $50k for a 6TB fileserver? What's that extra $40000 paying for that a normal fileserver loaded with RAM can't do just as fast?
    • Re:Just read TFA: (Score:3, Interesting)

      by Anonymous Coward
      Apples and pomegranates you compare;
      Channels of Fiber come not cheap.
      Terabytes 6 with connection of light for less than $50k you will not find.
      Terabytes 6 with connections of wire you may.
      SATA drives, untested are delivered.
      SATA drives with fewer bearings.
      SATA drives with short life.
      Enterprise storage is not easy.

    • What's that extra $40000 paying for

      A man in a suit with a laptop and a Powerpoint presentation to demonstrate how it'll lower your TCO, increase your ROI, and boost your career.

    • Re:Just read TFA: (Score:2, Interesting)

      by Anonymous Coward
      It's not a server, it's a SAN. You connect a server via HBA to the SAN unit. The cost difference is in the performance of the drives you're getting (8 15K 146GB Fibre Channel drives and 8 10K 500GB Fibre Channel drives); these aren't the same Maxtor 250 GB SATA drives you picked up at Best Buy last week. (Then there's the enclosures, controllers, IO cards, etc....)
  • by Red Flayer ( 890720 ) on Monday June 26, 2006 @02:12PM (#15607152) Journal
    Cheetos go in the easy-to-reach cabinet next to the fridge.

    Beer goes in the front on the top shelf of the fridge, milk (eventually cheese, typically) goes on the bottom shelf in the back.

    This is automated, since I simply shove things onto the shelves when I get home from the supermarket. Anything I consume and replace ends up at the front. Anything I buy because I 'should' be eating it (like fiber biscuits, or whatever) ends up pushed to the back.

    It's automated via metatag, too. Anything tagged 'ice cream' goes in the door of the freezer, anything tagged 'vegetable' gets relegated somewhere in the back, where it quickly develops an inch of ice crystals, to slowly dry out to a freezer-burnt state of suspended animation until I buy a new fridge unit.

    This costs no more than regular kitchen storage space, but if you'd like a custom design for you and your loved ones, my consulting fee is $75/hr, or a bag of chips and a six-pack.
  • Yes, Kinda... (Score:5, Informative)

    by ThinkFr33ly ( 902481 ) on Monday June 26, 2006 @02:16PM (#15607187)
    Automatic tiered storage is definitely coming, but probably not in the form of multiple disks that run at different speeds or RAID levels.

    Microsoft announced a while back that Windows Vista would support three technologies designed to improve disk speed called SuperFetch, ReadyBoost, and ReadyDrive. [msdn.com] SuperFetch is simply a way of preloading applications and data when the OS anticipates that you'll be loading those soon.

    ReadyBoost and ReadyDrive both utilize persistent memory caches to speed up access to the disk.

    ReadyBoost treats normal USB keys and flash disks like temporary caching locations for data from the disk.

    ReadyDrive is essentially the term Microsoft uses to describe their support for hybrid hard drives, which are disks that have a built-in flash memory module that's used as a persistent cache.

    Not only do hybrid disks [pcworld.com] dramatically increase performance, but they also result in huge power savings for mobile devices like laptops and media players.
  • I could see a use. (Score:3, Interesting)

    by Kadin2048 ( 468275 ) <.ten.yxox. .ta. .nidak.todhsals.> on Monday June 26, 2006 @02:17PM (#15607194) Homepage Journal
    I could see a use for something like this. Personally, I've stopped throwing stuff away. With the exception of temporary and cache files, storage is cheap enough that I just don't delete anything on the off chance that I might want it again. Every email, every instant message, every dictated note (I use a little Olympus digital recorder), every digital photo, it's all saved. By the time I fill up my main hard drive with stuff, I can just buy another one that's probably between two and five times the size, dump everything onto it, and keep the old one as a historical backup. (I keep online backups as well, but I won't bore you with it here.)

    I don't think I'm that atypical in this regard. GMail brought the idea of saving all your email, forever, to the masses; Flickr gives you an unlimited amount of photo storage; and technologies like Apple's Spotlight make it relatively easy to search through gigabytes of saved information and pull up related items. What we haven't seen yet is a lot of popular interest in redundant backup systems: that'll come later, once people start realizing how much of their lives they've stored away on the crummy OEM drive in their Dell. (Probably after a lot of them fail and we hear some real horror stories.)

    It's not hard to imagine a near future where people just get used to not throwing anything away. In that situation, tiering storage -- allocating the fastest media to the most frequently accessed information -- could have big performance gains. And assuming that you have a relatively static amount of frequently-accessed information, and basically only add information to the "infrequently accessed" category, a tiered system means that you only really have to add storage to the bottom tier. It's a pyramid where the base gets larger and larger, but the upper part remains basically the same size.

    So for example, as you save more and more emails (infrequently accessed information), they automatically get saved onto inexpensive, slower drives, which are then mirrored to each other for redundancy. A single, fast drive could hold the system -- maybe solid state storage? -- and more frequently-accessed data. A smart system would know what information needs to be moved up to faster storage to be very useful (uncompressed digital video, for example, wouldn't be much fun to work with off of a slow drive), and what can be left there as it's accessed (MP3s and compressed video could be played directly from slower media).

    I think it's an interesting technology with a lot of possible applications, but as with a lot of other things, it'll be the home user who arrives last to the party, because their storage is the least centralized. Unless there's a move away from storage on individual desktop PCs and towards storage on per-home servers, it'll be a while before most people require or see the benefit in such a thing.
  • This is hardly a new concept — mainframes have been migrating untouched datasets to tape for years. If this really is a new idea in the SAN market, SANs must suck worse than I'd previously supposed.

    And “Is automated tiered storage headed to desktops?” Well, no, unless there's something cheaper than hard disks, which there currently really isn't.

    • It would be nice to see the technology adapted to consumer price points, but it probably won't be as long as huge ATA disks are $200.

  • One application of something similar is definitely coming to desktops (and laptops in particular) in hybrid hard drive arrangements--caching commonly used files to flash memory to be able to spin down the platters and conserve power or for performance gains. (Although I remain wary of Vista using USB thumb drives as caches . . . finite read/write cycles and all.)
  • This is interesting, because when you read about old operating systems that ran on computers with several types of memory--fast magnetic core memory for the active programs, slower rotating-drum memory for less active data, large and slow hard drives, and automatic tape drives--they did exactly this. It makes sense that, given that we have L1 cache, L2 cache, and system RAM, each of which is slower and larger than the last, we would extend this to hard drives, having a small, fast drive for often-used

  • I clearly see a benefit of using the client machine (PC) as part of the storage hierarchy, since the data being moved belongs to a specific user. You can apply usage patterns, policies based on server storage available, etc. Email could be moved from the client to the server transparently over IMAP even without modifying the protocol. For most cases this makes it irrelevant whether you are given 100MB or 2.7GB of email storage by your email (online spyware) provider. Here are my 2 cents. http://blogs.hk.com/inde [hk.com]
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday June 26, 2006 @02:36PM (#15607343) Homepage Journal

    ...but we can't seem to even get a fucking trashcan right.

    I should never have to empty my recycle bin manually, except where I want to perform a security erase - which should be a function delivered with my operating system. This is the height of stupidity.

    It's not even a hard problem! There are functions which programs use to check for free space. Lie to them. Don't count files in the recycle bin against the available free space. If you're about to run out of space, delete the least recently used file. Perhaps you might also base things on total number of accesses, or other criteria, but I believe (perhaps naively) that making the trash can an automatic FIFO from which files are automatically deleted when disk space is low would be about a hundred times better than what we have now.

    Also, I want this functionality on all operating systems. Unless I explicitly request deletion, no file should ever be unlinked, deleted, or whatever you call it when I delete it, whether through the command line or the GUI.

    This is not hard and it would make everyone a lot happier.
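    A minimal sketch of that policy (the trash location and the low-water mark are assumptions; a real implementation would live in the OS shell rather than a script):

      # When free space drops below a threshold, delete the least recently used
      # files from the trash until we are back above it - the user never has to
      # "empty" anything by hand.
      import os
      import shutil

      TRASH_DIR = os.path.expanduser("~/.local/share/Trash/files")   # assumed location
      MIN_FREE_BYTES = 10 * 1024**3                                   # keep 10 GB free

      def reclaim():
          entries = [e for e in os.scandir(TRASH_DIR) if e.is_file(follow_symlinks=False)]
          entries.sort(key=lambda e: e.stat().st_atime)               # stalest first
          for entry in entries:
              if shutil.disk_usage(TRASH_DIR).free >= MIN_FREE_BYTES:
                  break
              os.remove(entry.path)                                   # drop the oldest item

      if __name__ == "__main__":
          reclaim()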

    • by mrsbrisby ( 60242 ) on Monday June 26, 2006 @03:08PM (#15607643) Homepage
      Also, I want this functionality on all operating systems. Unless I explicitly request deletion, no file should ever be unlinked, deleted, or whatever you call it when I delete it, whether through the command line or the GUI.

      The problem with this is that it causes a significant reduction in performance.

      Ideally, the operating system chose the best possible spot for that file when it got written. Once that file is deleted, that spot will once again be the fastest best possible spot- for at least something. If the operating system skips that spot for a new file, then this new file isn't going to be accessed quite as quickly.

      Truly automatic tiered storage solves this problem by splitting the directory services from the storage system- that is, the file's _name_ is no longer tied to the volume that the file happens to live on (and no, this isn't the same thing as symlinks or shortcuts). This allows the decision as to what the best spot for a file is to be deferred until later- and even spanned across multiple volumes!

      Unfortunately, such a beast is very difficult- if we make a reduction in our requirements- say that performance isn't very important- or perhaps that we can stop using our computer for a few hours each evening, then it's probably possible. What we need is a new kind of file system that supports either atomic moves between disks, or a filesystem that splits the names from the storage.

      A few research projects have been focused on these kinds of changes- but they all tend to break UNIX semantics (Amoeba immediately springs to mind)- and those UNIX semantics are, in-fact, the most widely used and recognized semantics for filesystems anywhere (Even Windows uses them!)-- people who develop a filesystem incapable of supporting them, really need to have a real good reason for breaking everyone's hard work.

      While they often do, it hasn't yet been seen as good enough for general purpose stuff.
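      A toy illustration of that split: the catalogue lives in one place, the bytes live wherever policy puts them, and a "move" between tiers touches only the mapping once the copy completes (the volume names and paths are invented):

        # Separate the namespace from the storage: logical paths map to
        # (volume, blob), so data can migrate between volumes without the
        # visible name ever changing.
        import shutil
        import uuid

        VOLUMES = {"fast": "/mnt/raptor", "slow": "/mnt/big_sata"}   # hypothetical mounts
        catalogue = {}                     # logical path -> (volume name, blob path)

        def store(logical_path, tmp_file, volume="fast"):
            blob = f"{VOLUMES[volume]}/{uuid.uuid4().hex}"
            shutil.copyfile(tmp_file, blob)
            catalogue[logical_path] = (volume, blob)

        def migrate(logical_path, new_volume):
            _, old_blob = catalogue[logical_path]
            new_blob = f"{VOLUMES[new_volume]}/{uuid.uuid4().hex}"
            shutil.copyfile(old_blob, new_blob)               # copy the data first...
            catalogue[logical_path] = (new_volume, new_blob)  # ...then flip the mapping
            # old_blob can be garbage-collected afterwards

        def open_logical(logical_path):
            _, blob = catalogue[logical_path]
            return open(blob, "rb")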
      • Ideally, the operating system chose the best possible spot for that file when it got written. Once that file is deleted, that spot will once again be the fastest best possible spot- for at least something. If the operating system skips that spot for a new file, then this new file isn't going to be accessed quite as quickly.

        Filesystems may be automatically and intelligently defragmented (while live, if the filesystem is decent) when disk I/O is at a minimum.

        Currently, some operating systems (e.g. Wind

        • Filesystems may be automatically and intelligently defragmented (while live, if the filesystem is decent) when disk I/O is at a minimum.

          But my filesystem is never idle, or even nearly so. Nonetheless, fragmentation isn't exactly a bad thing, and doesn't necessarily have to cause problems (such as lost performance) by itself.

          Worse still: How does the defragmenter know to avoid using this block? Or how does it know that it's a good candidate to be moved to the other end of the disk?

          We could make a record of e
  • I read the article and I don't see anything desktop specific here. It sounds like you have a single storage array on the back end to which your (file/database/whatever) servers are attached. The storage array has both high performance Fibre Channel drives and less expensive drives. It keeps track of which blocks are accessed most frequently and migrates them to the appropriate disk tier.

    Sure, your desktop connects over the network to a SAN attached server in some fashion, but I don't see anywhere in th

  • This is so simple: you have your good failsafe RAID 1 setup with 10,000 RPM hard drives for the IMPORTANT data, and the rest of your drives are just 7200 RPM drives to store what is not important. It's really easy to determine which drive the data goes to, too:

    if "pr0n" in filename:
        store_on_good_drive(filename)
    else:
        store_on_slow_drive(filename)
  • So, if I watch a movie once, it's rarely used, put on the slowest disk, and stutters when played? ;p Well, anyway, where on a desktop system would you need really expensive high-speed data storage? Normal disks these days are extremely large and fast, and they never come under any stress anywhere near that of a high-performance webserver or anything alike. While backup systems will get more important as disk space gets cheaper and cheaper, I don't see any need for more performance. People will just put up a raid wi
  • I really wish that my local host's storage were used only as scratch space for encrypting all my data for network storage, and a local cache. Why should I lug "my" PC around when there are PCs everywhere? Maybe if my PC were really better than the others, but for most of my data access, any Web terminal will do. Combine that with a biometric/password protected mobile "phone" containing my keyring and bookmarks, and I'm literally "good to go".
  • I already do this at my home.

    Big files that I don't mind losing (ripped dvds and cds) are on a local, cheap raid-5 array.

    Everything else resides on my PC.

    Every night, my PC runs an automatic rsync job that syncs it all up to my rsync.net filesystem.

    I guess, theoretically, I could take it a step further, and add a layer of geographic (and even political) redundancy by making my account sync to California and Colorado, and not just the primary CA site.

    rsync.net just announced sites in Switzerland and India ..
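    The nightly job described above can be as small as a cron entry driving rsync; a sketch (the host, paths, and excludes are placeholders, not a real configuration):

      # nightly_sync.py - push local data to an offsite rsync host; run from cron.
      import subprocess

      SOURCES = ["/home/me/Documents", "/home/me/Photos"]
      DEST = "user@usw-s001.rsync.net:backup/"        # hypothetical account path

      subprocess.run(
          ["rsync", "-az", "--delete",
           "--exclude", "ripped-dvds/",               # the "don't mind losing" tier stays local
           *SOURCES, DEST],
          check=True,
      )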
  • Quality of Storage Service (QoSS) has been a feature in Veritas Storage Foundation for two years or so.
  • by Anonymous Coward
    My single disk raid 0 setup works great...
  • I've read about this company before. However, I'm not sold on it, and at last check (a couple of months ago), their website was remarkably bereft of useful technical detail.

    My biggest question is how they handle free-space tracking. Unless this box has "hooks" into the filesystem, it is not going to have the faintest clue when data has been deleted.

    Also, can you say "Holy Fragmentation Batman!"? Again, pretty intense "hooks" into the filesystem are going to be required in order to keep files even remotel
  • I built something like this 10 years ago. A big corporation's in-house marketing & PR department, lotsa project files full of artwork and such for campaigns, big files used daily for months then ignored for years. It was MacOS 9 & Windows 95 clients, Netware 4.1 on a HP server with RAID 5 and 2 DLTs w/ loaders.

    One DLT was for backups using ARCServe (before they got bought by CA). It was simply a matter of shipping cartridges in and out of the storage vault & off-site as required, replacing indi

  • It was about 1962, when IBM was touting something they called "Percolate & Drip" storage. The idea was that things that were used often "percolated" up to the fastest storage medium, while data that was only infrequently used would "drip" down to the most capacious media. Why do children get to claim everything they imagine is somehow NEW? Mature adults try to stand on the shoulders of giants.
  • That's how things used to work on ICL George 3/4 circa 1977.

    The joys of waiting for an operator to load a tape so you could edit a file, hoping he wouldn't CANTDO.

    (Little used files got shoved off to mag tape. Still showed up in the filestore. When you accessed them a message was sent to the operator: "PLEASE LOAD VOLUME ASBHJ123 FOR :HUGHES.SOMEFILE(1/FORT)". If the lazy bugger didn't want to load the tape, or if he couldn't find it, he'd type "CANTDO LOAD VOLUME" and you'd get a horrid error.)
