IBM Developing Lego-like Storage Brick

AaronW writes "According to this story at EE Times IBM is developing a 32TB storage system built around blocks that can be stacked like Lego bricks. Apparently they will be connected in a 3x3x3 mesh using capacitive coupling and will be water cooled."
  • Surely LEGO owns the patent on this concept?
  • by DJ-Dodger ( 169589 ) on Thursday April 25, 2002 @04:06AM (#3407644) Homepage
    This is just a marketing ploy so they can sell storage clusters shaped like Castles, Pirate Ships and the Millennium Falcon!
    • Will this be compatible with my lego land space station? Will it be backwards compatible to my lego land pirate's island? Will it be cross-platform compatible with my Lincoln Logs?
    • Non-cube shapes are interesting because a cube has minimal surface area - you might want more surface area because external i/o is through the exposed faces. Also, varied shapes might provide better cooling, access to inner modules, etc.
      • First, spheres have the minimal surface-area-to-volume ratio. Second, the major win they were talking about was reducing floorspace at a datacenter, hence the shape used has to stack seamlessly. The cube or cuboid is the most sensible shape, as the components are themselves cuboid, and cuboids are easier to manufacture than other shapes. As for external i/o, you could just put more than one connector on each face. Finally, the point is that you don't need access to inner modules: if they fail, you leave them there and route around.

        • Spheres have minimal surface area when the surface is smooth. When the surface is made of aligned cubes, this is no longer true. Consider the two-dimensional case, packing 25 units:

          [ASCII diagrams of the two shapes did not survive]

          The square above has a perimeter of 20 units; the more circular shape below has a perimeter of 22 units. The square is the optimal shape for the smallest perimeter, and this carries over to higher dimensions.
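The perimeter claim is easy to check mechanically. Below is a small Python sketch (mine, not the poster's); the `rounder` shape is a hypothetical reconstruction chosen to match the stated 22-unit perimeter, since the original diagrams were lost:

```python
# Perimeter of a set of unit cells on a grid: count edges with no
# neighboring cell on the other side.
def perimeter(cells):
    cells = set(cells)
    return sum((x + dx, y + dy) not in cells
               for x, y in cells
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

square = {(x, y) for x in range(5) for y in range(5)}  # 5x5 block, 25 cells

# Same 25 cells made slightly "rounder" (hypothetical shape): one corner
# cell relocated to a bump on the middle of the top edge.
rounder = (square - {(4, 4)}) | {(2, 5)}

print(perimeter(square))   # 20
print(perimeter(rounder))  # 22
```

Chipping a corner off a rectangle leaves the perimeter unchanged and adding a bump raises it by 2, which is why the square wins here.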
    • I'll take the pirate ship kit.
      And one extra for a footstool
  • No I am serious, can you just imagine it?
  • by Crag ( 18776 ) on Thursday April 25, 2002 @04:11AM (#3407660)
    "IBM's Ice Cube project aims to define a way for end users to easily maintain increasing amounts of data, while also plowing ground for a similar approach to computing systems."

    Ice Cube? Lemme guess: They sell a bandwidth package for Internet hosting called "Ice T"

    Their bandwidth monitoring and packet sniffer is called "Snoop Dog."

    Oh wait...IBM's PS/2 had the MCA bus. Maybe that was a Beastie Boys reference. Maybe IBM has been into Rap and the like for a long time...
  • by Arkan ( 24212 ) on Thursday April 25, 2002 @04:14AM (#3407671)
    Am I wrong in thinking that this design may lead to a new approach to server farms, where each cube offers some kind of power (processing, storage, networking, moka brewing), and the whole assembly keeps itself in shape?

    For the first time in the history of /., the assertion "Imagine a cluster of these!" takes on its full meaning: storage might be the first step, and only the bandwidth of the couplers limits the usability of CPU cubes or networking cubes.

    What's more, the software part will certainly bring some huge advances in clustering, as the challenge of virtualising all those cubes may help in building self-repairing (or should I say self-dumping?) clusters...

    Oh, and by the way, here is the first step to assimilation.

  • by E1ven ( 50485 ) < minus author> on Thursday April 25, 2002 @04:15AM (#3407674) Homepage
    I'm not sure I follow. They say they want it to be easily stackable, and fault tolerant (they specifically mention leaving blocks in place if they fail), but how do you combine that with a water cooling system?

    With a water cooling system, you need to make sure that the joints between cubes are water-tight and maintain them over time, thus defeating their "no maintenance" theory.

    Or am I missing something? Perhaps they could use "disk blocks" and "cooling blocks" and just swap out the "cooling blocks" if there is a problem? Still takes more work than air cooling, but less than integrating it into every block would.

    What about just leaving air holes, and using it in a chilled room? Most server rooms are chilled anyway.

    Just some ignorant thoughts.

    Colin Davis
    • by Anonymous Coward
      Apparently the watercooling is used to take the heat from the internal heatpipes. Thus the internal cooling is a closed system and the external cooling system only needs to cool the end of the heatpipes. The water probably runs through a vertical pipe onto which the cubes are stacked and the heat is transferred through touching metal surfaces.
    • With a water cooling system, you need to make sure that the joints between cubes are water-tight

      Do you really though? I envision water pouring into the top of one of these cubes through a little funnel, trickling down through the hot places with the aid of gravity alone, and coming out through a hole in the bottom, ready to enter the cube below and do the same thing. At the very bottom of the whole shebang, the water is chilled and pumped back into the funnel of the top cube again.

      Better keep each block upright at all times though! :)

    • I'm not sure I follow. They say they want it to be easily stackable, and fault tolerant (they specifically mention leaving blocks in place if they fail), but how do you combine that with a water cooling system?

      It can be done without water flowing from one brick to another. If every brick is built in a metal shield/radiator, then the water only flows between the hot parts and that shield within every brick, and the bricks touch at these metal shields. But I haven't read the article, so I don't know how they actually did it; I just point out that it is possible.

      • What happens to the cube in the middle of a block 3 long, 3 wide and 3 high? It can't dissipate heat anywhere other than to the surrounding blocks, which are busy taking care of themselves. OK, you say, maybe they're built to take the load from the surrounding ones, but what happens when we have a massive cube farm of several hundred Ice Cubes... hopefully they don't melt.
    • Quoth the article...

      A water pipe rises through each vertical stack of bricks, linking to heat pipes on each module. The water cooling scheme is cheaper than air cooling, researchers said.

      So you connect a vertical stack with some plumbing. The joints are not moving and I doubt the system is pressurized, so maintenance is not a problem. Plumbing is the least of their worries. The vertical stacks probably just connect at ends and channel the water to a chiller. No big deal, really.
  • Mine don't hold NEAR that much. I sure hope mine are upgradable.

  • Will these be compatible with the Duplo blocks for the four-and-under crowd? So that as your kids' imagination grows, so do their Legos?
  • 2 problems (Score:2, Interesting)

    by cdf12345 ( 412812 )
    If I were IBM I would avoid Lego comparisons,
    and second, I would change the name.
  • by xxSOUL_EATERxx ( 549142 ) on Thursday April 25, 2002 @04:18AM (#3407681)
    1. Lincoln Log: kept rolling off table.

    2. Tinkertoy: storage structures too delicate, engineers kept losing fins for making "windmill" structure.

    3. Play-Doh: kept getting stuck in carpet.

    4. Erector Set: engineers spent too much time making jokes about name.

    • even IBM isn't THAT dumb!
      • by Anonymous Coward
        Rubik's Cube has a place ... it's the cube where you put the dedicated encryption hardware.
        Here's an image: given a one-time pad, the security guy goes in every morning and fiddles with the cube to get it to match today's pattern.

  • Old News :) (Score:4, Informative)

    by AtrN ( 87501 ) on Thursday April 25, 2002 @04:18AM (#3407684) Homepage
    See Robert Morris's presentation [] (6+MB PDF) from the USENIX File and Storage Technologies [] conference. The videos of the invited talks [] are also worth watching (if you can afford the b/width to get them).
  • Lego isn't the point (Score:1, Interesting)

    by Anonymous Coward

    The two most important improvements are:

    The system is watercooled. Imagine this: you have 12 hard disks per cube. You certainly don't want to hear the fans which would be needed to cool all of them.

    The second improvement, and you'll instantly see why this is coming from IBM, is that bad disks are not supposed to be replaced in the cube. They are simply turned off and the storage system works around them. If you need more capacity or can't live without the failed storage space, add another cube.

    • What if they use IBM drives? Then all the disks will die if used over 300+ hours a month.
      What a marketing scheme: force them to buy a whole new unit rather than just replace a part of it.
  • by gnovos ( 447128 ) <gnovos.chipped@net> on Thursday April 25, 2002 @04:21AM (#3407691) Homepage Journal
    ...you end up having to pull 'em apart with your teeth? I'd rather not, I'm sure they get really hot.
  • Can you offset the cubes a bit, rather than lining the faces up perfectly? That would really open up more possible structures.

    I think it's a given that, at least in places that have room for it, there will be some playful constructions made from these things.
  • Looks like an excellent step towards a truly Borg-like information technology system.

    IBM: resistance is futile!

    • Obviously you got modded "funny," but I think you might be right... as some people said here, with complete integration of cubes (storage, processing, outside network nodes, etc.) you will create a huge computer with practically unlimited capabilities; add to that a self-growing AI program and cubes that create other cubes........... whoops.
  • access? (Score:1, Interesting)

    by nslu ( 532403 )
    I am wondering how you access a disk block (let's say it died [GXP anyone?] and needs to be replaced) that is somewhere in the middle of such a construction.
    • read the article: you don't. You leave failed blocks in place and plug another one in on the top.
      • So, when you run out of floor space, do you shut down your shop for the week(s) it takes to detect failed modules, tear the complex apart, remove/replace them, and rebuild the collective?

        Does IBM somehow believe that floorspace is infinite and free, while regular maintenance is cumbersome and expensive?
    • It would be nice if you could dig for the bad module ;)) Take one cube away, reattach it somewhere else, take the next cube, reattach, until you find the bad module; then - replace it and take all the "displaced" cubes back. That would be nice :)
  • by JoeSmack ( 540377 ) on Thursday April 25, 2002 @04:23AM (#3407699)

    I'm glad they finally announced this project; I've been dying to talk about it. I talked to a researcher on this project while I was at IBM's Almaden Research Center.

    I was blown away when they described it to me. I have to say that IBM is by far the greatest computer technology research company. They take the top minds, give them boatloads of money, and ten years later they blow your mind with completely innovative technology. I mean, come on, cube storage?!?!

    Too bad they just can't make any inroads in the client-side market. They invented the hard drive years ago, and today they aren't even going to make any more client models.

    Anyhow, I just wanted to talk about cube failures. Ice cube uses a 3x3x3 array of 27 cubes. But, the question is what happens if a cube goes bad. Essentially, you can never turn off Ice Cube. It's meant to be continuously running. If a single cube failure occurs the system just routes around it. To compensate you can stick more cubes on the outside. Of course, throughput will be hampered.

    I asked the researcher what happens if, say, all the middle cubes burn out or the throughput gets too damaged. He responded, "Well, given the failure rate, it probably won't be an issue until about ten years have passed, and by then we'll have much more powerful storage technology."

    Finally, anything that is water-cooled is nifty in my book.
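The route-around behaviour described above can be sketched in a few lines (illustrative only; this is plain breadth-first search over the live bricks, not IBM's actual software):

```python
# Routing around failed bricks in an n x n x n mesh: breadth-first search
# that only steps through bricks not marked as failed.
from collections import deque

def route(src, dst, failed, n=3):
    """Shortest hop path between bricks, or None if dst is unreachable."""
    def neighbors(p):
        x, y, z = p
        for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            q = (x + d[0], y + d[1], z + d[2])
            if all(0 <= c < n for c in q) and q not in failed:
                yield q
    prev, queue = {src: None}, deque([src])
    while queue:
        p = queue.popleft()
        if p == dst:                      # reconstruct the path
            path = []
            while p is not None:
                path.append(p)
                p = prev[p]
            return path[::-1]
        for q in neighbors(p):
            if q not in prev:
                prev[q] = p
                queue.append(q)
    return None

# Centre brick (1,1,1) dead: corner-to-corner traffic simply detours.
path = route((0, 0, 0), (2, 2, 2), failed={(1, 1, 1)})
print(len(path) - 1)  # 6 hops -- the dead centre adds no extra length
```

With only the centre brick dead, plenty of shortest paths skirt the centre, so routes only lengthen once failures pile up across a whole face.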

    • by BCoates ( 512464 ) on Thursday April 25, 2002 @05:14AM (#3407825)
      I asked the researcher what happens if, say, all the middle cubes burn out or the throughput gets too damaged. He responded, "Well, given the failure rate, it probably won't be an issue until about ten years have passed, and by then we'll have much more powerful storage technology."

      Since the entire system is supposed to be fault-tolerant, if you wanted to reclaim some of the space/performance from the dead cubes, you could just start removing cubes from one end, throw away (or salvage, whatever) the dead ones, and then stick the still-functioning ones back on the other end, wait for them to sync back into the network, and repeat.

      Of course, instead of growing, the whole unit would now have a tendency to migrate across the room...

      Benjamin Coates
    • by Anonymous Coward
      >Ice cube uses a 3x3x3 array of 27 cubes.

      You know damn well that the block that fails is *always* going to be the block stuck in the middle!
  • Is this the same IBM that's getting out of the disk storage business? So what are they going to use for hard disks... Fujitsu?????
  • I thought IBM was planning on Bailing Out of the Hard Drive Market []? I guess IBM really does have multiple heads these days - although maybe like the article says, IBM's focus on this product is the hugely complicated software that will be necessary to make it work, rather than the hardware.

  • I find this storage technology troubling... There could be cubes the size of elephants floating around in there!

    This problem is obvious, to anybody with an advanced degree in hyperbolic topology, m-hay.
  • There was an article about this. It said that the main problem was electron leakage between the circuits. I wonder if IBM is just saying they're thinking about it or if they've solved it. I hope they post more info =D
  • Imagine the possibilities: a Lego robot that can solve a Rubik's Cube *and* store the contents of the Library of Congress.
  • This New Scientist article, IceCubes would mean cool computing [], covered the technology.
  • Software Hard? (Score:3, Interesting)

    by TarpaKungs ( 466496 ) on Thursday April 25, 2002 @05:19AM (#3407831)
    Quote: Designing software that can mask the complexity of making a collection of plug- and-play drive modules appear to a user as one cohesive file system is expected to be one of the core challenges of the project.

    "Software ... core challenge"??? (This sentiment is in the context that IBM aren't totally clueless about this sort of thing ;-)

    Starting with a simple schema:

    1. Low level disk manager carves up disks into globally uniform chunks - say 20GB for argument's sake.
    2. RAID manager does the usual RAID 5 stuff using chunks from different cubes.
    3. Logical volume manager combines/carves up logical raid arrays into user required sizes.
    4. And finally, a robust resizeable filesystem presents space to the user (or go back a step to present a virtual block device to Oracle or anything else that likes to avoid filesystems).
    OK, that's a simple schema from which a better system can be evolved, but the core technology exists now. 1: disk partitioning; 2: RAID; 3: Linux LVM, Veritas Volume Manager and many others exist; 4: growable filesystems exist (reiserfs, Veritas, etc.); the ability to shrink needs work for a fully rounded solution. Stage 2 needs to be careful concerning topology to avoid bad latency problems.

    To make this truly plug and play (but not in the MS sense), inserting a disk-cube would see it tested, auto-partitioned and put in a pool. The systems engineer would be required to create/delete/alter filesystems and/or virtual disks as needed, and configure things like how many simultaneous cube failures the system can tolerate, how many hot-spare cubes are kept in the pool, and so on.

    The software to do the underlying stuff is here today - I'm using it - albeit rather manually. The automation/management software to make this polished isn't hard conceptually. Of course if you only wanted one filesystem like the article mentioned it would require even less configuration ;-)

    I'm actually much more impressed with the hardware here. Very cool. Not sure about the 3D and "stacking" structure. Bugger to replace a dead cube in the middle. Unless you are supposed to leave it there and throw a new cube on the top? I'd go for a 2D stacking system with overlapping layers (like a brick wall) - but with the couplers designed so you can knock a brick out sideways leaving the others undisturbed. Hmm - just a thought...
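As a toy illustration of stage 2 of the schema above (all names hypothetical, and real RAID placement is far more involved), here is a greedy allocator that builds RAID-5 groups whose member chunks all live on different cubes, so losing one cube costs at most one chunk per group:

```python
# Greedy RAID-5 group allocation across cubes: each group draws its
# chunks from group_size distinct cubes, preferring the cubes with the
# most free chunks so the pool drains evenly.
def raid5_groups(chunks_per_cube, group_size):
    """chunks_per_cube: {cube_id: [chunk_id, ...]}.
    Returns a list of groups, each a list of (cube_id, chunk_id)."""
    pools = {c: list(ch) for c, ch in chunks_per_cube.items()}
    groups = []
    while True:
        # the group_size cubes with the most unallocated chunks
        live = sorted((c for c in pools if pools[c]),
                      key=lambda c: -len(pools[c]))[:group_size]
        if len(live) < group_size:
            break                      # not enough distinct cubes left
        groups.append([(c, pools[c].pop()) for c in live])
    return groups

cubes = {f"cube{i}": [f"chunk{j}" for j in range(4)] for i in range(6)}
groups = raid5_groups(cubes, group_size=5)
for g in groups:
    assert len({cube for cube, _ in g}) == len(g)  # all on distinct cubes
print(len(groups))  # 4 groups from the 24 chunks
```

The distinct-cube constraint is the RAID analogue of the topology caution above: parity only protects you if one cube failure cannot take out two chunks of the same group.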

    • Oh yes, you've thought about it for five minutes and solved all the problems. You're so brilliant. Errr...except that you don't address multiple hosts, recovery from multiple (even non-concurrent) failures, reconfiguration to avoid hotspots, etc. etc. etc. Just about anything is solvable with current technology if you ignore enough parts of the problem. The whole point of this, the whole reason they say that the software is such a challenge, is that they actually want to address the parts of the problem you ignore.

    • There are hardware disk arrays like the HP Virtual Array that can do this. Just pull one 36 GB disk out and stick a new 72 GB one in, and the array will automatically resize, sync up, and migrate RAID-5 data to the new available space. Very cool; combine this with an intelligent volume manager and a filesystem that can resize automatically, and you have something that works similar to the IBM IceCube concept. k_ arrays/midrange/va7400/index.html
    • And obviously there will be some instruction to shut down a specific cube, which will consist of migrating the data to other cubes so no data is lost in the process. Then take it out and place it at a different location; when it's connected, it'll auto-sync and migrate data there if needed. I also presume they'll make a system to auto-migrate the most-used files to the nodes closer to the "network out" cubes..
  • I could buy a hard drive that communicates wirelessly with my computer, and which is powered by a Tesla coil. I could just put it on a shelf near my computer and leave it in its box.

    That would be cool.
  • 2.5" hard drives? (Score:4, Interesting)

    by pmsr ( 560617 ) on Thursday April 25, 2002 @05:37AM (#3407862)
    They seem to use smaller 2.5" hard drives, like the ones in notebooks. That does mean smaller power consumption and less noise, but what that does to performance is yet to be seen. Maybe they are betting on time to make them faster and technologically more advanced. Yet, after I read an article at TomsHardware about doing RAID with 2.5" disks, I am a believer! Not! :)


    • With enough striping (with mirroring as well) or RAID-5, performance of an individual disk isn't an issue.
    • Re:2.5" hard drives? (Score:2, Interesting)

      by amorsen ( 7485 )
      The platters in modern SCSI drives are quite small. The 3 1/2 inch form factor is just for backwards compatibility. It is very hard to make large platters spin at 15k -- and that is also why most SCSI disks are still 36GB or less.

      I bet that we will soon see 2 1/2 inch SCSI disks again. They make a lot of sense in blade servers and 1U servers, where laptop IDE drives now reign.
    • what does that do to performance is yet to be seen

      Probably won't be a factor, because of caches and parallelism. If both your reads and writes are served via cache most of the time (the latter to be destaged to disk on the array's own time) then the actual disk speed is less of a factor. Also, if your requests are being served by a relatively large number of disks then a single disk doesn't become a bottleneck. Large transfers can occur in parallel, queuing effects are reduced, etc. Combine caches with lots of disks, so that the array has lots of flexibility in how it schedules I/O, and the result is even more powerful.

      But what, you say, about the potential data loss when a node holding a cached block fails? Well, the bricks have very fast connections. It doesn't seem too much of a stretch to suppose that caches might be replicated (this is one of the interesting software challenges to which IBM alludes). Thus, the time to complete a write is only the time to replicate its data to another block, not the time to actually write it to disk. As long as you retain enough reserve power so that cache can be flushed to a special area on each brick's local disks in case of an external power failure, you could even claim to be ACID-compliant (some vendors do exactly this).
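The replicated-cache write path suggested above can be sketched as follows (a hypothetical model, not any vendor's API): a write is acknowledged as soon as a second brick holds the data, and each brick destages its cache to disk on its own time:

```python
# Toy model of write-cache replication across bricks: an ack means the
# data exists in two caches, not that it has reached any disk yet.
class Brick:
    def __init__(self, name):
        self.name, self.cache, self.disk = name, {}, {}

    def destage(self):
        """Flush cached blocks to disk "on the array's own time"."""
        self.disk.update(self.cache)
        self.cache.clear()

def replicated_write(primary, mirror, key, value):
    primary.cache[key] = value
    mirror.cache[key] = value   # fast brick-to-brick copy over the fabric
    return "ack"                # acknowledged before any disk I/O happens

a, b = Brick("a"), Brick("b")
assert replicated_write(a, b, "blk42", b"data") == "ack"
# Suppose brick a dies before destaging: the block survives in b's cache
# and reaches stable storage when b destages.
b.destage()
print(b.disk["blk42"])  # b'data'
```

The latency win is exactly the one claimed above: the write completes in one fabric round-trip rather than one disk seek, at the cost of reserve power to guarantee the surviving cache can always be flushed.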

  • by cybergibbons ( 554352 ) on Thursday April 25, 2002 @06:04AM (#3407898) Homepage
    I feel that the "Lego" comparison is a bit flawed; to me it suggests a completely sealed box which stores data, with power inductively coupled, data through RF, etc. Also, Lego is designed to be built, taken apart, and built again.

    This system is meant to have 27 cubes in a 3x3x3 cube, and when part fails, it is supposed to remain in place. Low latencies and high throughput are due to their being interconnected to the surrounding bricks.

    First issue here is, that people don't like seeing things fail, and leaving them. This thing contains a "fast x86 processor", a gig of ram, (later on) six port Infiniband switches, plus all the disks. One of these failing is expensive - and getting the middle brick out would require removal of many other bricks, and probably knock out the system quite well....

    It isn't really expandable either. For 27 cubes, perhaps the 3x3x3 is the best layout or topology of the blocks, but as you increase the size of the array (100 bricks or something), a cube becomes far more complex, with longer paths between cubes, longer latency, and the impossibility of removing a central brick. Heat would build up in the centre (yes, they are watercooled, but every part will be making heat, and not all of them are connected to the heatpipe and watercooling system).

    Maybe some mad buckyball style arrangement would provide the shortest average path between disks (but this would require a lot of statistical work, and depend on how the data was stored, what sort of access was required).

    We could end up with huge, weirdly shaped storage arrays, like in films.

    The watercooling is a step forwards; working in server rooms is getting far too loud.

    Reliability may be an issue: the 2.5" disks it uses are known to be not as reliable as their larger counterparts. And there are a lot of them in this (12x27 = 324 disks), so a failure is almost guaranteed within a short time.

    I think this may be more of a concept thing than a final product - certainly the lego and modularity aspects need to be re-thought.
    • I'm not in favour of the water cooling. We just got rid of the external water cooling system in our building that supported the IBM 3090 & ES9000 processors. Putting in a new device that would require roof chillers again would be nuts.

      Now, if the water cooling is internal to the unit that is a different story.

      As for fan noise on the computer room floor, most of it comes from the A/C units, not the equipment.
      • That's funny. Our ES/9000 was air cooled when we got rid of it (replaced with a Multiprise 2000 S/390).

        Water cooling is a bad idea. First off, lots of us used to have this stuff years ago, but don't now because we all switched to air-cooled stuff. Secondly, water is just a bad deal with the rest of the stuff that will be in the room with the huge disk array. What happens if one of these pipes bursts some night? Water everywhere. I also expect IBM to get this to be air-cooled eventually. I remember an anecdote about our old ES/9000. The story goes that we almost installed a water cooling system because at the time that was the rage in mainframes: big, honking water-cooled units. Now that is not the trend, although alternative cooling seems to be needed soon because the danged 1-2U servers get hot!
    • It isn't really expandable either. For 27 cubes, perhaps the 3x3x3 is the best layout or topology of the blocks, but as you increase the size of the array (100 bricks or something), a cube becomes far more complex, with longer paths between cubes, longer latency, and the impossibility of removing a central brick. Heat would build up in the centre (yes, they are watercooled, but every part will be making heat, and not all of them are connected to the heatpipe and watercooling system).

      And you'd never be able to fit it into the server room either. :-) By the time you'd built such a large system, the original bricks would be so obsolete that they'd just be space heaters anyway. Better to mirror the old cube onto a new one and throw the obsolete one out.

      Reliability may be an issue: the 2.5" disks it uses are known to be not as reliable as their larger counterparts. And there are a lot of them in this (12x27 = 324 disks), so a failure is almost guaranteed within a short time.

      The cubes would presumably be redundant and autoreconfiguring internally as well, so they would degrade over time until you lost all the controllers/disks/interconnects/powersupplies.

    • Nice points cybergibbons :-) I liked the buckyball idea. And I wouldn't like to leave a dead brick in the middle. What if it had a serious power fault and went up with a big bang (like an ancient Sun Ultra 1 did to me the other day)? I don't like the 3D idea. It is very cool - but not practical. And as you say, 3D will get path length problems when you've got many many cubes.

      So, if you had *lots* of cubes you might want to connect them in a 4D, 5D or some other n-dimensional way as mentioned (still possible with 6 couplers; a block just doesn't connect to *all* its neighbours)...

      ...which brings back memories of transputer chips with their 4 fast-serial connectors. Then beowulf clusters. Seems these schemes often end up with each node being plugged into a super-switch and being able to achieve wire-speed connectivity to any node over 1 virtual hop.

      Which would make this very practical. IBM sells you a backplane, switch modules and storage modules. Which sounds like iSCSI and gig Ethernet switches, which are here today, though I don't know if anyone has made a solution exactly along these lines. Most of the big SAN boxes I've seen still seem to be a big server-type box with disks that plug in.

      There could be mileage in losing the automatic couplers - but doing disks - and controllers - that are stackable (in 2D) and are connected with wires to a big switch - but using black split loom/conduit aka the borg.

      Having wires would be distinctly less cool, but borg tubing would make up for it in my book :-) Naturally, Cisco or Extreme would realise this is very cool and make gig switches in the same form factor, so your switch cube would sit in the middle with all those black tubes. Oooh, I feel assimilated already...

    • Heat would build up in the centre (yes, they are watercooled, but every part will be making heat, and not all of them connected to the heatpipe and watercooling system).

      Every brick is connected to the water-cooling system.
      • I realise that every brick is connected to the water cooling system, but is every single component in each brick connected to a heatpipe, which is in turn connected to the water cooling system?

        As with any device containing air, the air will heat up. The insulation of the outside bricks and the lack of forced air cooling would not help this.
    • Here's a quick thought on how to keep a larger structure cooler, and possibly allow maintenance in the inner layers of blocks. How about a Sierpinski sponge []?

      The bigger the cube, the more perforations, and more ways to get at the inner cubes. Nearly all the cubes could be accessed since they would have an outer surface exposed. Of course, in a 3x3x3 structure you would have only 20 bricks instead of 27, but any of them could be accessed.
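The shape the parent describes is usually called a Menger sponge. At level 1 it keeps 20 of the 27 sub-cubes, and a quick Python check (a sketch, not from the article) confirms the claim that every kept sub-cube has an exposed outer face:

```python
# Level-1 "sponge" over a 3x3x3 array: drop the six face centres and the
# body centre, i.e. any sub-cube with two or more coordinates equal to 1.
def sponge_cells():
    return [(x, y, z)
            for x in range(3) for y in range(3) for z in range(3)
            if sum(c == 1 for c in (x, y, z)) < 2]

cells = sponge_cells()
print(len(cells))  # 20 bricks instead of 27

# Every kept sub-cube lies on the surface of the 3x3x3 bounding cube
# (some coordinate is 0 or 2), so each one could be reached from outside.
assert all(0 in c or 2 in c for c in cells)
```

At each further level the construction keeps 20 copies of itself, so brick count grows as 20^n while every brick stays surface-adjacent, which is the accessibility trade the poster is pointing at.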

  • by eamber ( 121675 )
    Great - but will it die after 6 months of use - or not be fit for 24/7/365 use like some of their current drives?
  • by Anonymous Coward
    To see the picture, click here: ick.gif []

  • Oh well []
    Expandable storage just by adding a brick... very cool! I hope they continue with this idea in other ways as well!
  • I have to say, I wish I'd thought of it first.

    However, this does bring me back to an original idea that I had for a server room. The room will be entirely empty, with large square tiles on the floor. Each tile will have information on the hardware that is below it (server name, switches, routers, etc). And, each will have a latch of some sort. Then, you unlock the latch, and pull up, and a large storage bin below slides up on spring-loaded rails, and locks into place. Then, you service the parts that need working on (swapping tapes, changing bad hdds), and slide it back down. All of the hardware will be sub-ground-level, which will make for much easier cooling, and a lot less cluttered environment.

    These Ice Cubes from IBM would make a helpful addition to this idea, except you could only have probably two or three servers to a tile, attached one on top of the other. And there would be no side-to-side connections.

    Eh, it was an idea.
  • Geez imagine if you have a hundred of these stacked in a cube and one at the centre springs a leak...

  • More like CINDER BLOCKS :)
  • by alta ( 1263 )
    This isn't exactly a new idea... Anyone remember the old apples where you upgraded the memory by stacking more memory on top of what you already have?

    Hmm, didn't those old apples have NUMA? ;)
  • To put all their effort into pushing a new technology, like this one. IBM has been quite creative in their development over the years; maybe they felt it was time to move on to something else. Can't wait to see this...
  • This is the ideal story for Slashdot - legos, data storage, a big company like IBM, new technology...

    I mean, it's like it was tailor-made!

    Must have been difficult choosing an icon for the story. I can just imagine the editor agonizing over the little lego, the ibm logo, the hardware nut... or maybe the Borg Gates icon just to get people to read it.
  • Why use an x86 processor? 1. Aren't there way more efficient processors out there? 2. Why use a competitor's CPU rather than IBM's own POWER chip, which happens to be one of those more efficient CPUs?
  • Are you sure there isn't a caveat that says they are only supposed to be used for, like, five hours a day or something?
  • The RIAA is pushing for new legislation which would make it illegal to exchange Legos, having heard that they may soon be used to store .mp3's
  • This comes on the heels of IBM bailing out of the consumer HD market.

    Are they going to have a 6 week waiting period if one of the drives fails? Are they going to tell people that their drives don't fail any more than anyone else's? Are their drives going to have extraordinarily high failure rates, in some cases 50%? Are they going to tell people that they are using their drives too much if they are on for more than 8 hours a day?

    Sorry, but they can't even get consumer grade hard drives to work with any semblance of reliability. Why would people trust them to make drives that are obviously going to be targeted at high-end commercial boxes?

    There is currently a class action lawsuit pending against IBM for their recent HD disasters that they unleashed upon the public. Maybe people should wait and see, before jumping on the next IBM storage bandwagon.

    Here's the link to the lawsuit [], if you are interested.
  • Now that IBM have announced they are selling off their OEM storage division to Hitachi, this becomes irrelevant.
    It will not make it to market without funding.
  • I'd imagine the software for this would work a lot like freenet, assuming it will be fault-tolerant and hot-swappable. Files would probably be scattered about in such a way that if a piece is temporarily unavailable, it would find the missing piece elsewhere, with the possibility of additional storage coming online or going offline randomly...
  • High-density magnetic storage for the purpose of building very large storage systems will hopefully soon be replaced by optical holographic crystal storage []. Once these systems reach production it will be possible to store hundreds of billions of bytes of data, transfer them at a rate of a billion or more bits per second, and select a randomly chosen data element in 100 microseconds or less.

    This will pretty much make all the obtuse magnetic data bricks and high-density RDRAM obsolete! I would imagine that IBM knows this, but wants to make 32TB data storage systems a reality today, and for now only has magnetic disks at their disposal.
