Technology

Using A Microscope As A Hard Drive 63

An unnamed correspondent writes: "Nature reports that IBM Zurich is developing a practical method for braille hard disks that may eventually be able to pack 60Gbits per square inch, or about four times current disk technologies. I wonder how many moving parts there are with 1024 read heads." Well, they're not really braille; perhaps the analogy to clay tablets made in the article is closer.
This discussion has been archived. No new comments can be posted.

  • . . . you can still read your data (if your fingers are small enough).
  • You have to have a fuckin' little eraser for that.
  • This sounds like a punch card technology that would give programmers the ability to screw up an election instead of a graphic designer.

    ----

  • In case you want to see the comments people had when this was posted on Slashdot five days ago [slashdot.org].
    --
  • by HiNote ( 238314 ) on Monday November 20, 2000 @12:40PM (#611653)
  • Atomic force microscopy measures the force on a probe extremely near a surface. It measures electric interaction between the probe and the surface. Current hard drives probe the magnetic structure of the surface. As far as reading, it's nearly the same. The difference is in using an electrical probe rather than a magnetic one.

    As for writing, I am curious about how rewritable it is.
  • This is that technology made into hard-drive form; before, it was just a theory, and now they know what to do with it.
  • I don't really have any idea how to go about answering your question, knowing nothing about the physics involved. Sorry. Perhaps you should have asked someone more informed.
  • The solution is to read lots of information in parallel, using many AFM tips at once
    It sounds simple, but all those angels dancing on heads of pins could cause serious complications...
  • that article had 8 comments also..... doesn't look like many people got to see it, and they weren't exactly the same.. so nyah

    Jeremy

  • Hardcopies of braille books are deteriorating just like other paper books, except at a faster rate: in normal paper, you only have to contend with acids in the paper, whereas with braille paper you have to deal with these same acids in addition to oils and dirt from physical contact between the reader's fingers and the paper surface. (Imagine every third-grader who checks out a library book rubbing his greasy fingers against the words as he reads it, and you might get an idea of the problem.) As such, there is an effort underway to digitize all old braille books, and only recently has the appropriate braille scanning software [indexbraille.com] been developed [nfbcal.org].

    The ultimate irony would be for these digital copies to be subsequently stored on a disk medium that itself resembles braille. Actually, that would be the penultimate irony. The ultimate irony would be for the disk medium to fail for the same reasons that historical braille systems have failed, but as the blurb points out, this system is not strictly braille-like.
  • The future lies in optical technology. The reason for this: Any drive in which the heads physically come in contact with the storage media is prone to failure.

    Sorry bub, but the heads in a HD never, ever touch the media. When this does happen it is called a "head crash" and is generally considered to be a Bad Thing(tm) because your HD is now a brick.
  • by Anonymous Coward
    The future lies in optical technology. The reason for this: Any drive in which the heads physically come in contact with the storage media is prone to failure

    Why just stop there? What about Quantum Technology?

    I love making unfounded predictions based on technology I don't understand!

    Please, optical is nice, but how many CD drives have you worn out in the last 3 years or so? I've got 3 of 'em chalked up, and I've yet to replace an HD...

    Optical drives require moving parts as well - and these are just as likely to fail as reader heads, on purely mechanical grounds. The fact that there are 1024 heads is irrelevant! Have you ever taken statistics? Having more of them doesn't automatically increase the rate at which each one fails; that is a property inherent in the device. Sure, with 1024 heads it is that much more likely that at least 1 of them will fail compared to a system with a single head, but if the failure rate of each head is about 3 orders of magnitude lower than that of conventional mechanical devices, then the array as a whole would be no more prone to mechanical failure than one ordinary head (see the quick numbers in the P.S. below).

    By the way, in an atomic force microscope the reader heads don't actually come into contact with the surface - that's the whole point! They only do so to write to the disc - which I don't think uses the atomic-force-microscope aspect of the apparatus itself anyway.

    Keep the whoring up, there's lots of jackass moderators that don't understand what you're talking about, and based on statistics they'll be more likely to support your unfounded skepticism than the 2 or 3% who actually understand what the hell this device is.

    -An Anonymous Coward Against Unfounded Technical Criticism
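    P.S. To put rough numbers on that (the per-head figure below is invented purely for illustration): if each head fails independently with probability p over some period, the chance that at least one of n heads fails is 1 - (1 - p)^n.

```python
# Back-of-the-envelope check of the "1024 heads = instant death" claim.
# The 5% per-head failure probability is made up for illustration.

def p_any_failure(p_head: float, n_heads: int) -> float:
    """Probability that at least one of n independent heads fails."""
    return 1.0 - (1.0 - p_head) ** n_heads

print(p_any_failure(0.05, 1))            # one conventional head: 0.05
print(p_any_failure(0.05, 1024))         # 1024 equally flaky heads: ~1.0
print(p_any_failure(0.05 / 1000, 1024))  # 1024 heads, each 1000x more
                                         # reliable: ~0.05, same as one head
```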

  • > any idiot knows that a metal box shields what's inside from magnetic and radio interference by reflection...

    Hah hah! Now explain why metal magnets aren't self-shielding.

    Ryan
  • "how many cd-drives have you worn out in the 3 years or so?"

    Personally, I've never had a CD or DVD drive fail, but that doesn't mean they don't fail. My point being: if my CD drive were to fail, I could just take out the media and it would be fine. With current hard drive technology, if one of the heads fails, and I have something I need on my drive (who here actually has current backups of ALL their files?), I would be up shit creek. Using a laser as opposed to a mechanical head greatly reduces the chances of data loss due to drive failure.

    Where exactly did you see that "the rate of failure of heads is less than 3 orders of magnitude than that of conventional mechanical devices"?

  • Maybe "electronic" is more appropriate. The probe measures electrostatic forces.
  • Actually I suspect it will be fine. At any visible scale, there will be just one head, flying over the disk in the normal way. At a microscopic scale, the head has a comb with 1024 tiny tines. Each tine is constantly adjusted to remain at a fixed height over the disk, by an individual feedback loop, and a piezo-electric actuator (a crystal that stretches or shrinks under electrical control). There are NO moving parts in the usual sense, just a few that stretch or bend. By reading off the activity in the feedback loop, and a fair bit of signal processing, the exact shape of the underlying surface can be read. In laboratory AFMs this is to literally atomic precision. I imagine the AFM disk would sacrifice a little of this for robustness.

    Writing is accomplished by pushing the tine down a little further and scratching the (plastic) surface. Re-writing is done by heating a small area slightly with a laser so that it softens and surface tension flattens it. (There's a toy model of the feedback readout in the P.S. below.)

    Steve
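    P.S. Here's a toy model of the "read the surface off the feedback loop" idea - purely illustrative, with the gain, gap, and units all made up:

```python
# Toy per-tine servo: hold the tip a fixed gap above the surface with a
# proportional controller; the stream of corrections IS the read signal.
# Gain, gap, and units are invented; the real device is far subtler.

def read_track(surface_heights, gain=0.5, gap=1.0):
    """Track a surface profile; return the piezo corrections."""
    tip_height = gap  # start at the target gap over a flat surface
    readout = []
    for h in surface_heights:
        error = tip_height - (h + gap)  # deviation from the target gap
        correction = -gain * error      # piezo stretches/shrinks to fix it
        tip_height += correction
        readout.append(correction)      # feedback activity = the signal
    return readout

# A pit (a written bit) shows up as a burst of negative corrections:
track = [0.0] * 5 + [-0.3] * 3 + [0.0] * 5
print(read_track(track))
```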
  • Probably depends on whether the force varies with the square of the distance, the cube of the distance, or some other power.
  • I don't see why it should be especially over-sensitive. The pins would be suspended over a rotating disk, just as a magnetic head is now, and held at constant (small) altitude by a feedback loop and a piezo. The feedback loop is also the sensor. I imagine the feedback loop would be running at hundreds of kHz or even a few MHz, so most external shocks would be so slow that the signal processing software wouldn't even see them.

    Basically, the heads are so small and working so fast, that external movement will no more disrupt them than a gently rocking ship disrupts your eye-tracking when you read a book.

    The control and sensing electronics would be susceptible to interference just as current magnetic devices are, but shielding is easy.

    Writing is also safe enough. Think of using a pen on shipboard (not in a gale).

    Steve
  • The problem with pure optical technologies is that the information density is limited by the wavelength of the light (several hundred nm). Take chip manufacturing for example, where optical masks limit the structure size. Near-atomic resolution/information density (sub-nm) is not feasible with optical-only techniques.
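    A back-of-the-envelope estimate of that wavelength limit (all numbers invented for illustration):

```python
# Rough diffraction-limit arithmetic: an optical spot can't be much
# smaller than about half the wavelength. Wavelength chosen as an example.

wavelength_nm = 650                         # red laser, CD/DVD class
spot_nm = wavelength_nm / 2                 # optimistic spot size, ~325 nm
bits_per_inch = 25.4e6 / spot_nm            # 1 inch = 25.4e6 nm
print(f"{bits_per_inch**2:.1e} bits/in^2")  # ~6.1e9, a few Gbit/in^2

# The article's 60 Gbit/in^2 for the AFM approach is roughly an order
# of magnitude beyond this simple optical-spot limit.
```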
  • The article indicates that the principle is based on atomic forces, not electric ones (which jibes better with the explanation of the bending needle).

    Are you sure you didn't think of tunneling effects which can be used for similar devices (remember the STM?)
  • It's rewritable, although it might be like flash RAM, where you have to zero it in quite large chunks. To erase an area, you warm it with a laser until the surface flows, and surface tension flattens out all the little scratches which mark the bits.

    As far as I know, you can do this as often as you like.

    It isn't as slow as it sounds, because the areas involved, and the amounts and distances flowing are so tiny.
  • Holographic storage is still just research, I'm afraid (although you could possibly say multilayered discs are adequately similar).

    The problem with optical storage is that we are limited by the wavelengths of the light we are using, as well as the mechanical precision of the optics. At present it seems even good ol' stone-age tech like magnetic storage beats optical.

    Obviously, in order for a product to be marketable it needs to have a reasonable MTBF, but I don't see any necessary reason why an assembly of multiple probes will be much more fragile than a single probe. Remember, they will likely be placed in a single assembly. If they crash, they will do so under the same conditions that a single probe would.
  • I can just imagine something like a tiny version of those really big old mechanical music boxes that played huge interchangeable metal disks like this one. [musicboxshoppe.com]
  • We wouldn't be driving around in cars or by high speed passenger trains. Can you think of the name of the engineer that designed the horse? You know, the one that was completely satisfied with his/her invention. Something has to be invented in the first place before it can be improved upon.
  • At the bottom of the Gabor page:

    "This document was created with StarOffice 5.1 for OS/2..."

    Well, someone had to be using it, I guess - OS/2 would score on the No-Bill-Here-o-Meter, but StarOffice as well! Heroic!

    No .sig, just sMiles
  • On a related note, has anyone else noticed that the claims of IBM's death were a bit premature? They seem to really have re-emerged as a major leader in R&D, and are piling up market leadership points in hard drives and Java tech while making inroads into monitors, back into PCs, and regaining ground lost to Sun, Intel, SGI, and DEC in the big-iron and supercomputing markets.

    I don't think IBM is "regaining lost ground" in supercomputing, I think IBM has nearly stolen the show. Take a look at the current Top 500 [top500.org] list of supercomputers and you'll find that IBM built the current champion as well as 2 of the top 3, 5 of the top 10 and almost half of the top 500.

    Add to that all of the other cool R&D stuff, plus lots of nifty software innovations (Alphaworks, anyone? [ibm.com]) and the largest consulting and professional services organization in the world and it's pretty clear that Big Blue is very healthy.

    So, please, everyone go buy some IBM stock and drive up the value of my options ;-) (yes, I work for IBM, and it's a pretty cool place to work, at the moment).

  • with typical TTM, areal densities for existing hard drive technologies will have easily eclipsed 60Gbits

    Whatever happened to NFR (Near Field Recording) drives? I remember them being touted as the next big thing about four years ago.

    Amorphis
  • by Anonymous Coward on Monday November 20, 2000 @12:33PM (#611677)
    The Sumerians developed cuneiform hard drives in 5000 B.C. Not only were these portable (literally fit in the palm of the hand), but they could be read very well from many different angles. There was one write head (known as a "stylus"). Since some of these drives are still readable, they also hold the world record as the oldest surviving portable data storage system.
  • The future lies in optical technology. The reason for this: Any drive in which the heads physically come in contact with the storage media is prone to failure. Standard magnetic hard drives fail often enough, but 1024 read heads is just insane. You'd be lucky to have the thing last a year. What we need to be working on is faster read/write speeds for optical media. Where are the 3D optical cubes I've been hearing about?
  • by bboy_doodles ( 170264 ) on Monday November 20, 2000 @12:33PM (#611679)
    The information storage capacity of magnetic hard drives has expanded enormously in recent years, but is now nearing saturation point.

    Not again! People have constantly been predicting that hard drives and processors would reach a limit in "a year or two", but has it ever happened? No!

    In almost all situations, technologies do not just die but gradually evolve and lose the theoretical constraints that everyone was worried about. CDs have grown from storing 600 MB to 4 GB and soon 120 GB.

    And on a side note, doesn't this technology seem a lot like CDs? I'd much rather invest in the 120 GB multi-level CDs rather than this "microscopic Braille".

    - BBoy doodles
    C is for Cookie

  • by LHOOQtius_ov_Borg ( 73817 ) on Monday November 20, 2000 @12:35PM (#611680)
    It sounds, at this point, like the system will be somewhat fragile with so many heads and the notion of a third meaningful dimension in R/W. While at this point it seems too fragile for home use, clever shock-proofing and very good quality control in parts manufacturing could result in a reliable drive. In general, though, it's pretty cool. Unfortunately, the productization of this technology seems at least 5 years off, which means I'll have to stick with a pile of IBM 75GB drives for my (personal, sorry) MP3 server.

    On a related note, has anyone else noticed that the claims of IBM's death were a bit premature? They seem to really have re-emerged as a major leader in R&D, and are piling up market leadership points in hard drives and Java tech while making inroads into monitors, back into PCs, and regaining ground lost to Sun, Intel, SGI, and DEC in the big-iron and supercomputing markets. Does anyone know of any really good insights into this in the form of articles, books, etc?

  • by Jeffrey Baker ( 6191 ) on Monday November 20, 2000 @12:44PM (#611681)
    In this month's Communications of the ACM, an article by Carnegie Mellon University researchers describes a device of this type which outperforms existing disk technology using an array of only 20 sensor tips. No such device has yet been built.

    The article is available [acm.org] from the ACM [acm.org] in PDF format. A paid membership, or a small one-time fee, is required.

  • by Anonymous Coward
    Here is a link to yet another microscope. This one also originated at IBM Research Zurich and was perfected at Dalhousie University in Halifax, Nova Scotia, Canada. It is a digital inline holography microscope implementing the ideas of D. Gabor: digital inline holography microscope [pawlitzek.com]
  • This is all fine and good, but plain old magnetic media continues to dominate the arena and looks to do so for a considerable time. Even if a new technology could ramp up to full production really fast (a few years) it would have to have some sort of additional edge against magnetic media to even remotely make a dent in the market, let alone become the new storage media king.

    When a competing technology starts selling competitive devices (capacity, size, speed, reliability, etc.) at reasonable costs then I'll pay more attention.

    On a side note, a little company called BiT Micro [bitmicro.com] manufactures high-performance solid-state storage devices in hard drive form factors, though at considerably higher cost.

  • Where would we be today if researchers became content with existing technology? We could very well still be travelling by steam train and horse and buggy.

    I recently read an article [itworldcanada.com] on scientists working on optical solutions for the miniaturization of computers.

  • You know, I'm just waiting for flash memory to do some drastic price dropping so it can be used as long term storage. It seems to me it would make a lot more sense: flash memory would be faster, less prone to mechanical failure, and a lot smaller. Of course, right now it is _way_ too expensive for really large scale use, but I look forward to the day when hard drives leave us...

    (of course, I don't think they should leave us completely, but instead should be replaced on day-to-day use in many applications)
  • Usually I find it kind of annoying when people point out that something similar was done a month or more ago.... however, this was barely 5 days ago. That is getting on the bad side.

  • by Anonymous Coward
    Hmmm... How do conventional hard drives work?

    Ohhh... now I remember, they scan the surface of a magnetic disk. I bet there are lots of dangers included in running a magnetic storage device in a computer case, with all the stuff going on inside and all...

    Oh wait, we actually use these things right now.

    Now imagine the intellectual leap required to apply the same techniques to protect these AFM drives... It blows my mind!

    Seriously, do you think scientists would waste valuable years of their lives trying to create devices that would fail under such mundane situations?

    Please moderator! Pick me!

    - An Anonymous Coward Devoted to Unfounding Poor Moderation.

  • It's not just that, it's having moving parts at all. Any time you have a component that spins, jerks, or slides, you're going to get failures. Friction is a bitch. I don't know the statistics, but anecdotally speaking, I replace more crapped out hard drives and power supplies than any other kind of component on my boxes.
  • The 2 things lacking in computers today: 1. cheap CPU memory (l-cache) 2. decent hard storage (archiving)
  • Actually, it's not trivial. AFMs are quite prone to vibrations, etc. (I'm not so sure about outside electromagnetic interference). A laboratory AFM setup will use a lot of damping equipment (stabilized tables, vibration-damping foam, etc.) to protect it from minute jars. The precision required to bring a tip to within angstroms of a surface is almost unbelievable. I'm sure the researchers have figured out a way to overcome the difficulties, but I bet it's not a trivial extension of current hard disk technology.

    Don't knock the first poster's comment. It's a realistic caveat. The inventors of the scanning tunneling microscope won a Nobel Prize for their work, and that was essentially all in the details (i.e. the physical principle behind it is pretty simple). Not that I'm comparing this advance to that, but still, nothing's as easy as it seems.
  • Atomic force microscopes measure the Van der Waals forces between atoms in the probe and on the surface. Van der Waals forces arise between neutral atoms, and are essentially electrostatic dipole interactions. A neutral (non-polar) molecule or atom will acquire a dipole moment due to the fluctuations in its "electron cloud", this dipole can induce a dipole moment in a nearby neutral atom, and thus you get a dipole-dipole interaction, which is considerably weaker than the dipole forces between polar molecules (which have permanent dipole moments).

    In principle, it works the same way as a scanning tunneling microscope. You have a force which depends strongly on distance (exponentially for the tunneling current in an STM, roughly 1/(distance^6) for the Van der Waals forces in an AFM), and that allows you to measure distances precisely by measuring the variation in forces (or currents).

    So to sum up, AFMs operate on electrostatic (sort of) forces between atoms on the surface and in the tip. (A quick numerical illustration of why such a steep force law helps follows.)
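    Purely illustrative - the units and the tunneling decay constant below are invented:

```python
# Compare how sharply the two signals respond to a 1% distance change:
# van der Waals force ~ 1/r^6 (AFM) vs. tunneling current ~ exp(-k*r)
# (STM). Units and the decay constant k are arbitrary illustrations.
import math

def vdw_force(r):
    return 1.0 / r**6

def tunnel_current(r, k=10.0):
    return math.exp(-k * r)

r, dr = 1.0, 0.01  # move the tip 1% farther from the surface
f_drop = 1 - vdw_force(r + dr) / vdw_force(r)
i_drop = 1 - tunnel_current(r + dr) / tunnel_current(r)
print(f"vdW force drops {f_drop:.1%}")          # ~5.8%
print(f"tunneling current drops {i_drop:.1%}")  # ~9.5%
# Either way, a tiny height change produces an easily measured signal.
```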
  • Near Field requires about 70 angstroms, and it is really hard to make an air-bearing/lens system that is robust at those distances. Same problem as a hard drive, really, and then what have you gained?

    There are real reasons why media will top out in a few years. The superparamagnetic limit will be reached relatively soon. However, that will still give us significantly more storage than we currently have.
  • Sure there needs to be research on what direction storage should head when we reach a limit on the capabilities of magnetic drives, but shouldn't we be focusing more on ways to improve the speed of the drives instead of the capacity?

    This article states that although the new drives may hold many times more bits than today's drives, they will only run at about the same speed. Video editing is bad enough with the drives we have now. What is going to happen when we begin to work with HDTV streams or even uncompressed video?
  • Ok..I'm not even that upset that they did the same story over..What's getting me is that every time I pull up /. (which is growing less and less often) I see the same old thing..Not necessarily the same exact story (although it's happening more and more lately..) but the same general stuff...These stupid [miniature/massive amounts of data on a] hard drive posts are getting redundant/used/beaten to death..I'm not even sure if this story is about that type of stuff because I don't even want to go through the trouble of reading it. But seriously..Slashdot cannot be that starved for stories...Just give me access to the submission bin - I'm sure I can find something of more interest.

    Then again, perhaps I'm the only one who's sick of these stories.

    ~Steve
    --
  • Not again! People have constantly been predicting that hard drives and processors would reach a limit in "a year or two", but has it ever happened? No!

    Well, I'm not gonna argue with you on that one ... sure, there are the laws of physics to be obeyed, but I'm pretty sure those laws said CPUs would max out around 1 GHz .... however, from what the article says, the HDs are now starting to have the same problems CPUs are having - if you make them much smaller, you can't be sure whether a particular bit really is a 1 or a 0 .... will be interesting to find out how they got around that 3 years from now ....

    And on a side note, doesn't this technology seem a lot like CDs?

    uuuuh ..... yeah ..... Do you know the scale of your average atom? I'd say this thing (which works on "bits" of < 100 atoms) has a slightly higher capacity than a 120 GB multi-level CD .... RTFA

  • >Any drive in which the heads physically come in contact with the storage media is prone to failure.
    You should have worded it like Godwin's law: as a drive's use grows longer, the probability of a failure approaches one if there is physical contact between the heads and the storage media. Any failure will cause the data stored on the drive to be lost.

  • OK, usually I don't reply to ACs, but ... if you'd read the article, you'd see that you can "erase" the written data by heating up the "bit" ... I skipped molecular chemistry, but I guess it's because when you add heat, the atoms re-arrange themselves into a "flat" landscape ....
    (although the article didn't clarify whether this is EEPROM- or Flash-like .... EEPROM can erase individual bytes, while Flash wipes whole blocks at a time ...)
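    For what it's worth, block-granular rewriting looks like this in miniature (the block size and the whole interface are made up):

```python
# Sketch of a block-erasable medium: to change one byte you read out,
# erase ("laser-flatten"), and rewrite the whole block. Sizes invented.

BLOCK = 512  # bytes per erase block, purely illustrative

class BlockEraseMedium:
    def __init__(self, n_blocks):
        self.blocks = [bytearray(BLOCK) for _ in range(n_blocks)]

    def write_byte(self, addr, value):
        b, off = divmod(addr, BLOCK)
        data = bytearray(self.blocks[b])   # copy the block out
        data[off] = value                  # change a single byte
        self.blocks[b] = bytearray(BLOCK)  # erase the whole block
        self.blocks[b][:] = data           # rewrite everything

m = BlockEraseMedium(4)
m.write_byte(700, 0xFF)  # one byte changed, 512 bytes rewritten
print(m.blocks[1][188])  # 255
```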
  • Do you need a special handicapped sticker to park your heads?
  • because if the 'plastic' disc they use gets too hot, there goes all of your information.

    Seriously, if they can manage to iron out the problems that they have with the slow read speed, this looks like it would be a nice solution to the magnetic disc size limitation.

    Of course, going from a magnetic read to a 'needle' read means that you will have to buy all of your software over again. (Bad reference to the vinyl-to-CD change-over for music. And yes, it is supposed to be a joke.)

    Eric Gearman
    --
  • I couldn't get to the link. If anyone managed to get a copy in their cache, could you please post a copy? That way I can make an informed comment in this discussion, unlike the one below: It seems to me that this would have to be a write-once, read-many type of device?
  • You can read about the details of these wonderful devices in the Applied Physics Letters.

    An HTML [aip.org] version.
    A sectioned HTML [aip.org] version.
    Or download the PDF [aip.org].

  • This old Scientific American article [sciam.com] outlines several different techniques, including the mosaic.
  • It seems to me that 1024 pins scanning a surface for "attractive forces" generated by microscopic scratches and dots would be rather susceptible to myriad outside forms of interference. My habit of resting my leg on my tower, for instance, wouldn't be so wise. What about electrical interference (inside the case and outside), ambient radio, etc? Not to mention the writing side of the process.

  • Just what we need, a device with 10X more points of failure. And wait, it's even better. Slower access times, YEAH. Just think: 1000 years from now, everyone will think we all just shrunk.
  • ... 'cause the last thing I need is extremely dexterous blind people with sensitive fingers reading my email.

    As a side note, where the hell does everybody park at the Special Olympics?

    J
  • It was developed by a professor at ETH Zurich (which is collaborating on several projects with IBM Zurich). The only problem was to keep the whole thing cool enough (and with very small temperature variation), so your "hard drive" needed a whole room (just for a very small surface). Sorry, no URL - one of my professors told me about it.

  • Well, the article doesn't mention anything about the rewritability of this type of device. From the description of this technology, it seems that it may be a write-once type medium... or maybe limited to a few rewrites. But it seems that the media would soon be incapable of holding further writes. (Well, I guess one could erase it at a later date, but what would be the point?)

    This would be useful, but only for backup type storage it seems... or a massive database where lots of data gets stored, but little change happens.

    Anyone with any insight into this care to enlighten us as to whether this sort of technology could be used for massively rewritable storage?
  • by atrowe ( 209484 ) on Monday November 20, 2000 @12:38PM (#611708)
    "what good is space, space, and more space when bus speeds are still 133MHz or 266MHz"

    File Servers

    Large Databases

    MP3 storage

    Digital video editing

    Slashdot's Archives.

    There are quite a few applications where massive amounts of storage can outweigh the need for speed. If necessity mandates speed and massive storage, buy several Braille drives and set them up in a RAID 0 config.
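    A minimal sketch of what RAID 0 striping buys you (the stripe size and drive count are arbitrary):

```python
# RAID 0: consecutive stripes alternate across drives, so one big read
# is served by every drive at once. Stripe size is invented.

STRIPE = 64 * 1024  # 64 KiB per stripe

def stripe_layout(nbytes, n_drives):
    """Map a byte range to (drive, stripe index on that drive) pairs."""
    n_stripes = (nbytes + STRIPE - 1) // STRIPE
    return [(s % n_drives, s // n_drives) for s in range(n_stripes)]

# A 256 KiB read across 4 slow-but-huge "Braille drives":
print(stripe_layout(256 * 1024, 4))
# [(0, 0), (1, 0), (2, 0), (3, 0)] -> all four drives work in parallel
```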

  • this reminds me of the time I tried to use a squid as a heat-sink... also of the time I tried to use a piece of stale toast as a floppy disk by encoding it with hieroglyphics.

    sec... lost my train of thought.

    ah, yes... and the time I used a pickle, held to an old Iron Butterfly album with a piece of duct tape, as a tape drive. And the only thing that kept that from working was that the pickle was just way too soggy. Perhaps I'll revisit this using a Vlasic.

  • Burning a CD-ROM amounts to using a laser to create pits in a surface. A microscope reads the device by shining a laser on the spot to determine whether it has been burned or not. The presence or absence of these pits determines the pattern of zeros and ones.

    So using a microscope really isn't news. They're just using a different form of microscope.

  • Where's that penis bird guy when you need him.
