IBM Announcements on Chip Design/Nanocommunications

mr was one of the folks who wrote in about some IBM scientists who have discovered a way to transport information on the atomic scale that uses the wave nature of electrons instead of conventional wiring. The new phenomenon, called the "quantum mirage" effect, may enable data transfer within future nanoscale electronic circuits too small to use wires. Big Blue also unveiled some new chip technology, called "Interlocked Pipelined CMOS," that starts at 1 GHz and will be able to deliver between 3.3 and 4.5 GHz.
  • by Anonymous Coward
    Gee, hasn't this been done before? Like Intel and AMD running the cache RAM at one speed and the CPU at twice that speed? Yep. So now IBM has figured out that instead of running the cache at 250 MHz and the CPU at 500 MHz, they could run the cache at 250 MHz and the CPU at 1000 MHz. Why am I not excited?
  • by Anonymous Coward
    A 128 bit binary word length greatly simplifies the code for calculating the cross product of two 7-dimensional vectors in a vector space over a field of characteristic 2. This is related to the fact that 128 = 2^7.

    For the small portion of the population who don't regularly calculate cross products of 7-D vectors, the main advantage is that "128 bit computer" sounds better than "64 bit computer".
  • Copyleft [] has this [] one.
  • Check there. You should be able to find a vt220 for $20 or so.

    - A.P.

    "One World, one Web, one Program" - Microsoft promotional ad

  • They ditched ISO-9K a few years back, is why. There was probably an ISO spec that said they had to wear those damned socks.

    ISO-9K is a great way to turn any company into a shithole.

    - A.P.

    "One World, one Web, one Program" - Microsoft promotional ad

  • This stuff is going to make our current PCs look like vacuum tube computers. There is no way such a fundamental shift in 'direction' is going to be only an incremental step. This will be like going from plates and grids and electron streams to silicon layers and fields. This has a lot of potential.
  • Sweet, maybe I can begin using cosmic rays to transmit Ethernet frames! Talk about the shortest path to a destination! Maybe space/time folding would work better?

    Seriously though, all this new tech just plain rules. I can't wait to see what things are like when I'm 90!
    Don Rude - AKA - RudeDude

  • Whoa, dude. I don't know which post you *thought* you were replying to, but you've got it all wrong.

    The guy you're flaming was complaining that the technology moves too fast for the user to keep up, or be able to safely upgrade. And I, for one, agree with him.

    I tend to upgrade every two years or so, but if I upgraded componentwise, I'd want a motherboard that lasted.

    And you made a lot of assumptions about that guy, alright? Think about it. If I had a few computers lying around (I do), and I bought a new one, I might want to build a better second one out of the remaining components. I tend to reuse old hard drives, because there are standards that allow this. What about chips?

    Regular Pentium-style ZIF sockets were standard for a long time. Now we have all kinds of weird proprietary cards, buses, RAM, whatever. I know enough not to mix random RAM, but the complexity has definitely gotten out of hand for all but the up-to-date hardware hacker, and that isn't me.

    pb Reply or e-mail; don't vaguely moderate [].

  • Actually, I haven't messed with it, I went from a P133 to a K6/300, and my next purchase will be... well, whatever has the best price/performance ratio in x86-land by mid-summer, probably. But like I said, hardware hacking has gotten complicated enough for me to stay out of it, because I haven't been keeping up.

    It used to be, you didn't miss much by not keeping up, like I said. IDE and SCSI, ATAPI CD-ROM drives, floppy controllers, ISA and PCI, serial and parallel ports... and now they have to mess it all up with a bunch of proprietary, non-standard technologies.

    If there were an open spec, I know I wouldn't have to wait this long for decent support under Linux. But no, we've got incompatible video cards, incompatible processor extensions, incompatible media, incompatible I/O... Obviously there's an advantage to standardizing the hardware platforms and the software interfaces. I'd be happy with maybe two, well-documented, competing standards in each separate domain. But no...

    pb Reply or e-mail; don't vaguely moderate [].
  • <i>ISO-9K is a great way to turn any company into a shithole. </i>

    Hmmm... I tend to disagree. ISO-900x is just a means of proving you have documented all your processes. All it really does is show that the company is a shithole earlier on by pointing out through documentation where they are headed.

    Six sigma... now <b>there</b> is a tedious thing to go through. :-) ISO-900x just guarantees you have documented everything you do; six sigma is the practicality of ISO -- six failures in every million.
  • I know this reply is too late to catch the moderators' eyes, but I hope Mr. Hard_Code at least notices it.

    I mean, most commodity chips heretofore are locked down to one clock. That means the tiniest circuit still has to wait for the clock to compute another value.

    Most commodity CPUs (and many other chips) are actually pipelined. They do some amount of work on the first clock tick, and send the results to another part of the chip; next tick they do the same amount of work on the new data, and another part of the chip works on the results of the last cycle. A "modern" x86 CPU does that 8 to 18 times to handle each instruction. There are other wrinkles to it too.

    When you design a CPU (in my case a "play" CPU much like the really old PDP-8) you use software that does a layout and tells you the critical path (most any VHDL synthesis tool will do). That's the slowest part, which forces the rest of the circuit to be clocked slower. When you find it there are three choices:

    1. The clock met your target speed, and you are tired of working on the project. You're done.
    2. Clock too slow. Find a faster circuit (replace a ripple carry adder with something that propagates the carry bits faster); these normally need more transistors, or wires, or something that is in short supply, like brain sweat.
    3. Clock too slow. Split the job into two parts and pipeline it (or lengthen the pipeline). This will help throughput, but not latency. It will also make anything that needs to flush the pipeline cost more.

    I don't know if the 68000 was pipelined. The 68010 was; I think it had three pipe stages (and no cache, although the loop mode was a lot like a 3-instruction I-cache). Some RISC CPUs have pretty long pipelines, but the modern Intels tend to be longer, in part because decode takes as many as three pipe stages (it does on the K7; not sure if the PII/PIII does it in two or three). A RISC decode is only one pipe stage, and frequently other crap is thrown in too (like branch prediction forwarding logic). By way of contrast, the IBM PowerAS (a PowerPC-like CPU) has a very short pipeline, I think about 6 stages (and thanks to the branch forwarding logic, branch mis-predict penalties are something like only 2 to 4 cycles).

    P.S. I'm not a hardware guy. I'm a software guy, and a hardware wannabe.
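
The pipelining tradeoff in choice 3 above can be sketched numerically. This is a toy model with invented stage delays, not a description of any real chip:

```python
# Toy pipeline model: the clock period is set by the slowest stage
# (the critical path). Splitting every stage doubles throughput but
# leaves end-to-end latency alone. Delays are made-up numbers in ns.

def pipeline_stats(stage_delays_ns):
    period = max(stage_delays_ns)             # critical path sets the clock
    latency = period * len(stage_delays_ns)   # each stage costs one full clock
    throughput = 1.0 / period                 # results per ns in steady state
    return period, latency, throughput

# Balanced 3-stage design, 4 ns per stage:
print(pipeline_stats([4, 4, 4]))           # (4, 12, 0.25)

# Split each stage in two (a 6-stage pipeline of 2 ns stages):
print(pipeline_stats([2, 2, 2, 2, 2, 2]))  # (2, 12, 0.5): 2x throughput, same latency
```

Throughput doubles while latency stays at 12 ns, which is exactly the "helps throughput, but not latency" point.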

  • See, it's a bit more than that. This is, like, running the ALU at a different clock rate from the instruction decode unit and dispatch unit and the like, and having the register-rename circuitry at yet another speed, and so forth. In most modern CPU architectures, most everything is running much faster than it needs to be, or much slower than it can be.

    It's got to be more than that. The TI SuperSPARC did that in '95 or so. Most of the CPU ran at (I think) 50 MHz, but the register file at 100 MHz. In that case (and all similar cases I know of) all the clocks are multiples of each other (no relatively prime clock rates).

    Moreover, if the decode unit runs faster than the dispatch unit there is no useful gain. The dispatch unit has to be fed decoded instructions. The same is true for many (but not all) other parts of the CPU.

    I think (with no proof, and no help from that watered-down article) that most of the parts in this chip run at the same speed (say 1 GHz). They each have their own 1 GHz clock distribution net, and they are not synced with each other (so the ALU could see the start of a clock while the decoder is halfway through a clock, and the renamer has just seen the end of a clock pulse). The boundary between each clocked unit must have a small async buffer.

    That would trade the pain of clock distribution for the pain of having a bunch of async buffering all over the place adding to latency. Given how painful clock distribution is in really big chips, this is probably a positive tradeoff. At least for latency-intolerant workloads.

  • Um... I think we already have quantum waveguides based on long chains of atoms in a crystal lattice. They're called "wires". (I don't know, though, maybe there's a different method that's better.)

    WRT tunneling, you're correct that electrons spend a good deal of time in the "mirage" position - it's not a mirage at all, any more than any de Broglie wave is. For instance, you could call the peak of the interference pattern in the classic two-slit experiment a "mirage" too, since it has a high electron density.

    The speed of the electrons between foci is definitely sub-light speed. The de Broglie waves themselves may travel FTL (the same old non-locality that gets everybody excited) but as in all of these situations it's not much use.

    It seems to me if they used a pair of parabolic reflectors similar to microwave dishes, they could get current to flow between foci over relatively large distances, with no hardware needed in between (except for the copper substrate, which is already a conductor - oops). I believe that different beams could cross as well, with little interference. That would be a useful feature, since it could replace circuit traces. It's doubtful whether this arrangement would be very efficient, due to leakage, diffraction, etc., but it's thought-provoking.

  • IBM has had an interesting past. I remember when they were the evil empire. And trust me, they were a really, really big evil empire in the perception of many people. This was in the days when the terms mini and super computers made sense.

    OTOH, IBM has been noted for their research programs. About 20 yrs ago, there was the big three in terms of corporate research. They were AT&T, IBM, and Exxon. Well Exxon has greatly reduced their research efforts (as had the other major oil companies), AT&T has been split up and then again split up again (Bell Labs is part of Lucent), while IBM has redirected their researchers to perform more applied research. But IBM research is still very impressive. Low temperature super-conductivity was an IBM product that came out their Zurich research facility.

    Off topic, but Thomas Watson many years ago had the now-famous Think posters put up. I used to have a cartoon in my office that showed the Think poster with a guy saying, "I'd like to, but I have too much work to do."

    Ross Perot founded EDS (after being an IBM salesman) to provide software for IBM mainframes. Back then, IBM philosophy was to sell hardware, software was just an afterthought. Hmmm, I wonder if anybody else got rich for selling software that ran on IBM hardware.

    Yup, I'm just rambling. IBM is a "friend" of linux at this moment. They have been very good for the time being. However, as a person who has witnessed the might of IBM in the past, I'm scared of what IBM could potentially do to screw things up. Remember, the enemy (linux) of my enemy (MS) is my friend. Of course, we all live by the ancient Chinese saying/curse, "May you live in interesting times."

  • to run quake4.

    Great. ;) Now all of my machines are out of date, and I won't be able to play any new games at all.
  • wasn't one of the old PDPs asynchronous?
  • From the description, IBM is not using asynchronous logic. They are using synchronous logic. The innovation is that the chip is divided up into multiple, independent clock domains, running at different speeds.

    My question is are the clocks phase locked to a master timing source or are they free running?

  • I've read some papers on research operating systems that use huge global virtual address spaces that are shared across multiple computers. Each object gets a globally unique virtual address that is never reused. That can use up a lot of address bits.
  • Nothing like real innovation, is there?

    It excites me to see technology like this being announced.

  • I'm missing something. You can transmit data about a cobalt atom to another spot where there is no cobalt atom by using this elliptical corral. Sounds neat. But...

    Someone help me make the intellectual leap so that I understand how this can be used to help us build better circuits. Is the idea that we replace wires with really long, elliptical corrals?
  • Fast is relative.

    I'm running a Pentium Pro 233.. and it certainly doesn't *feel* outdated.

    Now, if only I had a fast internet connection to match...
  • by Roofus ( 15591 ) on Monday February 07, 2000 @01:54PM (#1297467) Homepage

    Is it just me, or has IBM made a real turnaround in the last 5+ years? It seems they understand the whole open source movement, they've pretty much ditched their sorry Aptivas, and they seem to be a leader in new technologies. On top of that they've changed the way people perceive them. I remember hearing stories about how they had to wear knee-high black socks to match their black suits long ago, and now I go to an interview with them, and the guy is wearing jeans and a Polo shirt!

    Honestly, this is one large corporation I have respect for. And there aren't too many of those left nowadays.
  • My only guess is to make a chain of ellipses with overlapping focal points. A change at one end focal point would be miraged to the opposite focal point of that ellipse, which is where another cobalt atom is sitting at the focal point of another ellipse, which then cascades down the chain. To trash my own theory, here's a quote from the article: "The intensity of the mirage is about one-third of the intensity around the cobalt atom." A chain of just a few ellipse "links" would have an enormous power loss. The theoretical response is "If you could form the ellipses better, you could get a 99.9% intensity mirage". Improving from 33% to 99.9% efficiency would be impressive, but not unheard of.

    Disclaimer: I got a C+ in physics
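
The power-loss worry above is easy to quantify: if each "link" passes on a fixed fraction of the intensity, the signal decays geometrically. The 0.33 figure is from the article; 0.999 is the hypothetical improvement mentioned:

```python
# Remaining intensity after n mirage "links", each passing on a
# fixed fraction of the previous link's intensity.
def chain_intensity(per_link, links):
    return per_link ** links

print(chain_intensity(0.33, 5))    # ~0.004: under half a percent after 5 links
print(chain_intensity(0.999, 5))   # ~0.995: barely any loss
```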

  • My grandma could do that in her sleep, and she's dead! Company A did nothing special. Besides, there was a paper written at an obscure university ten years ago which made reference to something like what is described

    1: It's the "obscure university" that Alan Turing was a professor at.
    2: It's the "obscure university" that built the world's first stored-program computer (the Manchester Mark I/"Baby")
    3: It's either very similar or identical technology.
    4: They've built the chips. They have prototypes.
    5: Funny how everyone jumps up and down in defence of IBM, when they quite happily quote unrelated tech after unrelated tech to prove that Microsoft doesn't innovate...

  • by spectecjr ( 31235 ) on Monday February 07, 2000 @02:20PM (#1297470) Homepage
    This was probably quite difficult to implement, but isn't exactly conceptually brilliant. Modern computers already run at different clock rates internally. Your disk I/O bus runs at one speed, your video processor runs at another speed and the CPU still spends a lot of time waiting for stuff to come down the system bus from memory.

    It's even less conceptually brilliant, when you see what people elsewhere have been working on - namely wavepipelined architectures.

    Funny... people just keep on reinventing the wheel... fire... and then they patent it to hell.

    IIRC, the guys at Manchester University [] were working on this back in 1989/1990 (or at least they were when I went on a tour of the place...). Back then, it was just called the "wave pipelined RISC chip" - these days, it's the "Amulet". Check it out. It's based on ye olde ARM [] processor architecture - but the implementation is completely asynchronous -- that is, each individual logic element is clocked separately.

    Sure, it's still experimental... sure, it's slower than other chips - but it also predates IBM's announcement by about 11 years. Just goes to show - academia ain't entirely useless ;-)

    Architectural Overview at Berkeley []

    The Amulet Asynchronous Logic Group at Manchester University []

    Who needs clocks? Bah!

  • Just wanted to correct the typo. Boink!

    #include "disclaim.h"
    "All the best people in life seem to like LINUX." - Steve Wozniak
  • What's really nice is that IIRC a lot of Drexler et al's work on nanotech is concerned with avoiding quantum effects that would disrupt their atomic-scale gears, etc. Here the scientists are turning the problem on its head and using the quantum nature of matter at the nano scale for their nanocomputing device. However, obviously heat is still a problem (cooled to 4 K - don't think KryoTech will cut it anytime soon ;-p)
    #include "disclaim.h"
    "All the best people in life seem to like LINUX." - Steve Wozniak
  • How about a memory mapped file system that incorporates the ip address space? Hmmm.. perhaps we need to jump straight to 256 bit busses.

  • Yes, it's great that IBM is thinking ahead to new designs and it all looks great on paper but... how about sticking to the damn problem at hand? Even the guys at IBM are having a bitchass problem working with Motorola to get the PowerPC 7400 (?) G4 to climb over its current 500 MHz wall. At the moment, the G4 evolution timetable is well over 6 months behind schedule and yields of _stable_ 500 MHz chips are unacceptable to sell to Apple (or anyone who can get a hand on one). Until these fundamental chip fabrication problems can be solved, how can IBM even think of pushing the envelope on an even faster design?

    Linux user: if (nt == unstable) { switchTo.linux() }
  • by chazR ( 41002 ) on Monday February 07, 2000 @01:51PM (#1297475) Homepage
    To increase speed, IBM researchers decentralized the clock, using locally generated clocks to run smaller sections of circuits. The design thus allows faster sections of circuits the freedom to run at higher cycles. It also significantly reduces power requirements.

    This was probably quite difficult to implement, but isn't exactly conceptually brilliant. Modern computers already run at different clock rates internally. Your disk I/O bus runs at one speed, your video processor runs at another speed and the CPU still spends a lot of time waiting for stuff to come down the system bus from memory.

    As far as I can see, IBM have scaled this down to a single chip, which will increase overall throughput considerably. Difficult to do, very worthwhile, but conceptually all they have done is to get the latency issues into a smaller space.

    OTOH, this could lead to an architecture with considerably lower power consumption, which is definitely worth doing.

    The bit about 'quantum mirages' has already been discussed on /. a few days ago.
  • I think you could get a sensible cross-product of six 7-dimensional vectors. Or, in general, n-1 vectors of n dimensions.

    Remember how to get the cross-product manually? You make a matrix like this:

    | i  j  k  |
    | x1 y1 z1 |
    | x2 y2 z2 |

    ...and take the determinant. To get a similar matrix with 7-dimensional vectors, you'd need six of them.

    But I don't know how much use that would be.
    Patrick Doyle
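
For the familiar 3-D case, the determinant recipe above works out to the usual formula. A quick sketch, expanding the cofactors along the i, j, k row:

```python
# 3-D cross product via cofactor expansion of the matrix above:
# each component is a 2x2 minor, with the alternating sign on j.
def cross(a, b):
    (x1, y1, z1), (x2, y2, z2) = a, b
    return (y1 * z2 - z1 * y2,     # +i minor
            z1 * x2 - x1 * z2,     # -j minor (sign folded in)
            x1 * y2 - y1 * x2)     # +k minor

print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1): i x j = k
```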

  • What do you mean by a transaction? And what does word size have to do with how many of them you can do per second?
    Patrick Doyle
  • Doubling the word size of your processor instantly doubles the chip area you need for most parts of the chip. The extra area you would use for going from 64 to 128-bit could probably be better spent in other ways, like caching or speculative execution.

    However, 64-bit is definitely worthwhile over 32-bit because 32 bits can only address 4GB. Under Linux, for instance, you only get 3GB of those because the last GB is reserved for the system. This places a hard limit on the size of things you can map into your address space.

    64 bits can address 16 EB (that's exabytes), which should stave off Moore's Law for another 50 years or so.

    Patrick Doyle
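
The address-space arithmetic behind those figures, using powers of two (so GB here means 2^30 bytes):

```python
# 32-bit vs 64-bit address space sizes.
GB = 2 ** 30
EB = 2 ** 60   # one exabyte, in the powers-of-two sense

print((2 ** 32) // GB)  # 4  -> 32 bits address 4 GB
print((2 ** 64) // EB)  # 16 -> 64 bits address 16 EB
```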
  • Now if someone could figure out how to arrange a ring of atoms to create 8 or 16 quantum states, and arrange them so you could build a three- or four-bit adder with carry circuit out of two or three of these rings, then I'd be impressed.

    I suspect scientists are working on it. 'Cause eventually, we're going to have to start using quantum states and tweaking the fundamental nature of the universe to build processors fast enough to keep up with the computational requirements of Windows 2020... :-)
  • I'm a software guy and a hardware wannabe also... but I'm quite aware of pipelining. The fact remains that the pipeline is limited by the slowest component in it. If one of your pipeline stages is memory access, every other stage has to wait at least as long as it takes the memory access stage to complete before continuing.

    With multiple clocks, everything is working asynchronously at the limit of its own circuit. For simple circuits this will be very fast. - the Java Mozilla []
  • They say they use multiple clocks to increase speed. Sometimes the best inventions are those that simply make sense. I mean, most commodity chips heretofore are locked down to one clock. That means the tiniest circuit still has to wait for the clock to compute another value. That doesn't make any sense. Have independent smaller clocks, make the computing asynchronous, and have each component just fill up a queue. Match up the results with their ids/stamps and there you go. Without independent clocks, the slowest component will dictate the overall CPU speed. - the Java Mozilla []
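
A minimal sketch of the queue-and-stamp idea described above. The unit and job names are invented for illustration; real hardware would use async FIFOs rather than Python queues:

```python
from queue import Queue

# One "unit" (an adder, say) drains jobs at its own pace and pushes
# id-tagged results into a shared queue.
def adder_unit(jobs, results):
    for job_id, (a, b) in jobs:
        results.put((job_id, a + b))

results = Queue()
adder_unit([(0, (1, 2)), (1, (3, 4))], results)

# Match results back up by their id/stamp, whatever order they arrive in.
completed = {}
while not results.empty():
    job_id, value = results.get()
    completed[job_id] = value

print(completed)  # {0: 3, 1: 7}
```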
  • Six sigma... now there is a tedious thing to go through. :-) ISO-900x just guarantees you have documented everything you do; six sigma is the practicality of ISO -- six failures in every million.

    Hmmm... you know, maybe that's what the Justice Department could do to penalize Microsoft. Require them to implement the Six Sigma program there, and get all their products compliant. Considering the current state of their software, it would either destroy them when they couldn't do it, or at least slow and bog their releases so much that other companies can somewhat catch up.

    As it is now, they're, what, point-six sigma? :)
  • That's ridiculous. Why do you want the fastest CPU on the market? Why do you honestly need it? What's that? You don't need it? You just want to brag to your other 13 year old friends that your computer is faster than theirs? Oh, tough luck to you. Other people, people who are trying to get REAL WORK DONE, are actually happy that technology is moving quickly. Maybe if you can't handle it, you should go buy an Apple Macintosh. I hear they don't innovate very often. You can have a top-of-the-line computer for years and years.

    Geez... I have a dual cpu Pentium III 450 MHz system, and according to my research, there isn't anything out there much more than 50% faster than it (say, a 733 MHz Coppermine in an i820 motherboard). When dual and quad CPU Coppermine systems become available, I *might* upgrade to one of those. If I *need* the speed increase, that is. Why upgrade when all I could get is a 50% speed increase, though? It'd cost me hundreds of dollars.

    All you need to do is buy a decent computer (Dual or quad CPUs, Ultra2 SCSI, Asus motherboard), and you'll be set for *years*. No need to upgrade every month. It might cost more in the short run, but it'll last a hell of a lot longer than that Celeron/EIDE based computer you bought for $100.

  • When, exactly, are you going to add two 128-bit numbers? Is this a common occurrence for you? It's not for me.
    I'd rather see a 64 bit, 66 MHz PCI bus in consumer motherboards. There are increasingly more peripherals that exceed the bandwidth of the 32 bit, 33 MHz PCI bus. And they're getting cheaper every week. Adaptec Ultra 160 SCSI adapters are only ~$250. I'd buy one if I could actually handle the bandwidth...
  • Regular Pentium-style ZIF sockets were standard for a long time. Now we have all kinds of weird proprietary cards, buses, RAM, whatever. I know enough not to mix random RAM, but the complexity has definitely gotten out of hand for all but the up-to-date hardware hacker, and that isn't me.

    Oh, come on. It's not that hard to keep up to date with new stuff. Slot 1 CPUs go in... Slot 1! Slot 1 has been around for years now. Do you know why Socket 7 was around for so long? Because AMD used it for so long. Intel abandoned it a loooong time ago for their proprietary Slot 1 architecture. AMD couldn't make Slot 1 CPUs. But all you needed to do was buy any old Slot 1 CPU (from 233 MHz to 600 MHz), and it would work in virtually any Slot 1 motherboard. Is that so bad? No, it is not. Which Slot 1 CPUs don't work in Slot 1 motherboards? The new Coppermine CPUs, if you're unlucky. The Coppermine CPUs will work in many (but not all) Slot 1 motherboards! For years now, you could have used the same Slot 1 motherboard, just upgrading CPUs. What are you complaining about? That your 486 doesn't work in a Pentium II motherboard? Oh well... time goes on.

  • Well, ten years ago, IBM was making high-end PS/2 workstations that would have slaughtered the PC you had. They used a high-end bus (MCA), hot-swappable SCSI hard drives, IBM-engineered x86-compatible CPUs (anyone remember Blue Lightning? Clock-tripled 486s running at 100+ MHz that beat most Intel Pentiums in SPEC benchmarks), etc, etc. IBM has always had lots of awesome, innovative products, even available to the consumer market. People just ignored them until recently. IBM was always kick-ass. However, nobody really cared about kick-ass. They cared about cheap. IBM computers were expensive. Now, IBM is balancing kick-ass with cheap, with the added bonus of openness.
  • It will be interesting to see if this results in a practical quantum waveguide to replace wiring. Just insert (or remove) an electron at one end of the pipe and it will produce (or delete) a mirage at the other end.

    I wonder if the "mirage" could be interpreted as the electrons of the cobalt atom tunneling to the image location and spending a fraction of their time there? That less-than-half strength might be because the nucleus is still at the other location and makes the electron density "prefer" that region because it is lower energy, due to the attraction of the positive charge.

    I also wonder what the speed of propagation of the effect is. Switch a gate's output by dropping an electron into an electron trap at one end of the waveguide, and it appears (at, say, 50% density) at the other end, and affects the logic there. How long does it take to happen? Does it exceed the speed of signals in a wire? (That's a very small fraction of lightspeed on a chip, where the wire resistance and stray capacitance form a delay line.) Does it approach that of light in vacuum? Does it EXCEED that of light in vacuum? (Even if the total system can't send signals faster than light in emptiness, which is a very slight improvement on light in quantum vacuum.)

    Whatever it is, my bet is that it will happen at tunneling speed.
  • of course not

    but the Apple IIe's that were in my high school's computer lab might not be state of the art anymore

  • by bugg ( 65930 )
    Can you please explain to me why 128-bit architectures are more logical than 64-bit?

    Perhaps I'm not seeing something, but I can think of very few situations in which 128-bits is a definite advantage over 64.

    Bits != total computational power, people.

  • truly amazing... 5 days later and the mirage
    appears again... but perhaps what is scarier is:

    slashdot + (anti)slashdot -> geocities +14.3 KeV
  • Shouldn't you be off pouring grits [] down your pants while watching a statue of a certain Star Wars related actress?
  • That site is not mine :-) Some troll poster posted it the other day. It's also been on a few other message boards.
  • Well, Crusoe isn't the fastest processor at all. It's just the lowest-power (relative to speed) x86-compatible chip.

    I'd much rather have a cluster of P3 or Athlons.
  • No you don't. It'd be faster, but you could do it with just 4 32-bit numbers.
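
What "just 4 32-bit numbers" means in practice is limb arithmetic: a 128-bit add becomes four 32-bit adds with carry propagation. A sketch, using little-endian limb order:

```python
# 128-bit addition using four 32-bit limbs with ripple carry.
MASK32 = 0xFFFFFFFF

def add128(a_limbs, b_limbs):
    out, carry = [], 0
    for a, b in zip(a_limbs, b_limbs):
        s = a + b + carry
        out.append(s & MASK32)   # keep the low 32 bits
        carry = s >> 32          # carry into the next limb
    return out, carry            # final carry is the overflow bit

# 0xFFFFFFFF + 1: the carry ripples into the second limb.
print(add128([MASK32, 0, 0, 0], [1, 0, 0, 0]))  # ([0, 1, 0, 0], 0)
```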
  • Maybe if AMD started releasing computers with Linux pre-installed, my news submission about AMD demonstrating a 1.1 GHz Athlon using copper-interconnect technology wouldn't have been rejected.


  • "Maybe if you can't handle it, you should go buy an Apple Macintosh. I hear they don't innovate very often. You can have a top-of-the-line computer for years and years."

    -lol- You obviously haven't been keeping up with the news. Apple's G4s are among the fastest things out there.


  • by cryoboy ( 94214 ) on Monday February 07, 2000 @02:47PM (#1297497) Homepage

    Ok the reference for this work is:

    H. C. Manoharan, C. P. Lutz & D. M. Eigler, Nature 403, 512-515 (2000).

    In this experiment a few cobalt atoms were deposited on a copper surface. Using a scanning tunneling microscope the Co atoms were gently dragged into an elliptical (corral) structure, and one Co atom was placed at the focus of the ellipse. (The images of this stuff are gorgeous, and more cool STM images of atoms and atomic manipulation can be found at the STM Image Gallery []).

    Due to the magnetic nature of the Co atom, electrons near the atom tend to align their spins with the Co magnetic field, screening the magnetic moment. This local phenomenon can be imaged by the STM; the surprising result is that another mirage image appears at the second focus of the ellipse. This suggests some sort of long-range electron ordering.

    These experiments are being done with a low-temperature, ultra-high-vacuum STM (this stuff is damn hard), and reproducing these same results in a next-generation processor as a means to transport data is unlikely in the near future. Nevertheless, these results will have a great effect on our understanding of macroscopic quantum systems and ordering.

  • Actually, you couldn't just get new CPUs for the same Slot 1 board. If you bought one of the original Slot 1 boards you would have had to go get a new mobo based on a BX chipset to use a P2 350 or higher.
  • I can think of a number of cases where this would make code run 2x as fast. Pretty much any time you would want to move data around in memory you could now do it in 128-bit chunks, rather than 64 (or, heaven forbid, 32). Memmove/memcpy would be much faster...
  • by gargle ( 97883 ) on Monday February 07, 2000 @01:31PM (#1297500) Homepage
    Using the 'quantum mirage' process, previously posted Slashdot stories magically reappear at another time and place.
  • Thanks so much. I thought I saw one on [] Guess I was looking in the wrong place.

    Nick Messick

  • Based on my reading of the LA Times article on the subject, the basic idea is that moving the cobalt atom from place to place will turn the mirage off and on, giving us 1's and 0's for binary calculations.
  • Moore's Law is taken out of context too much. It is 18 months, not 12, and it was not meant literally. If he had been speaking literally and on a 12-month scale we would have like 400 GHz computers right now.
  • Item: Company A introduces new technology which is X years ahead of any competition, and will have a commercial product which is Y times better than anything available ever before in Z months. /. Response: My grandma could do that in her sleep, and she's dead! Company A did nothing special. Besides, there was a paper written at an obscure university ten years ago which made reference to something like what is described.
    If you will excuse me, I have a computer lab to attend to.
  • Moore's Law sucks; it took less than a year for my 400MHz and 500MHz computers to be outdated.

    This stuff is kind of cool though; maybe we'll get to see some nice VR stuff and better speech recognition. Eventually the hardware will be fast enough to run Windows.

    I would have liked to have seen Intel go to a 128-bit architecture instead of 64; it would have lasted longer.

  • which is why IBM spends a huge amount of money on student placements and sponsorship etc. (though none of it seems to go to the wages of the student employees - shame)

    ex-employee btw

  • great comment.

    The IBM engineers are really ENGINEERS. They are thinking about solutions and how to address the problem *using* the boundaries that nature has given us rather than attempting to find ways around them.

    Also, I didn't realise that K.E.D. did physics as well as writing books. You'd get a moderation if I hadn't already posted to this story.

  • but it didn't help anyone because they didn't sell it to anyone.

    btw presumably you are at Umist/manchester. Top place.

  • One name: Lou Gerstner.

    all he did was give the technology and the engineers they always had the focus they needed

  • Let me know if they plan on getting rid of them... I'm looking for a couple of text terminals to hook up to my linux box.

    I'm looking for something that can emulate a DEC vt100... but I can't find any in my area (north Jersey).


  • Thanks! Oddly enough, I never thought of that. :)

  • Here's a link [] to some work on Asynchronous processors at the University of Manchester, UK
    (Place where the first computer was built 51 years ago)
    Really cool stuff ;)
    Amulet Processors []
  • woah, another person posts the same link
    at approximately the same time as me.
    The /. effect =)
  • Actually the professor who started the project is still there.
    Also the work is still continuing with other students and lecturers. Just because some of the original people left doesn't mean the work will stop.
  • Didn't we see this a few days ago in this story []? I was wondering why it sounded so familiar.
  • Couldn't one also implement an MMX-type thing, where in one operation you'd add two 64-bit numbers in parallel, or perhaps 4 32-bit numbers?

    Where have you been? AltiVec does just that (at least for the 4x32-bit stuff).
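    For readers who haven't met SIMD: a packed add treats one wide register as independent lanes, with no carries crossing lane boundaries. A plain-C model of a 4x32-bit packed add (illustrative only; the `vec128` type and `add_4x32` name are made up, and real AltiVec code would use `vec_add` on vector types instead):

```c
#include <stdint.h>

/* Model of a 128-bit register as four independent 32-bit lanes. */
typedef struct { uint32_t lane[4]; } vec128;

/* Packed add: each lane is summed separately, wrapping mod 2^32,
 * and an overflow in one lane never carries into the next. */
vec128 add_4x32(vec128 a, vec128 b) {
    vec128 r;
    for (int i = 0; i < 4; i++)
        r.lane[i] = a.lane[i] + b.lane[i];
    return r;
}
```

    The per-lane wraparound is the key difference from one plain 128-bit add, which would let carries ripple across lane boundaries.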

  • It has nothing to do with being more "logical."

    It has always been the case that once the computer's competency is there, the applications arrive... Way back in the mid-60's my father took me to an open house at IBM, and proudly demonstrated that the computer could add 2+2 lightning-fast. I asked why that was necessary, since I could do the same thing (ok, I was young once, too!)....

    If you want to know what the new processors will be used for (in series, no less!), keep up with math! Learn about the complexities of modeling visual information (animation on a G4!), exploring "chaotic" (or "complex") domains, etc.

    It's the new stuff we'll be able to DO that makes new processors (etc.) so exciting!

  • A 128 bit binary word length greatly simplifies the code for calculating the cross product of two 7-dimensional vectors in a vector space over a field of characteristic 2. This is related to the fact that 128 = 2^7.

    However, cross products of 2 vectors are only defined in 3 dimensions; how can you uniquely have something perpendicular to 2 vectors in 7 dimensions?!
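    For what it's worth, the grandparent isn't entirely joking: besides dimension 3, dimension 7 is the only one that admits a cross product (it falls out of octonion multiplication), though unlike the 3-D one it isn't unique. A sketch in C using one common multiplication table (the table orientation and function name are my own choices, not from any poster):

```c
/* One conventional 7-D cross product table, as oriented triples
 * (i,j,k) meaning e_i x e_j = e_k; every index pair appears once. */
static const int triples[7][3] = {
    {0,1,2}, {0,3,4}, {0,6,5}, {1,3,5}, {1,4,6}, {2,3,6}, {2,5,4}
};

/* out = x cross y. Each oriented triple contributes cyclically, so
 * the structure constants are totally antisymmetric, which makes the
 * result orthogonal to both inputs. */
void cross7(const double x[7], const double y[7], double out[7]) {
    for (int k = 0; k < 7; k++)
        out[k] = 0.0;
    for (int t = 0; t < 7; t++) {
        int i = triples[t][0], j = triples[t][1], k = triples[t][2];
        out[k] += x[i] * y[j] - x[j] * y[i];
        out[i] += x[j] * y[k] - x[k] * y[j];
        out[j] += x[k] * y[i] - x[i] * y[k];
    }
}
```

    So the answer to "how can you uniquely have something perpendicular to 2 vectors in 7 dimensions" is: you can't uniquely, but a consistent bilinear choice perpendicular to both does exist.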

  • does that mean that it didn't crash in another universe?

    Not only that, if it crashes and you lose data, can you go into the other universe and get the data back? If so, how do you get into the other universe to do so ;-)

  • Intel is currently shipping 800MHz chips. So by Moore's law, they should have 3.2 GHz in about 36 months. If IBM had a 3.3 GHz CPU shipping in exactly 36 months (3 years), Intel would only be 3% slower.

    <sigh> Look, Moore's law is an observed phenomenon, not a fundamental rule, and it is observed in the industry as a whole, not necessarily within individual companies. If Intel doesn't come up with technologies allowing them to go to 3.2 GHz within 3 years, then no, they won't have a 3.2 GHz chip then. It's by no means inevitable that they will come up with such technologies.

    Here's a thought on this 'multi-clock' CPU of IBM's: What clock will they advertise it at? Presumably the clock of the fastest part. Still - maybe, just maybe, we'll start seeing marketing move away from clock speed as a meaningful measurement of chip performance. We can always hope.

  • IBM says 1 GHz will be available in a year, but Intel will definitely ship 1 GHz Willamette CPUs in under 12 months. So in the near term, this isn't a Big Deal. IBM also says that the 3.3 to 4.5 GHz chips are 3 to 4 years out. Intel is currently shipping 800MHz chips. So by Moore's law, they should have 3.2 GHz in about 36 months. If IBM had a 3.3 GHz CPU shipping in exactly 36 months (3 years), Intel would only be 3% slower. So maybe this isn't such a Big Thing?
  • I did use 18 months, not 12.
    800 * 2 = 1600 in 18 months, then 1600 * 2 = 3200 in another 18 months. 18 + 18 = 36 = 3 years.
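    The doubling arithmetic above, as a throwaway sketch (the function is hypothetical, and Moore's Law is really about transistor counts as much as clock speed):

```c
/* Hypothetical: project a clock speed under Moore's-law-style
 * doubling, starting from start_mhz. Doubles only at each full
 * doubling_period (in months); partial periods are ignored. */
double project_mhz(double start_mhz, int months, int doubling_period) {
    double mhz = start_mhz;
    for (int m = doubling_period; m <= months; m += doubling_period)
        mhz *= 2.0;
    return mhz;
}
```

    With an 18-month period, 800 MHz today projects to 1600 MHz at 18 months and 3200 MHz at 36 months, which is where the 3.2 GHz figure comes from.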
  • I am well aware that Moore's Law is not a rule and that Intel chip designers still have to show for work on Monday. I used it in the context of whether this is a Big Deal(tm) or not. A 4GHz CPU would be awfully nice to have in 2000, but Moore's Law predicts that in 2003 or 2004 4GHz might just be run of the mill. Just trying to inject a little perspective, that's all.
  • If anyone had told me five years ago that I'd work for IBM and like it, I would have laughed myself sick, but when my company got CA-d last year I jumped ship to IBM and I have to report a really nice environment. Obviously there are still a number of old school when-we-were-king types, but in general things are really relaxed (well, relaxed except the workload). I've got freedom to work from home if the client doesn't need me, I've never heard anybody mention dress codes, and if you can survive the mountains of paperwork, you can really be okay. (Compare that to CA policies... brrrrrr.) YMMV -- divisions and locations can differ wildly, but it's pretty okay.

    On the other hand, they still are a giant business. They do flexibility and open source because they think flexibility and open source work-- it's a business model and not a philosophy. Companies this big don't have philosophies, no matter how many True Believers they have on staff.

  • Just because the speeds of processors are climbing so fast does not mean that the latest and greatest today will be outdated tomorrow. Outdated is when the processor will no longer run the latest programs at an acceptable speed. Personally, I don't care if my chip is the fastest or the slowest on the market, as long as it runs my programs at a good speed.
  • by deep_magic ( 137913 ) on Monday February 07, 2000 @02:06PM (#1297526)
    If you're running a quantum chip and it crashes, does that mean that it didn't crash in another universe?

    I can see the error messages for Win2010 now:

    "Error - user32.exe performed an illegal operation in this universe - please continue in another universe or restart..."

  • Just as cool as hearing about RNA doing calculations that are beyond my knowledge.

  • Yeah, this won't light an even bigger fire under AMD and Intel's asses. As if new chips aren't rolling out too fast as it is. Sharkey []
  • For cryin' out loud, it's not flamebait. You can read my opinion of the matter on my site. Chips are already moving out too fast to reach the GHz mark; why accelerate the process? The major chip vendors are taking a big hit from all this, and the consumer is getting an influx of worthless chips. Why buy if it'll be outdated next week, literally? I agree that new technology needs to be pursued; my point wasn't that IBM shouldn't move forward with this. In fact, I feel just the opposite. My point is that someone should tell AMD and Intel to chill with the chip wars. Sharkey []
  • I agree. I have a Macintosh IIci that just barely runs Linux, and I haven't upgraded to a new computer. However, my dad works for a large company that does very math-oriented stuff, i.e. orbit calculations. For that, he brings home a laptop that runs at about 600 MHz and he uses all of the CPU for intensive floating point calculations. For most anything you could possibly do at home other than web hosting, the most you could possibly need is around 200 MHz, assuming you're running an OS that doesn't waste CPU.

    Behold the power of ONE
  • yes, that is EXACTLY what this means. ;-)
  • This is very true. The technology is being upgraded so fast that no one can keep up, and it's regoddamdiculous to even try to buy the fastest CPU on the market: not only will it be out of date tomorrow, you won't be able to use it for lack of a supporting mobo (motherboard, for those of you who don't know). This is making many more clock speeds available, but making a stable and long-lasting system less likely. Think about it: they're making quantity, not quality now. Sure, I want a gigahertz processor as much as anyone, but I think I'll wait a year before I get one so I know it works and there's a motherboard for it - but by then, it will no longer be available, nor will supporting motherboards. Well, I guess I'd just like to share my opinion that things are moving TOO fast now, and the race needs to slow down or... well, there has to be a way that the end user doesn't get screwed. - AZ
  • by Signal311 ( 149134 ) on Monday February 07, 2000 @01:47PM (#1297533)
    So does this mean that the IBM 1-piece 386 dos based terminals in my highschool's computer lab aren't state of the art anymore?
  • if quantum mirage is further refined this technology could potentially break all current computing paradigms. one can only begin to imagine the sorts of complex structures made utilizing quantum mirage. imagine a beowulf cluster of handhelds containing crusoe processors fabricated using quantum mirage technology!
  • I don't think the chain of ellipses would work as such, as it would mess up the geometrical symmetry which lets the mirage appear in the first place. Frankly, speaking as a physicist (hopefully the first and last time I ever utter that self-serving phrase), I don't have a CLUE how this could make smaller circuits, except in the extremely general sense that it demonstrates that quantum coherence of electrons can be seen. It's far more likely that the tiny circuits of the future will involve specially-designed molecules like carbon nanotubes or organic thingies. Don't get me wrong -- it's a gorgeous experiment, and the type of thing that makes physicists salivate. I just think everyone except those in charge of funding agencies should shut their ears when applications are discussed, as the hyperbole can get a bit thick.

The first sign of maturity is the discovery that the volume knob also turns to the left.