0.01 Micron Process? 101
hypo writes "According to a recent ZDNet article, IBM is developing a technique called 'V-Groove' that allows the channel lengths of transistors on chips to be 10 nanometers (0.01 micron) and below. Currently, most companies use a 0.18 micron (180 nanometer) process, so this is certainly a giant leap. The only caveat is that IBM is not planning to use this in large chips (i.e., processors) for 10 to 15 years. However, this is still quite revolutionary, because most people thought that a 0.02 micron process would be the fundamental minimum. This all shows that Moore's Law can perhaps hold true in the future. The article also discusses carbon nanotubes, which might reach market faster than experts had previously thought."
Trace Width or Channel Length? (Score:3)
Trace width is the width of the conductors connecting different transistors on the chip. This is important because a smaller trace width means that the whole chip is scaled down, including the spaces between the traces. This raises the capacitance between parallel wires and creates the possibility of cross-talk.
As for channel length, the article says:
Channel length represents the distance electricity needs to travel through a transistor, shorter transistors lessen the distance traveled, delivering greater performance.
While this is related to performance (specifically, switching timings), I am not sure if it is related to trace width at all. The ZDNet article may be mistakenly associating the two.
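For what it's worth, here's a crude first-order sketch of why channel length matters for switching (Python; the saturation velocity is an assumed textbook number, and real switching speed depends on far more than carrier transit):

# Transit-time estimate: t = L / v_sat, assuming carriers cross the
# channel at silicon's saturation velocity and ignoring gate capacitance,
# parasitics, and everything else that sets real clock speed.
v_sat = 1e5                      # m/s (~1e7 cm/s, assumed)

for length_nm in (180, 10):
    t = (length_nm * 1e-9) / v_sat
    print(f"{length_nm:4d} nm channel: ~{t:.1e} s transit time")
# 180 nm -> ~1.8e-12 s;  10 nm -> ~1.0e-13 s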
Also, I think that one may be able to vary the trace width and the channel lengths independently. If that is the case, we may have performance increases from channel lengths even if we hit a wall when it comes to trace widths.
Can someone with some microelectronic background clarify these issues?
Thanks.
Re:Two words... (Score:1)
Encoding at under 20x? Try to find the GoGo encoder webpage. Assuming you have a fast processor (I have an Athlon 700 running WinNT and I can encode at 30x), you should be able to encode quickly and nicely. The Win32 counterpart encodes at 160Kbps and pretty much rocks da haus compared to all my other encoders.
Re:Flamebait?Moderation is doh! (Score:1)
However, the process I described is also called bremsstrahlung. I used to work in the space industry (I am now doing astrophysics), and one of the reasons why thick Aluminium is not used as radiation shielding in space is because of bremsstrahlung effects from high energy protons.
Re:OT: differences in compilation time (Score:2)
Doesn't matter. Object Pascal compiles 10-100 times faster than Visual C++ with all optimizations turned off. The speed comes from a few places:
1. C++ programs tend to be idiotic with the include files. A 10,000-line program may include 500,000 lines of includes. Object Pascal has a much nicer module system.
2. Object Pascal has a much cleaner syntax than C++ and doesn't need a preprocessing step.
3. The Object Pascal compiler is a very nice piece of programming.
Re:Why 10 to 15 Years? (Score:1)
Re:Two words... (Score:1)
Re:Why 10 to 15 Years? (Score:2)
There's also a marketing issue -- As a company, you want to keep one step ahead of the competition. You also want to get the biggest bang for your research buck. If you get too far ahead of the competition, you won't be able to use, and make money off of, some of your other research. It's also nice to have an 'ace in the hole' for when they threaten to overtake you in another area.
Finally there's the simple lead time for going from producing a .01 micron straight line to producing a 100-million transistor CPU from said technology -- and doing it in good quantity with high reliability.
-----
That having been said, I remember a story from a Northern Telecom tech about the (relatively) early days of optical fiber. One of the labs claimed to have produced a really high-caliber optical repeater laser (about the size of a large grain of sugar). Production of the units was fobbed off on a Japanese company because the company big-wigs didn't believe the lab staff that it could be done well using local resources.
Well, the Japanese company messed up the order (they weren't sensitive enough -- a prime specification), and the exec turned to the lab and essentially said, 'We need that order NOW -- please do it with the lab equipment (no time to build a fab facility at this point).'
Well, the lab made such high quality units that they were TOO sensitive. They were reacting to noise from the other electronics (which weren't expecting such high quality in the repeater laser). Rather than re-design the electronics, they went back to the lab and asked them to purposely crank down the sensitivity of the lasers.
Moral of the story: If IBM really HAD to get that stuff out the door in 18 months, they could probably do so. Chances are, however, that they can't see the long-term financial benefit of doing so.
Re:Perpetuity of Moore's Law (Score:1)
And if you keep halving size, you get there in log time, i.e., sooner than you'd think.
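A back-of-the-envelope sketch (Python; the lattice-constant endpoint and the halving cadence are my own assumptions, not anything from the article):

import math

# Halvings from today's 180 nm features down to the silicon lattice
# constant (~0.543 nm), where "just shrink the wires" clearly ends.
start_nm = 180.0
lattice_nm = 0.543

halvings = math.log2(start_nm / lattice_nm)
print(f"{halvings:.1f} halvings")                          # ~8.4

# At an assumed one halving every 2-3 years:
print(f"{halvings * 2:.0f} to {halvings * 3:.0f} years")   # ~17 to 25 years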
My completely uninformed opinion is that silicon has a couple of decades left at best, but computation in general has a century or so before it runs up against the minimum scales and maximum efficiencies of matter and light.
Re:Just a thought... (Score:1)
Re:Just a thought... (Score:1)
Re:286 was protected mode. (Score:1)
Email me.
Don't trust anyone over 90000.
Re:Two words... (Score:1)
now if they could just point that big dish at arecibo at washington, maybe they'd start getting some results...
Re:Why 10 to 15 Years? (Score:1)
Re:Not so profound - Gordon Moore (Score:1)
Re:Why 10 to 15 Years? (Score:2)
Technical matters aside, I guess that'd be like releasing your sophomore album when everyone is still grooving to the first one. I think people have a limit - a measurable one - to how quickly they'll bounce to the Next Best Thing.
So what I'm saying is there might very well be market disincentives for doing such things. IANAE (I Am Not An Economist) and I can't prove it, just taking a wild swing.
My .02
Quux26
Re:Nit-Picking: Micron? (Are you sure?) (Score:1)
I certainly did not make those numbers up -- they were exactly what was taught in high-school science classes -- particularly chemistry, I think. (It was written in the textbook, too.) It was a long time ago, so I obviously can't give a complete bibliographic reference (though it shared the title of "Chemistry" with virtually every other such textbook). I am certain I remember the numbers correctly, despite the time elapsed, though it is possible the author referenced them to cm rather than m, for some cheesy reason (like assuming this would seem small to "kids"); E -6 m = E -4 cm, E -10 m = E -8 cm, so that would work.
"My" notation, is the "engineering notation" found on most calculators, BTW -- looking at the bottom of the screen reveals that <SUP> is not listed as available HTML codes, so I used this notation rather than exponents.
Note that I wouldn't have given them if I didn't have reason to think they were accurate. It is interesting that some other replies managed to be informative, rather than just accusatory and insulting.
Re:No, that's not what Moore's Law means. (Score:1)
Thanks for that informative post!
Crash course in wafer manufacturing (Score:3)
The wafers we supply have an 'Epi' layer on them, which is short for epitaxial. The layer is silicon that is grown on the wafer at a high temperature (I think 950-1000 degrees C). This makes the wafers less rough, and thus allows smaller line widths. The wafers are inspected for defects, and the machines that inspect them can only see particles, pits, etc. down to
Intel already announcing the
The wafer manufacturers are mostly breaking even at this point. Intel is making fat cash, but they are getting it by squeezing all the wafer suppliers. Some have dropped out of the business due to the lean conditions. Nobody really has enough money to buy equipment to make these wafers on a large scale, let alone find vendors that have equipment that meets those specs.
.01 micron, holy crap!
Re:Two words... (Score:1)
Think you've been reading too much sci fi...
Re:Nit-Picking: Micron? (Are you sure?) (Score:1)
OT: differences in compilation time (Score:1)
I have noticed this too. Object Pascal (I used the Borland Delphi 5 compiler) is blazingly fast, much much faster than MS VC++, Borland C++Builder or gcc (though Borland C++Builder seems to be somewhat faster than MS VC++). Even plain C compilation with those compilers is way slower than Object Pascal compilation.
Can someone give an explanation?
Re:0.02 "fundamental"? (Score:1)
(That's what Princeton's Plasma Physics Lab can do.)
But Moore's law says a tad over 6 years (Score:2)
Can't have it both ways. Either it (or something of equivalent density) is out by then, or Moore's law finally breaks down.
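One way to arrive at "a tad over 6 years" (Python sketch; this treats each 18-month Moore period as one halving of the feature-size ratio -- counting transistor density, which doubles with the square of the linear shrink, would roughly double the figure):

import math

ratio = 0.18 / 0.01                  # 180 nm today vs. the 10 nm target
doublings = math.log2(ratio)         # ~4.17
years = doublings * 1.5              # 18 months per halving
print(f"{years:.2f} years")          # ~6.25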
Re:Not so profound - Gordon Moore (Score:1)
I don't think so. Extrapolating Moore's law backward to the 1930s, even with only a factor of 2 per 18 months, gives 1 operation per day. Somehow I think computations went a little faster than that, and certainly faster than 1 operation per 3 millennia in 1900.
If Alan Turing did indeed propose such a doubling every 12 to 18 months (and nothing in my reading suggests it), then such progress drastically slowed some time in the last 70 years.
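A quick sanity check of that backward extrapolation (Python; the ~1e9 ops/sec starting point for the year 2000 is an assumed order of magnitude):

# Run Moore's law backward, halving every 18 months.
ops_2000 = 1e9                              # assumed, order of magnitude

for year in (1930, 1900):
    halvings = (2000 - year) / 1.5
    seconds_per_op = 2 ** halvings / ops_2000
    print(f"{year}: one operation every {seconds_per_op / 86400:,.0f} days")
# 1930: roughly one operation per day.
# 1900: ~1.4 million days, i.e. one operation every few millennia.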
0.01 micron process (Score:1)
Re:Just a thought... (Score:2)
Science to product is a long long haul (Score:2)
Liquid fuel rockets - 1920's
Turbojet - mid 1930's
TV - 1920-something
High temp superconductor - 1986
Digital electronic computer - 1945
Re:Why 10 to 15 Years? (Score:3)
They haven't even made a chip yet! They have just made a transistor or two.
It is not even clear from the article how small they have gotten the channels: it only says that the technique "scales to" 10nm.
There is a long way from showing that a given technology will sort-of-work in a lab to mass-production. That's one of the reasons for the delay.
Another is that there might not be a market. It is currently quite feasible to get a couple of ~1GHz processors with a few gigs of RAM in a machine we can almost afford. Let's face it: few of us sitting here reading Slashdot are using our quad-Xeon workstations to their fullest. Who would really buy it at, say, 100 times the current price? I've just ordered my dual-PIII and I doubt I could easily use more processor speed. (Memory, maybe.) And I'm sure I wouldn't pay half a million bucks for it!
Re:Why 10 to 15 Years? (Score:1)
Another problem would be that IBM would have a proprietary production design, and "closed-source" is evil (around here, anyway).
It sucks, but that's capitalism.
Re:Two words... (Score:1)
Stop running SETI@home! It's the NSA's Echelon client, and by running it you're helping Big Brother!
Re:all right enough (Score:1)
Re:Two words... (Score:3)
I mean, as the article says, sure, servers and stuff will definitely put the increase in performance to good use, but what about good ol' Joe Sixpack using Excel at his office? I mean, aside from cranking SETI@home units faster, is there really such a need for faster processors at home / office?
In all honesty, I stopped noticing any speed differences around 200MHz or so. I used a 200MHz Pentium running Win NT for a while at work, then I went to a 400MHz Pentium II. I couldn't tell the difference at all.
It is getting to where rewriting software and/or changing your approach is much more valuable than processor pissing contests. When I compile code with Visual C++, it seems to take forever on a large project. If I use Object Pascal instead, the compilation time drops by 2-3 orders of magnitude. That's a much bigger win than upgrading my machine to a 2GHz processor.
Stray radiation to become a problem. (Score:1)
Re:Why 10 to 15 Years? (Score:1)
Re:Trace Width or Channel Length? (Score:1)
In general, the 0.xx micron figure tends to refer to a technology 'node' as stated on the SIA roadmap, not some real feature on a chip. It's an incredibly misleading term, and I have often discussed in detail with professors and industry folk alike how it confuses people. Sometimes companies like IBM might actually post more meaningful numbers like channel length or gate length (Leff or Lpoly), but quite often this confuses the average reader, so most companies tend to refer to technologies by their SIA node and not by any true dimension.
Another horrid inaccuracy in the article was the definition of the short-channel effect ("interference between transistors located too close together"). This makes me cringe, as it's not even close to the true definition. The short-channel effect refers to the difficulty of turning a given transistor off and doesn't normally have any relationship with device isolation. Basically, the fields between the source and drain of a "short channel" device make it increasingly difficult to isolate the source from the drain in the off-state. As a result, for ultra-small devices it is difficult to keep the device turned off. This has nothing to do with interference from other transistors.
Re:Two words... (Score:1)
And 640K ought to be enough for anyone
lots of reasons .... (Score:3)
Re:Why 10 to 15 Years? (Score:1)
This breakthrough is just one of the many, many hurdles that have to be overcome in order to get a product that can actually be manufactured.
actually there is a sort of port to this. (Score:1)
"On somewhat of a tangent, there is continuing work to support a subset of the Linux kernel on 8086, 8088, 80186, and 80286 machines. This project will never integrate itself with Linux-proper but will provide an alternative Linux-subset operating system for these machines. "
I think that as well as being a 16-bit chip, another problem with porting Linux to the 8088 was the memory. Was it that the 8088 didn't support protected memory? I forget.
You are right that this guy almost certainly has never tried Linux on an 8088, but it's not impossible.
0.01 Micron! (Score:2)
I would worry about the salmon chip trying to swim upstream to spawn. The worst we have to fear from the V-Groove is some funkadelic dancing.
---
Re:Flawed logic (Score:1)
Just as Newton's laws are accurate, but only in the domain of speeds not approaching c.
So Moore's law is accurate, but things have definitely changed since its creation, and it is almost accurate to say we exist in a different domain.
Cool (Score:4)
Flawed logic (Score:2)
Ahem, a law cannot be both accurate and in need of revision within its (original) domain.
Re:0.01 Micron! (Score:1)
Why 10 to 15 Years? (Score:2)
When we see stories about quantum leaps in computer technology, why are companies so slow to actually produce, implement, and sell them?
I feel releasing this technology now would not only benefit consumers but help drive down prices of other technologies. For example, if IBM released a processor built using this process today, I'm confident Intel's CPU prices would drop.
So, what's keeping IBM from releasing hardware based on this technology in 1 to 2 years instead of 10 to 15? Ideas?
Re:Why 10 to 15 Years? (Score:1)
Re:Why 10 to 15 Years? (Score:2)
They didn't use electron beams. Where did you get that idea?
I quote: "V-Groove, in addition to lithography techniques, uses chemicals to create an anisotropic chemical reaction..."
Don't be in so much of a hurry to post that you don't actually read the article.
Torrey Hoffman (Azog)
Just a thought... (Score:1)
Perpetuity of Moore's Law (Score:2)
What I mean is that since it takes about 10 years for an emerging technology to go from theory to mass implementation, if there were theories that showed the promise of Moore's Law living on for more than ten years into the future, products based on those theories would emerge faster than Moore's Law predicts.
Fox's Law: The estimated time that Moore's Law will hold true will always be close to the time it takes to turn the latest theory into a commercial product.
Kevin Fox
Re:Perpetuity of Moore's Law (Score:1)
First off, Moore's law is not a law as in a law of nature or physics. It should really be called "Moore's observation."
Second, there are fundamental limits to silicon which will be reached sooner or later. Maybe the limits are further out than we think, but you can't shrink those wires forever.
But that doesn't matter. Take a look at Ray Kurzweil's badly written but provocative book "The Age of Spiritual Machines". There is a passage in there which lists the processing power -- bang for the buck -- available back to the 1890s. Moore's 'law' holds, more or less, back all this time.
Kurzweil's commentary on Moore's law is that it is evidently not a property of silicon but of the marketplace, and that we have nothing to fear; silicon will be replaced by something else. No doubt that revolution will be slashdotted.
Typical Californian hyper-optimism, but he may be right in this, at least for the next few hundred years - remember, no matter what the medium, you can't keep doubling the performance indefinitely.
Two words... (Score:2)
In addition, IBM and Intel agree that, especially with faster Internet connections, software will catch up to and exceed the capabilities of today's desktop processors, requiring more performance there as well.
I have one question here: will software really need more and more CPU performance as time goes by? (Code it again, Sam!)
I mean, as the article says, sure, servers and stuff will definitely put the increase in performance to good use, but what about good ol' Joe Sixpack using Excel at his office? I mean, aside from cranking SETI@home units faster, is there really such a need for faster processors at home / office?
Shouldn't other areas of computer science be explored as well? I'm sure there's lots of research going on all the time, but if someone were to discover a faster search / compression / whatever algorithm, that would make up for a slower processor, wouldn't it?
As usual, that's my opinion... and as I said, the truth is I'll probably use it to play better, faster and bloodier games on my PC.
Re:Just a thought... (Score:3)
Yes, and yes. At least, for a while.
By the sounds of the article, they've only managed to create a handful of transistors using this new process. All a transistor consists of is 3 layers of semiconductor with interconnects -- not a particularly complex structure. The next phase will be building a non-trivial circuit. This, no doubt, will require reworking of their technique (read: years of research) to produce an experimental prototype. Then comes tuning to actually make it useful. At this point, they're still basically producing the chips "by hand" -- very expensive and time consuming, with very low yields.
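For a sense of why yields crater on a brand-new process, here's the classic Poisson yield model (Python; the defect densities and die size are made-up illustrations, not IBM's numbers):

import math

def yield_fraction(defects_per_cm2, die_area_cm2):
    # Poisson model: probability a die has zero killer defects.
    return math.exp(-defects_per_cm2 * die_area_cm2)

for d in (0.5, 2, 10):      # mature process ... brand-new process
    print(f"D={d:4.1f}/cm^2 -> {yield_fraction(d, 1.0):.3%} yield on a 1 cm^2 die")
# 0.5 -> ~60.7%,  2 -> ~13.5%,  10 -> ~0.005%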
Once they've proven that the process really does work (assuming, of course, that it does), and that you could conceivably build a real chip with it, they need to design the mass production fabrication hardware. When that's done, they'll actually be able to turn out a few chips, as you said, on a smaller scale -- no doubt still at tremendous cost.
The last barrier is the infrastructure. The final version of the new process will likely require overhauling one or more existing fabs (or building a new one), again at huge cost in both money and time.
10 years from single transistor demo to the first production model is actually pretty quick. It's the same story again for other innovations -- be it faster/smaller chips, higher density hard disks, holographic storage, whatever. The more radical the new strategy, the longer it takes to get it right and get it ready.
Re:Why 10 to 15 Years? (Score:2)
IBM might be able to produce one at great cost for a technical demonstration, but doing it on a regular basis might be beyond the abilities of a production line. Anyway, in 10-15 years there may be a faster, more efficient method of information transfer that we haven't thought of yet. Moves from
Re:Not so profound - Gordon Moore (Score:1)
What has been doubling over the last two hundred years isn't exactly computation in terms of operations/sec. It is actually computation in terms of operations/sec/price. Both Kurzweil and Minsky (and Turing) have written on this. If you trace the amount of computing power that $1,000 buys over the last hundred years or so, you can see that the amount of computation bought per $1k has been doubling every 12-18 months.
-=|t
Re:Perpetuity of Moore's Law (Score:1)
Just to play devil's advocate: Why can't you?
What are the actual physical laws that dictate that you can't? As far as I know, there's no equivalent to the laws of thermodynamics pertaining to information theory.
Kevin Fox
Flamebait?Moderation is doh! (Score:2)
The moderation is really crazy nowadays. Bremsstrahlung is a physical process in which high-energy particles produce radiation, like X-rays, that can scatter the lattice of silicon, resulting in spurious irradiation and damage to components.
Re:Two words... (Score:1)
Finally, someone in this thread gets it.
Nit-Picking: Micron? (Are you sure?) (Score:1)
Last time I checked, a micron was different from a micrometer. Specifically, a micron was E -4 (0.0001) meters, 100 micrometers, or 0.1 millimeters. A micrometer was E -6 m, an angstrom was E -8 m, and a nanometer was E -9 m. Thus, 10 nanometers would be precisely one angstrom, but actually 0.0001 microns.
Who's in error? Have I just been uninformed all these years, or is this confusion on the poster's part, and a mistake in the original report / news release?
Re:Trace Width or Channel Length? (Score:5)
Crosstalk between wires is not just a function of their decreasing spacing, because the aspect ratio of wires is also increasing very significantly (anywhere between 50-100%), leading to a larger lateral area. Copper processes allow wires with a smaller aspect ratio but the same resistance per square, leading to a decrease in coupling capacitance. Low-k dielectrics are also used to decrease the interconnect capacitance.
The problem that we are currently facing is that transistors are fast enough that the critical paths in a modern chip are almost entirely due to the delays of long global interconnects. There are many things we currently do to speed up these wires, including shielding, buffer insertion, and simply more intelligent routing.
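For a feel of the numbers, a crude parallel-plate estimate of sidewall coupling between adjacent wires (Python; it ignores fringing fields, and the dimensions are illustrative guesses, not from any real process):

e0 = 8.854e-12          # F/m, permittivity of free space

def coupling_fF_per_mm(k, thickness_um, spacing_um):
    # Parallel-plate sidewall coupling per mm of parallel run:
    # C = k * e0 * (wire thickness * length) / spacing
    c = k * e0 * (thickness_um * 1e-6) * 1e-3 / (spacing_um * 1e-6)
    return c * 1e15     # farads -> femtofarads

print(f"{coupling_fF_per_mm(3.9, 0.5, 0.2):.0f} fF/mm")   # SiO2, tall wire: ~86
print(f"{coupling_fF_per_mm(2.7, 0.3, 0.2):.0f} fF/mm")   # low-k, squat wire: ~36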
Re:Why 10 to 15 Years? (Score:2)
Re:Just a thought... (Score:2)
Re:Why 10 to 15 Years? (Score:1)
2. Intel supplies IBM with processors for their desktops. IBM wouldn't want to make them mad now, would they?
3. There's no real mass market demand for it at the moment -- as the article stated, there are very few things that your "average" computer user would need that would benefit from them releasing this chip. And no, 250 frames/sec in Quake 3 Arena doesn't count.
4. Because of the lack of mass market demand and the high cost they'd incur by implementing the processing methods they'd need to make this chip, it would be very expensive. A very expensive chip that few people want right now = something Intel won't worry about for a while.
Re:Two words... (Score:1)
Re:Why 10 to 15 Years? (Score:2)
Re:Nit-Picking: Micron? (Are you sure?) (Score:2)
A micron usually refers to a micrometer, although many other people use it for a different measurement. An angstrom is (in your notation) E -10 m. So, 10 angstroms is one nanometer. If you really want to talk small, I suppose you could try picometers, femtometers, or attometers.
To give you an idea of scale, the Bohr radius is about 0.53 angstroms.
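Since the units keep getting tangled in this thread, a quick Python check of what 10 nanometers comes to in standard SI:

length_m = 10e-9        # 10 nm

for name, size in [("micrometers (microns)", 1e-6),
                   ("nanometers", 1e-9),
                   ("angstroms", 1e-10)]:
    print(f"{length_m / size:g} {name}")
# 0.01 micrometers (microns), 10 nanometers, 100 angstroms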
Re:Nit-Picking: Micron? (Are you sure?) (Score:1)
Re:0.02 "fundamental"? (Score:3)
If you go any smaller, your waves become X-rays, and that significantly complicates matters.
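Rough photon energies via E[eV] ≈ 1240 / λ[nm] (Python; the 1240 eV·nm shortcut is standard, the wavelength choices are mine):

# Feature sizes around 20 nm need comparable exposure wavelengths,
# which pushes the source out of deep UV toward the X-ray regime.
for wavelength_nm in (193, 20, 1):
    print(f"{wavelength_nm:4d} nm -> {1240 / wavelength_nm:6.1f} eV")
# 193 nm (deep-UV litho) ->    6.4 eV
#  20 nm                 ->   62.0 eV  (extreme UV / soft X-ray)
#   1 nm                 -> 1240.0 eV  (X-ray proper)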
--
A better name... (Score:2)
Re:Why 10 to 15 Years? (Score:1)
Re:Nit-Picking: Micron? (Are you sure?) (Score:1)
Could be. In chem, the units of measurement are relative to centimeters, grams, and seconds, as opposed to physics, which uses meters, kilograms, and seconds.
0.02 "fundamental"? (Score:1)
The REAL fundamental limit is Mr Heisenberg's Uncertainty Principle: Δp·Δx ≥ ℏ/2!
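Plugging a 10 nm channel into that relation (Python; a back-of-the-envelope only):

# dp >= hbar / (2 * dx) for an electron confined to dx = 10 nm.
hbar = 1.055e-34        # J*s
m_e = 9.109e-31         # kg, electron mass

dx = 10e-9              # 10 nm
dp = hbar / (2 * dx)
dv = dp / m_e
print(f"velocity uncertainty ~ {dv:.0f} m/s")   # ~5790 m/s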
1 THz processors in 25 years? (Score:1)
Obviously this is going to happen somehow, but it's nice to see how it might be done.
"In our days we only had 50GB hard drives and 2GB of RAM..... And we liked it!"
Re:Why 10 to 15 Years? (Score:1)
Foreboding.... (Score:3)
--Jeff
Re:Why 10 to 15 Years? (Score:4)
Re:Foreboding.... (Score:1)
"IBM: V-Groovey or be square"
Re:new moderation option needed: (Score:1)
Re:Two words... (Score:2)
I mean, as the article says, sure, servers and stuff will definitely put the increase in performance to good use, but what about good ol' Joe Sixpack using Excel at his office? I mean, aside from cranking SETI@home units faster, is there really such a need for faster processors at home / office?
Definitely. It still takes longer to encode my CDs as MP3s than it does to pull them off the CD at 20X :-). Assuming the current rash of technologies hangs around, I think we'll see people sending each other video mails within a few years. And I'll still be looking to get that Linux kernel compile time down below 2 minutes, among other things. As CPUs get faster, the things we do with them will be more complex.
Cheers,
Toby Haynes
Re:Flamebait?Moderation is doh! (Score:1)
Quantum effects (Score:1)
Re:OT: differences in compilation time (Score:1)
Wow! (Score:1)
--
Can we READ the article? (Score:1)
The article claims this technology will put IBM 10-15 years ahead of the Moore's Law curve.
To quote the article....
"...will allow the company to stay ahead of the curve of Moore's Law 15 to 20 years in the future..."
Yes, it isn't the best way to express the idea, but it's not that hard to understand.
This is why I don't read Slashdot very often. This is about the 10th time I've seen you misreport an article. I'm amazed at how many people respond without actually reading the article themselves.
Re:Why 10 to 15 Years? (Score:1)
Re:Nit-Picking: Micron? (Are you sure?) (Score:1)
You did manage to get nanometer correct.
Out of curiosity, did you just make these numbers up, or did you read them somewhere?
cheese
Re:Trace Width or Channel Length? (Score:1)
The researchers have come up with a technique for creating short channel transistors without drawing them. This is useful for studying the operation of future devices, but will not directly impact the scaling of circuitry (and die sizes).
Re:anisotropic chemical etch? (Score:1)
I haven't read the article though, so maybe in their context it is wrong...
cheese
Re:Why 10 to 15 Years? (Score:1)
Re:Why 10 to 15 Years? (Score:1)
IBM isn't into commodity PC hardware chips... by dedicated products I'll assume you mean the bulk of the chips in the S/390, AS/400, RS/6000 and Netfinity lines...
--
Re:Two words... (Score:1)
Re:Just a thought... (Score:1)
Thanks for the info. :)
OT: Does anybody know of any good microprocessor mag/zine/web site like "Microprocessor Report [chipanalyst.com]" that doesn't cost like a billion bucks a month to subscribe to? I'm no engineer or expert, but I love reading this type of stuff (and happen to be broke ;).
q
Not so profound - Gordon Moore (Score:1)
Most people know Gordon Moore's great "law". What most people don't know is that his profound statement about computation a) isn't that profound and b) isn't originally his idea.
Anyone who has read the works of our favorite British geek, big Alan Turing, knows that he stated in the '30s that computational speeds double every 12-18 months. Turing took a look at the "computers" dating back into the 1800s. From purely human/mechanical, to entirely mechanical, to electromechanical, to electrical (vacuum), to transistor, to IC, computational speeds have been doubling since Babbage's girlfriend was writing theoretical software! (OK, maybe a little later than that...)
The point is that at each stage in computing history, when one medium reached its limit, another picked up and continued seamlessly along. So the broader Moore's law (the smaller-scale, actual Moore statement was only regarding ICs) will continue, silicon or no silicon. So then the interesting question is what will pick up when silicon dies. Molecular/nano would probably be most people's guess right now.
I'd expect the only failure of "Moore's law" to be that it underestimates the speed at which computing technology will double -- my guess is that in 15-20 years doubling will come even faster than every 12-18 months...
-=|t
Re:0.02 "fundamental"? (Score:1)
Or -- an equally difficult problem -- do we even have a good hard x-ray source?
Re:actually there is a sort of port to this. (Score:1)
Email me.
Don't trust anyone over 90000.
Re:Two words... (Score:2)
Re:Foreboding.... (Score:2)
I can just see it now:
--
Man, I have to lay off of the Saturday Night Fever.
Rami
--
Re:Two words... (Score:2)
Where we could use this kind of technology today is in the high-end routers driving the Internet. As present research is investing heavily in TeraByte routers, even these will be heavily taxed for speed in the coming years. You can bet that folks like Lucent and Cisco are looking very closely at this kind of speed requirement.
G4 Processors (Score:1)
I'm not sure how accurate that data is, but I am sure I have read it multiple times, probably on
By the time Intel starts to manufacture these, I'm sure all of the other processor companies will be too. The Macs will inevitably benefit from this technology also; maybe you'll just have to wait a quarter though.
286 was protected mode. (Score:1)
Re:Why 10 to 15 Years? (Score:1)