
AMD's Next Generation Processor Technology
Esekla writes "AMD has released info about their upcoming processor technology. The press release claims that they're producing circuits that run 30% faster than any other published benchmarks, using 'Fully Depleted' Silicon-on-Insulator and AMD's metal-gate technology, and it actually has a good bit of technical detail for a press release."
Excellent... (Score:2, Offtopic)
Does anyone know if this is press-release hype or a real breakthrough? I'm not a semiconductor expert, but my suspicion is that real breakthroughs generally don't get announced in marketing press releases on Yahoo Finance.
Re:Excellent... (Score:2, Insightful)
Neither - it's incremental improvement. That's how most progress is made.
Re:Excellent... (Score:2)
They, like all the other tech stocks during the bubble, got a nice shot in the arm. I -remember- seeing AMD actively trading over $70/share pre-split. Hell, I remember almost a -year- (right after the Athlon came out, BTW, so you can take your "They stink at turning technology into money" statement and shove it) when they were trading at $30-50/share pre-split.
Had I the money, I'd have made a -hell- of a lot of bank on it, and I -called- it. 3 months before the run-up to 90 I wa
Re: (Score:2)
Metal gates? (Score:5, Interesting)
I also noticed that one of the lines in the slide said something to the effect of, "Mesa isolation was used to keep things simple". Does this mean that they just did that for the one test wafer to keep things easy, but it'll be no problem once we get the process into production? Or are we talking about something that's still many years in the future?
Re:Metal gates? (Score:2, Insightful)
Re:Metal gates? (Score:2)
Re:Metal gates? (Score:5, Informative)
Polysilicon has been the gate material of choice because it is much easier to process. However, metal would reduce the resistance of the gate. (The gate acts like a little capacitor, and the resistance of the gate affects the amount of time it takes to charge up and discharge, which affects the switching time.) I think the processing ease of polysilicon is lost when you don't use silicon dioxide as the gate dielectric - for example, if you used a high-k dielectric. I don't know if metal is inherently more compatible with high-k materials, just that polysilicon is less compatible with them than with SiO2.
They also mention the metal gives a "tunable work function" (probably by adjusting the silicon/nickel ratio), which I would guess changes the turn-on voltage of the transistor. Tuning the turn-on voltage could certainly tweak up the speed a bit.
metal work function (Score:4, Interesting)
Re:metal work function (Score:2)
Clear up some misconceptions (Score:5, Informative)
You're quite right, you can't change the work function of a pure metal - but if you have a blend of materials, they will have to equilibrate, since the electrons in one material will have higher energies than the electrons in the other. Therefore, electrons will move from one material to the other like water flowing downhill, until the average energies of the electrons are uniform between domains (or atoms) of the different materials. This yields a single Fermi level, which you can think of as the average energy of the electrons in the material. By varying the quantities of the materials (here, nickel and silicon), you can change the Fermi level, and thereby the work function, of the blend. So while you can't change the work function of a pure metal (you'd have to apply an impossibly obscene amount of charge to do so), you can make different blends.
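To make the blending picture concrete, here's a toy calculation of my own (not from the article or the poster): it treats the alloy's work function as a composition-weighted average of two endpoint values. The endpoint numbers are illustrative assumptions; real nickel-silicide work functions depend on phase and interface chemistry.

```python
# Toy model: effective work function of a Ni/Si gate blend, estimated as a
# composition-weighted average. The endpoint values below are illustrative
# assumptions, not measured silicide data.

WF_NI = 5.0   # eV, rough work function of nickel (assumed)
WF_SI = 4.6   # eV, illustrative silicon-rich endpoint (assumed)

def blended_work_function(ni_fraction: float) -> float:
    """Linear-mix estimate of the blend's work function."""
    return ni_fraction * WF_NI + (1.0 - ni_fraction) * WF_SI

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"Ni fraction {f:.2f}: ~{blended_work_function(f):.2f} eV")
```

A real process would pin down the composition that hits the threshold voltage you want; the point is just that composition gives you a knob a pure metal doesn't have.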
Re:Metal gates? (Score:5, Informative)
(1) The gate resistance is reduced. This lowers the switching delay in some cases. Remember that the delay is proportional to the product of the resistance and capacitance (the 'RC' product).
(2) In polysilicon gates, the free carrier density is very high (1E20 carriers per cubic cm). Even so, under the high electric fields that are needed to switch a transistor, a small depleted layer is created right at the interface of the gate and the dielectric. This effectively acts as a capacitor in series with the dielectric and increases what is called the "effective oxide thickness" - in practice by about 4 angstroms. This is very bad, especially when process engineers are trying extremely hard to reduce the oxide thickness. At the scales we are at now, every angstrom counts. In metal gates, the carrier density is 1000X higher. This makes the gate much harder to deplete, and you regain those 4 angstroms. This means either higher performance with the same gate dielectric thickness, or the same performance with the dielectric thickness increased by 4 angstroms, thereby reducing the gate tunneling leakage current (and hence power) by an order of magnitude. This is a big deal.
(3) Some high-dielectric-constant materials (candidates to replace silicon dioxide) are not very compatible with polysilicon. This could mean either thermodynamic instability or interfacial charge that "pins" the work function (and affects the switching threshold voltage of the transistor).
(4) In fully-depleted silicon-on-insulator (FD-SOI, or "depleted substrate transistor" in Intel parlance) transistors, the threshold voltage comes out wrong when using doped polysilicon gates. It makes the transistor either too slow or too leaky. There is a desperate need to tune the threshold voltage by using a different work function, which can be found in some metal gates.
Of course, metal gates aren't without their problems. (The predecessors of today's transistors had metal gates - hence the 'M' in CMOS, Complementary Metal Oxide Semiconductor - which were replaced by polysilicon gates for processing ease.) Inability to be patterned easily, inability to withstand high processing temperatures, and reliability issues are just a few of them.
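To put a rough number on point (2), here's a back-of-the-envelope sketch (my own arithmetic; the 12-angstrom dielectric is an illustrative assumption, while the 4-angstrom depletion figure comes from the comment above). It models poly depletion as extra oxide in series with the gate dielectric:

```python
# Back-of-the-envelope: how much gate capacitance poly depletion costs,
# modeled as ~4 angstroms of extra oxide in series with the real dielectric.

EPS0 = 8.854e-12   # F/m, vacuum permittivity
K_SIO2 = 3.9       # relative permittivity of SiO2

def cap_per_area(eot_angstroms: float) -> float:
    """Gate capacitance per unit area (F/m^2) for a given effective oxide thickness."""
    return EPS0 * K_SIO2 / (eot_angstroms * 1e-10)

t_ox = 12.0          # angstroms of physical SiO2 (assumed, illustrative)
t_depletion = 4.0    # effective oxide added by poly depletion (from the comment)

c_metal = cap_per_area(t_ox)                # metal gate: no depletion penalty
c_poly = cap_per_area(t_ox + t_depletion)   # poly gate: depletion layer in series

print(f"metal gate: {c_metal * 1e2:.2f} uF/cm^2")   # 1 F/m^2 == 100 uF/cm^2
print(f"poly gate:  {c_poly * 1e2:.2f} uF/cm^2")
print(f"capacitance lost to poly depletion: {(1 - c_poly / c_metal):.0%}")
```

With these numbers the poly gate gives up about a quarter of its capacitance, which is exactly the kind of loss the parent is describing.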
You learn something new every day... (Score:4, Informative)
Dopant profile and gate geometric effects on polysilicon gate [stanford.edu]
Gate Length Dependent Polysilicon Depletion Effects [stanford.edu]
Also EETimes [eetimes.com] has another interesting article with more information about AMD's presentation at the 2003 Symposium on VLSI Technology in Kyoto, Japan.
mmm... more speed... (Score:3, Funny)
Or at least as heat efficient! (Badum-dum psshh)
Re:mmm... more speed... (Score:2)
If only... (Score:4, Insightful)
well, one can only hope...
Re:If only... (Score:2)
That was a LONG time before the Pentium was released. Don't hold your breath.
Re:If only... (Score:2)
That 1.4 GHz offers some damned fine performance if you go look at the benchmarks.
Good lord people and their GHz envy...
Re:If only... (Score:2)
When will they use this? (Score:2)
I/O Speed Please (Score:5, Insightful)
Re:I/O Speed Please (Score:2)
I meant, NOT very exiting...
Re:I/O Speed Please (Score:3, Funny)
I meant, NOT very exiting...
LOL, I bet you didn't mean that either.
Re:I/O Speed Please (Score:5, Insightful)
Re:I/O Speed Please (Score:2, Funny)
How about a Beowulf of those? *ducks*
Mike.
Re:I/O Speed Please (Score:2)
AMD would really like to have what you describe, but they don't. I/O will only limp along behind processing speed, as it has since the 486. In the meantime, those cycles can be used for other things, such as compression of the data that goes to memory.
However, latency is what kills you in the end, not the data transfer rates. That's what we have caches for, but they don't have 100% hit rates yet. The fun comes when there
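The parent's latency point is easy to quantify with the standard average-memory-access-time formula. The cycle counts below are assumptions of mine, picked to be plausible for a 2003-era machine, not measurements:

```python
# AMAT = hit time + miss rate * miss penalty. Shows why cache misses, not raw
# transfer rates, dominate: a few percent of misses swamps the hit time.

HIT_TIME = 3        # cycles for a cache hit (assumed)
MISS_PENALTY = 200  # cycles to fetch from main memory (assumed)

def amat(miss_rate: float) -> float:
    """Average memory access time in CPU cycles."""
    return HIT_TIME + miss_rate * MISS_PENALTY

for miss_rate in (0.0, 0.01, 0.02, 0.05):
    print(f"miss rate {miss_rate:4.0%}: {amat(miss_rate):6.1f} cycles/access")
```

Note that raising the CPU clock makes this worse, not better: the miss penalty measured in cycles grows, so an ever-larger share of time is spent waiting.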
depleted silicon? (Score:5, Funny)
WAR AGAINST AMD
Re:depleted silicon? (Score:2)
wait just a darn minute! (Score:2, Interesting)
XD
Actual speed (Score:3, Funny)
Not to worry, the next generation of Windows will no doubt be that much slower.
Actual speed doesn't change when bloat happens. (Score:4, Interesting)
At the time, it took a couple of minutes for Windows to boot on a 486-33. Today, it takes a couple of minutes for Windows to boot on, say, a 1.6 GHz P4. Yes, it's doing a lot more, but it's taking just as long as it did a decade ago.
Re:Actual speed doesn't change when bloat happens. (Score:5, Insightful)
And FYI: you can build a reasonably fast system for less than $1,000, whereas a decently fast system in 1993 ran more like $1,500-2,000.
You can build a more top-of-the-line system for $2-4k these days, whereas a top-of-the-line system in 1993 ran more like $3-6k.
Computer people suffer from "The Good Old Days" syndrome just as much as everyone else.
Re:Actual speed doesn't change when bloat happens. (Score:4, Insightful)
Yeah, but how long until it actually logs in? That's a typical MS gimmick. They only measure from power on to logon prompt appearing.
It was incredibly obvious on NT 4.0 workstations. The logon box pops up, but the TCP/IP stack isn't even up yet. You get to type your login info 45 seconds after power-on, but you still can't use the machine for another 90. Longer if you have to wait for all its system tray stuff to load (chat clients, anti-virus, etc.).
Re:Actual speed doesn't change when bloat happens. (Score:2)
I don't suffer too much from Good Ole Days syndrome. I am definitely a case study in suffering from Need-the-Future-Now syndrome, though.
NetBSD (Score:2, Funny)
Working together to defeat Intel (Score:5, Informative)
As you may or may not know, IBM originally developed Silicon-on-Insulator technology and licensed it to AMD. Here is the whitepaper: http://www-3.ibm.com/chips/bluelogic/showcase/soi/soipaper.pdf [ibm.com]
This is the same technology that was used to make the Power4 processor, and will also be used to make the upcoming PPC970: http://www-916.ibm.com/press/prnews.nsf/jan/06C1F211F9B1C24B85256ADF006163AF [ibm.com]
AMD has recently built a new state-of-the-art fabrication facility in Dresden to produce the chips, known as "Fab 30": http://www.anandtech.com/cpu/showdoc.html?i=1773 [anandtech.com]
I hope that together IBM and AMD will continue to update their manufacturing process to stay on par with, or perhaps once again surpass, Intel.
Re:Working together to defeat Intel (Score:2)
Re:Working together to defeat Intel (Score:2, Redundant)
Re:Working together to defeat Intel (Score:3, Insightful)
I like AMD as much as the next guy (running an 1800 XP), but I'm not sure why Intel needs to be defeated... good company, good products.
Intel doesn't need to be defeated, just "competed".
Intel (and every other company) simply needs to be in competition, in a hotly-contested race to produce high quality products for the lowest price in a well-informed marketplace
Absence of competition permits, even encourages, companies to produce lower-quality products, because they can charge high prices for them. [ucl.ac.be]
Re:Working together to defeat Intel (Score:3, Interesting)
They force expensive, unwanted, patented tech on the public that isn't any better than DDR, and through their licensing programs they prevent any third parties from offering alternatives.
I don't think Intel needs to be "defeated" per se, but they could sure use some stronger competition, so they can't pull crap like that again and screw over consumers.
Re:Working together to defeat Intel (Score:2)
Please try again.
Re:Working together to defeat Intel (Score:2)
1. The Pentium floating-point bug of yesteryear
2. The processor ID hoohaa
3. Competition is fun. No one really pushes MS but AMD pushes Intel and vice versa.
I'm in the 3 camp these days.
Re:Working together to defeat Intel (Score:3, Interesting)
Yes, lets all pray that Intel is defeated so that we have a different company that has a monopoly on mainstream microprocessors. Therefore, the existing competition that has driven down the cost of microprocessors, will disappear.
Rip on them all you want, but overall, Intel has been good for the mainstream computer industry. They generally participate in standards groups and for the most part, have an open architecture. Otherwise
Re:Defeat Intel (Score:3, Insightful)
1: AMD Athlons are cooler than P4s that perform equivalently. The old "AMD is hot" mantra came from PIII vs Thunderbird. It's not true any more.
2: Via is hardly "Mickey Mouse". How about ATI or NVIDIA? Asus? Abit? Shuttle? Chaintech? Aopen? Are they all "Mickey Mouse" too? You can buy an Athlon motherboard from every major manufacturer except Intel.
3: The Athlon is not crap. It is STILL one of the highest pe
Fully depleted of charge carriers (Score:2, Informative)
So you don't need to go shopping for a lead ATX case.
I think the full depletion increases insulation so the layer can be thinner.
Makes hacking tough, don't it? (Score:4, Funny)
SOI, shmeSOI. I say we get back to centimeter processes-- much easier to hack.
All well and good, but... (Score:5, Interesting)
...nowadays I think that the last component of a PC which needs speeding up is the CPU. Many other components act as a brake on the real-world efficiency of systems; one particularly close to my heart is cache size. Most computational problems I come across won't fit in 2 MB of cache; therefore, on processors which have a much lower clock speed than x86 offerings but a much larger cache, I get much better results. The SPARC III series is a good example; the clock speed is around 500 MHz (maybe higher on more recent versions), but the 4 MB instruction cache & 4 MB data cache (IIRC) mean that the sort of numerical problems I solve can fly. Of course, it could be argued that this is due to the superiority of the SPARC architecture over x86, but you get my point.
I'd be interested to try out one of the new Pentium M processors (as found on Centrino platforms); I understand they have 1 MB caches, and this may give them quite a performance boost for numerically-intensive stuff.
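If you want to see the working-set effect on your own machine, here's a rough sketch (my construction, not the poster's methodology) that times random reads from an array that fits in cache versus one that doesn't. The sizes and the slowdown factor are machine-dependent:

```python
# Rough cache-effect demo: random reads from a small vs. large working set.
# Not a rigorous benchmark - interpreter and OS overhead add noise - but the
# trend is usually visible.
import time
import numpy as np

def random_read_time(working_set_bytes: int, reads: int = 2_000_000) -> float:
    n = working_set_bytes // 8                 # number of float64 elements
    data = np.zeros(n)
    idx = np.random.randint(0, n, size=reads)  # random access pattern
    start = time.perf_counter()
    data[idx].sum()                            # one gather per index
    return time.perf_counter() - start

small = random_read_time(256 * 1024)         # ~256 KB: fits in cache on most CPUs
large = random_read_time(64 * 1024 * 1024)   # ~64 MB: spills to main memory

print(f"in-cache:     {small:.3f} s")
print(f"out-of-cache: {large:.3f} s  (~{large / small:.1f}x slower)")
```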
Re:All well and good, but... (Score:2)
Re:All well and good, but... (Score:3, Informative)
Re:All well and good, but... (Score:3, Insightful)
Re:All well and good, but... (Score:2)
I'm pretty happy with my dual 500 MHz Xeon. It's a touch slow on video encoding and games, but I'm only using 512k-cache CPU modules. My machine appears to have a dual memory bus; I think that improves performance too. What is in the system depends on who made it and such. Compaq even made their own chipset for their PIII Xeons, and it's the most reliable x86 system I've ever owned.
Even
Re:All well and good, but... (Score:4, Informative)
If you need a 2 MB cache you should consider a Xeon MP, which has just that. Couple this with a reasonably fast core and you should see some good performance for your application. Most x86 processors will have at least 1 MB of cache by the end of the year (Hammer, Prescott, Banias).
As you might imagine, on-chip caches are expensive. As a rule of thumb, the closer the memory is to the processor core, the more expensive it will be.
Your argument that SPARC is superior to x86 is weak. I've designed both kinds of processors and everything these days is basically RISC-like. The x86 code is translated into micro-ops that look like RISC. SPARC also has some stupid instructions and idioms. For example, register windows may seem like a good idea, but they really grow your register file and limit your frequency. Also, delayed branches are stupid and limit many things you can do. If I had to do another SPARC chip, I'd do some translation of my own into more efficient hardware-friendly micro-ops.
SPARC systems are nowhere near as competitive as x86 systems. Their last niche of superiority, with server workloads, will disappear with the proliferation of Opteron systems.
Re:All well and good, but... (Score:2)
Power? (Score:2)
Am I on the right track here?
Intel GHz War (Score:2, Funny)
Re:Intel GHz War (Score:2)
You obviously haven't seen their new chip, the Holyshiteron. It features a 12,000-stage pipeline which allows it to run at speeds in excess of 7 PetaHz.
Processor innovation... (Score:2)
Plastic Surgeon's Nightmare (Score:2, Funny)
On the other hand, some of the terminology used sounds like it came straight from a bad breast implant procedure.
"fully-depleted Silicon-on-Insulator"
Another term for "Your artificial knockers have sprung a leak"
"Strained-Silicon"
Another term for "God, those are humongous Jugs!"
Larry the Cow from Gentoo says... (Score:2, Funny)
Disclaimer: I use Gentoo
x86 / AMD == IBM / PowerPC ... ? (Score:2)
Here's [hardwareanalysis.com] a small article describing their relationship.
The question. (Score:2)
But the question is: are these real percentages, or is it just "30%+" performance marketing?
Robert
Can somebody clue me then... (Score:2)
I can go out and buy an AGP card with decent on-board RAM and an outdated GPU. If current video cards run at, say, 500 MHz, and this card is antiquated, etc. etc...
Why will a 1.4 GHz
Re:Can somebody clue me then... (Score:2)
If the graphics card says it can do it, why would the CPU assume it can't? If you want good snappy video performance, then that's where you should sink your mad money. What good is it for the CPU to spend 10 cycles waiting on the video card instead of 5?
Intel vs AMD (Score:5, Insightful)
Phaeton Sez (Score:2, Insightful)
Unfortunately, the other industries are market driven, and there are too many people who stroke off to Overclocker Weekly centerfolds of the Latest Greatest Processor(tm).
What we *really* need is to completely pitch the entire x86 platform and start over from scratch. You all
Re:Phaeton Sez (Score:2)
30% faster circuits, "hmmm..." says marketing (Score:3, Insightful)
for heat, go retro (Score:3, Funny)
Re:for heat, go retro (Score:3, Funny)
Re:Will anyone notice the speed? (Score:5, Insightful)
And I'm not talking about web servers, but heavy database work, HPC, etc. We are witnessing an era where proprietary Unix systems are being brought down from their pedestal, and having good performance figures can't hurt.
Your mom will also like it, what with all the video & image editing and stuff.
Why is it that every time an increase in computing performance is reported, Slashdot is full of people whining about why they don't need it?
Re:Will anyone notice the speed? (Score:5, Insightful)
Re:Will anyone notice the speed? (Score:5, Insightful)
I have said this before, and I will say it again. I'm a professional software developer. I work on high-end 3D games, and I have a penchant for working with large, high-level languages that so many programmers put down as "too slow," such as Lisp, when I can. When I had an 866 MHz Pentium III, wow, that was my dream machine. It felt like I had infinite processor cycles. If something ever felt a little sluggish, it was because I did something dumb, and a little algorithmic tweaking made it go away. I never felt the need for more speed. Ever. Seriously. And now I have a P4 with 3x the clock speed (which I have for reasons other than the old PC not being fast enough).
The "gotta have more speed" issues come down to three major things:
1. Certain very specific tasks eat up all the processor power you can throw at them, such as high-end scientific numerical work (think: systems of tens of thousands of equations) and video compression. Both of these are specific enough that they shouldn't be driving general, across-the-board, desktop CPU development. Ideally, video compression should be done via coprocessor, just as drawing texture-mapped triangles is. If we didn't have GPUs like those from nVidia and ATI, we'd need CPUs clocked at 100 GHz in order to achieve the same results.
2. Some things are slow, but they often come down to really poor design or have nothing to do with processor speed. Boot time, for example. Or sometimes you hit Help in a giant program like Quark or Maya and there's a substantially long period before the help shows up. That's not a processor bottleneck; that's another program being paged in, maybe even the Java runtime stuff to support it, and then a monstrous index of data being loaded. But people see things like this and immediately think the processor is too slow.
3. There are certain outdated--IMO--activities that some people engage in which are fundamentally flawed, and hence slow. A good example is building monstrous applications using C++. C++ doesn't have formal support for separately compiled modules, so each file is compiled independently; you need an ugly make system to sort out the dependencies, and then everything gets thrown into a massive link step at the end. People who write code with Delphi don't have this problem; compile time is effectively zero for most projects. Ditto for Lisp or Python. C++ is a necessary language, but again, it shouldn't be the impetus for processor upgrades.
Thanks for reading.
Re:Will anyone notice the speed? (Score:3, Insightful)
nice -n 19 [insert big CPU intensive task here]?
Re:Will anyone notice the speed? (Score:2)
Hosting 300 PostNuke sites on a Xeon system will keep that CPU burning through cycles!
Plus hosting includes many other things: MailScanner running F-Prot virus scanning & SpamCop checking on every inbound/outbound mail, plus POP3, FTP, and httpd/https processes.
Speaking of https... are those accelerator cards worth it? What a hog on the CPU!
Re:Will anyone notice the speed? (Score:3, Insightful)
Do you know how much time your CPU spends computing and how much it spends waiting for data to arrive from your RAM?
Re:Will anyone notice the speed? (Score:3, Insightful)
I don't think /.ers are completely unjustified there... It certainly seems that most computer technology is seriously lagging behind the processor (RAM being an exception).
The PCI slots that were on 486s are the same ones that come with your bright and shiny 3 GHz AMD processor... That is certainly a serious imbalance, and it is very strange that tech companies have not r
Re:Will anyone notice the speed? (Score:2)
Re:Will anyone notice the speed? (Score:5, Interesting)
Compile times for programs and render times for graphics are steadily getting better, which means they finish projects faster and have more developed social lives.
Which brings me to an interesting question. Is this true:
Faster CPUs = More free time for 'Working' Nerds?
It seems to work in my circle of friends, but is it a 'universal' truth?
Not at all... (Score:5, Interesting)
I work in the 3D department of a television production studio, and the better the equipment we get, the more demanding the clients are. Often enough it's even worse - we might show a new feature we couldn't do before because the rendering times would have been too long, but instead of taking 3 or 4 times as long as before, the new hardware brings it down to 1.5 or 2 times - it still takes longer, it's just that now we can do it.
I sort of agree (Score:2)
Re:Will anyone notice the speed? (Score:2)
I dunno, having a long-running compile is a great time to refresh slashdot.
Wrong. (Score:2)
I miss the days of compiling a small program and having time to run to 7-11 and get a soda.
Re:Will anyone notice the speed? (Score:2)
Key point: All computers wait at the same speed.
If you're writing code, unless you're working on a huge project and make a change equivalent to changing stdlib.h, compile time won't be a significant factor in your work. You spend very little time compiling code compared to the time it takes to write i
its probably a result of (Score:4, Insightful)
Re:its probably a result of (Score:2)
Re:Will anyone notice the speed? (Score:3, Insightful)
My dad is getting into editing my and my sister's childhood videos. His user experience would probably gain substantially in quality up to a 20-50 GHz CPU speed.
I plan to play Doom III, and I have every reason to believe that there will be significant improvements to that experience up to 10 GHz at least.
I have written a number of test applications in the scientific computing arena for which insufficient CPU time is available to even consider doin
Already the case (Score:2)
At that time, my new iPaq (not the PDA - the business desktop system type that seems to be relatively unknown) was competitive with some much more expensive (but 2-3 years old) high-end computing hardware. My boss was impressed at how well the $1,000 system he bought for his summer intern performed. And that system was only 500 MHz.
1 month into my internship, I started running simulations on that machine. Some only ran for 10 minutes, but each batch
Re:Will anyone notice the speed? (Score:4, Interesting)
But getting away from the made-up benchmark, everybody in the computer industry is targeting those two groups right now: big servers and gamers. Those are the only two places where the industry actually makes any money. Gamers are the idiots who will pay $500 to get 10fps more in Quake, and businesses can afford to spend $50k (or more) on a single computer.
This shouldn't surprise anyone, though, because it's the way technology usually works. One or two interested groups spend obscene amounts of money on something that nobody else cares about. They make incredible advances, which go largely unnoticed, and then five years later people start seeing ways to apply the "useless" technology to all sorts of different things. The space program would be a good example of this. All sorts of objects we use every day owe their existence to the space program, which people continue to criticize as a waste of money. Sure, maybe the space shuttle doesn't do me any direct good, but the technology we came up with in building it sure does. The processor race works in a similar way. As CPUs get faster, software can add more and more useful features without impacting the performance of existing ones. Of course, some of those features are an annoying waste, but we still get a few good ones out of it.
Go buy a DV camcorder (Score:5, Interesting)
You'll change your tune.
With some of the more advanced video compression algorithms (DivX, for example - yes, it has legit uses; it's great for distributing home videos to relatives), a 10% increase in CPU speed can mean an hour or two off of your compression time.
Re:Will anyone notice the speed? (Score:5, Funny)
Re:Will anyone notice the speed? (Score:2)
I know similar software has been used for large mainframe computers in the past, specifically a package called "BEST/1". Of course with these systems, upgrading the CPU practically requires floating a c
Re:With the obvious question, being why. (Score:3, Insightful)
Re:With the obvious question, being why. (Score:2, Insightful)
Faster CPUs are a huge benefit. (Score:5, Insightful)
You obviously haven't tried compressing 2 hours of video into DVD-quality MPEG-2, let alone trying to compress it into DivX to send home videos to some relatives.
Would we really need more than 800 MHz on a home computer? I have a 1.7 GHz P4 laptop and a 1.1 GHz Athlon. Upgrading to a Barton 3000+ (2 GHz or so actual clock rate, but much more efficient per clock than my current TBird) would take my 14-hour encoding jobs down to 7 hours. That's the difference between taking most of the day and running while I sleep.
And re-encoding 1080i HDTV recordings down to a more manageable size... yikes... I've had 24-hour encoding jobs before.
So my suggestion: go buy a DV camcorder, or an HDTV tuner card. I guarantee you'll be desperate to upgrade that poke-ass 800 MHz machine in under two weeks.
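For what it's worth, the parent's numbers are self-consistent if you assume encoding is CPU-bound and scales linearly with effective speed - a simplification, since memory and disk play a part too. A quick sanity check (the figures just restate the comments above):

```python
# Linear-scaling estimate of encode time vs. effective CPU speed (assumed
# CPU-bound). Figures restate the comments above; speedups are approximate.

job_hours = 14.0
speedup = 2.0   # rough TBird 1.1 GHz -> Barton 3000+ effective speedup

print(f"{job_hours:.0f} h -> {job_hours / speedup:.0f} h")  # 14 h -> 7 h

# And the DivX comment above: a 10% speedup on a 14-hour encode saves
# 14 * (1 - 1/1.1) ~= 1.3 hours, i.e. "an hour or two".
saved = job_hours * (1 - 1 / 1.1)
print(f"10% faster CPU saves about {saved:.1f} h")
```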
Re:OK....so when will the 3Ghz Opteron come out? (Score:2)
Re:I like this.. (Score:3, Informative)
Re:translation (Score:2, Funny)
Not quite. (Score:2)
While they may be able to get a 30-35% improvement for PMOS alone, and 20-25% for NMOS (or was it the other way around?), if implemented in a chip, improved PMOS transistors without improved NMOS would result in almost no maximum speed improvement. (It would likely improve power consumption, but not by as much as the speed benefit of the transistor itself.)
This is because any given gate involves both NMOS and PMOS transistors.
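A toy model of that argument (mine, with made-up per-stage delays): along a chain of CMOS inverters the critical path alternates PMOS pull-ups and NMOS pull-downs, so speeding up only the PMOS buys much less than its standalone 30%:

```python
# Toy critical-path model: half the stages switch via PMOS pull-ups, half via
# NMOS pull-downs. Delays are arbitrary units, not circuit-simulation output.

T_PMOS = 1.0  # per-stage pull-up delay (assumed)
T_NMOS = 1.0  # per-stage pull-down delay (assumed)

def path_delay(pmos_speedup: float, nmos_speedup: float, stages: int = 10) -> float:
    half = stages / 2
    return half * (T_PMOS / pmos_speedup) + half * (T_NMOS / nmos_speedup)

base = path_delay(1.0, 1.0)
pmos_only = path_delay(1.30, 1.0)   # 30% faster PMOS, stock NMOS
both = path_delay(1.30, 1.20)       # both device types improved

print(f"PMOS only: {base / pmos_only - 1:.0%} faster path")  # ~13%, not 30%
print(f"both:      {base / both - 1:.0%} faster path")       # ~25%
```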
Re:Quick (Score:2)
IMHO, someone needs to buy the newest stuff, otherwise the prices won't come down.
Re:YES, I could use 1000X more processor speed (Score:2, Funny)
I think you might be better off with a girlfriend
Oh, wait, this is /.
Silly me.