
IBM One-Chip Dual Processor Due Next Year

PureFiction writes, "Looks like IBM is going to be scaling processors at the chip-die level. ZDNet has this story about plans for a dual-processor, single-die chip that will operate at upward of 2 gigahertz. It will be called the Power4, will use a .18 micron fab process, and feature on-chip L2 cache (supposedly quite large, though no numbers mentioned), and bus speeds of 500MHz. I wanna overclock one of these bad boys ..." Better get out your pocketbook, then -- they're slated to power RS/6000 servers rather than consumer PCs, at least for a while. 64 bits, copper interconnects, and plans to move down to a .13 micron fab show that IBM is thinking long-term. Similar technology may reach your desktop first, though, in products like AMD's Sledgehammer.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    "Windows 2000 has "load-balancing" where it will run processes that are processor intensive on the chip that isn't running the OS."

    Either this shows your fundamental misunderstanding of how SMP is implemented in modern OSes, or W2K's SMP is where Linux 2.0.x's was (Megalock. Only one process in kernel at a time). Judging from the benchmark scores, I'd put a lot of money on option A.

    BTW, SMP OSes don't call that "load-balancing"; they call it scheduling.
  • by Anonymous Coward

    I saw a VLSI layout of one of these puppies about a year ago. It's one of those things where you've got to do a double-take. One side was a mirror image of the other. At first I thought it was an old POWER mirrored for double-redundant mission-critical stuff. But then I noticed the linewidth...

    Hiding before IBM lawyers get to me...

  • Heck, I just noticed it was the bus running at 500MHz, with the CPU better than double that. Now I'm really impressed!

    -Paul Komarek
  • Yeah, Goldfinger took it.... oh wait, he was foiled by James Bond...

    Actually, Goldfinger tried to irradiate it and make it unusable; there was simply too much to carry away.

    But there's still 140 million ounces of gold at Fort Knox according to the U.S. Mint's web site.
  • At the same clock speed, PowerPC chips run approx 40-50% faster than the PIII equivalent. So your 550MHz PPC is approx equivalent to a 750MHz PIII.

    Though (Hint Hint IBM/Motorola) it *would* be really nice to have a 1+GHz PPC! A 1GHz PPC would be approx equivalent to a 1.5GHz Intel.

    Of course in *real* life, CPU speed is largely irrelevant. RAM and disk performance are much, much more important. (It's all about I/O.)

  • As for this IBM chip: what took you all so long? SMP on a single chip is an obvious advance.
    Actually, the IBM Power architecture was always designed for multiple cores on one die. Not only is it not a surprise, it's quite common among high-end CPU architectures. It gets you the speed of one-die SMP, but involves the cost of cooling one of these SOBs.

    --

  • I think his point is that if you toast an $87 Celeron, no great damage is done. But if you toast a $5000-9000 processor, you are either Bill Gates or you are out one processor that is worth more than my car.

    I think few people have the cash to "risc" overclocking such expensive processors.
  • I did not think that it did either, but I found a link off the Utah GLX project that said it did. I cannot confirm this though. I could be talking out of my ass...
  • I was wrong, the link I was thinking of is here [sourceforge.net] and it has nothing to do with Unreal SMP. Oops.

  • Also announced today by IBM are the two newest world's highest capacity hard drives [ibm.com]. These also sport IBM's first glass disk platters.

    Should be a good match for these new CPUs.
  • AFAIK, UT does not support SMP.
  • No, I'm saying 486SX chips were 486DX chips with a defective FPU, so they just disabled the FPU and sold them as SX. I think there were ways to enable the FPU afterwards, though I never tried it myself. Also, later they built special 486SX chips without the FPU.
  • That is, if the defect only affects one of the cores, disable that one and sell the chip as a single-core chip. Seems like it would work even better on quad-core chips, where one core more or less wouldn't really affect performance.
  • Have you all forgotten about the G4 7400 chips already? They can have up to *4* cores on each chip.

    Now all I need is a quad-core, quad-processor G4 and I can take over the world....
  • Yeah, and I doubt that anything that comes with a $5000+ processor also has SoftMenu II. Although I would like to see the service engineer's face when I ask him what jumpers to change...


    -BW
  • The RS/6000s aren't that expensive. A Power3 based 44P Model 170 lists at around $10,000. A bit more than the typical PC, but no need to keep it under lock and key.

    The Power4 chip is expected to show up in similar models and, I would expect, in similar price ranges.

  • Simply putting two cores on a die is no big deal. What IBM is doing is near-insane (in a good way). From what I recall from MPR...

    OK, each die has 2 independent cores, with a shared 4MB L3 and its own memory controller to RAM. They also have two ultra-high-speed links to connect to other chips.

    Each cartridge (IBM's famous ceramic substrate) contains 4 dies, connected to each other via their high-speed interconnects; for power, ground, memory and I/O they have in excess of 2000 BGA 'pins', requiring something like half a ton of force to hold the cartridge to the motherboard!

    It gets even better :-) The power estimates are around 125W/die, so for the cartridge you are looking at half a kilowatt of power! For a 32-way system, you would have 4KW of power in the processors alone. You still have to add drives, memory, I/O processors and fans. Is that just nifty or what?

    That's no computer... that's an industrial heating system!

    - Mike
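
    A quick sanity check of the arithmetic above (a Python sketch; the 125 W/die and 4 dies/cartridge figures are the post's own estimates, not official IBM numbers):

        # Back-of-the-envelope power budget for the figures quoted above.
        watts_per_die = 125
        dies_per_cartridge = 4
        cores_per_die = 2

        cartridge_watts = watts_per_die * dies_per_cartridge   # 500 W
        dies_for_32_way = 32 // cores_per_die                  # 16 dies
        system_watts = dies_for_32_way * watts_per_die         # 2000 W

        print(f"per cartridge: {cartridge_watts} W")
        print(f"32-way (2 cores/die): {system_watts} W")
        # The 4KW figure above follows if you count one die per processor
        # (32 dies); with 2 cores per die it works out to about 2KW.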

  • to all you non-US people, Fort Knox is a place owned by the Treasury department where lots of precious metals are stored. It is locked up pretty tight

    Only thing in Fort Knox is a few "Guards" sitting around playing cards. No gold or silver there, was gone long ago... :>

  • That's how it used to be. AMD and Intel have caught up now. Read the Ars Technica article comparing a G4 to an Athlon. While PPC has a vastly superior SIMD architecture, the x86 family has been catching up on integer and FP performance. Nowadays, an Athlon at the same clock speed slightly outperforms a G4 for the common integer and FP ops that your system spends more than 80-90% of its time doing, but Athlons are soon to be out in 1 GHz models while PPC 7400s are still languishing at 500 MHz.

    I mean, this SUCKS. IBM had the first demo silicon at 1.1 GHz almost 2 years ago now. Two years ago the AIM consortium's projections promised us 1 GHz chips with multiple processors per die by 2000. For servers, they seem to have only slipped 3-6 months, but us desktop PPC users are stuck with x86 envy.

    x86 envy! Of all the *#@! architectures out there, it has to be one of the most arcane, messed-up designs that is beating the pants off of everyone. I mean, all the addressing modes, the stack-based FPU, variable-length instructions, and MMX/3DNow/SSE! Have you ever downloaded volume 2 of the "Intel Architecture Software Developer's Manual", the instruction set reference? It's 854 pages! How can companies with so much baggage to work with beat everyone else to the punch on 1 GHz?

    Gah. It's bad enough to realize that we'll probably never see the end of that hideous kludge of a design, much less that it's because it's beating the pants off of cleaner designs due to production problems. Makes me nauseous...

    For that matter, where are our 1 GHz+ Alphas?
  • Despite IBM's (well-stated) commitment to Linux and open source, it is their proprietary product lines - AS/400s, RS/6000s and the big-iron dinosaur mainframes - that still make the profits that allow them to undertake this R&D. These certainly sound like impressive CPUs, and one wonders how much money IBM is still spending on R&D each year to continue to come up with these devices.

    Provided there is always a market for the top-end, proprietary (and expensive) closed architectures, then IBM (and others) will continue to generate the profits to research and build leading-edge stuff. Will you and I ever have one of these babies on our desktop, or will my server at home running Linux or FreeBSD or whatever thump along at 2+ GHz? Not likely, but you can bet that in the next few years most consumer-level chips will use some of these features.

    Moore's Law will last at least a few more years, I expect.

    Maclir

    Disclaimer: I once worked for IBM in the bad old days.

  • Sorry, but your specs are wrong!

    Power4 ::
    1+ gigahertz (2 GHz comes later; it starts at 1.1 in 2001)
    Dual processors on one die
    500 MHz bus
    Large L2 cache (I would imagine 2-4 MB)
    64 bit
    -------------------------------
    x86 CPUs ::
    1+ gigahertz (this year; should be 1.5 to 2 GHz by the time the Power4 launches)
    One processor per die
    400 MHz bus (not 200 MHz)
    512 KB-2 MB L2 cache (most probably 1 MB, but the Foster will have up to 4 MB since it is a Willamette for servers)
    32 bit

    Doesn't look so bad for x86 anymore...
  • nitpick:

    Negroponte, I thought, defined bits as the immaterial thingies, and atoms as the material ones.

    So shouldn't that be atoms are atoms!?

    Or perhaps you were referring to something else
  • I imagine that the secure storage of the hardware is due less to the price of the servers than to the fact that most system security is predicated on the machine remaining physically inviolate. I get root read privs on any machine I can rip the hard drive out of, for example.

    Johan
  • Well, the Tera chip (which I constantly rant about -- I love that thing) and the MAJC do know about threads. However, the IBM thingies, and indeed pretty much every other chip out there, do not know a thread from a twined-up piece of animal hair.

    They need OS support to switch from one instruction stream to the other. Without that OS support, it is up in the air what the IBM multichip would do (I guess I should read the link, eh?), but I imagine that one of the cores is a master and is the only one activated on startup. It is up to it to run the OS code that initializes the other cores with the appropriate instruction streams.

  • You are ignoring the fact that the chip geometry is also getting much smaller. Yes, there are more transistors on the die, but the die is still small because you can fit more of them in a given area. Thus, the yields on wafers are still good.
  • I draw the line at my large intestine. Ick.

  • Uhh... He's joking. By any chance has your sense of humor been surgically removed?

  • That's true. But as your geometry gets smaller, you become vulnerable to smaller bits of dust and smaller defects/imperfections.

    Of course there are other things: we know they've gotten yields up higher and higher per transistor, because they keep packing so many on... I think another poster implied that it must have simply been more cost/performance effective to design more complex single-core chips than to try and do multi-CPU chips with the less complex cores...

    There must be a technical/trade paper or review out there somewhere which details not only what the issues, sub-issues, and permutations of issues have been over the past 10 years on this, but what the actual numbers/progress on each item were, and how the math worked out along the way, and thus shows which things actually mattered in getting the yields high enough to do this. It would be an interesting read.

    -NH

    Hey zzg: Are you saying that the 486SXs were chips which had defects/failures in their caches, and thus were 'selected' for cache-disabling? (I knew they had their caches disabled, but I don't think I knew/figured that it was a by-product of the yield failures... I think I just figured it was a corporate decision to hobble them and sell into the lower-cost market...)

  • I wonder if Transmeta could do that with their Crusoe processor?

    Hmmm, four Crusoe processors on one chip....
  • I have the impression that implementing a 2n-bit instruction set requires exponentially more die area than an n-bit architecture.

    Modern Intel 32-bit processors have tens of millions of transistors. The 80286, a 16-bit architecture, had something like 125,000 transistors.

    Imagine a processor die which would run hundreds of tightly coupled, identical 16-bit cores at a modern clock frequency of 1 GHz.

    The arithmetic that a program does can be broken into sub-problems that work within 16-bit number sets. When you need bigger numbers, you could use software emulation of them (a sketch of the idea follows below). Accessing large data sets should be done through object methods anyway, not by direct addressing, so you don't necessarily need the traditionally desirable large, flat address spaces.

    All in-die processor cores could have a sizable private memory for highly dynamic small objects like run-time system constructs and such.

    It should be easy to design and optimize a processor that is built of small, identical, cloned parts.

    Of course you would need parallel programming techniques to use the power. But modern languages like Bell Labs' Alef and Limbo make it fairly easy and straightforward to write highly parallelizable programs. Most current systems use threading anyway. The channel abstraction in Hoare's "Communicating Sequential Processes", later in the Occam language and in those Bell languages I mentioned, works nicely in this kind of architecture.
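
    To make the "software emulation of bigger numbers" idea concrete, here is a minimal Python sketch (purely illustrative) of 32-bit addition built from 16-bit operations plus an explicit carry, the way a compiler might lower wide arithmetic onto such 16-bit cores:

        MASK16 = 0xFFFF

        def add32_on_16bit(a, b):
            # Add two 32-bit values using only 16-bit adds and a carry.
            lo = (a & MASK16) + (b & MASK16)
            carry = lo >> 16
            hi = ((a >> 16) & MASK16) + ((b >> 16) & MASK16) + carry
            return ((hi & MASK16) << 16) | (lo & MASK16)

        assert add32_on_16bit(0x0001FFFF, 0x00000001) == 0x00020000

    Every doubling of operand width costs a few extra instructions per operation, which is the slowdown other posters worry about.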

  • There was a serious discussion on Usenet recently about whether it was possible to replace the oscillator on a network card to get it to run at 200 Mbps instead of 100 Mbps.
    The discussion was cut short when it was pointed out that you would have to change all the network cards connected to the same network for it to work.
    As far as I know, nobody has tried this yet.
  • You missed my point. RISC processors in theory run at frequencies at least as high as their CISC counterparts, if not significantly higher. Instead we have Intel releasing 1.3GHz Willamettes in Q3 00, and IBM releasing 1.1GHz Power4s in Q3 01, almost a year later. Sure, the memory bandwidth, dual processors, increased cache, and massive superscalarity will make the Power4 faster anyway... but it used to be that RISC systems had better CPI *and* higher clock speeds.
  • who in the hell needs that kind of power?

    Anyone who runs a website which gets mentioned on /. of course.
  • I'm not impressed. Sure, Intel and AMD have had trouble even getting up to 1GHz, but the Power4 should be compared to the upcoming Willamette CPUs, which are going to be starting at 1.3GHz -- and that is in the middle of *this* year, not the second half of next year when the Power4 will be shipping.
    Sure it will be cool to have two processors within a single die, sure it will be cool to have a 500MHz bus... but the article makes it sound like the clock speed will be something really great, while in fact it is a little disappointing.
  • Man, what I really want to overclock is the battery in my laptop. =)

    You want to make the battery run faster? How odd. I'd prefer it to run slower, and thus last correspondingly longer. ;-)
  • http://bwrc.eecs.berkeley.edu/CIC/summary/local/ -- scroll to the bottom for the specs (including the SPEC values) for the Power3... includes practically all the mainstream processors.
  • What took you all so long ? SMP on a single chip is an obvious advance.

    Memory bandwidth is a problem in all SMP systems AFAIK. Maybe what they were waiting for wasn't the capability to put two or more cores on a chip. Maybe it was multiple cores + on-die L2 cache to alleviate the memory bottleneck problems.
  • Of course it will arrive on the desktop. It will have an Apple [apple.com] logo, come in a translucent graphite case, and be declared a munition by the DOD.

    I can't wait!

  • Pure Fiction said in the article:
    I wanna overclock one of these bad boys

    Seriously, the chip runs at over 2 GHz and has a 500 MHz bus, and the first thing PureFiction says is "I wanna overclock one of these bad boys"? Get a life, dick.

  • I read it in Microprocessor Report several months ago. It was a very good article, but the chip rotation really impressed me.

    Nice to see lateral thinking is alive and well!


  • Thanks to the folks at Terra Soft: Yellow Dog Linux! [yellowdoglinux.com]

    See it in action on a prototype.... Applefritter [applefritter.com]
  • If our buddy can get out from under Motorola's thumb and offer a multi-processor Mach kernel for this puppy...
  • You missed my point.... (not like anyone will read this since this thread/topic is pretty old...)

    The utilization of this CPU (percentage-wise) would be the same as the utilization of just one of the cores. Since it is two cores, and not just one with two register files, one core cannot dump extra instructions into the other core; the internal issue buffers are separate and probably not shared.
  • Completely missed my point again.

    You mentioned threads... yes, this CPU can process two threads at the same time... so can SMP machines...

    A thread is a concept of the operating system... by definition, a thread shares the same memory space as other threads of the same task, etc. The CPU has no idea what you want to do; it only crunches streams of instructions.

    The OS schedules the instructions onto the CPU. Therefore, in order to crunch vertices 1-1000 and 1001-2000 simultaneously, it is up to the OS to tell the processor to do that, i.e., set the two program counters to the correct locations. Assuming this machine does run an OS (most if not all machines do these days), the OS will need to schedule other processes/threads, which means overhead... as much as on SMP machines.

    Your message also contradicts your previous one. If the CPU is two complete cores, the execution units cannot be shared, since the issue buffers are not shared. You cannot issue an instruction that resides in one core into the other core; there are no paths connecting the two. The article only mentioned that the L2 can be shared, not other buffers lower down the abstraction.

    The architecture they described from the article seems to me like SMP on a single chip, which is different from multithreading. Go read about these two, and learn the differences...

    It's funny how a message that is totally wrong still gets a score of 2... it just shows that the majority of Slashdot readers, even the moderators, do not know much about the internals of computers... not that I expect them to, but it's still funny.
  • Last time I heard, the Power4 chip supports multithreading by having two separate register files on chip (not two separate complete cores). Perhaps the number of execution units in the core is also increased, but I am not too sure about that.

    But IBM could have changed the architecture since then...

    I think the whole point was to keep CPU utilization (and execution unit utilization) high. Seems to me that slapping two complete cores together is kind of dumb, because both cores will still be underutilized... half the silicon will be sitting there unused...
  • The number of man-years per chip stays relatively constant simply because we are not redesigning everything from scratch. Usually new chips are just a slight modification of the old architecture, i.e. elimination of bottlenecks, addition of newer advanced microarchitectural techniques, etc.

    Of course there comes a time when everything needs to be redesigned, and that usually takes a lot longer. For example, the Intel Willamette chip... how long has that chip been in development? The last chip the Willamette team designed was the original Pentium... that was almost 5-6 years ago! (if not longer...)

    We are seeing many chips coming out in short periods of time because both Intel and AMD have multiple development teams that leapfrog each other to release new CPUs. Intel, for example, has at least 3 teams, the Itanium team, the Willamette team, and the Pentium {II, III} team.
  • Unfortunately, the CPU has no concept of processes or threads. It just processes streams of instructions like a fatass with a stream of hamburgers. It won't care whether the burgers are from Burger King or McDonald's or Wendy's.

    This means that in order to utilize this dual-core, single-die chip, we still need an OS, which means overhead... I would think the overhead is as much as on SMP machines.
  • One benefit of this dual core chip is of course, as mentioned in the article, the bandwidth between the two cores, and the ability to share L2 caches.

    This would improve raw performance and will be much better than SMP setups, although I think the overhead _percentage_ would be about the same.
  • It will be interesting to see if dual processor on-die will double the performance. I wonder what the SMP overhead will be?
  • A while ago there was a Slashdot article about how IBM now has the ability to put "tens of billions" of transistors on a single chip, while Intel only sports 27 million.

    What I want to know is when IBM will make some chips with this technology, seeing as how chips will probably push well past 2 GHz.

    Overclock that bud...

    Do not provoke me to violence, for you could no more evade my wrath than you could your own shadow.

  • The info I read (which is admittedly vague) says that Power3 implements the PowerPC ISA (both 32 and 64 bit versions).
  • What many people fail to realize is that more than your chip is at risk. I have seen motherboards go bad - and video, audio, and hard drives are at risk as well. I have replaced motherboards in systems where the owner overclocked their AMD system - and the CPU was fine. I have also seen corrupted data... not what I would call a good solution.
  • I dunno, overclocking high-end processors is doable in some cases (not necessarily this one); you just need more coolant than your average nuclear power plant.
  • How would you overclock a "production" type server (by production I mean RS/6000, AS/400-type proprietary machines)? This isn't some BX motherboard with clock-speed jumpers.

    The old-fashioned way would probably be the easiest: change the frequency that the chip uses for timing, either by swapping out the crystal or by modifying the traces that set the timing frequency. That's what the BX boards do.

    Remember: bits are bits!

    You could "Kryotech" it, but I think there would be vast amounts of cooling already being it 2 chips on one die running at 2 gigahertz even with a .18 micron fabrication.

    Cooling is a necessity after you actually increase the frequency...and we're back to the crystal again.

  • Where and when will we have a board speed to use this uniformly? I want this for loading high-cache calculations, but I also want it to (gak) handle the DSP aspects in a handheld. Very cool, very hot!!
  • Do you know any engineers? They overestimate everything, like Scotty telling Kirk how long repairs will take. You bet that you can run that chip faster than it is rated.

    Topher

    "I've not met a human I thought was worth cloning, yet. Lot's of cows though." -- Mark Westhusin

  • Do you know any engineers?

    I am an engineer; or, at least, I pretend to be one most of the day. I help design chipsets for high-end systems.

    Yes, we provide some margin when we set operating frequencies. But we spend an awful lot of time determining the operating boundaries. The word "estimate" doesn't give the full flavor of what we do.

    I'm not going to provide details, because they're probably confidential. But I will say this: We know the voltage/frequency/temperature points at which our chips stop working properly. And it's in our interests to push the frequency as high as it will go.

    Call me a wimp. Go ahead. But I don't overclock my system, and I definitely wouldn't overclock anyone else's.

  • by Anonymous Coward
    IBM announced the Power4 at the Hot Chips conference last fall. There is an excellent article in Microprocessor Report detailing the processor. The report can be found on IBM's website here: http://www.chips.ibm.com/news/1999/microprocessor99.pdf
  • I once overclocked my watch - first time in my life I have ever been early for anything.
  • You == IBM, Intel, AMD, etc.

    As for this IBM chip: what took you all so long? SMP on a single chip is an obvious advance. When you vastly increase the amount of circuitry on a chip, as happens between a Celeron and a P3, without a matching increase in performance, something has to give. Why not make that the number of cores on the chip? I hope this isn't patented, because it really is obvious.

    This brings up something I have been thinking about with the Crusoe: you can convert 32-bit instructions to 128-bit meta-instructions and have the finished product run as fast as on the genuine 32-bit CPU.

    What if the same technique were applied to an SMP setup in such a way that the software sees the processors as a single CPU? Right now this kind of abstraction is handled by the operating system, and except on the mainframe that is very inefficient -- to the point where 2x400MHz CPUs are a whole lot faster than 4x200MHz.

    Now if the whole thing, including say 6 CPUs and 2 megs of cache, were put on a single chip at 500MHz to 2GHz, how fast would it be? My guess is that this could easily be the fastest low-end server or workstation chip by a good margin.
  • Do you know any engineers? They overestimate everything, like Scotty telling Kirk how long repairs will take. You bet that you can run that chip faster than it is rated.

    Yes, you can, if you're prepared to take the risk -- that's the whole basis of overclocking. Chips are rated at the speed the manufacturer can guarantee they'll operate as intended. Say you overclock your chip by 15%. You're now encroaching into the safety margin that the engineers and the manufacturer allowed to be sure that all chips will work correctly. Even so, perhaps 98% of all chips will be OK. Do you want to gamble on whether or not you've got one of the 1 in 50 chips that won't work? Personally, I don't like the odds, particularly when the chips cost as much as this one will...

  • Overclocking SMP is NOT suicide [...] What's the risk?

    The risk is both damage to the physical hardware and data corruption. The hardware can easily be replaced when it's a cheap Celeron, but not when it's a dual core IBM Power CPU. The data corruption can't be ignored, though. Don't believe me? Maybe you'd like to hear it directly [indiana.edu] from someone you might trust.

  • Overclocking stupid, eh? Actually, many PowerPC processors overclock quite well, I know from personal experience.

    Maybe they do, maybe they don't. You're missing the point though. If you want faster speeds, go buy faster processors (or more of them). Overclocking is only for those who can't afford to do that. People buying these chips aren't going to fall into that category.

    The other point to consider is that overclocking an SMP system is tantamount to suicide, by all accounts. Now maybe that won't be the case here, because the cores are on the same die, and hence will be affected in exactly the same way, but I don't know enough about it to be sure, and I certainly wouldn't risk it.

  • To the point where 2X400MHz CPUs is a whole lot faster than 4X200MHz.

    Depends on what you're doing, my boy. If you're running 4 different CPU-hungry jobs, a 4X200 may well be faster than a 2X400 -- assuming everything else about the processors is equal.

  • I've been trying to overclock my lightbulb, and I thought I'd ask you gurus on Slashdot for some pointers. My bulb says "60W" on it, and I want to get it up to 75 or 100.

    • I'm having a heck of a time getting the heatsink to stay put, it keeps sliding off the top of the bulb. Any suggestions?
    • Microsoft Lightswitch keeps crashing. Do I need to up the voltage to keep it stable? I've got a 220V line that I could try plugging it into.
    • My lightbulb is currently running at 60 Hz. I've heard that when you increase the frequency, the lightbulb will start emitting ultraviolet or even X-rays. My friends tell me I can protect myself by painting the lightbulb black. I need to know how many coats of paint to use, please help!!!
  • I suspect that it will be some time before this technology ends up in consumer PCs. The fact that it's meant for servers aside, most stuff is not coded to support multi-threading.

    Sure, *nix, BeOS, and NT (2000) are, but the majority of people still run 9x on their desktops.

    Quake 3 and Unreal Tournament support SMP, but there are few consumer-level applications that do. Apparently BeOS can force multithreading, and this is cool, but what we really need are more apps that can take advantage of parallel calculations. Even Carmack states that dual processors running Q3A only increase performance in the most demanding situations.

    Even the guys who maintain the Beowulf HOWTO (someone is going to post this...) say that parallel computing is great for crunching data, well, IN PARALLEL (a sketch of that kind of workload follows below). Quake is not parallel. Clock speed matters more in 3D shooters than overall crunching power (unless you *like* a slideshow).

    Don't get me wrong, I personally would love to have a machine running either Linux or BSD with one of these things in it (or many) but I don't know what the hell I would do with it.

    Until then I will stick with a BP6 and dual Celerons; heck, maybe flip-chips or the new Jalapenos from VIA/Cyrix.

    I think that this is the way of the future, but we won't see it on the desktop for at least 5 years. (IMHO)
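
    The kind of embarrassingly parallel crunching that does map well onto two cores looks something like this (a sketch using Python's standard multiprocessing module; the workload is a made-up placeholder):

        from multiprocessing import Pool

        def crunch(chunk):
            # Stand-in for any data-parallel work (transforms, encoding, ...)
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            data = list(range(1_000_000))
            halves = [data[:500_000], data[500_000:]]
            with Pool(processes=2) as pool:      # one worker per core
                results = pool.map(crunch, halves)
            print(sum(results))

    A game loop, by contrast, is a chain of dependent steps per frame, which is why clock speed buys more there than a second core does.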

  • I wonder if Motorola plans to incorporate this into their PPC lines. Take the 604e, for example: from what I understand of its architecture, it could easily have been made to do two-chips-on-one-die. I would sort of like to see chips of this caliber in the next generation or so of Mac servers, maybe even non-Mac PPC systems (Linux, BeOS). The benefit of SMP over superscalar is that SMP allows you to have multiple superscalar processing units: if a processor can do n processes with a single superscalar processing unit, then with SMP it can do xn processes, where x is the number of processors. Most people know this already. What really interests me is the high bus speed. Intel and AMD's offerings may be nice for server platforms because of their price, but they would get their asses chomped off by the sheer system speed of the Power4. I'm sick of hearing about the Athlon's EV6 bus; the memory (read: the entire system besides the processor) only runs at 100MHz. IIRC, AMD is going to be using DDR SDRAM with the Sledgehammer to boost its overall system performance, and the system will clock at 133MHz; I would still rather have a 500MHz system bus.
  • The problem there lies in the large datasets. If you were running 16-bit code it would be fine, but for many applications today (games, graphics, voice recognition, encryption/decryption, etc.) you need more than 16 bits, and if you had to emulate word sizes much beyond 16 bits you'd have major system slowdown. Having a bunch of identical cores would mean they would need to be small, and small cores don't have the space for optimized execution units. Today's chips have highly optimized cores, like AltiVec, that can handle large data sets at high speeds. It's like Rambus memory: really high frequencies but a teeny tiny data bus, which means lots of latency. Sometimes wider is more valuable than faster.
  • Overclocking SMP is NOT suicide. I know several people with overclocked dual Celerons that work fine. And why not? They are cheap, if one burns up, you throw it away and get another one.

    Heck, throw them away every four months and upgrade anyway. Celerons are cheap as dirt, and when overclocked, are as fast as far more expensive P-III's.

    What's the risk?


    Torrey Hoffman (Azog)
  • Enough with overclocking already. This isn't your $70 Celeron toy. When you get to work with $5,000+ chips, you are free to overclock them, but I doubt it even occurred to anyone to overclock their $9000 UltraSPARC CPU or similar. Yep, overclocking is stupid. Flame on...

    Actually, when I used to work at Ross (they used to manufacture CPUs for Suns) in their modules lab, one of the things that we routinely did was overclock the CPUs (not to mention other nasty little tricks involving soldering, cutting traces on the MB with an X-Acto knife, etc.). Mostly it's just a matter of providing proper heat sinks and air circulation. So it did actually occur to at least someone. :-) But you're right in that no serious business customer is going to overclock their high-end workstations and risk invalidating the warranty.
  • No, when running a process on a Windows 2000 box such as Quake II that doesn't do SMP, Windows 2000 will put the non-SMP program on its own processor. "Load Balancing"
  • Hey guys, are we quick to forget history? Every time people get up and proclaim that a given technology is too expensive / not needed / 640K is enough for the desktop, someone goes and proves them flat wrong.

    One of two things happens: consumer technology just blows away these so-called "elite" chips (anyone want to compare one of those "elite" 150MHz Alpha machines - once a VERY expensive minicomputer chip - with a 1GHz consumer Athlon?), or "poof", it appears.

    There are issues with semiconductor yields, as people mentioned previously. But with Celerons going for $70, it won't be too long before someone figures out how to do it cheaply.

    Ahhh, SMP on chip. Long way from the 6502 babyee :)

    Kudos

  • No, it's two complete cores. And what do you mean the second proc would be sitting there unused? The stuff that this proc is going to be used for is highly parallel. Even most media stuff is parallel. Load BeOS up on a dual-proc box and run a few media apps. You'll see that both procs have pretty high utilization.
  • Still wrong. It does process a stream of instructions, but that is exactly what a thread is! What's to say that it can't process 2 streams of instructions? The guy above is still wrong; the POWER4 is two cores, but multiple threads CAN be handled at the processor level. I think (don't quote me, I read it a long time ago on /.) that the Sun MAJC can process two threads. It goes like this: if one thread is, say, an OpenGL transform thread, while the other is a rasterization thread, what's to keep the transform thread from using the FP units while the raster thread uses the integer units? Or two transform threads sharing 4 FP units? 3D in general is hideously parallel. Again, I'm not quite sure, but I think someone is working on a multithreaded OpenGL implementation that uses multiple threads. Seriously though, it makes sense. What's to stop one proc from doing the matrix multiplies on vertices 1-1000 while the other does 1001-2000?
  • > No, when running a process on a Windows 2000 box such as Quake II that doesn't do SMP, Windows 2000 will put the non-SMP program on its own processor. "Load Balancing"

    That is correct. To prove it, you can set the affinity (which CPU a thread is bound to): Task Manager | Processes | right-click on a process | Set Affinity.
    (This setting doesn't show up on a single-CPU machine. A scripted version of the same experiment is sketched below.)

    Another quick way to see this is to start up Quake and look at the CPU utilization. It will be around 50%, meaning one CPU is taxed while the other one isn't doing anything.

    One means of burning in a new dual system is to run 2 copies of Prime95, one on each CPU.
    For fun, I left 2 copies of Prime95 and one copy of Unreal running overnight. The first Prime95 hadn't reached as many calculations as the 2nd one.

    Note: Windows NT runs the OS on both processors. It will not run a non-SMP-aware process on both CPUs.

    For anyone looking for a cheap dual system, this is what I did:
    $35 Celeron/366 o/c to 550
    $140 Abit BP6
    Hard to beat the price!

    Cheers
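
    For the scripted version mentioned above: on Linux the analogous knob to Task Manager's Set Affinity is the sched_setaffinity call, which Python exposes directly (a sketch; pinning the current process is just an example):

        import os

        pid = os.getpid()                 # or any PID you own
        print("allowed CPUs:", os.sched_getaffinity(pid))

        os.sched_setaffinity(pid, {0})    # pin to CPU 0, like Set Affinity
        print("now pinned to:", os.sched_getaffinity(pid))

    Pin one compute job per CPU and you get exactly the Prime95 burn-in arrangement described above.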
  • 1. Each core in the Power4 is very superscalar, possibly more so than any processor shipping today.

    2. I don't think that such a test (superscalar vs. SMP) would be useful, as the results would be very, very, VERY heavily influenced by the multi-threadedness (or lack thereof) of the benchmarks, and any two available processors will have enough other differences in architecture to invalidate the tests.

    3. Both cores have small (16 or 32 KB, I think) L1 caches, but share a large (1.5 MB or 2 MB) L2 cache. Furthermore, several chips share L2s via a ring arrangement of unidirectional 128-bit 500 MHz buses, moving things around such that all cached data exists in the L2 of the chip that most recently accessed it, and in no other L2.
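
    A toy model of that migrate-to-the-requester policy (a Python sketch only; the real Power4 coherence protocol is surely richer than this):

        # Each cache line lives in exactly one chip's L2: the chip
        # that touched it most recently, per the description above.
        class L2Ring:
            def __init__(self):
                self.owner = {}                    # cache line -> chip id

            def access(self, chip, line):
                prev = self.owner.get(line)
                self.owner[line] = chip            # line migrates to requester
                if prev is None or prev == chip:
                    return "local"
                return f"fetched over the ring from chip {prev}"

        ring = L2Ring()
        print(ring.access(0, 0x100))   # local (first touch)
        print(ring.access(2, 0x100))   # fetched over the ring from chip 0
        print(ring.access(2, 0x100))   # local: the line now lives in chip 2's L2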

  • You will see this technology on the desktop. Beyond the fact that the Power series is related to the PowerPC series (IBM uses both in their RS/6000 line), multiple cores have been on the PowerPC Roadmap [appleinsider.com] for a while. (Yes, I know that is a rumors site. I have seen something similar on Motorola's site, I believe, but can't find it right now.) Yeah, I know the info is a little out of date... but it's just a matter of time.
  • The really interesting design feature of this architecture is that the chips work very well in SMP: 4 chips can be placed together, each rotated through 90 degrees, so that their fast interconnects align.
  • I wanna overclock one of these bad boys ...

    Always someone willing to ruin good hardware. Is there *anything* you people won't overclock?

  • It just so happens I was visiting alphalinux.org [alphalinux.org] today and saw Compaq has "just released" a document detailing the Alpha 21364 EV7 SMP-on-chip processor. However, this document has been out since, I believe, the October 1998(?) Microprocessor Forum. Still, IBM's proposed 2 GHz at a 500MHz FSB is quite intriguing. I know... I know... Compaq seems to be letting the Alpha wilt away on its once-strong vine, but I'm still rooting for it. I remember when the Alpha had reached 600MHz and Intel/x86 were sputtering along at half the speed. It wasn't until after the settlement between Digital and Intel that x86 started speeding up. Hmmm... anyone else smell fish? Well, here's hoping that the Alpha can bring itself back to its glory as speed king. And hopefully before the Merced/Itanium "Marchitecture" infects the corporate world.
  • by Shaheen ( 313 ) on Wednesday March 15, 2000 @09:51PM (#1199293) Homepage
    When I initially read this, I thought to myself, "Why didn't IBM just do a machine that was super-superscalar?" (Superscalar basically means that the processor takes n instructions at a time, rather than just 1 at a time).

    It would be really interesting to see the results from using on-die SMP versus a chip that is just twice as wide (2n instructions, instead of n).

    Also in question is how the caching is done. Do both cores update the same cache? Or do they operate on separate caches?

  • by RISCy Business ( 27981 ) on Wednesday March 15, 2000 @11:47PM (#1199294) Homepage
    No, POWER and PowerPC are not finally merging, nor do I think they ever will. The POWER architecture, however, since the POWER3, has fully supported the PowerPC instruction set in 32 and 64 bit implementations.

    Yeah, IBM and Motorola are in bed again. But it's been on-again, off-again for years now. Don't count on it being a final merging of the two architectures.

    =RISCy Business
  • by Haven ( 34895 ) on Wednesday March 15, 2000 @08:18PM (#1199295) Homepage Journal
    What took you all so long? SMP on a single chip is an obvious advance

    1 terahertz is an obvious advance too. Just because it's obvious doesn't make it easy. I'm sure that IBM has had prototypes of dual chips on one die before. They wanted the 7400 series (G4) of the PowerPC chips to have a high-end model with 4 processors on one die. It is just hard to do. Just like it is hard to write an operating system that will make non-SMP programs utilize SMP. Windows 2000 has "load-balancing", where it will run processes that are processor-intensive on the chip that isn't running the OS.
  • by Haven ( 34895 ) on Wednesday March 15, 2000 @08:02PM (#1199296) Homepage Journal
    How would you overclock a "production" type server (by production I mean RS/6000, AS/400-type proprietary machines)? This isn't some BX motherboard with clock-speed jumpers. You could "Kryotech" it, but I think there would be vast amounts of cooling already, it being 2 chips on one die running at 2 gigahertz even with a .18 micron fabrication.

    Second of all, good luck coming up with the cash to buy one. Even if your workplace got one, they would still keep it under lock and key, tighter than Fort Knox (to all you non-US people: Fort Knox is a place owned by the Treasury Department where lots of precious metals are stored; it is locked up pretty tight). I'm a superuser for my network at work, and I'm not even allowed near some of the boxes we have.
  • by orz ( 88387 ) on Wednesday March 15, 2000 @09:09PM (#1199297)
    Current chips are superscalar, meaning that they have multiple execution units, but all execution units are working on instructions from the same instruction stream (thread). Complicated hardware analyzes dependencies and tries to translate that single thread into a parallel mesh of instructions that can be executed simultaneously, but doing that is very difficult, and sometimes impossible.

    This would be different because two threads would be executing simultaneously, so as long as the OS could find two threads that need CPU time, the hardware would gain a lot of parallelism without having to do more scheduling.

    This approach is good because it offers a way to use the excess die space without requiring too much extra effort from the designers. In the last decade or two the number of transistors per chip has gone up several orders of magnitude, while the number of designer man-years per chip has not come close to keeping pace. It's also nice because the other common approaches are clearly reaching the point of diminishing returns.

    What Compaq is doing is more interesting, though... they are processing multiple threads simultaneously... on the same set of execution units! If one thread doesn't have enough parallelism, that's OK: the other 7 can pick up the slack!
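
    As a toy illustration of the dependency analysis described above, here is a sketch (made-up registers and ops; RAW hazards only, WAR/WAW ignored for brevity) that groups a straight-line instruction stream into waves that could issue together:

        def issue_waves(instrs):
            # Greedy toy scheduler. Each instruction is (dest, src1, src2);
            # one can join the current wave only if no earlier, not-yet-issued
            # instruction writes one of its sources.
            remaining = list(range(len(instrs)))
            waves = []
            while remaining:
                wave = [i for i in remaining
                        if not any(instrs[j][0] in instrs[i][1:]
                                   for j in remaining if j < i)]
                for i in wave:
                    remaining.remove(i)
                waves.append(wave)
            return waves

        # r3 = r1+r2 ; r4 = r3+r1 (depends on the first) ; r6 = r5+r5 (independent)
        print(issue_waves([("r3", "r1", "r2"),
                           ("r4", "r3", "r1"),
                           ("r6", "r5", "r5")]))
        # -> [[0, 2], [1]] : the independent op issues alongside the first

    With two independent threads, each wave comes pre-split by the OS, which is the parallelism win the post describes.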

  • by slyfox ( 100931 ) on Wednesday March 15, 2000 @08:27PM (#1199298)
    There is a good article on Power4 [ibm.com] at IBM's web site.

    The article says the system will have 10 GBytes/second of memory bandwidth and a 45 GBytes/second multiprocessor interface. The article estimates the cache sizes as 1.5 MB for the shared on-chip L2 and 32 MB for the off-chip L3 cache. Each processor die has 5,500 pins and attaches directly to a multi-chip module (MCM).

    The article also suggests that the system will support up to 32 processors (2 per die x 16), and even more processors using clustering technology.

    Looks like this is going to make for a fast server system.

  • by cperciva ( 102828 ) on Wednesday March 15, 2000 @11:12PM (#1199299) Homepage
    When I initially read this, I thought to myself, "Why didn't IBM just do a machine that was super-superscalar?"

    Because of limited instruction-level parallelism. Even with a 512-entry reorder window, 256 renaming registers, and a 256-way superscalar architecture, you still won't have ILP beyond about 10 on the gcc component of the SPEC benchmarks. Furthermore, as you increase the width of a machine, you increase the difficulty of finding all the data dependencies quadratically, since each instruction must be compared with each other instruction. Ultimately it comes down to an issue of diminishing returns, and you find that it is cheaper and faster to run two threads at once than it is to allocate twice as many resources to a single thread.

    As for the question of caching, I'd assume that they share the L2 cache the same way as in any other such system -- they share the bus, write to and read from the same cache, and snoop each other's actions. They of course would have their own internal L1 caches, with lower latency.
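
    The quadratic point is just the pairwise-comparison count: an issue window of w instructions needs on the order of w(w-1)/2 cross-checks. A quick sketch:

        def dependence_checks(width):
            # each of the w instructions must be checked against every other
            return width * (width - 1) // 2

        for w in (4, 8, 16, 32):
            print(w, "wide ->", dependence_checks(w), "comparisons")
        # 4 -> 6, 8 -> 28, 16 -> 120, 32 -> 496: doubling the width
        # roughly quadruples the dependency-checking hardware.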
  • by Paul Komarek ( 794 ) <komarek.paul@gmail.com> on Wednesday March 15, 2000 @08:15PM (#1199300) Homepage
    At one time, not too long ago, the Power3 architecture was rated (by some) as having the second-fastest floating point, behind the 500MHz Alpha 21264. The punchline is that the Power chip was running at 200 MHz!

    In the past, complications with multiprocessor computers have prevented their supremacy over single-CPU architectures. I'd love to see IBM succeed with their multi-CPU chips, as I believe this technology may solve the nagging parallel problems with processor interconnect. And the Power architecture is very nice.

    Does anyone know if the PowerPC and Power architectures will finally become one with this product, as was expected with previous Power revisions? Somehow, I really don't expect to see it ever happen, with the way Motorola and IBM have gotten along.
  • by guacamole ( 24270 ) on Wednesday March 15, 2000 @07:54PM (#1199301)
    I wanna overclock one of these bad boys ...

    Enough with overclocking already. This isn't your $70 Celeron toy. When you get to work with $5,000+ chips, you are free to overclock them, but I doubt it even occurred to anyone to overclock their $9000 UltraSPARC CPU or similar. Yep, overclocking is stupid. Flame on...
  • by Haven ( 34895 ) on Wednesday March 15, 2000 @08:10PM (#1199302) Homepage Journal
    "...will operate at upward of 2 gigahertz. It will be called the Power4, will use a .18 micron fab process, and feature on-chip L2 cache (supposedly quite large, though no numbers mentioned), and bus speeds of 500Mhz..."

    Power 4 ::

    2+ gigahertz
    Dual processors on one die
    500 MHz bus
    Large L2 cache (I would imagine 2-4 MB)
    64 bit

    -------------------------------

    x86 CPU's ::

    1+ gigahertz
    One processor per die
    200 MHz bus (I don't recall the bus of the Willamette)
    512 KB-2 MB L2 cache
    32 bit

    This is not something you will see on Tom's Hardware. Clock speed isn't everything: a 500MHz 21264 DEC Alpha is MUCH faster than a 500MHz PIII. The Power4 is not a desktop processor. Compaq will not ship computers with the Power4 processor in them. People need to understand this! When was the last time you saw a benchmark that was PIII vs. RS/6000? I have only seen it once, and that was the PIII Xeon compared to other server hardware, namely from Sun and DEC, on Intel's site.
  • by orz ( 88387 ) on Wednesday March 15, 2000 @08:29PM (#1199303)
    The two processor cores are really cool, and something a lot of people have been hoping for for a long time, although not quite as cool as some of the stuff Compaq/Alpha is doing. But...

    This article doesn't mention the most interesting detail I heard about the Power4: they're supposed to come in small rings of about four chips connected by ultra-high-frequency 128-bit unidirectional buses that allow multiple chips to share their L2 caches, with fairly intelligent coherency stuff handled in hardware.

    The only bad part is that they're really targeting the high-end server market, whereas I want most of that stuff for the low end too. It's supposed to be 400 mm^2 on a .18 micron process w/ copper, so even after it moves to .13 micron it'll still be too expensive for mainstream use.

    Other tidbits include: 1. It's dropping a few of the more complex instructions from its instruction set and depending on the OS to emulate them, 2. To simplify instruction scheduling, they're keeping track of packets of instructions instead of individual instructions, and 3. The per-chip L2 size is supposed to be 1.5 megabytes.

  • by Northern Hunter ( 89531 ) on Wednesday March 15, 2000 @08:25PM (#1199304)

    > SMP on a single chip is an obvious advance.

    Unfortunately if you multiply the amount of circuitry you are trying to deliver in one fully working device, you cut your yield exponentially. This is a SERIOUS problem if your yields aren't high enough to make the exponential nature a small effect.

    Say on one wafer you have 30 defects bad enough to wreck whatever chip they are on. Now normally you make 100 chips on that wafer. So (first approximations here, I won't actually do the statistics) 70 chips make it, your yield is 70 percent.

    But now you double the size of your chips, so that same wafer now only produces 50. But you still have those same 30 bad defects. Whoops, your yield is now 40 percent. Quadruple the size of your die... whoops, now you will be lucky to get a handful out of that entire wafer (you're trying to get 25 chips when there are 30 randomly distributed defects... I leave the answer as an exercise for the reader :)

    On the other hand, if you do the same rough approximation with only 10 super-bad defects per wafer, then you go from a 90 percent yield to an 80 percent yield when doubling the die size. Nowhere near as bad an effect on the economics.

    So, the only reason they are now considering it is that they expect to have defect rates reduced enough to make it reasonably economical.

    -NH

    My apologies for avoiding the statistics and actual mathematics; my examples above use made-up numbers. I have an optoelectronics background that is a few years old, from back when production runs at some places for III-V QWH lasers with simple integration of a few other devices had utterly pathetic yields... like 10 percent!!
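
    For the curious, the standard first-order version of the statistics skipped above is the Poisson yield model, Y = e^(-D*A) for defect density D and die area A. A sketch reproducing the flavor of the numbers above:

        import math

        def poisson_yield(defects_per_wafer, dies_per_wafer):
            # Y = exp(-defects per die); doubling die area doubles the exponent.
            return math.exp(-defects_per_wafer / dies_per_wafer)

        for dies in (100, 50, 25):
            y = poisson_yield(30, dies)
            print(dies, "dies/wafer:", round(100 * y), "% yield")
        # 100 -> ~74%, 50 -> ~55%, 25 -> ~30%: yield falls off exponentially
        # as dies grow. With only 10 defects the same doubling costs just a
        # few points (~90% -> ~82%), matching the rough numbers above.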
