Inside Intel's Next Generation Microarchitecture 116

Overly Critical Guy writes "Ars Technica has the technical scoop on Intel's next-generation Core chips. As other architectures move away from out-of-order execution, the from-scratch Core fully adopts it, optimizing as much code as possible in silicon, and relies on transistor size decreases--Moore's Law--for scalability."
This discussion has been archived. No new comments can be posted.

  • by Sqwubbsy ( 723014 ) on Thursday April 06, 2006 @10:47PM (#15081888) Homepage Journal
    Ok, so I know I'm going to get a lot of AMD people agreeing with me and a lot of Intel people outright ripping me to shreds. But I'm going to speak my thoughts come hell or high water and you can choose to be a yes-man (or woman) with nothing to add to the conversation or just beat me with a stick.

    I believe that AMD had this technology [wikipedia.org] before Intel ever started in on it. Yes, I know it wasn't really commercially available on PCs, but it was there. And I would also like to point out a nifty little agreement between IBM and AMD [pcworld.com] that certainly gives them aid in the development of chips. Let's face it, IBM's got research money coming out of their ears, and I'm glad to see AMD benefit from it and vice versa. I think that these two points alone show that AMD has had more time to refine multicore technology and deliver a superior product.

    As a disclaimer, I cannot say I've had the ability to try an Intel dual core but I'm just ever so happy with my AMD processor that I don't see why I should.

    There's a nice little chart in the article, but I like AMD's explanation [amd.com] along with their PDF [amd.com] a bit better. As you can see, AMD is no longer too concerned with dual-core but has moved on to targeting multi-core.

    Do I want to see Intel evaporate? No way. I want to see these two companies go head to head and drive prices down. You may mistake me for an AMD fanboi, but I was simply in agony in high school when Pentium 100s cost an arm and a leg. Then AMD slowly climbed the ranks to become a major competitor to Intel--and thank god for that! Now Intel actually has to price their chips competitively, and I never want that to change. I will now support the underdog even if Intel drops below AMD, just to ensure stiff competition. You can call me a young idealist about capitalism!

    I understand this article also tackles execution types, and I must admit I'm not too up to speed on that. It's entirely possible that OOOE could beat out the execution scheme that AMD has going, but I wouldn't know enough to comment on it. I remember that there used to be a lot of buzz about IA-64's explicitly parallel, in-order (EPIC) approach [wikipedia.org] used on Itanium. But I'm not sure that was too popular among programmers.

    The article presents a compelling argument for OOOE. And I think that with a tri-core or higher processor, we could really start to see a big increase in sales using OOOE. Think about it: a lot of IA-64 code comes to a point where an instruction stalls as it waits for data to be computed (in most cases, at a branch). If there are enough cores to compute both sides of the conditional (and a third core to evaluate the conditional itself), then where is the slowdown? This will only break down on a switch-style statement or when several if-thens follow each other in succession. (A rough, single-core sketch of this "compute both sides" idea follows after this comment.)

    In any case, it's going to be a while before I switch back to Intel. AMD has won me over for the time being.
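
    A rough single-core analogue of the "compute both sides, then pick one" idea, assuming made-up helpers work_if_true() and work_if_false(); this is closer in spirit to predication or a conditional move than to actually splitting a branch across cores, and is not a claim about how any shipping CPU works:

        /* Evaluate both arms of a conditional unconditionally, then select.
         * Only worthwhile when both arms are cheap and side-effect free. */
        #include <stdio.h>

        static int work_if_true(int x)  { return x * 3 + 1; }  /* hypothetical */
        static int work_if_false(int x) { return x / 2; }      /* hypothetical */

        int select_branchless(int x)
        {
            int taken     = work_if_true(x);   /* both arms computed up front... */
            int not_taken = work_if_false(x);
            /* ...so a hard-to-predict branch never stalls the pipeline;
             * we simply pick a result once the condition is known. */
            return (x & 1) ? taken : not_taken;
        }

        int main(void)
        {
            printf("%d %d\n", select_branchless(7), select_branchless(8));
            return 0;
        }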
           
  • Israel (Score:1, Interesting)

    by Anonymous Coward on Thursday April 06, 2006 @10:58PM (#15081933)
    So apparently Intel had to go to Israel to find computer engineers to design their flagship architecture for the next 5+ years. With a population of only 7 million, how is it that so many brilliant chip designers are in Israel?
  • Since this is a dupe (Score:4, Interesting)

    by TubeSteak ( 669689 ) on Thursday April 06, 2006 @11:04PM (#15081954) Journal
    Can someone summarize, nicely and neatly, the practical difference(s) between out-of-order and in-order execution?

    Why is it important that Intel is embracing OOOE while everyone else is moving away?
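
    One way to make the question concrete is a toy C fragment (the function and values are made up; which instructions actually overlap is decided by the hardware and the compiler, not by this source):

        #include <stdio.h>

        int sum_with_lookup(const int *table, int key, int a, int b)
        {
            int x = table[key]; /* a load that may miss the cache and take hundreds of cycles */
            int y = x + 1;      /* depends on the load, so it waits either way */
            int z = a * b;      /* independent: an out-of-order core can run this while the
                                   load is still outstanding; a strict in-order core stalls
                                   here until the load returns (unless the compiler happened
                                   to schedule it earlier) */
            return y + z;
        }

        int main(void)
        {
            int table[4] = {10, 20, 30, 40};
            printf("%d\n", sum_with_lookup(table, 2, 3, 4));
            return 0;
        }

    In short: an in-order core executes instructions strictly in program order and stalls on the first one whose inputs aren't ready, while an out-of-order core tracks dependencies and keeps independent work flowing around the stalled instruction.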
  • by Gothmolly ( 148874 ) on Thursday April 06, 2006 @11:25PM (#15082044)
    Wasn't the Achilles' heel of the P4 and Itanium crappy code that caused pipeline stalls in their very long pipes? Every time someone pointed out that AMD didn't have this problem, an Intel fanboy would reply that "with better compilers" you could avoid conditions where you'd have to flush the pipeline, thus maintaining execution speed.
    Well, those "better compilers" don't seem to be falling from the sky, and AMD is beating Intel in work/MHz because of it.
    Is Intel finally deciding "screw it, we'll make the CPU so smart, that even the crappiest compiled code will run smoothly" ?
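
    For what it's worth, the "better compilers" claim was essentially about static scheduling: the compiler rearranges independent work so the long pipeline doesn't sit idle. A rough before/after sketch (names and values are made up, and a real scheduler works on the compiled instructions, not the C source):

        #include <stdio.h>

        /* As written: the add depends immediately on the load, and the
         * independent multiply sits behind it. */
        int naive(const int *p, int i, int a, int b)
        {
            int x = p[i] + 1;   /* stalls an in-order pipe if p[i] misses */
            int z = a * b;
            return x + z;
        }

        /* As a scheduling compiler would arrange it: issue the load early,
         * do the independent multiply while it is in flight, and consume
         * the loaded value as late as possible. */
        int scheduled(const int *p, int i, int a, int b)
        {
            int x = p[i];
            int z = a * b;
            return (x + 1) + z;
        }

        int main(void)
        {
            int p[3] = {5, 6, 7};
            printf("%d %d\n", naive(p, 1, 2, 3), scheduled(p, 1, 2, 3));
            return 0;
        }

    An out-of-order core performs this same rearrangement in hardware at run time, which is roughly the "make the CPU so smart that even crappy code runs smoothly" bet.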
  • by Nazo-San ( 926029 ) on Friday April 07, 2006 @01:08AM (#15082217)
    I just thought it should be stated for the record: Moore's law isn't a definite fact that cannot be broken. It has held so well up to now, and will for a while yet, that it's easy to casually call it a law, but we shouldn't forget that in the end there are physical limitations. I don't know how much longer we have until we reach them; it could be five years, it could be twenty. But the limit is there, and eventually we will hit the point where transistors get no smaller no matter what kind of technology you throw at them. At that point, a new method must be put into place to continue growth. This is why I personally like reading Slashdot so much for articles on things like quantum computing and the like. Those may be pipe dreams, perhaps, but the point is that they are alternate methods that may someday become truly powerful and useful. Perhaps the eventual successor to the current system will arise soon? Let's keep an eye out for it with open minds.

    Anyway, I do understand a bit about how it all works. OOOE has amazing potential, but in the end the fact remains that you can only optimize things so much. The idea is to break up instructions in such a way that you can, in effect, multi-thread a task that wasn't originally designed for multi-tasking. A neat idea, I must say, with definite potential. However, honestly, you will run into a lot of instructions that it can't figure out how to break up, or which can't be broken up to begin with. If they continue to run with this technology they will improve on both fronts, but the nature of machine instructions leads me to believe that this idea may not take them far, to be brutally honest.

    Let's not forget that one of the biggest competitors among processors that focus on SIMD is kind of fading now. Apple is going to the x86 architecture with all their might (and I must say I'm impressed at how smoothly they are switching -- it's actually exciting most Apple fans rather than upsetting them), and I think I read they will no longer produce anything with PowerPC-style chips, which I suppose isn't good for the people who make them (maybe they wanted to move on to something else anyway?). At this point it's looking like it's more and more just mobile devices that benefit from this style of chip, primarily because, between their lack of need for higher speeds and an overall design that uses what they have efficiently, they draw very little power and do what they do well in that segment.

    Multi-threading, however, is a viable solution today and in the future as well. It just makes sense, really. You start to run into limits on how fast the processor can run, how many transistors you can squeeze on there at once, power and heat, etc. But if you stop at those limits and simply add more processors handling things, you don't really have to design the code all THAT well to take advantage of it, and growth keeps continuing in its own way. I can definitely see multicore having a promising future with a lot of room for growth, because even when you hit size limits for a single core you can still squeeze more cores in there. Plus, I wonder if multicore couldn't work in a multi-processor setup? If it can't today, won't it in the future? Who knows; there are limits on how far you can go with multi-core, but those limits are much further away than for a single core, and I really feel they are more promising than relying on smart execution in a single core running at about the same speed. In the end, a well-designed program will split work across an SMP/multicore system much like OOOE tries to do within a core. While OOOE may be somewhat better at poorly designed programs (ignoring for a moment the advantages that multithreading provides to a multitasking OS, since even on a minimal setup a bunch of other stuff is running in the background) overa
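
    The "just split the work across cores" point can be shown with a minimal POSIX-threads sketch in C (the array size and the two-thread split are arbitrary illustration choices, not anything from the article; build with cc -pthread):

        #include <pthread.h>
        #include <stdio.h>

        #define N 1000000

        static long long data[N];

        struct slice { int start, end; long long sum; };

        /* Each thread sums its own half of the array and writes only its own slot. */
        static void *partial_sum(void *arg)
        {
            struct slice *s = arg;
            long long total = 0;
            for (int i = s->start; i < s->end; i++)
                total += data[i];
            s->sum = total;
            return NULL;
        }

        int main(void)
        {
            for (int i = 0; i < N; i++)
                data[i] = i;

            struct slice halves[2] = { {0, N / 2, 0}, {N / 2, N, 0} };
            pthread_t tid[2];

            for (int t = 0; t < 2; t++)
                pthread_create(&tid[t], NULL, partial_sum, &halves[t]);
            for (int t = 0; t < 2; t++)
                pthread_join(tid[t], NULL);

            printf("sum = %lld\n", halves[0].sum + halves[1].sum);
            return 0;
        }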
  • Re:GHz (Score:2, Interesting)

    by jawtheshark ( 198669 ) * <{moc.krahsehtwaj} {ta} {todhsals}> on Friday April 07, 2006 @04:01AM (#15082702) Homepage Journal
    That was one thing that really annoyed me about the P4; a 2 GHz P4 was NOT more than twice as fast as an 850 MHz P3. It meant one couldn't compare CPUs with each other any more.

    You never could do that in the first place. Within a CPU family, it used to be possible. (With Intel's naming scheme today, I can't do it anymore either!) Compare a P-III 500MHz to a P-III 1GHz and you knew that the latter was approximately twice as fast. A 2GHz AMD Athlon XP was approximately twice as fast as a 1GHz AMD Athlon XP. I say approximately because cache sizes could influence these results. You never could compare a P-IV to a P-III, or a P-IV to an AMD Athlon, except by falling back on benchmarks, and you *know* that all these benchmarks are pretty much artificial and can skew results in favour of a certain architecture.

    Really a long time ago, it was even dubious within the processor family: is a 486DX2/66 slower than a 486DX4/100? After all, the bus speed of the DX2 was 33MHz and the DX4 had a 25MHz bus. Back in the day such things had a major impact. (Even today it can have a big impact...)

    You can also recall the Pentium Pro (the CPU on which both the P-II and the P-III were based). It was a horrible performer on 16-bit code, but on 32-bit code it was pretty much king. Also don't forget the extremely fast cache it had. A PPro 200 with enough RAM can handle Windows 2000 without a hitch. (I know, I had one with 256Meg RAM.) The P-II came out with a slower cache and couldn't beat the PPro clock-for-clock. That's why the lowest P-II came at 233MHz. (Yeah, it also included the MMX instruction set, I know, I know...)

    In summary: within processor families you can compare, outside processor families you are pretty much SOL.

    Besides, I know I'm going to sound like someone saying "we have enough processor power", but my primary laptop is a P-III 500MHz mobile with 512Meg PC100 RAM. You know what? That baby runs pretty much everything I throw at it: Windows XP Pro SP2, OpenOffice 2.0.2, Firefox 1.5.0.1, Thunderbird 1.5, AVG Antivirus, PuTTY, FileZilla, Acrobat Reader 7, iTunes 6, QuickTime, Media Player Classic, Borland Delphi Personal, Eclipse 3.0, Tomcat and The GIMP (but I have to be patient when handling big images). Perhaps not all at the same time (I never tried), but I often run at least a selection of the above. Sure, sometimes I have to wait a few seconds for a program to start, but it's not as if I'm in that much of a hurry.
    If I need more oomph, I just switch to my own AMD Athlon MP 2400+ SMP machine (4Gig RAM) or to my wife's P-IV 2.6GHz Hyperthreading (2Gig RAM). Frankly, that doesn't happen often...

  • by JollyFinn ( 267972 ) on Friday April 07, 2006 @07:33AM (#15083138)
    It is only natural to extend this idea to the sharing of all resources on the chip. This is accomplished by putting them all in one big core and adding multicore functionality via simultaneous multi-threading (SMT), a.k.a. hyperthreading. The secret is designing a processor for SMT from the start, not bolting it onto a processor designed for single-threading, as happened with the P4. I strongly believe that such a design would outperform any strict-separation multicore design with a similar transistor budget.

    Too bad it doesn't work that way. Lots of structures in a CPU are n² in complexity, where n is the width of the processor. Also, once moving information across the die takes more than 10 cycles, you need smaller structures or instruction latencies go up.

    Here's one example: the bypass path needs to connect the load port and every integer unit to every integer unit, so there are n*n connections between units, and the number of stages needed to select an input eventually hampers the clock speed. There is a practical limit on core size; if we go bigger, the clock-speed penalties and latencies will cost more performance than the added core resources gain. SMT also hurts the cache hit rate, and that penalizes per-thread performance too. When you add more execution units, the maximum distance between execution units grows, so the time needed per cycle increases due to the delays in moving data between them. Execution units are *NOT* the area where widening hurts most, but they're the easiest to explain. So then you either use 2-cycle latencies, go for much lower clock speeds, or increase the voltage, but power consumption scales with V², so no matter what, efficiency goes down. (A back-of-the-envelope count of those bypass connections is sketched after this comment.)

    I believe SMT isn't completely dead; it could make a comeback in Intel machines at some point, with SOME additional per-core resources. But from now on there will be multiple cores.

    To make it clear: the transistor budget right now is so large that putting it all into a single core isn't efficient, due to the need to move data inside the core and the n² complexities.
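
    A back-of-the-envelope illustration of the n*n point in C (the "every unit forwards to every unit" formula is a deliberate simplification for illustration, not a model of any particular Intel or AMD core):

        #include <stdio.h>

        int main(void)
        {
            /* If every result-producing unit must be able to bypass its result
             * to every consuming unit, the number of forwarding paths grows
             * quadratically with the number of units. */
            for (int width = 2; width <= 8; width *= 2) {
                int paths = width * width;
                printf("%d units -> ~%d bypass paths\n", width, paths);
            }
            return 0;
        }

    Doubling the width roughly quadruples the bypass network, which is the core of the argument that past a certain point extra transistors are better spent on another core than on a wider one.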
