Intel

Intel Plans to Overhaul Chip Architecture 359

Carl Bialik from the WSJ writes "Intel is planning to announce an entirely new chip architecture later this month at the company's developer forum, the Wall Street Journal reports. The company isn't discussing details yet, but it's expected that Paul Otellini will discuss a 'technology foundation designed from scratch to improve energy efficiency and make it easier to add more than two processors.'"
This discussion has been archived. No new comments can be posted.

  • What does this mean? (Score:5, Interesting)

    by AKAImBatman ( 238306 ) * <`akaimbatman' `at' `gmail.com'> on Friday August 12, 2005 @12:54PM (#13305542) Homepage Journal
    One thing the article didn't make clear is what exactly Intel means by "A New Chip Architecture". I.e., do they mean a new architecture as in the Itanic (but low power!), or a new chip architecture as in, "We're ditching the 20-stage pipeline in exchange for a more reasonable 6-stage pipeline, swapping out most of the control circuits for those from our StrongARM line, and rewriting the microcode to execute all of the Pentium instructions on a simple, low-power RISC core."

    While they could go either way, I hope they've learned from the Itanium [wikipedia.org] and EM64T [wikipedia.org] debacles that they should stick with a compatible microcode. Leave the super-instruction sets to the MIPS and SPARCs of the world.
    • One thing the article didn't make clear is what exactly Intel means by "A New Chip Architecture"

      ftfa: The company said the new technology will be described by Paul Otellini, Intel's chief executive, later this month in San Francisco during a speech at the company's twice-yearly conference for hardware and software developers.

      I have a hunch no one outside of Intel knows just yet. We'll probably have to wait for the conference to find out. This article just says, "hey, things are a-changin'."

      • I have a hunch no one outside of Intel knows just yet.

        I have a hunch no one outside of Intel's PR department knows. They still haven't gotten their previous "new architecture" EPIC ramped up.

        My bet is that such pre-announcements of radically new stuff are mostly a way of freezing the market, to stop supercomputer vendors from looking at IBM Cell chips in much the same way Itanium stopped people from using PA-RISC, Alpha, MIPS, etc.

        • I have a hunch no one outside of Intel's PR department knows.

          See, now you have been reading too much Dilbert. :P Could be true; I have never worked for a company with over 10 people, so I don't have any first-hand knowledge of PR people.

          • It's called i860 :-) (Score:5, Informative)

            by Jeremiah Cornelius ( 137 ) on Friday August 12, 2005 @01:31PM (#13305899) Homepage Journal
            The Itanium will be re-christened "Xeon failure edition".

            Intel i860

            The Intel i860 (also 80860, and code named N10) was a RISC microprocessor from Intel, first released in 1989. The i860 was (along with the i960) one of Intel's first attempts at an entirely new, high-end ISA since the failed Intel i432 from the 1980s. It was released with considerable fanfare, and obscured the release of the Intel i960 which many considered to be a better design. The i860 never achieved commercial success and the project was terminated in the mid-1990s. No known applications of the chip survive and it is no longer manufactured.

            Technical features

            The i860 combined a number of features that were unique at the time, most notably its VLIW (Very Long Instruction Word) architecture and powerful support for high-speed floating point operations. The design mounted a 32-bit ALU along with a 64-bit FPU that was itself built in three parts, an adder, a multiplier, and a graphics processor. The system had separate pipelines for the ALU, adder and multiplier, and could hand off up to three instructions per clock.

            One fairly unique feature of the i860 was that the pipelines into the functional units were program-accessible, requiring the compilers to carefully order instructions in the object code to keep the pipelines filled. This achieves some of the same goals as RISC microprocessor architectures, where complex microcode, a sort of on-the-fly compiler, was removed from the core of the CPU and placed in the compiler. This led to a simpler core, with more space available for other duties, but resulted in much larger code, with negative impact on cache hits, memory bandwidth, and overall system cost. As a result of its architecture, the i860 could run certain graphics and floating point algorithms with exceptionally high speed, but its performance in general-purpose applications suffered and it was difficult to program efficiently (see below).

            All of the buses were 64-bits wide, or wider. The internal memory bus to the cache, for instance, was 128-bits wide. Both units had thirty-two 32-bit registers, but the FPU used its set as sixteen 64-bit registers. Instructions for the ALU were fetched two at a time to use the full external bus. Intel always referred to the design as the "i860 64-Bit Microprocessor".

            The graphics unit was unique for the era. It was essentially a 64-bit integer unit using the FPU registers. It supported a number of commands for SIMD-like instructions in addition to basic 64-bit integer math. Experience with the i860 influenced the MMX functionality later added to Intel's Pentium processors.

            Performance (problems)

            Paper performance was impressive for a single-chip solution; however, real-world performance was anything but. One problem, perhaps unrecognized at the time, was that runtime code paths are difficult to predict, meaning that it becomes exceedingly difficult to properly order instructions at compile time. For instance, an instruction to add two numbers will take considerably longer if the data is not in the cache, yet there is no way for the programmer to know if it is or not. If you guess wrong the entire pipeline will stall, waiting for the data. The entire i860 design was based on the compiler efficiently handling this task, which proved almost impossible in practice. While theoretically capable of peaking at about 60 MFLOPS for the XP versions, hand-coded assembly managed to reach only about 40 MFLOPS, and most compilers had difficulty getting even 10.
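            The scheduling gamble described above can be made concrete with a toy model. The cycle counts and the four-instruction schedule below are invented for illustration; they are not real i860 timings.

```python
# Toy model of static scheduling on an in-order pipeline with no hardware
# reordering. The compiler must guess load latency at compile time; if the
# guess is wrong, everything dependent on the load stalls.

def run(schedule, cache_hit):
    """Return total cycles. schedule is a list of (op, depends_on_load)."""
    load_latency = 1 if cache_hit else 20   # made-up hit/miss latencies
    cycles = 0
    load_done_at = 0
    for op, needs_load in schedule:
        if op == "load":
            load_done_at = cycles + load_latency
        elif needs_load:
            cycles = max(cycles, load_done_at)  # stall until the load lands
        cycles += 1
    return cycles

# Compiler-scheduled code: load first, two independent ops to hide latency,
# then the instruction that actually uses the loaded value.
schedule = [("load", False), ("add", False), ("mul", False), ("use", True)]

print(run(schedule, cache_hit=True))   # 4 cycles: the guess paid off
print(run(schedule, cache_hit=False))  # 21 cycles: the whole pipeline stalled
```

The same static schedule is optimal on a hit and disastrous on a miss, which is the bind the i860's compilers were in.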

            Another serious problem was the lack of any solution to quickly handle context switching. The i860 had several pipelines (for the ALU and FPU parts) and an interrupt could spill them and need them all to be re-loaded. This took 62 cycles in the best case, and almost 2000 cycles in the worst. The latter is 1/20000th of a second, an eternity for a CPU. This largely eliminated the i860 as a general purpose CPU.

            Versions, Applica

            • by stripes ( 3681 ) on Friday August 12, 2005 @04:56PM (#13307856) Homepage Journal
              Intel i860

              OkiData had a short lived Unix workstation product line based around these. I used them for a while.

              One fairly unique feature of the i860 was that the pipelines into the functional units were program-accessible, requiring the compilers to carefully order instructions in the object code to keep the pipelines filled.

              Only the floating point pipeline was directly exposed. Your first two FMULADD got garbage as the result, the third got the result of the first FMULADD... I *think* it also had a mode with more conventional stalling if you tried to do a 2nd FMULADD before the first completed (or if you used the result register).
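              A rough sketch of those semantics, matching the description above (first two issues return garbage, the third returns the first real result). The pipeline depth here is chosen to fit that description; this is illustrative, not a cycle-accurate i860 model.

```python
from collections import deque

class ExposedFPipe:
    """Software-visible FP pipeline: results come out DEPTH issues late."""
    DEPTH = 2  # chosen to match the description above, not real hardware

    def __init__(self):
        # Until real results drain through, reads return stale garbage (None).
        self.stages = deque([None] * self.DEPTH, maxlen=self.DEPTH)

    def fmuladd(self, a, b, c):
        result_out = self.stages[0]     # whatever finished DEPTH issues ago
        self.stages.append(a * b + c)   # the new op enters the pipeline
        return result_out

pipe = ExposedFPipe()
print(pipe.fmuladd(1, 2, 3))  # None: garbage, pipeline not yet full
print(pipe.fmuladd(4, 5, 6))  # None: still garbage
print(pipe.fmuladd(7, 8, 9))  # 5: the result of the FIRST fmuladd (1*2+3)
```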

              The i860 also had a mode where it would execute two instructions per clock, but you had to pair one integer instruction with one floating point instruction in that mode (and the pairing was static; if you put two integer instructions in a row, the CPU would fault with an illegal instruction fault).

              Paper performance was impressive for a single-chip solution; however, real-world performance was anything but.

              It outperformed its contemporary SPARC and MIPS CPUs by a considerable margin in FP, and by a small margin in integer-heavy code. It was competitive with the HP Snake systems (HPPA). It predated the Alpha, and was badly outmatched when the Alpha finally came out.

              One problem, perhaps unrecognized at the time, was that runtime code paths are difficult to predict, meaning that it becomes exceedingly difficult to properly order instructions at compile time. For instance, an instruction to add two numbers will take considerably longer if the data is not in the cache, yet there is no way for the programmer to know if it is or not. If you guess wrong the entire pipeline will stall, waiting for the data.

              That was extremely common at the time. The best the contemporary CPUs had was the IBM ROMP (pre-IBM POWER!) that had register scoreboarding so it didn't take a stall until the result register was used. It wasn't until five to seven years later that out-of-order CPUs were commercially available (and I can't remember who did them first, maybe the TI SuperSPARC? Or was it the MIPS R8000?)

              Another serious problem was the lack of any solution to quickly handle context switching. The i860 had several pipelines (for the ALU and FPU parts) and an interrupt could spill them and need them all to be re-loaded. This took 62 cycles in the best case, and almost 2000 cycles in the worst. The latter is 1/20000th of a second, an eternity for a CPU. This largely eliminated the i860 as a general purpose CPU.

              It seemed capable of handling disk interrupts, mouse movement, and even the relatively tiny FIFO for SoundBlaster 16 audio out. Maybe this was more a problem in theory than in practice? Clearly the i860 never got far in the embedded space though, and this couldn't have possibly helped.

              The i860 did see some use in the workstation world as a graphics accelerator. It was used, for instance, in the NeXTDimension, where it ran a cut-down version of the Mach kernel and a complete PostScript implementation. In this role the i860 design worked considerably better, as the core program could be loaded into the cache and made entirely "predictable", allowing the compilers to get the ordering right. This sort of use slowly disappeared as well, as more general-purpose CPUs started to match the i860's performance, and Intel lost interest.

              I think it was also used as part of the "geometry engine" on SGI's Reality Engine product. There were something like 4 per GE, and up to 4 GEs on a Reality Engine, which was pretty impressive in 1991ish, but other than having something like 196 bits of memory per pixel, it falls pretty far short of today's $100 graphics cards.

          • by dbrutus ( 71639 ) on Friday August 12, 2005 @02:22PM (#13306395) Homepage
            I have a hunch that Steve Jobs knows. Apple goes to Intel during the 2006-2007 time frame because of their low power consumption chips out there on their roadmap. Now Intel is launching low power consumption chips. I would be shocked if Apple didn't have access to early chips as a condition for switching architectures.
        • by pjbass ( 144318 )
          Being someone who works in the logic development side of Intel, I can say for a fact they know exactly what they're talking about, and it's not marketing FUD. Think of how long it takes to make a chip design (I'll give you a hint - it's roughly 1-2 years for a rev. 0 design, with 6 months to a year more for a production-worthy design to be available). If we're announcing something next month, rest assured that the designs, expectations, and work have already been done in knowing what this beast will lo
      • I wonder if Intel even knows yet. I think it's time they take their marketing guys, put them in a bunker, and give them one phone surrounded by poisonous snakes. Then let the engineers do it right.
    • TFA claims the new chips will be in PCs in 18 months - given the incredibly long design times of modern processors, that means they've probably been working on it for at least a couple years.
    • Last time Intel pre-announced a new chip architecture there was a lot of strong competition in the 64-bit computing space. Leading players were Alpha, PA-RISC, SPARC, MIPS.

      Intel announced some FUD about EPIC, and except for Fujitsu, who kept SPARC alive despite Sun's layoffs, this FUD wiped out the entire market.

      Methinks they saw the power of this approach, and if the last round killed 4 leading players, this round will kill off the remaining 2 (IBM & AMD).

      • Methinks they saw the power of this approach and if the last round killed 4 leading players, this round will kill off the remaining 2 (IBM & AMD).

        Except for one problem. Everyone now thinks that Intel is the boy who cried wolf.

        While Intel's FUD did destroy the high-end server market, they failed to account for AMD's move into that market. As a result, AMD has managed to take the development lead away from Intel. Any future attempts by Intel at new processor architecture will be met with a lukewarm res
    • This is what I have been predicting: Apple had a seemingly better choice with AMD's current processors (on a performance-per-watt basis). However, Intel had already shown their pipeline to Apple, and this is what prompted Apple's decision to migrate.
    • One thing the article didn't make clear is what exactly Intel means by "A New Chip Architecture".

      Option 1 (probability: high) ... it's either a new core or a new spin on the P6 core.

      Option 2 (probability: medium) ... it's yet another Funkitecture, with a good JIT in firmware on chip, like Crusoe.

      Option 3 (probability: medium) ... it's one of their existing Funkitectures, ditto.

      Option 4 (probability: low) ... it's option 3, with Alpha as the CPU and Freeport Express as the JIT.

      Option 5 (probability: medium-
    • by hotchai ( 72816 )
      Nope, this is still the same old x86 architecture we all love. What they mean by a "new architecture" is that they are ditching the Pentium-IV micro-architecture and developing a new one based on the Pentium-M micro-architecture (itself loosely based on the Pentium-!!! design). As a result, the new chips promise to deliver higher performance at lower power levels.
      • Yep this is exactly what they've been building up to for a year or two now, ever since AMD trounced them so badly with performance per watt (and they realized there is no economical way they can scale a P4 based architecture past two cores).

        I really do hope they keep the high performance per core that the pentium m architecture can offer. Having 8 cores is nice, but if they individually aren't very high performing, traditional apps like games are going to suffer badly on such an architecture.

        I know game de
      • Yes, that's the first thing I thought of. I've read that dual-core and 64-bit versions of the Pentium M with improved FPU performance have been in the works. The key fact here is that Intel has NEVER announced a desktop version of the Pentium M, even though the rumor mill has made the phasing out of the P4 a certainty. So, TECHNICALLY, it's a new CPU architecture.

        These will probably be announced as desktop-only chips, and should be available within a year. 18 months...no way Intel will wait that long.
    • by Sycraft-fu ( 314770 ) on Friday August 12, 2005 @01:20PM (#13305821)
      Though they may not want to admit it, Intel knows they've lost the 64-bit format war for desktops at least.

      So probably what they are working on is a next-gen x86 architecture. Those don't come out too often; usually they design one and just modify it for a number of years. It sounds like they are going to start using modifications of their Pentium M for desktops, which is cool since it is efficient both thermally and in terms of what it does per clock, but there's a limited life to it and they know it. The Pentium M is something of a throwback to the P3, which itself is really based on the PPro design.

      So my guess is Intel figures it's time to unveil a new design for a core, but on the x86 architecture.
      • The 64-bit war for the desktop will be long and hard-fought. I don't think they lost, just suffered a tremendous setback. That happens sometimes.

        Don't get me wrong, as a gamer, I want the highest gaming performance, and AMD is my chosen one. I don't particularly care for Intel. But to write them off already is a bit silly (imo).
      • by jiushao ( 898575 ) on Friday August 12, 2005 @02:09PM (#13306290)
        AMD won the 64 bit war in the sense that their instruction set approach ended up on top, on the other hand Intel easily ships far more 64 bit x86's than AMD at this point.

        Also it should be noted that the Pentium M is like the P3 in much the same way the K8 is like the K7. It is a heavily redesigned and improved core, so the ancestry in itself is no sign of it being an old design. As such I am not that sure that the new core won't be a Pentium M derivative as well, possibly simply a take on the Israeli Pentium M by one of the US design teams.

        Otherwise I very much agree with you, the CPU projects at Intel are probably all x86 at this point, so we will probably just get to see Intel "get back on the track" after the somewhat failed experiment with the P4.

      • by philipgar ( 595691 ) <<ude.hgihel> <ta> <2gcp>> on Friday August 12, 2005 @02:20PM (#13306374) Homepage
        No... I doubt they'll be using the Pentium M core for this redesign. The new push will be for multithreading. The pipeline may shrink a bit, but long pipelines are nice because they allow for very high clock speeds due to low fanouts. When designing high-performance software, going from 4 threads to 16 is often not too difficult, at least if you use the right paradigms. Combined with low-latency communication (L2 cache speeds) this makes for a very powerful combination.

        When designing such a machine it's important to consider what the software will look like. Is it better to run 16 threads each with a CPI (cycles per instruction) of 1.2 or run 32 threads with a CPI of 1.6? This will actually push us much further back than the P3.

        The cores on these processors are far more likely to resemble the original Pentiums. Simple pipelines, in-order execution, minimal instruction-level parallelism. When the current P4 superscalar beasts can rarely pull a CPI of 1, what's the point of allowing 4 instructions to execute simultaneously (at least if the core is only executing one thread)?

        The new push will be to have 8 very simple cores (albeit with advanced SSE4 units with even wider vector instructions such as 256 or 512 bits) and allow each core to run 2 or 4 threads. This won't be hyperthreading, as hyperthreading is a form of SMT (although Intel may reuse the name). It will be a form of fine-grained multithreading that allows context switches on L1 or L2 cache misses, as well as other latent operations. Of course there will also be logic to allow all the threads to run equally.

        With these processors we'll be able to run 16-32 threads simultaneously (or almost simultaneously). For applications that can be massively threaded this will result in a huge boost in performance. For the single-threaded applications that aren't easily parallelizable... many of them don't need more power than what a simple 4GHz core can offer them. Those that require more computation than that will likely be reprogrammed to support multi-threading.
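        Under an idealized model (enough hardware contexts to keep every thread running, and ignoring contention and stalls), the 16-vs-32-thread question above reduces to simple arithmetic: each thread retires 1/CPI instructions per cycle.

```python
def aggregate_ipc(threads, cpi):
    """Aggregate instructions per cycle: `threads` threads each retiring
    1/CPI instructions per cycle (idealized; no contention modeled)."""
    return threads / cpi

print(aggregate_ipc(16, 1.2))  # ~13.33 instructions per cycle
print(aggregate_ipc(32, 1.6))  # 20.0: more, slower threads win here
```

On this naive model the 32-thread configuration comes out ahead, which is the bet a fine-grained multithreading design like the one described above would be making.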

        This technology will scale tremendously. These new processors will essentially be supercomputers on a chip. I think this because of a presentation I saw by one of the lead P4 architects who was talking about future processors. This will be the future, and the time is now to rethink any applications you currently have and find someone competent in multithreading.

        Phil
  • by danielDamage ( 838401 ) on Friday August 12, 2005 @12:55PM (#13305552) Homepage
    You remember back in the day when processors had only one core?
    • I still use a single-core processor, you insensitive clod!
    • Yeah, I also remember when they had a 4.77 MHz processor and no pipeline.

      And guess what? Those days SUCKED.
    • Could people recommend some multithreading-related books? It's clear that in the future the best way to get the best performance for your app will be using all the power of those cores simultaneously
      • I don't know if there are any complete books on the topic. I don't know why you'd need one, honestly. It's a topic that comes up regularly in CUJ [cuj.com].

        Here are some keywords to plug into Google if you want to read up on the basics of synchronising threads and sharing resources: semaphore, mutex, baton passing.
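        A minimal sketch of the first two keywords using Python's standard threading module (the worker count and the semaphore limit of 2 are arbitrary choices for illustration):

```python
import threading

counter = 0
counter_lock = threading.Lock()   # mutex: only one holder at a time
slots = threading.Semaphore(2)    # at most 2 threads inside at once

def worker():
    global counter
    with slots:               # acquire one of the semaphore's 2 slots
        with counter_lock:    # critical section: serialize the update
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 8: no lost updates, because the mutex serialized them
```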

        • Again, what we need are multithreading/multicore-aware compilers/interpreters that allow you to write your program in the old way and distribute the workload between the cores without much hassle.

      • CSP (Score:3, Informative)

        You might start here [usingcsp.com]. Lots of other books will tell you how to use semaphores and mutexes. This book will help you to understand why to use semaphores and mutexes (and perhaps open your eyes to better concurrency constructs), and teach you how to reason about your multithreaded design so that you won't get any nasty surprises when it comes time to run it.
  • It's Conroe (Score:5, Interesting)

    by Hack Jandy ( 781503 ) on Friday August 12, 2005 @12:56PM (#13305560) Homepage
    Conroe according to Anandtech...
    http://anandtech.com/cpuchipsets/showdoc.aspx?i=2492 [anandtech.com]

    HJ
  • Announcement (Score:5, Insightful)

    by Ryan Stortz ( 598060 ) <ryan0rz@gLISPmail.com minus language> on Friday August 12, 2005 @12:56PM (#13305563)
    Who wants to bet that the announcement includes an integrated memory controller? I wouldn't be surprised if they just licensed Opteron technology from AMD; it would be a lot cheaper than developing their own. Although, they could always just outright steal it.
    • Re:Announcement (Score:3, Informative)

      by AKAImBatman ( 238306 ) *
      I wouldn't be surprised if they just licensed Opteron technology from AMD

      Intel already did that [wikipedia.org] with their EM64T technology. It's already present in the latest Xeon processors, and is now considered the future of the x86 platform.

      Intel has pretty much admitted that they got egg on their face for that one. Especially since one of the purposes of the Itanium design was to create an architecture under which the AMD cross-licensing deals wouldn't apply. Talk about backfiring.
    • As pointed out, Intel is licensing AMD's instruction set. AMD is licensing SSE 1, 2, & 3.

      They have cross-licencing agreements with each other, IBM and other partners for this sort of thing.
    • Intel doesn't have to license it.
      IIRC, Intel and AMD have a cross-licensing agreement; this is what allowed Intel to implement x86-64.
  • by sbaker ( 47485 ) * on Friday August 12, 2005 @12:57PM (#13305576) Homepage
    On NPR this morning, they mentioned that Intel had said that a typical PC user wouldn't notice any change as a result of this new architecture. So one presumes this means no major instruction set revisions or anything.
    • And the typical PC user takes notice of major instruction set revisions? This could mean anything from them rebranding the Pentium 4 as the Pentium 5, or completely switching to RISC, as the average computer user probably doesn't care as long as they can check their mail and sync their iPod.
      • by ArsonSmith ( 13997 ) on Friday August 12, 2005 @01:03PM (#13305644) Journal
        When Apple said they would switch to intel what they didn't say was that Intel was switching to PPC.

        • Mod parent up as insightful!

          A lot of Intel's planned 'improvements' closely mirror the major advantages of the PowerPC architecture. Intel has clearly been influenced by Apple or is trying to push IBM out of the high-end market.

          Either way, I welcome some good innovation from Intel. I was far from being impressed with the Pentium 4 (with the exception of the M). Over the past 4 or 5 years, AMD has been the clear winner in terms of cost, technology, innovation, and speed. Intel has been the winner on the busi
          • (PS: Doesn't the way they're describing this make it sound like it's gonna be a super-powerful RISC chip with x86 emulation?)

            That's what the P4 (and the P3 and the K7 and K8) already are.

            They are RISC implementations "under the covers" with an x86-to-internal-RISC-ISA converter on the front. Intel calls their RISC instructions "micro-ops" and even have a dedicated micro-op cache to reduce the need to retranslate the same x86 instructions over and over again in situations where the code loops or is otherwise
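            The translation idea can be sketched with a toy decoder. The micro-op format here is invented purely for illustration; Intel's real micro-ops are undocumented and look nothing like this.

```python
def decode(x86_insn):
    """Split one CISC-style (op, dst, src) instruction into RISC-like
    micro-ops: memory operands get a separate load micro-op first."""
    op, dst, src = x86_insn
    if src.startswith("["):            # memory operand needs its own load
        addr = src.strip("[]")
        return [("load", "tmp", addr), (op, dst, "tmp")]
    return [(op, dst, src)]            # register-only ops pass through as-is

# 'add eax, [0x1000]' becomes two micro-ops; 'add eax, ebx' stays one.
print(decode(("add", "eax", "[0x1000]")))
print(decode(("add", "eax", "ebx")))
```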
  • totally cool (Score:4, Interesting)

    by ackthpt ( 218170 ) * on Friday August 12, 2005 @12:58PM (#13305581) Homepage Journal
    no-way-the-old-architecture-is-totally-cool

    This is kinda funny in two ways..

    • 1. Intel often comes out with new processors which run HOT, pushing the chip to extremes of physics.
    • 2. The old architecture is a dinosaur, harkening back to the 8088 and rather inefficient in many respects, where RISC processors were supposed to trump it. Which is still around? It seems you can come up with all the technological advances you like, so long as it is still a pumped up 8088.

    'technology foundation designed from scratch to improve energy efficiency and make it easier to add more than two processors.'

    Not overheard anywhere: "We are peeking through a knothole in AMD's fence and seeing what they are up to."

    Nitpick: "The company isn't discussed details yet"
    The proper word is ain't.

    • How about hasn't?
    • pushing the chip to extremes of physics.
      I didn't know 50 degrees Celsius was the extreme end anymore;-)
    • * 2. The old architecture is a dinosaur, harkening back to the 8088 and rather inefficient in many respects, where RISC processors were supposed to trump it. Which is still around? It seems you can come up with all the technological advances you like, so long as it is still a pumped up 8088.

      Actually, only the instruction set harkens to the 8088. The actual core is much more similar to a RISC processor, but with microcode galore that makes it ACT like a CISC processor. Which is not to say that the current Pe
    • On point number uno:

      Intel kinda got out of that habit. The Pentium 4 was meant to follow this rule as closely as possible because it led AMD into direct competition, which is good for both Intel and AMD; they are moving a lot of volume.

      Now AMD's tired of following Intel's chain around, so Intel's actually using smarter designs. I wonder if the Pentium 4 wasn't just a diversion tactic to get whatever was wrong with the Pentium M worked out. It would make sense to me, especially now, where their Penti
  • by Iriel ( 810009 ) on Friday August 12, 2005 @12:58PM (#13305591) Homepage
    One has to wonder if Apple had any 'insight' to these plans when they signed the deal.
    • by 99BottlesOfBeerInMyF ( 813746 ) on Friday August 12, 2005 @01:09PM (#13305715)

      One has to wonder if Apple had any 'insight' to these plans when they signed the deal.

      Actually, it is pretty likely that Apple was given a full roadmap and a few engineers to explain the whole thing while in discussions and under NDA. The real questions are: did this have anything to do with Apple's decision, is this in response to the deal with Apple, or is this just coincidental?

      • It would take a couple of years to develop a new processor architecture, and get chips out based on it. This has been in the works for a while now, and I'm pretty sure it would have been part of the road-map shown to Apple.

        • No kidding. The original Netburst design had hints that it started in 1993 with the "Willamette" moniker. Of course, I can't validate this right now, but I know some creative Googling will find the paper I'm talking about.

          One wonders if the engineers who took one look at what Netburst became and said "this would be a great diversionary tactic". Design technologies for other projects, slap them together on the Netburst-endlessly-extendable pipeline and ship. I wonder this because the Pentium M seems to h
    • Of course they did. The Pentium 4 has a horrible performance/watt ratio, and teh Steve clearly showed a very different picture when comparing the G5 to "Intel".
    • by blamanj ( 253811 ) on Friday August 12, 2005 @02:40PM (#13306545)
      It's pretty clear they did.

      "A big emphasis is going to be performance per watt," -- Bill Calder, an Intel spokesman.

      "When we look at Intel, they've got great performance, yes, but they've got something else that's very important to us. Just as important as performance, is power consumption. And the way we look at it is performance per watt. For one watt of power how much performance do you get? And when we look at the future road maps projected out in mid-2006 and beyond, what we see is the PowerPC gives us sort of 15 units of performance per watt, but the Intel road map in the future gives us 70, and so this tells us what we have to do." -- Steve Jobs, Apple CEO
    • Would that be a good idea? If crypto is implemented in software, you can (relatively) easy fix bugs; if it's done in hardware, then every bug found means you're basically screwed.

      That being said, the attacks described are *not* remotely exploitable per se, and they can easily be worked around by not using hyperthreading, anyway, so they're really a tempest in a teapot.
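      On the software-fix point: the classic mitigation for the timing leaks these cache-based attacks exploit is constant-time code. A sketch of the pattern (hmac.compare_digest is the Python stdlib's hardened equivalent):

```python
import hmac

def naive_equal(a, b):
    """Early-exit compare: running time reveals how many bytes matched."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a, b):
    """Touches every byte regardless of where the first mismatch is."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y      # accumulate differences with no early exit
    return diff == 0

tag = b"s3cret-mac-tag!!"
print(constant_time_equal(tag, tag))        # True
print(constant_time_equal(tag, b"x" * 16))  # False, in the same time
print(hmac.compare_digest(tag, tag))        # True: the stdlib version
```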
      • If crypto is implemented in software, you can (relatively) easy fix bugs; if it's done in hardware, then every bug found means you're basically screwed.

        You'd better not screw up a block cipher implementation; that is the point. FIPS certification is a good clue you got it right.

        That being said, the attacks described are *not* remotely exploitable per se, and they can easily be worked around by not using hyperthreading, anyway, so they're really a tempest in a teapot.

        • "This paper reports successful extractio
  • switch? (Score:3, Funny)

    by minus_273 ( 174041 ) <aaaaa&SPAM,yahoo,com> on Friday August 12, 2005 @12:59PM (#13305606) Journal
    mac switches to intel
    intel switches to PPC ? :-p
  • by Anonymous Coward on Friday August 12, 2005 @01:00PM (#13305607)
    According to various preliminary benchmarks from The Tech Report [techreport.com], Tom's Hardware [tomshardware.com] and AnandTech [anandtech.com], AMD's desktop dual-core chips are significantly better than Intel's dual-core desktop offerings in terms of performance and power consumption. This is partly due to the fact that the AMD solution has a better inter-core communication architecture and lower memory latency.

    Meanwhile, Intel's desktop dual core chips seem to offer much more aggressive pricing at this time. AMD's lowest price dual core chip, the X2 4200 is almost twice as expensive as Intel's lowest cost dual core processor. However, an interview [pcper.com] with three AMD execs on PCPerspective.com claims that "AMD would eventually have lower priced Athlon X2 processors via the waterfall effect in the future".

  • by realmolo ( 574068 ) on Friday August 12, 2005 @01:00PM (#13305618)
    As we all know, the Pentium 4 is a pretty goofy, schlocky design. The Pentium M is good, but it's essentially a Pentium Pro. That's 10 years old.

    Intel NEEDS to prove that they can still make a good x86 chip from "scratch".
    • Intel NEEDS to prove that they can still make a good x86 chip from "scratch".

      Isn't "from scratch" and "x86" (aka backwards compatible aka carrying around crude hacks) kinda contradictory?
    • I'd say, if it works well and is competitive, does the age of the original design really matter? Keep in mind that the core of the Pentium M is heavily re-factored in terms of the overall logic design, improved branch prediction and so on. Making something completely new for its own sake isn't a worthy goal unless there are sufficient benefits versus the cost of such a design.

      For all I know, the Athlon64 core might have as much similarity with the core of the Nexgen 5x86 chip as the Pentium M might have w
    • Sure, it's old. But it works very well. It can keep pace with P4s clocked upwards of twice as fast, and not consume anywhere NEAR as much power in the process.

      Honestly, they should get an award for that. The basic design is, as you say, 10 years old. But it is *still* holding up next to far newer designs. That's a huge accomplishment. It's hard enough to build a superior CPU architecture for *right now*. Building one that will still be relevant A DECADE INTO THE FUTURE is absolutely staggering. And not sim
  • by mikeophile ( 647318 ) on Friday August 12, 2005 @01:01PM (#13305627)
    I hope their new logo isn't as easily confused with a feminine hygiene product. [tomshardware.com]
  • by Anonymous Coward
    Performance per watt? Notice how Intel is singing the exact same tune that Apple is? I'm not saying that it's being made specifically for Apple, but clearly Steve Jobs looked at the roadmap and, since Intel wants something new, saw a common goal that he could pursue.
  • I wouldn't be surprised if this is what Apple intends to put in some of their machines this spring. I really hope it's something good.
  • Will I still be able to run my current x86 OSs and code on these chips?
  • Didn't they make about the same claims when they came up with their 64-bit CPU that was not backward compatible and didn't give up real estate required to run the old legacy code? AMD rubbed their nose in it. Intel, like the customer, is a victim of their own marketing approach over the years. The customer and even the industry has learned to accept awful penalties in order to run old outdated legacy code. It's a bad design, but one that was promoted strongly by Intel. Are they doing anything different h
  • Semantics (Score:5, Informative)

    by frankie ( 91710 ) on Friday August 12, 2005 @01:12PM (#13305733) Journal
    The word "multiprocessor" should be "multicore". They're talking about 4 or 8 cores on a single CPU, which might be nice for blades but not so useful for a laptop or a gamer.

    And of course, Macheads note the phrase "performance per watt" [google.com].
  • by orz ( 88387 ) on Friday August 12, 2005 @01:12PM (#13305742)
    The article seems to pretend that the Israeli design team's low-power Pentium M doesn't exist. It says the last major design change was the Pentium 4 (which was prior to the Pentium M), and doesn't mention that current and (already announced) future Pentium M based designs match the description given.
    • The last four major new Intel x86 core architectures, in reverse chronological order, were the Pentium 4, Pentium Pro, Pentium, and 486.

      The Pentium M is a fairly serious revision of the Pentium Pro-Pentium II-Pentium III core series, but is clearly a revision of that series, not a truly new architecture.

      At a random guess, Intel may be having difficulty with multiple multicore Pentium Ms because the original PPro was only made to work in quad-processor machines.
      • Yep. Intel was very secretive about the Pentium-M's architecture when it first launched, mostly to hide the fact that it was based on the same P6 core as the old Pentium Pro (ie. something that's been around for more than ten years). The big announcement is a new x86 core, intended to replace the P6.

        The other slightly embarrassing (for Intel) twist is that the new architecture will be a lot closer to the P6 than to the P7 ("Netburst") core used in the Pentium-4. Essentially, the Pentium-4 was a dead end, an
        • by nutshell42 ( 557890 ) on Friday August 12, 2005 @05:41PM (#13308160) Journal
          The other slightly embarrassing (for Intel) twist is that the new architecture will be a lot closer to the P6 than to the P7 ("Netburst") core used in the Pentium-4. Essentially, the Pentium-4 was a dead end, and all Intel's x86 plans now involve Pentium-M derived chips.

          Yeah, the idea behind Netburst was to streamline everything for clock frequencies as high as possible. This offered marketing advantages (before people became used to AMD's xx00+ ratings), and there was a time (shortly before and after the Clawhammer) when it seemed like Intel had been right. It seemed that whatever AMD did, Intel could just crank up the frequency another 200MHz; there was already speculation about 6GHz and more. But then they ran into the 4GHz barrier (and they weren't the only ones: IBM originally put the Cell at 4GHz+ and now they seem to have trouble at 3.2GHz), and since then Netburst has been dying a slow and painful death =)

    • Dillhole:

      Pentium M is a low-power Pentium 3: the same old P6 architecture from 1996.

      Pentium 4 architecture came after Pentium 3, hence "the latest".

      Got it? Good.
    • How did this get marked insightful?

      They specifically mention the Pentium M in the article and they specifically mention that this is completely different from the Pentium M arch.
  • I was under the impression that Intel was going to concentrate their efforts on beefing up the Pentium M by adding SSE3, 64-bit extensions, dual cores, and hopefully a faster bus.

    Sure the 686 architecture certainly has been around for a while, but the PM is a pretty damn good chip.

    It's sad, but the era of exotic CPUs in servers and workstations seems to be ending; x86 is just better "bang for the buck", so much so that even Intel can't compete with it (Itanium)! I hope they know what they're doing.

  • DRM'd! (Score:2, Funny)

    by __int64 ( 811345 )
    "The company also is more aggressively building in specialized circuitry for such purposes as improving computer security, some of which also are expected to be part of the new architecture."

    Oh sweet! That sentence was written so balmily I think it has even qualmed my pre DRM large-scale nationwide deployment fears.

  • It's going to be strikingly similar to the Athlon 64.

    'Bout time they admitted the P4 burst arch is antiquated.

    Raydude
  • I wonder if they plan to incorporate this technology [slashdot.org] into their design at all.

    Perhaps it's still too new, but you'd think they would be looking to the future for ideas...
  • Yes but ... (Score:3, Funny)

    by bizitch ( 546406 ) on Friday August 12, 2005 @01:16PM (#13305784) Homepage
    Will it run Lotus 1-2-3?
  • More efficient. More powerful. Great for games too!

    If they sold one at the store that had 2 of these chips in it and ran XP/games and Linux, I would never look back at serial general-purpose chips.

    http://www.gpgpu.org/ [gpgpu.org]
  • I foresee these improvements in their new chip line..

    • Integrated dual channel memory controller (DDR2)
    • Integrated x16 PCI-E for graphics
    • Separate parts of the chip for TCP/IP offloading and other specialized tasks
    • New socket (not LGA775, probably closer to 1100 pins)


    Or at least that's what I think.
  • Mesh (Score:2, Interesting)

    by Tarlyn ( 136811 )
    Two days ago HP came into my office and gave a 2-hour roadmap presentation to let us know what will happen to RISC/Alpha over the next few years.

    Well, RISC and Alpha are going away, and Itanium is the way of the future for HP-UX and OpenVMS. What was interesting was what they told us about the forthcoming Intel processors - the entire Alpha team was hired by Intel and the next-gen Intel chips will use the Alpha-style switchless mesh architecture. This style of architecture removes roadblocks inside the bo

    • Re:Mesh (Score:3, Insightful)

      Two days ago HP came into my office and gave a 2-hour roadmap presentation to let us know what will happen to RISC/Alpha over the next few years. Well, RISC and Alpha are going away, and Itanium is the way of the future

      Ten YEARS ago HP told us that RISC is going away and that EPIC/Itanium is the way of the future. Remember, their Intel/EPIC announcement happened back in 1993 [clemson.edu].

      My bet is that HP continues being a Wintel/x86 leader and that RISC (thanks to Cell and Niagara) moves on without them.

      (oh, you

  • and instructions. The x86 architecture is due for an overhaul and now is as good a time as ever to do it. CPU speed has stagnated and no new innovations are on the horizon except for the square-peg-round-hole solution of "add more cores!"

    I don't want more cores; why is it my Sony PS1 could do all it did with 33MHz and very little RAM? My $20 DVD player can decode MPEG-1 through MPEG-4, JPEG, etc. with no massive video cards and CPUs. Sure, these are fairly specialized items, but when you think about it you could take the
  • AMD's beating the crap out of us. Let's restart from scratch and copy their model!
  • Considering that no one could make a 64-bit processor as good as the DEC Alpha, and that HP basically GAVE Intel the plans for the Alpha, I believe this new architecture is going to be heavily based on the best 64-bit processor evar: DEC Alpha. It's not so much new as it is just continued development of an already existing superior technology.
