Intel Scraps Plan For 4 GHz P4 Chip

bizpile writes "It was reported earlier that Intel would be delaying the release of their 4 GHz Pentium 4 chips, but it now appears that they will be cancelling them altogether. The announcement came Thursday, and Intel says they are going to rely on approaches besides faster clock speed to improve the performance of chips. Engineers are working to add additional cores to a single chip and to improve the efficiency with which the chips interact with the rest of the system. Intel spokesman Chuck Mulloy said, "Those are the sort of things where you get more capability out of a processor by designing specific silicon solutions as opposed to just keep turning the clock faster." In the meantime, Intel is planning on releasing a 3.8 GHz chip with 2 MB of cache."
  • by Catroaster ( 176308 ) on Thursday October 14, 2004 @06:54PM (#10529953)
    Mhz do not always = performance!
    • by phalse phace ( 454635 ) on Thursday October 14, 2004 @06:57PM (#10529985)
      Um... Intel realized that when they switched to Processor Numbers [intel.com] earlier this year.
    • by Timesprout ( 579035 ) on Thursday October 14, 2004 @06:58PM (#10529999)
      I think Intel have realised they are reaching the point of diminishing returns in trying to keep cranking up the MHz on the current architecture, and there are cheaper performance gains to be had elsewhere.
      • Or they just found out how hard it is to get the speed up that high and have it work properly.
      • I think Intel have reached the point of desperation. Admitting MHz isn't everything is a giant climbdown for a company that has always marketed heavily on that front, and killing further ramp-up on Prescott is a sad end for a troubled core.

        (A premature one, too, surely; multi-core and Pentium-M-based desktop kit isn't due for ages is it? And won't multi-core chips have to be developed from P-M tech anyway? I can't see *two* Prescotts on one die being easily coolable...)

        Bunging more cache on the chip is a
    • by Fallen Kell ( 165468 ) on Thursday October 14, 2004 @07:02PM (#10530042)
      I mean come on. We all know their engineers knew that MHz != better cpu. It just took them this long to finally convince their PR department to give up on the multi-billion dollar investment they have made in making "consumers" know that MHz == better cpu.
      • OR (Score:3, Insightful)

        by Ayanami Rei ( 621112 ) *
        Maybe they realized they weren't going to be able to reliably cool the NetBurst architecture at those speeds, so they're going to have to switch to the lower-clocked, possibly multi-core Pentium-M arch.

        They'd be FORCED to use a numbering scheme because any conspicuous lowering of the MHz would cause Joe Shmoe to say "What the hell?"
      • I don't know if I'd agree exactly with this comment. While a 3.8 GHz P4 does not perform as well as a 3.8 GHz Athlon chip would, an AMD chip cannot physically run at those speeds. The pipeline would not support it.

        The Slashdot crowd is quick to attack Intel because they're the big guys, but NetBurst is an extremely powerful and (gasp!) good architecture. The engineers designed the processor for maximum pipelinability (over 30 stages now), and this is not really a bad thing. Pipelining a processor is a good thing in general. Its main benefit is that it allows a processor to run at a higher clock speed. That is what pipelining was created for: to break the work into smaller time slices so more can occur in parallel. This works great when each stage is of approximately equal length, and I have enough faith in the Intel engineers that no single stage is much longer than the others.

        Back to the point, though: the pipeline does have downsides. A processor with 20 stages will lose roughly twice as many cycles on a branch misprediction (and more on a cache miss, but that number varies further) compared to a 10-stage processor. However, assuming that by using 20 stages we cut the cycle time by even 50%, the additional stages were worthwhile (see the rough sketch after this comment). Cache misses are not a "common" event and branch prediction is in the 95+% range now, so the stalls added there are not as large as you'd think.

        What the Pentium 4 did was take these ideas to a larger scale. Unfortunately, the engineers designing the processor did not anticipate the massive leakage currents seen at the speeds Intel is targeting. From a computer architect's standpoint they build upon past assumptions, and more stages in a pipe generally help out, so that's what they did. While the end result is not as impressive as they were hoping, it is not a poor product.

        Now what has the NetBurst architecture offered to consumers? Well, one of its main offerings is an SMT processor (Hyper-Threading in marketing speak). SMT is more than mere marketing hype. It was not an afterthought thrown onto the P4 due to less-than-stellar performance, as people have hinted. SMT was originally designed for the Alpha EV8 chip that was scrapped. Intel, however, bought the Alpha design team and used the SMT technology (albeit to a lesser extent than some would hope for) in the NetBurst architecture.

        What else has NetBurst added? The trace cache is a wonderful feature as well. This removes the x86 decode logic from the runtime pipeline for most instructions.

        So where can Intel go from here? My hope isn't so much in the multi-core logic that some talk about. While multi-core is interesting, I personally would rather see a wider P4 core (more execution units) and have them extend their implementation of SMT to allow for more concurrent threads of execution. A 4- or 8-way SMT processor could show some real results.

        And for those of you who are going to question what I'm saying... No, I don't work for Intel. And no, my desktop processor is not an Intel processor either (I run an Athlon 1600 for my workstation). However, in my lab I am working on algorithms designed specifically around SMT processors (as well as cache-aware/prefetch-enabled applications). Intel's processors happen to enable quite a bit of optimization if done properly.

        While I never agreed with Intel playing the MHz game, or their ridiculous prices, I would not say that the engineers were completely against the super-pipelining of the NetBurst architecture. While they may have questioned the reasons behind it, the real-world performance gain due to it does exist.

        Philip Garcia
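
        A rough back-of-the-envelope sketch of the pipeline-depth trade-off described in this comment; the branch frequency, misprediction rate, and the assumption that cycle time shrinks in proportion to pipeline depth are illustrative guesses, not actual Pentium 4 figures:

        # Sketch: a deeper pipeline shortens the cycle but raises the flush penalty.
        # All numbers are illustrative assumptions, not real Pentium 4 figures.
        def avg_time_per_instruction(stages, base_cycle_ns=1.0, ref_stages=10,
                                     branch_frac=0.20, mispredict_rate=0.05):
            """Average ns per instruction, assuming cycle time scales as
            ref_stages/stages and a misprediction flushes the whole pipe."""
            cycle_ns = base_cycle_ns * ref_stages / stages
            stall_cycles = branch_frac * mispredict_rate * stages  # expected flush cost
            return (1.0 + stall_cycles) * cycle_ns                 # ideal CPI of 1 plus stalls

        for depth in (10, 20, 30):
            print(f"{depth:2d}-stage pipe: ~{avg_time_per_instruction(depth):.2f} ns/instruction")

        With ~95% prediction accuracy the shorter cycle more than pays for the longer flush, which is the comment's argument; push the misprediction rate up and the advantage shrinks quickly.
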
    • Mhz do not always = performance!

      Mhz do not always = Sales.

      By some accounts AMD and VIA have up to 40% of the global processor market now.

    • by Jugalator ( 259273 ) on Thursday October 14, 2004 @07:06PM (#10530084) Journal
      At last! Intel realizes that....
      Mhz do not always = performance!


      Yes, but only when they have a hard time increasing the clock speed do they "realize" it. It's no coincidence they didn't say this during the days of 2 GHz Pentiums, but are doing it now... Always spend the minimum effort on improving the architecture when you can just crank up the clock speed and show your customers it's the best thing to do.

      But I guess they've held off on this announcement (it was actually true since the day Intel designed their first microprocessor) because they dread the day when they have to start explaining why higher clock speeds aren't really everything.
    • Hopefully this means that we are going to be seeing a revolution in bringing multi-core processors to the desktop. Imagine a CPU that incorporates 4 cores, 4 GB cache, 4 GB RAM, and 40 GB storage all on a single die. At that point, the only upgrades you would need to worry about would be for mechanical drives like DVDRW and high-density hard drives, and the latest graphics card. Of course some kind of liquid/vapor cooling would need to be used to pop out the full potential of these new processors, but then tha
      • Multi-core dies generate less heat than that number of processors, and the trend is to use low-power/lower-speed chips. This means that the computer on your desk in a year or two (hopefully) will not need noisy/expensive cooling, and will draw much less power than current models.

        I know it's not a multicore device, but this [orionmulti.com] is an example of what's possible. 36 Gflops @ 220 Watts. (24 Gigs RAM, 1TB storage, $10,000) I want one.

      • If what you got today aint enough, it never will be...

        In any event, multicore will continue to fall short of expected performance since the software can't handle it.

        It has always been this way, never go to dual cpu till you maxed out the single one.

        Even then you will find that going multi-CPU is the opposite of the way software companies want to go. They want to pay their programmers less, but if they go multicore, they will have to pay more for people who can write code to take advantage of it.

        Honestly, c
  • Shucks! I already bought the stir-fry and some peanut oil. A wok may do, but for that just smoked flavor there is nothing else quite like a serving of Cooked CPU...
  • Whee (Score:2, Insightful)

    by Anonymous Coward
    Good job. Now I might be able to get a decent bus speed.
    • The P4 already offers a higher 800 MHz FSB than the Athlon XP range.
      • Re:Whee (Score:2, Interesting)

        by eddy ( 18759 )

        On the other hand, the Athlon XP range is End-Of-Lined.

        Let's talk Athlon64 and Opteron instead.

        Intel will have to put the memory controller onto the CPU sooner or later. If they want to go "Not Invented Here", it's going to cost them $$$ BIG $$$ BUCKS $$$ in cache.

        • Having to put the memory controller on the CPU generates its own pain-in-the-ass set of problems.

          For starters if a new RAM type comes out you can't upgrade your board. No siree bob! New CPU for you, buddy! Want to run your CPU in dual channel mode? Well you gotta throw out all your socket 754 gear and get ready to pay for it all over again for socket 939 kit.

          Secondly, it takes away transistors that could be devoted to more cache. Let's face it. No matter how much latency you save, it's a
  • AMD (Score:5, Insightful)

    by chill ( 34294 ) on Thursday October 14, 2004 @06:55PM (#10529968) Journal
    Wasn't that the entire reason behind AMD's use of the P-ratings? That performance was measured in more than just MHz.

    Hell, Intel has spent DECADES convincing the public that MHz is king, and now they are (once again) following AMD's lead.

    HA!

    -Charles
    • That is irrelevant (Score:5, Insightful)

      by megalomang ( 217790 ) on Thursday October 14, 2004 @07:16PM (#10530164)
      Intel kept pushing the MHz race because the public was on board, and because their process technology lead let them maintain a demonstrable lead in that race. They preserved their enormous market share and high margins by spending decades convincing the public that MHz was the key.

      It will be difficult for them to put as much momentum behind another simple metric that the public will understand and by which they can remain the clear leader. They need to come up with another marketing story that pushes yet another metric closely tied to their process superiority. I don't know what this is, but I'm sure they have a new story that we will see when they do their multi-core HT rollout.

      AMD did not exactly "win" simply because they gave up the MHz war so soon. Yes, they were the first, but they didn't have much of a choice since they knew they could not scale to 65nm process geometry like Intel could. They had to alter their architecture earlier. Intel did not, and it worked in their favor for more years.

      It is obvious from the past that Intel's marketing story will never resemble AMD's. They are not "following AMDs lead" unless by that you mean they were able to scale clock speed for a longer time than AMD was.
    • Re:AMD (Score:4, Interesting)

      by mgrassi99 ( 514152 ) on Thursday October 14, 2004 @07:16PM (#10530169)
      Remember, Intel sells products to millions and millions of people, most of whom do not realize that MHz does not equal performance. One of my friends was just complaining the other day that a laptop he wanted with a Pentium M cost more than the Pentium 4 but ran half as fast. Marketing rules all, and when you're trying to crank out a profit, you do what you need to do to sell your product.
  • But what about Moore's law? Is nothing sacred?
    Seriously though, this seems like just what geeks have been saying for about a decade now -- clock speed isn't the be-all-and-end-all of CPU wars. Looks like Intel is agreeing with us!

    --
    Free gMail invites! [slashdot.org] (with references from the folks who're already there)
  • bits (Score:2, Funny)

    by laurent420 ( 711504 )
    32bit is sooooo 1998
  • Finally (Score:3, Interesting)

    by TimmyDee ( 713324 ) on Thursday October 14, 2004 @06:56PM (#10529982) Homepage Journal
    I knew this would catch up with them. I'm glad Intel is off the MHz thing. This doesn't mean the general populace will be more informed when buying a processor, but at least they might be looking at other features that may matter more (e.g. shared video memory, backside cache, etc.). Maybe.
  • by ThePlague ( 30616 ) * on Thursday October 14, 2004 @06:57PM (#10529987)
    Does anyone really care about clock speed anymore? Yes, I know some applications need all the muscle they can get, such as video manipulation and scientific computing. However, it seems the interest in clock speed has waned considerably since the 1 GHz mark was hit. Basically, unless you are doing high end gaming or one of the aforementioned activities, increasing clock speed does very little for you. Consequently, it seems to me that the inevitable increases don't garner the same excitement they once did--going from 133 to 166 MHz was a big deal. Going from 3.0 to 3.8 GHz isn't nearly as useful, though the percentages are the same.
    • CPU performance has become a pissing contest at many places. The latest games do very well with a 2.0 GHz chip; the difference in real-world performance isn't so big compared to a 3.0 GHz chip, especially if you consider the cost difference.

      Cache is a big deal; think of the Duron/Athlon difference. Front-side bus is big; 266 and 533 FSB yield different performance metrics. And of course hard drive speeds haven't changed much since ATA100 in the Pentium 2 days. They've changed, but not as much. That's anothe
    • Pah. Are you saying that running the latest version of Windows (whatever Longhorn/Aero/Avalon become) is something that no one cares about? Some of it will be GPU-intensive, but I suspect a lot of it will also be CPU-intensive. Minimum requirements for developers can be found at:

      http://msdn.microsoft.com/Longhorn/Support/lhdevfaq/

      It's still too early to determine the final hardware requirements. You can run the developer preview on a typical machine from the past two years, although it's a better exper
  • What this means is that Intel will probably be releasing a multi-core HT product in the same market window that the 4 GHz part occupied.

    Isn't this a full quarter in advance of what we expected? Won't this put their release in the same window as AMD's multi-core release?
  • by dsanfte ( 443781 ) on Thursday October 14, 2004 @06:59PM (#10530013) Journal
    You can't increase clock speed indefinitely. There's a fundamental limit we're brushing up against here, and it's called 0.8c.

    Electrons on copper travel 3cm per nanosecond. At four Gigahertz, each clock cycle, the electrons can only travel a theoretical maximum of 0.75cm. I don't even think that covers the diameter of a single core these days.

    You can't turn up the clock much faster than it's already going without getting into nanotechnology. The only viable solution is to optimize chip efficiency through other means, and add more cores to the chip working in parallel.
    • Whoops, got my units wrong. That should be 2.4 cm/ns, and 0.6 cm per cycle at 4 GHz.
      • by drmerope ( 771119 ) on Thursday October 14, 2004 @07:52PM (#10530434)
        You're also off the mark. It is almost certain that there is no electrical pathway that spans the chip without hitting some logic. The number in 90 nm (for best performance) is about 12000λ (λ = 90 nm). Often signals propagate much smaller distances in a cycle. I assure you, in one cycle no one is making a signal traverse the entire core. Modern CPUs are highly pipelined, which is essentially to say that in one clock cycle data is transferred and processed within a very small section of the chip before being passed on to the next stage. This then frees the stage for the next bit of data. See http://en.wikipedia.org/wiki/Pipelining

        As a consequence, what you mention is not the limiting factor. Signals simply do not need to propagate across the chip in one cycle. What has really happened is that the drive current available from each transistor has gotten smaller as the transistor itself has shrunk. The wiring capacitance has remained the same and begun to predominate over the gate capacitance. Thus, making the transistors smaller does not make the circuit faster as it once did. Also, as someone else pointed out, the mobility of electrons in semiconductors is nowhere near the numbers you quote. Electronics simply don't work the way you claim.
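
        As a quick sanity check of the per-cycle wiring figure quoted above (the 12000λ number is the commenter's own; the snippet below only restates it in millimetres):

        # How far a signal is budgeted to travel in one cycle, per the comment's
        # ~12000-lambda figure on a 90 nm process. Purely a unit conversion.
        lam_m = 90e-9              # lambda = 90 nm, in metres
        span_m = 12000 * lam_m     # per-cycle wiring budget from the comment
        print(f"~{span_m * 1e3:.2f} mm per cycle")   # ~1.08 mm

        That is roughly a millimetre, well short of spanning a whole core, which supports the point that signals are not expected to cross the chip in a single cycle.
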
    • Actually your theory has already been destroyed by the folks who got 6GHz [slashdot.org] out of their P4. It was even stable enough to boot XP, so something is flawed with your ideas. So maybe there is a limit, but not anywhere near 4GHz.
      • Was it supercooled by any chance? If it was, the materials may have achieved superconductor-like properties.

        I think the parent was talking about regular conducting materials above room temperature.

        • Superconductor or not - the electrons aren't going to go faster than c...

          A better argument is that the core is designed in a manner that signals don't have to cross it in a single clock cycle.

          Still, we are getting close to the limit of c. Maybe we can double in speed once more, but maybe not...
    • Now are you talking pure copper or what? Copper is the best conductor, certainly. But, in chip manufacturing you don't use pure copper, because copper won't adhere to silicon well. That's why they use an aluminum and copper blend to get the best properties of both Aluminum and Copper. But, the conductivity will go down. So, you really can't even go as fast as you are saying, according to my additional error analysis.
    • But at the same time, you're not necessarily transferring individual electrons in a circuit. The actual net electron drift velocity is much smaller than the speed of an electron. When you call the UK, the electrons (assume a copper wire) are not travelling at or near the speed of light. They are traveling at around 7.2e-3 cm/s, or 72 µm/s. Yet the call goes through almost instantly.....
    • by Sebastopol ( 189276 ) on Thursday October 14, 2004 @07:17PM (#10530172) Homepage
      Common misconception. Electrons don't move at the speed of light. In fact, electrons aren't the primary charge carrier in half the transistors in the chip. Holes are (P vs. N).

      Charge carriers propagate at about the speed of molasses. Go read this website, it is great:

      http://amasci.com/miscon/eleca.html#light

      Here's an excerpt --

      THE "ELECTRICITY" INSIDE OF WIRES MOVES AT THE SPEED OF LIGHT? Wrong.
      In metals, electric current is a flow of electrons. Many books claim that these electrons flow at the speed of light. This is incorrect. Electrons actually flow quite slowly, at speeds on the order of centimeters per minute. And in AC circuits the electrons don't really flow at all, instead they sit in place and vibrate. It's the energy in the circuit which flows fast, not the electrons. Metals are always full of movable electrons, and when the electrons at one point in the circuit are pumped, electrons in the entire loop of the circuit are forced to flow, and energy spreads almost instantly throughout the entire circuit. This happens even though the electrons move very slowly.
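
      A back-of-the-envelope sketch of the drift-velocity point in that excerpt; the 1 A current and 1 mm² cross-section are assumed values picked only to make the arithmetic concrete:

      # Electron drift velocity in copper: v_d = I / (n * A * q).
      # The current and cross-section below are illustrative assumptions.
      n = 8.5e28     # free electrons per cubic metre in copper
      q = 1.602e-19  # electron charge, coulombs
      A = 1e-6       # wire cross-section, square metres (1 mm^2)
      I = 1.0        # current, amperes
      v_d = I / (n * A * q)
      print(f"drift velocity ~ {v_d * 1e6:.0f} micrometres per second")  # tens of um/s

      Tens of micrometres per second: the energy propagates quickly, but the electrons themselves crawl, exactly as the excerpt says.
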
    • They are still managing to squeeze more transistors into less space, remember. It's just that the P4 design took things way too far. This has happened before, most notably with the MIPS R4400.

      You also have to remember that the whole thing is probably clocked at least partially asynchronously. The on-die L2 cache doesn't need to operate at 4 GHz.

      And modern semiconductor manufacturing *is* nanotechnology.

      But, your ending conclusion still holds. Without changes in our understanding of physics and/or sub
  • This is cool, IMHO (Score:3, Insightful)

    by inode_buddha ( 576844 ) on Thursday October 14, 2004 @07:00PM (#10530015) Journal
    Just as a die-hard Linux user, I will say that the MHz wars are *so* 1990; the big questions I ask are "How much cache?" and "How many bits wide?"

    It just kills ppl when they see my Pentium Pro box keeping up with XP on a P4, for desktop stuff.

    • by Anonymous Coward

      >It just kills ppl when they see my Pentium Pro box keeping up

      Do they die of laughter?

    • The problem is what terms we use to compare two processors for the layman. In order for this to work, someone needs to come up with some new metric. I am not claiming that I have the imagination to come up with one. The thing is, there needs to be some way to identify that a chip is faster, otherwise how do we know the new gee-whiz chip really is? I want an easy way, not 'cache, pipeline length, etc.'.
  • Big news (Score:3, Insightful)

    by 14erCleaner ( 745600 ) <FourteenerCleaner@yahoo.com> on Thursday October 14, 2004 @07:00PM (#10530017) Homepage Journal
    Intel says they are going to rely on approaches besides faster clock speed to improve the performance of chips.

    They've been doing this for a long time; basically all this says is that they're attempting to change the focus of their marketing from clock speed to other measures. I predict that consumers won't like it, and they'll go back to cranking up the marketing clock speeds ASAP.

  • Yawwn... (Score:2, Flamebait)

    by Quixote ( 154172 )
    Who needs 4, when we already have 6 [slashdot.org] ? ;-)

    Running 2.8GHz overclocked (aircooled) to 3.4GHz here...

  • ... Intel says they are going to rely on approaches besides faster clock speed to improve the performance of chips.

    About stinkin' time. How much of the typical user's workload is CPU-bound? Let's work on some of the other parts, like throughput to RAM and to disks.

    • Well, I used to do molecular dynamics simulations that were CPU-bound, but that's just the exception that proves the rule.

      I tried to run Seti@home on a 2.8 GHz P4. Wonderful speed, but that damn fan noise (quiet - Ramp Up - REALLY LOUD - Ramp Down - quiet) bugged me, so I got rid of it.

  • It's about time. (Score:4, Interesting)

    by Raptor CK ( 10482 ) on Thursday October 14, 2004 @07:06PM (#10530081) Journal
    Since my 3-year-old PC died, I replaced the 1.4 GHz Athlon (T-bird) with the Socket 754 Sempron 3100+.

    Same RAM, same disk, same video, but a new motherboard.

    I *feel* like I'm getting more than a 28% speed boost from it, so it's clearly not just the clock speed that's doing it. Just cranking the clock was never the right idea, and I'm glad to see that they're walking away from that.

    Now, if we can just get a core like the Pentium M, but for desktops, then maybe we'll see some real competition.

  • by HiThere ( 15173 ) * <charleshixsn@@@earthlink...net> on Thursday October 14, 2004 @07:08PM (#10530111)
    Multiple cores (i.e. parallel processing) are clearly the correct approach. The only fly in the ointment is the few software packages that charge on a per-CPU basis and count each core separately.

    Ouch!
  • This sounds like a score for the good guys, with Intel finally realizing what others (like AMD) have realized a lot earlier.

    But...

    Let's see what is actually going to happen. There are plenty of previous examples of Intel changing direction, and it is not always for obvious reasons. Remember Slot 1 and Slot 2, which Intel praised as a superior way to interface CPUs to motherboards as opposed to sockets? When it all came down to it, it was nothing but a stunt to try to make life harder for competitors.

    Could
    • My understanding is that Slot 1 and 2 were designed to keep the cache on the same package, since they couldn't fit 256 KB and 512 KB on a Socket 7/8-sized package along with the SSE/MMX instructions. That way you didn't get asshole retailers shipping expensive processors with no L2/L3 cache, leaving the customers in the lurch.
  • by KrackHouse ( 628313 ) on Thursday October 14, 2004 @07:11PM (#10530128) Homepage
    ...of Microsoft realizing it had missed the boat with the Internet back in the '90s. Let's hope the paranoid play fair.
  • Strange... (Score:3, Interesting)

    by Fnkmaster ( 89084 ) * on Thursday October 14, 2004 @07:15PM (#10530154)

    Intel says they are going to rely on approaches besides faster clock speed to improve the performance of chips


    Strange, I thought the point of the big numbers was to sell more chips, not to make them faster. Wasn't part of the reason that Intel made the P4 pipeline as long as it is so that they could keep cranking the MHz up for a long, long time so they'd have lots of generations of P4 processors to sell? Because I don't think you really need that long a pipeline for purely performance reasons.


    I wonder if AMD's inroads into the 64-bit market have Intel getting a bit scared about the future?

  • Whuzzat? (Score:5, Funny)

    by phillymjs ( 234426 ) <slashdot AT stango DOT org> on Thursday October 14, 2004 @07:16PM (#10530170) Homepage Journal
    Intel spokesman Chuck Mulloy said, "Those are the sort of things where you get more capability out of a processor by designing specific silicon solutions as opposed to just keep turning the clock faster." In the meantime, Intel is planning on releasing a 3.8 GHz chip with 2 MB of cache."

    So to sum up:

    1) We've realized it's dumb to just keep increasing the clock speed.
    2) Buy our new Pentium 4! It's going to have a higher clock speed!

    ~Philly
  • That a 4 GHz-clocked Pentium 4 melted through the motherboard every time they tested it. OK, I'm just kidding, but it's great to see a company so headstrong about the Megahertz myth finally admit that it takes more than just core clock speed to make a computer run faster. With this said, I wonder what surprises we'll see coming out of Intel.
  • Banias for desktops? (Score:4, Interesting)

    by Theovon ( 109752 ) on Thursday October 14, 2004 @07:36PM (#10530308)
    I recall some earlier discussions about how Intel was finally starting to wise up and design processors that are efficient, rather than just raise the clock speed.

    The first incarnation of this is the Banias, also known as the Pentium M. It's basically a P3 pipeline, but with P4 branch prediction (and some other technologies). The P4 has to have very advanced branch prediction in order to even HOPE to get reasonably efficient use of its pipeline. Applying this to the P3's shorter pipeline results in a much higher IPC.

    In other words, something philosophically like the Athlon.

    Since then, I haven't heard anything about it. And then there's this article. Is there any relationship?
    • Dothan (the successor to Banias) is currently in many laptops.

      This [theinquirer.net] is an interesting development... a P-M mobo for desktops. I personally would love one for an SFF box. But Intel says NO to P-Ms in desktops on a large scale. Wouldn't want to cannibalize all those Prescott sales, would we?
  • Eff that! (Score:4, Interesting)

    by Anonymous Coward on Thursday October 14, 2004 @07:48PM (#10530403)
    I just want a desktop Pentium M system, without having to browse some Japanese-only Hitachi site.

    I don't want more power, I want a fast enough machine that runs silently.

    I guess it's my fault for waiting for Intel to provide this instead of just buying a Mac.
  • Anyone know if the 3.8 GHz chip will be for Socket 478, or just for 775? I couldn't find any info on this.
  • Intel is just admitting what the rest of the processor industry has known for years. AMD stopped playing the MHz game with their 64-bit chips. IBM and Sun have had 64-bit chips for years and are already shipping multicore CPUs. Sun has plans for dozens of cores per die. Intel will have to work overtime to catch up with these other companies.
  • by minus23 ( 250338 ) on Thursday October 14, 2004 @08:21PM (#10530653)
    For 3D rendering all you need to do *is* just turn up the clock speed. It doesn't matter how fast the memory bus is... or even how much cache is on a chip beyond a certain minimal level.

    You can build super-cheap (except for processors) computers to use in a render farm (I use LightWave 3D, Modo, and SoftImage XSI)... and hard drive speed / graphics card speed / memory speed / cache on die do nothing to speed up a render once you hit that "Render" button. Sure... SSE extensions and the like do speed it up if the code is optimized... but there isn't really a way to optimize the code for this new direction Intel is going in.
    • For 3D Rendering all you need to do *is* just turn up the clock speed.

      Or increase the number of processors. Turning the clock speed up is turning the heat up; I think that's probably the reason behind this announcement. One hyper-fast processor is not better than 4 medium-speed ones, especially if it draws 2 kW and melts down every time the water-cooling pump drops below 95% speed.

      TWW

  • Not just MHz (Score:5, Informative)

    by mwdmeyer ( 803276 ) on Thursday October 14, 2004 @08:27PM (#10530691) Homepage
    Remember, Intel has done other things to increase speed besides just raising the MHz, such as:
    1) Increasing the front-side bus (in the P4's case 400 -> 533 and now 800 MHz)
    2) Increasing cache (256 -> 512 -> 1024 -> 2048 KB)
    3) SSE 1, 2 and 3
    4) HyperThreading
  • by Ralph Spoilsport ( 673134 ) on Thursday October 14, 2004 @08:42PM (#10530793) Journal
    "the sort of things where you get more capability out of a processor by designing specific silicon solutions as opposed to just keep turning the clock faster."

    But the droids blinkered by Intel FUD put their fingers in their ears, sang "lalalalala", and barked "NO - faster clock speed is a FASTER CHIP!!!"

    Now, suddenly: oOooooo - cycles per second isn't as important!

    Oh well. It will certainly be very interesting to see what Intel does over the next few years.

    Here's an interesting question, related to this topic:

    Assuming they go multicore (like IBM and Power[x] chips) what are the limits involved there? What would logically stop the development of multicore chips from increasing their number of cores?

    And: What next?

    RS

  • Prescott a failure? (Score:3, Interesting)

    by ameoba ( 173803 ) on Thursday October 14, 2004 @08:51PM (#10530840)
    If Intel's primary motivation for going from the Northwood core to the hotter & less efficient Prescott core (longer pipelines mean a Prescott with double the cache of an equally clocked Northwood is actually slower) was that the Pressy would allow them to scale to higher clock speeds than the Northwood would allow, does this make the Prescott a failure?
  • by vincecate ( 741268 ) on Thursday October 14, 2004 @09:13PM (#10530956) Journal
    It seems Intel has plenty of 2.8 and 3 GHz chips, more than they can sell, but very few 3.6 GHz chips. So they have an inventory problem. Once people realize they want the NX-bit for worm protection and 64-bit so they can run the next Windows, this inventory will be nearly worthless.

    Intel released their Q3 results [intel.com] late Tuesday. In their conference call [intel.com] they were evasive about a surprising drop in their tax rate and also about the amount of their inventory writeoff. Intel claimed their inventory was down $43 million to $3.2 billion with an unspecified writeoff amount. Investors were happy to see inventory did not go up again and the stock went up Wednesday. In several [marketwatch.com] different [fool.com] articles [marketwatch.com] people are working out the mystery of the writeoff amount. Normally Intel's "cost of sales" is a steady number. Any writeoff will add to this number. So you can estimate the writeoff just by seeing how much this increased. With this calculation, it seems Intel had a writeoff of $472 million.
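
    A sketch of the estimation method the comment describes, i.e. reading the writeoff out of the jump in cost of sales; both dollar figures below are hypothetical placeholders, not Intel's reported numbers:

    # Estimate the inventory writeoff from the unusual jump in "cost of sales".
    # Both figures are hypothetical placeholders, not Intel's actual numbers.
    expected_cost_of_sales = 4.00e9   # what a "normal" quarter's trend would imply
    reported_cost_of_sales = 4.47e9   # the quarter as reported, writeoff included
    writeoff_estimate = reported_cost_of_sales - expected_cost_of_sales
    print(f"implied writeoff: ~${writeoff_estimate / 1e6:.0f} million")

    Applying the same subtraction to the real filings is how the comment arrives at its ~$472 million estimate.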

  • by Sean Johnson ( 66456 ) on Thursday October 14, 2004 @11:07PM (#10531814)
    Okay, this means it's time for CPU performance increases to take a back seat. Maybe now the rest of the computer can have some time to catch up with the CPU. I am talking about bus and memory bandwidth. This is one hurdle that needs to be overcome.

    Low latency and high bandwidth up the wazoo is one aspect that supercomputers for example have over standard pc components, besides massive parallelism of course.

    It would be cool to see Intel start making inroads from R&D on the memory front. I'm not talking about on-die cache, that is a given. The question to be answered is how to get main memory up to snuff with the rest of the system.
    If the current state of the art in CPU power stagnated from here until 5 or more years from now, it really wouldn't be an issue if the same efforts during that time were put into lower latencies across the whole system architecture itself.

    So what am I saying? The CPU has had enough innovation in its current form. It's time to focus on other lagging components. PCI-X is a step in the right direction, but it is nothing without main memory advances and other mainboard bus architectural improvements.
    • Interesting you would talk about speeding up the rest of the computer because with AMD putting the northbridge memory controller on the CPU itself, the Hypertransport motherboard level data connections, DDR2 system RAM, PCI Express, Serial ATA, and UltraSCSI 320, most of the other components on the computer are also getting quite a bit faster, too. And external connections are getting faster with USB 2.0 and IEEE-1394b becoming increasingly common, too.
  • by Eric Damron ( 553630 ) on Thursday October 14, 2004 @11:37PM (#10532010)
    It seems to me that there has got to be a maximum rate at which we can push the clock.

    I have a 3.2 GHz Pentium 4. How far can light travel in one clock cycle at that speed?

    186000 miles / 3.2 billion is about 3.7 inches isn't it?
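
    The arithmetic checks out; a one-liner sketch to confirm:

    # Distance light travels in vacuum during one cycle of a 3.2 GHz clock.
    c = 2.998e8              # metres per second
    period = 1 / 3.2e9       # seconds per cycle
    d = c * period           # ~0.094 m
    print(f"~{d * 100:.1f} cm per cycle, or ~{d / 0.0254:.1f} inches")  # ~9.4 cm, ~3.7 in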
