Intel Northwood CPU Review

gcshaw2nd writes: "Here it is, the first hands-on review I've seen of Intel's new Northwood chip, running at two gigahertz. It overclocks like a hog, easily to 2.5 GHz."
This discussion has been archived. No new comments can be posted.

  • by Glonk ( 103787 ) on Sunday January 06, 2002 @07:02PM (#2795227) Homepage
    Aside from the meager "5-10%" performance boost per clock that GamePC reports, the new PC1066 RDRAM and 533 MHz FSB coming in a few months offer a "12%" performance boost per clock, when used with the original P4.

    Northwood + 533MHz FSB/PC1066 RDRAM should be quite nifty.

    The PC1066 benchmarks are here [aceshardware.com].

    According to that chart there, PC1066 RDRAM actually has lower latency than PC133 SDRAM. I don't know how accurate that is, but it says PC1066 RDRAM takes 207 cycles for 128 bytes, and PC133 takes 229 cycles (PC800 took 270)?

    Maybe I'm reading that wrong or don't know some specifics about RDRAM architecture, but that sounds nifty...
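The chart's numbers are easy to sanity-check with a quick sketch, under one loud assumption: that the cycle counts are core-clock cycles at a single fixed frequency (here 2.0 GHz, which the chart does not actually state), so they compare directly in wall-clock time.

```python
# Back-of-envelope latency comparison for a 128-byte fetch, assuming the
# chart's figures are core-clock cycles at a hypothetical fixed 2.0 GHz.
CPU_HZ = 2.0e9  # assumed core clock, not stated in the chart

cycles_per_128_bytes = {
    "PC1066 RDRAM": 207,
    "PC133 SDRAM": 229,
    "PC800 RDRAM": 270,
}

for name, cycles in cycles_per_128_bytes.items():
    ns = cycles / CPU_HZ * 1e9
    print(f"{name}: {cycles} cycles ~= {ns:.1f} ns per 128-byte fetch")
```

If the assumption holds, the cycle counts translate directly into a latency ordering, with PC1066 ahead of PC133 and PC800 well behind both.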
  • Is this needed? I mean seriously needed. I saw the benefits a few years ago in getting higher speeds out of Celerons, but mother of God, are you seriously going to notice a difference overclocking chips that are already at 2000 or so MHz? I'm not saying it's a bad idea, but why take life off your CPU(s) when you don't need to? I don't know about most people, but I can't afford a new $800 chip every 4 or 5 months. Hell, I still can't afford to upgrade from my dual Pentium Pros...
    • Re:Overclocking (Score:4, Interesting)

      by BWJones ( 18351 ) on Sunday January 06, 2002 @07:36PM (#2795336) Homepage Journal
      When performing calculations that can take hours or days even, an increase in performance of even 10% can result in significant time/money savings. There are those mid level workstation users (like me) and high end users that can and do need every last bit of performance they can get. At this level, a few hundred $$'s every few months is nothing.

      Hell, just the yearly support costs of a single SGI Octane are such that I could afford to purchase a new Macintosh G4 with a flat panel yearly for what it costs. So buying a new $800 chip twice a year does not even make me blink.
      • by Alien54 ( 180860 )
        Normally I recommend to most people that they purchase one or two levels below the top level. I feel that paying maybe 50% to 100% more for maybe 5 or 10% performance increase is not really worth it, especially when waiting a few months will bring the processor within reasonable cost.

        Otherwise I am spending thousands of extra dollars for the "blessing" of being on the bleeding edge.

        I can see the need to shorten compile times, etc., especially for big projects. But otherwise I look at "good enough."

        • I can see the need to shorten compile times, etc. especially for big projects.

          Compiles of really large projects are disk I/O bound. A faster processor does not really give that much of a boost. Invest in a fast drive and fast controller. It will save more time.
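Whether a given build really is I/O-bound is measurable: compare the CPU time the build's processes consume to the wall-clock time that elapses. A Unix-only sketch (the example command is a placeholder, not any particular build):

```python
# Gauge whether a command is CPU-bound or I/O-bound by comparing the CPU
# time its child processes consume to the wall-clock time elapsed.
# A ratio near 1.0 (or above, on SMP) means the CPU is the bottleneck;
# well below 1.0 means the process mostly waited, e.g. on the disk.
# Unix-only: relies on the resource module.
import resource
import subprocess
import time

def cpu_bound_fraction(cmd):
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    start = time.monotonic()
    subprocess.run(cmd, check=True, capture_output=True)
    wall = time.monotonic() - start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    return cpu / wall

# A pure-CPU workload should score close to 1.0; an I/O-heavy build, much lower.
print(cpu_bound_fraction(["python3", "-c", "sum(x * x for x in range(10**6))"]))
```

Running this with your real build command in place of the example would tell you whether the faster disk or the faster CPU is the better investment.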
      • Re:Overclocking (Score:2, Insightful)

        by adadun ( 267785 )
        When performing calculations that can take hours or days even, an increase in performance of even 10% can result in significant time/money savings.
        I was under the impression that an overclocked CPU usually ran rather flaky with spurious crashes and lockups. Do you really dare to run calculations that take days to complete on an overclocked processor?
        • Burn in programs [demon.co.uk] that can test the entire system for failure (processor errors to memory errors to etcetera) from overclocking are usually run on any machine before it is "blessed" for production work.
        • It depends upon how much you are pushing your hardware. If you have to resort to liquid cooling etc... you are either running Crays at the really high end (some Cray models are cooled internally by a fluorocarbon that actually bathes the CPU's and bus boards believe it or not) or pushing Intel/Motorola/AMD/IBM level stuff to levels I am not comfortable with. Modest overclocking/CPU upgrades can help considerably with reliable performance and no statistical data corruption depending upon the application.

          For instance, I remember when I was an undergraduate, my stock Macintosh IIci would take three days to complete a series of calculations I was working on for a stroke study. Bumping up the performance just a bit by mild overclocking cut about 7 hours off of that calculation. I ran this machine overclocked by about 15% for almost five years afterwards as an image capture/file serving machine before finally retiring it to a friend's son. That kid used it for another two years before giving it to his sister, who finally upgraded a couple of years ago. Last I heard it was still running in overclocked form as a web camera server in a biology lab.
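The burn-in approach mentioned above reduces to: run a deterministic workload repeatedly and verify every result against a reference. A minimal sketch of the idea (not the linked tool; just the principle):

```python
# Minimal burn-in-style correctness check: hash the same fixed buffer over
# and over and compare against the first result. On marginal (e.g.
# aggressively overclocked) hardware, a silent computation error shows up
# as a mismatch long before an outright crash would.
import hashlib

def burn_in(iterations=1000):
    data = bytes(range(256)) * 4096  # 1 MiB of fixed, predictable input
    reference = hashlib.sha256(data).hexdigest()
    for i in range(iterations):
        if hashlib.sha256(data).hexdigest() != reference:
            return f"FAIL at iteration {i}"
    return "PASS"

print(burn_in())
```

Real burn-in suites stress memory, FPU, and cache paths as well; this only exercises one integer-heavy path, so a PASS here is necessary but not sufficient.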
      • Re:Overclocking (Score:3, Insightful)

        by TheLink ( 130905 )
        huh? If you are really doing calculations that cost so much, I think you wouldn't want to overclock. Just buy the fastest, or if already at the fastest, hopefully you can do some parallel processing.

        You won't save money by spending days to get the wrong answer :).

        Only way is if you can certify and test the overclocked chips yourself thoroughly, however that would mean spending more money AND time. And when it comes to time, by the time you're done with those thorough tests, a faster chip may have come out :).

        So it only makes sense in rare cases. Or for overclockers who don't care about correct results.

        If 100% correct results don't matter, might as well ask an experienced person to guess :).
        • See the above reply. As for purchasing the fastest, there are times when the fastest is considerably more expensive and performance approximations can be had by building/modifying lower end stuff. For instance, a SGI Octane runs anywhere from $35,000-$50,000. Going to the next level is considerably more expensive than that. There is some code that is only available for SGI and for that you run the SGI hardware. Other software can be run on cheaper hardware such as Macs or Intel/AMD. The performance of those boxes is typically lower than that of the SGI in some tasks, but can sometimes be overcome by mild overclocking.

          Clustering is another potential solution, but one needs specialized code for most applications and benefits or solutions can often be very problem specific. Clustering code that is easy to use and adaptable to various problems could be incredibly useful here. See http://exodus.physics.ucla.edu/appleseed/appleseed.html for some easy to use clustering solutions on the MacOS. Low cost Linux clustering solutions are also available.
      • When performing calculations that can take hours or days even, an increase in performance of even 10% can result in significant time/money savings. There are those mid level workstation users (like me) and high end users that can and do need every last bit of performance they can get. At this level, a few hundred $$'s every few months is nothing.

        I agree with your VFM argument, but surely, if you're performing calculations that take hours or days (weather forecasts, biotech, scientific research, nuke simulation), they're likely to be important enough that you'd like to have confidence that they're actually correct, rather than save a few bucks/hours and run an overclocked system, right?

        --

    • What proof is there that overclocking lowers the life of the chip beyond its likely gross obsolescence anyways? I have a Celeron "450a" (300 overclocked 50%) that ran fabulously for a couple of years, including increasing the voltage 50%, and it's still working 100% as well as a test machine (despite the fact that the market value of the chip now is in the very low 2 digits).

  • What's the point? (Score:3, Interesting)

    by werve ( 548765 ) on Sunday January 06, 2002 @07:04PM (#2795234)
    Does any one really notice the difference in speed between even 1.7GHz and the 2.0Ghz? I know it will scrape time from a kernel re-compile, but what non-IT consumers care about this? Especially considering you can get a dual 1.3 GHz celeron system for next to nothing.

    I think Intel would make more money by lowering prices even further and offering P4 SMP (non-Xeon) - they'd sell more chips... and make me happier ;-)
    • The point? Hmm, leave it to an IT guy to think only IT guys have use for processor speed.

      Some reasons for those extra 300 MHz: 3D rendering, MP*/QT encoding, video transition rendering, image manipulation (rotate a 20 meg image clockwise 1 degree anyone?), audio DSP, and making Windows not seem sluggish.
      • The point? Hmm, leave it to an IT guy to think only IT guys have use for processor speed.

        3D rendering, MP*/QT encoding, video transition rendering, image manipulation (rotate a 20 meg image clockwise 1 degree anyone?), audio DSP, and making Windows not seem sluggish

        3D rendering -- Speed comes from RAM, video hardware and bus speed, these days, not CPU speed.

        Video -- Hardware video decoders are a must. Why would you want your CPU to grunt over decoding video?!

        audio DSP -- Been done in commodity hardware since the early 90s....

        Making windows seem ... not windows -- Been done by installing Linux since the mid 90s.
    • Taking a chip 0.3 GHz higher than its factory clock will show a difference in speed. And there is a l33t factor involved in this. It's kind of like putting a $5000 stereo system in an $800 car. People like to get the most out of their rigs.
    • Microsoft Windows XP?
    • Re:What's the point? (Score:3, Informative)

      by zmooc ( 33175 )
      You're absolutely right; nobody needs more than 640K RAM either....

      The usage of computers changes along with the possibilities and there's still a lot that's not possible. Think about photo-realistic realtime interactive movies (have you seen the latest Chemical Brothers video-clip "Star Guitar"? THAT's what I want to do in realtime and interactively), multi-track samplers that can do a lot of effects without any latency, predicting the weather more exactly without the use of what we call supercomputers nowadays, SETI, simulations of large neural networks etc. etc. That's why we need the Hz's, not for the stuff we're doing nowadays. As long as I cannot easily create my own Hollywood-production in 16384:1024 with 16-channel sound on my desktop, create the soundtrack for that with a software sampler with professional quality (latency) etc, we're not there yet.

      • by FFFish ( 7567 )
        We all want to do that. But the make-or-break point isn't going to come at the difference between 1.7 and 2.0GHz, or even 1Ghz and 2.5GHz.

        IOW, spending twice as much isn't getting you twice the performance... and it's usually not even getting you a substantially appreciable difference in performance.

        IMO, the bottleneck these days isn't so much with the CPU as the busses.
          The GHz increase that we are seeing with CPUs now has indeed totally left the bus, memory and storage speeds behind, but we'll still need fast CPUs. So indeed, at the moment there's not much use for such fast CPUs since the other components are just too slow, but they will catch up. What I was trying to say is that it's a bit narrow-minded to say that we don't need such fast systems because the applications we use nowadays don't require them. I didn't mean to say that such speeds have any use for the `normal' user nowadays.

          ...and...I'm pretty sure there are numerous applications nowadays that DO benefit from such fast CPUs. For example, lower latency in music applications while being able to use a lot of effects [with changing parameters] in realtime at the same time. Such applications will benefit a lot more from fast CPUs than from faster memory...as long as there's enough cache.

    • by not_cub ( 133206 )
      Does any one really notice the difference in speed between even 1.7GHz and the 2.0Ghz? I know it will scrape time from a kernel re-compile, but what non-IT consumers care about this?
      On the contrary. I think most non-IT consumers will care about it. When I walk into a consumer electronics store to buy, say, a stereo, all the components have little tags on them with meaningless quantifications like "frequency response: 20-25,000 Hz", "power output: 240W". These numbers are all meaningless. They will not have been measured in a meaningful way (most likely the power output is measured across a resistor rather than across the speakers themselves). There is no mention of whether it sounds good qualitatively. Everything in the shop probably sounds terrible, compared to an audiophile hi-fi.

      Similarly, your average punter in the shop on a Saturday to buy a computer because he read so much about the internet, or because he thinks he should get one for his kids, is going to look at the first number past the first bullet point on the thing and buy the one with the biggest number. This is why Intel's policy of cranking up the number as high as possible with the P4, and not worrying about actually making it go faster, is such a good marketing move.

      not_cub
    • Especially considering you can get a dual 1.3 GHz celeron system for next to nothing.

      Where from? I'll take 5!

      Why do I live in NZ :(

    • Re:What's the point? (Score:2, Interesting)

      by archen ( 447353 )
      Personally I really don't care about processor speed any more. My 1.4 GHz Athlon is plenty fast for me. But keep in mind that the processor does a lot more than it used to. Pop open my Pentium 133 and you find a LARGE card for just about everything. Nowadays you get these skimpy little cards that make the main CPU do everything for them.

      But really, if someone gave me the option of a 10Ghz computer, or a computer with twice the bus speed/bandwidth - I'd take the better bus any day.
    • Man, I get so sick of watching IT nerds saying the only use for a faster processor is to 'scrape time from a kernel recompile'....

      If you're like me, and you make music with computers, and try and do it entirely with real time apps such as Reaktor, Max/MSP, Supercollider, software synths in VST/Logic, faster processors make a big difference... as most people these days are limited purely by their processing power.

      If someone gave me some mythical 10 Ghz machine, it would probably only take me a week or two to get used to that being the bottleneck....
    • A number of people I work with have DV video cameras and are buying DVD-R/DVD+RW burners. I don't know of anything that consumes raw cycles like video processing. Even with clean source, it can take 4-5 hours to process a mere 25-30 minutes of video to MPEG2 if you want good video quality (and that's on an Athlon 1.4GHz!)

      In the past 3 months, 4 of the 30 people in my work area have picked up DV cameras and looked at DVD burning their home vids. Every one of them has been greatly disappointed to find that they can't do it with their "old" 800MHz PIII boxen without leaving the job running over night.

      So I guess the point is that you don't need much more power than currently available for raw compiles and such, but you can expect the upcoming flood of DVD burners and DV cameras to push a significant number of people to upgrade.
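The numbers in that anecdote imply roughly 10x real time on the 1.4 GHz Athlon. Under the (rough, stated) assumptions that encoding is purely CPU-bound and scales linearly with clock, other machines can be estimated:

```python
# Rough MPEG-2 encode-time estimate, assuming the job is purely CPU-bound
# and scales linearly with clock speed (both are simplifications).
REF_MHZ = 1400             # the Athlon 1.4 GHz data point above
REF_RATIO = 4.5 * 60 / 27  # ~4.5 hours for ~27 min of video => ~10x realtime

def encode_hours(clip_minutes, mhz):
    return clip_minutes * REF_RATIO * (REF_MHZ / mhz) / 60

print(f"{encode_hours(27, 800):.1f} h on an 800 MHz PIII")
print(f"{encode_hours(27, 2000):.1f} h on a 2.0 GHz Northwood")
```

The linear-scaling assumption flatters the faster chip a bit (memory and disk don't speed up with it), but it explains why the 800 MHz owners end up leaving the job running overnight.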

  • Bah (Score:1, Funny)

    by Anonymous Coward
    Story posted 2 minutes ago, server is hosed already.

    Maybe they need to quit talking about the P4 and pop one in their server already?
  • by tunah ( 530328 ) <sam.krayup@com> on Sunday January 06, 2002 @07:08PM (#2795250) Homepage
    Now I *know* my computer is obsolete: my CPU speed is just the difference between recommended and possible clock speed settings on the new one.
  • Only 2.5 Ghz? (Score:5, Interesting)

    by Deltan ( 217782 ) on Sunday January 06, 2002 @07:09PM (#2795255)
    Some people [amddiyoc.com] have had that little gem up to 3 GHz. Not exactly in English, but a picture says a thousand words.
  • by Horse Cock ( 548609 ) <horsecock_2k2@yahoo.com> on Sunday January 06, 2002 @07:15PM (#2795270)
    Really, does anybody really need a faster processor? They really should concentrate on increasing the size and speed of the data bus, rather than increasing the speed of the processor.
    • Yes, of course we need faster processors. Not only does it bring down the cost of the mainstream processors, but you always need more power. Always!

      The big deal here isn't the 2 GHz anyway. 2 GHz has already been done; the big deal is the new core. The Northwood core is the first Pentium 4 chip to use copper interconnects instead of aluminum, it's much smaller (read: cheaper to make), and it runs cooler (2.0 GHz ran at a 31C maximum temperature; overclocked to 2.5 GHz it was 41C max). And it can clock much higher.

      It's the new core that matters, not the speed (although Northwood is debuting at 2.2 GHz...)
    • by DocSnyder ( 10755 ) on Sunday January 06, 2002 @08:00PM (#2795395)
      When I bought my home workstation about three years ago, the CPU (K6-II/400) was one of the cheapest parts of it. What really made it powerful are the SCSI cache controller and a fast Seagate drive, as well as an adequate (for these days 128 MB were more than enough) amount of RAM. Of course it depends on what you do with your box, but to be honest, most of the time you're waiting for the harddisk, either for loading data or for swapping virtual memory.

      Three years later, the only thing I added was some more RAM, with the rest of my workstation being the same. It is still very usable, and I rarely see the need for a more powerful CPU.

      In contrast, my former office workstation was a P3-800 with 192 MB RAM (some of which was "abused" for graphics), an IDE drive and a one-chip-does-everything Intel i810 on the mainboard. SETI was the only thing it could do faster. On pushing the IDE system or the network, sound playback got distorted, and the X server became quite unresponsive; it even stalled for a few seconds. A compile run made it impossible to do anything else in parallel, so I would have needed two machines - one for compiling and one for the desktop.

      AFAICT, especially on Intel systems, the trend goes towards integrated one-chip-does-everything systems like the i810 and its successors, which can handle everything from graphics to sound to IDE to networking. Of course the Intel people want their customers to come back later, and they save some money by using only one chip instead of several. Most users will think it's the CPU that is too slow... and buy a new 2 GHz monster with another i8xx-crippled mainboard.
      • Now, if you really wanted to get rid of that HD bottleneck, you could get a solid-state drive [cdw.com]. I wish I had that kind of disposable income...
      • It has been said before: there are benchmarks and lies. What we need is a benchmark for system responsiveness, but I haven't seen one yet. A few things make it clear we need a good test for this:

        -There are lots of remarks that the new Linux kernel feels unresponsive, yet in the benchmarks it tests better than ever before. There is a low-latency patch that did not make it in because there is no clear test showing it helps.
        -I know from experience that a PC with 256 MB RAM responds better than a PC with 128 MB RAM (i.e. MS Word starts up faster), but in the tests this is barely visible.
        -A stutter in the sound is written off to bad drivers, but is very annoying. Is there any test for this (other than just listening to an MP3)?
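One crude responsiveness probe along those lines: request a short fixed sleep and measure how late the wakeup actually arrives. This is only a sketch of the idea, not an established benchmark; under heavy background load the overshoot grows even when throughput numbers look unchanged.

```python
# Crude responsiveness probe: ask for a short fixed sleep and record the
# worst-case lateness of the wakeup. Scheduler latency under load shows up
# here even when throughput benchmarks are unaffected.
import time

def worst_wakeup_overshoot(interval_s=0.01, samples=100):
    worst = 0.0
    for _ in range(samples):
        start = time.monotonic()
        time.sleep(interval_s)
        late = (time.monotonic() - start) - interval_s
        worst = max(worst, late)
    return worst

# Run once on an idle machine and once during a compile or big copy,
# then compare the two figures.
print(f"worst overshoot: {worst_wakeup_overshoot() * 1000:.2f} ms")
```

A sound stutter or a stalling X server would correspond to the same effect this measures: the kernel not getting a waiting task back on the CPU promptly.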
  • "It overclocks like a hog..."
    LOL, I've never seen an overclocked animal :)
  • I'm trying to get a visual around "overclocks like a hog" .... What's up with that?
  • Apples for apples, I'm not going to throw away my Athlon!
  • by Anonymous Coward
    Here's a printer-friendly [gamepc.com] version of the article, which is easier on the viewer and the server.
  • Here I am, just coming back from a week's vacation, and they can make 2 GHz chips already? When I left, the best I could get was a 266 MHz!
  • by sh0rtie ( 455432 ) on Sunday January 06, 2002 @07:36PM (#2795337)


    Now even quicker!! This page claims it has a world record:

    3023 MHz! [vector.co.jp]
  • That it is possible to overclock... who could live with a bare 2GHz? Not that I would know, fastest I have accessible is 1GHz. :)

  • by J.D. Hogg ( 545364 ) on Sunday January 06, 2002 @07:43PM (#2795355) Homepage
    Richard, Intel's VP of marketing : John, we need to produce higher gogohertz CPUs, other CPU manufacturers are creaming us right now.
    John, Intel's VP of engineering : You mean "gigahertz" I 'spose : well, we can't, CPU core designs are reaching their limits. Technically, it's not very realistic you know.
    Richard : Who's talking about technical stuff here ? I mean, just crank up the gogohertz man, we need more hype fast.
    John : How about if we designed asynchronous processors instead ? Now *that* would be sexy.
    Richard : You mean hotter than more gogohertz ?
    John : Sure, it would impress the technical crowd, and we'd have a real actual breakthrough in CPU design in less than 5 years. That's pretty hot, I'd say.
    Richard : Yeah, well, tech people are nice, but the Joe Sixpacks who walk into Fry's and buy a new PC, they want more gogohertz.
    John : *sigh* Well, I guess we could double the clock and put a frequency divider inside the CPU ...

    etc ...

  • Something tells me that the nVidia NForce chipset helped out in a few of those benchmarks. Still, the Athlon is quite impressive, and I have heard it will overclock all the way to 2.6 GHz.
  • Are hogs known for their overclocking potential? Have I missed something?
  • Why is it that every story about ever increasing CPU speeds is met by about 50% postings claiming that such speed isn't necessary unless you "want 500 fps in Quake"? I've seen these sorts of posts since back in the BBS days when the new 486 came out, immediately to be met by 50 posts (usually by people who feel a need to justify whatever they own) claiming that "a 386/33 is more power than anyone needs anyways!". Bah.

    There are countless benefits to the increased speed (and of course, as always: once you use a higher-speed system for a while you suddenly notice, clear as a sunny day, that yes, there IS a very noticeable difference, and suddenly that previously adequate machine seems pokey), and if you don't realize what they are then continue using whatever it is you use, but save the "500 fps in Quake" rhetoric (here I am with what would have been a cutting-edge machine one year ago and Operation Flashpoint runs with frame rates in the single digits, yet even still it isn't a fraction as complex of a "world" as it could be if more powerful systems were prevalent).

    • True, but it also seems to me that processor speed is becoming more and more irrelevant. Sure it's faster, but can we even feed the thing anymore? We left the bus speed in the dust, RAM is slowly creaking along, and accessing the hard drive is an absolute catastrophe in terms of performance. I'm sure the same people are bitching for the same reasons they always have. But the way things are today, they're probably more right than they used to be.
  • 2.5? Higher... (Score:3, Informative)

    by Magus311X ( 5823 ) on Sunday January 06, 2002 @07:57PM (#2795388)
    Try 2.8 GHz [vr-zone.com]

    Or why not 3.0 [ocworkbench.com]!

    -----
  • by tcc ( 140386 ) on Sunday January 06, 2002 @08:20PM (#2795446) Homepage Journal
    Also to 2.8 GHz [theinquirer.net]

    But now, a little rant...

    Look at the next and last Alpha that will come out: it will SMOKE at a fraction of the speed the P4 will be at by the end of the year. That's what I call going out with a bang. It's a shame such a technology got obliterated in part because of Intel's marketing muscle; we're going to be set back two years while they catch up on this technology and performance (they've got some Alpha people on their staff now who will teach them how to do more than overclocking and recycling - like putting SMT on chips, for example. That will be one great leap; it's in at least one of the Northwood flavors, I think).

    Watch out for the Hammer when it comes out; it's still going to beat Intel's latest offering MHz per MHz in desktop performance (can't speak too fast about the server side though).

    Look at the PowerPC. As much as I hate Apple's hyping to keep their blinded userbase believing that they hold the only computer that should exist on the surface of the planet, the PowerPC architecture has proven itself to beat the crap out of Intel MHz to MHz, side by side (and no, I am not talking about that "hey, loading an image in Photoshop on the Mac is 2x faster than on a PC, but I won't mention that it's totally unrelated to the CPU and the Mac is running on a SCSI Cheetah while the PC was running a 5400 RPM 5 gig drive"; I am talking about rendering in different cross-platform software like Premiere, Lightwave, Maya, etc).

    This goes without mentioning Sun or MIPS or any other CPUs on the planet that have interesting technologies ASIDE FROM CLOCKSPEED.

    So again, screw the MHz hype. I am a power user, I love doing 3D rendering, I administer a small renderfarm at work, I love raw power; anything that comes out and has a power factor gets my attention. It's nice to see stuff running fast, but I am not impressed at ALL with the MHz hype, especially for the PRICE you have to pay to cover all the media and marketing hype... Intel's hype tax, as I often call it. Also, what you see is the MHz going skyrocket... How much do you think they had to cut in the design so that it stays stable at these speeds? Why do you think the Athlon 4 can't run at 2.2 GHz? Design. Intel had to cut in some places so the CPU could be easily cranked up that much; they had to redesign part of it, and that's why you need SSE2 optimisation and the pipeline for the standard FPU is so bad. It's not because Intel doesn't know how to design an FPU, it's because they HAD to cut on it...

    For the price I paid for my Precision 530 workstation running dual 1.7 GHz P4s, I find a dual Athlon XP or MP 1900+ far more impressive at 1/2 of the price. Heck, you want the cool factor? Get a Dell 8100 with DVD/CD-RW, a GeForce2 Go 32 megs and a 1600x1200 15" LCD screen; now THAT'S a nice little piece of technology. And Quake plays soooo smooth on it... I won't waste my personal money or blast my budget for a chip that can generate small black holes because of its so-great CLOCKSPEED. Gimme raw speed at a decent price. For the price you'd pay for that Northwood alone you could probably get two Athlon XPs and a mainboard that beat the crap out of the P4.

    enuff :)
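The core of that rant is the standard observation that throughput is roughly IPC (instructions per clock) times clock rate, not clock rate alone. A sketch with made-up illustrative IPC figures (these are assumptions, not measured values, chosen only to show the shape of the argument):

```python
# Performance ~ IPC x clock. The IPC values below are illustrative
# assumptions, not measurements; they show how a lower-clocked chip with
# better per-clock efficiency can come out ahead.
chips = {
    "Chip A @ 2000 MHz, low IPC":  (0.9, 2000),
    "Chip B @ 1600 MHz, high IPC": (1.2, 1600),
}

for name, (ipc, mhz) in chips.items():
    print(f"{name}: relative throughput ~= {ipc * mhz:.0f}")
```

With those assumed numbers the 1600 MHz chip wins (1920 vs 1800 relative units), which is exactly the MHz-isn't-everything point the comment is making.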
  • Could you imagine a beowulf cluster of these?

    I really get tired of people who say that.

    D/\ Gooberguy
  • In the article, the tester admits to having the SSE-2 patch installed on all the machines prior to the rendering test in Photoshop. Even with the advantage to the Pentiums, the Athlons still beat them out. Not to mention that the other benchmark tests (e.g. Quake 3) are specially optimized for the Pentium. I give the review credit for giving honest results, but this is far from impressive coming from Intel. It certainly won't make me want to buy a much more expensive Intel-based system over an Athlon for a few FPS in Quake.

    -Lance
    • And check out the quote from the conclusion [gamepc.com]:

      "In reality, the Northwood Pentium 4 is an amazingly fast processor. It's safe to say that as of right now, the Northwood at 2.0 GHz is a faster gaming processor in comparison to the 1.67 GHz (2000+) Athlon XP."

      Now, what makes them say that? In Q3A the Northwood usually leads, but in RTCW, which uses the Q3A engine, the XPs consistently lead.

      WTF gives? Anyone got an answer?
  • by Anonymous Coward
    All the comments here along the lines of, "Does anyone really need this much speed?" (which I agree with, BTW) point out the Big Ugly Problem facing Intel: The industry has hit a plateau in terms of demand for horsepower. In a very short time the PC industry has converted from a "build out" phase to a maintenance phase, and that's very, very bad news for all the CPU companies, particularly Intel.

    Everyone loves to talk about how much longer we can push Moore's Law, but no one seems to want to address the real issue--how much longer will demand for ever-faster PC's be high enough to fuel the ever-more-expensive development of those new CPU's? I think we're about to find out that Moore's Law was subservient to the law of supply and demand all along.

    Intel's other big problem is the IA-64 and its hideous architecture that puts an amazing burden on compiler writers. Even worse, it's more than a little reminiscent of the iAPX 432 fiasco from a bunch of years ago, the last time Intel tried to introduce a spiffy, all-new architecture...

  • by musicmaker ( 30469 ) on Sunday January 06, 2002 @09:34PM (#2795714) Homepage
    Even though it lost every benchmark except the Quake III one (and the 3DMark by a mere 2%), and despite the small fact that DDR SDRAM is barely half the price of P4 RAM and the entire system is not nearly as expensive, they still recommend it for gaming? This is unabashed Intel hype. Not to mention all that wonderful waste in the third ALU, poor design, and abysmal FPU performance. The P4 is a joke. Look at the performance improvements made by AMD when they moved to 0.13 micron: they improved the prefetch and released a truly better CPU. 'Intel - we inch forward - leaps and bounds and we could hurt ourselves!' The P4 has nearly twice the memory bandwidth, >20% higher clock speed, and it's still losing benchmarks! Pathetic - utterly pathetic.

    Sorry - but I'm not buying Intel anytime soon.
  • by Chazmati ( 214538 ) on Sunday January 06, 2002 @10:21PM (#2795895)
    Well, if I got anything out of reading the benchmarks, it's that AMD is dead on with their "Athlon XP 2000+" marketing game. I wasn't sure I liked the idea, but after reading benchmarks where the 1.67 GHz AMD chip ran neck-and-neck with the 2.0 GHz Northwood, I have to concede that it seems accurate.

    Bottom line? This might be healthy competition between AMD and Intel. Let's hope these two continue to push each other higher than they might climb on their own.
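The model-number arithmetic is simple: treat the rating as "the P4 clock this chip keeps pace with", anchored on the review's single data point. This is illustrative only; AMD's actual rating formula is not stated here.

```python
# If a 1.67 GHz Athlon XP runs neck-and-neck with a 2.0 GHz Northwood,
# its implied per-MHz advantage is ~1.2x, and the model number is roughly
# "Athlon clock x advantage". Illustrative arithmetic from one data point,
# not AMD's official formula.
ADVANTAGE = 2000 / 1670  # per-MHz advantage implied by the benchmarks

def model_rating(athlon_mhz):
    return round(athlon_mhz * ADVANTAGE / 100) * 100  # rounded to the nearest 100

print(f"1670 MHz -> {model_rating(1670)}+")
```

Plugging the review's own chip back in reproduces the "2000+" label, which is the sense in which the benchmarks make the naming scheme look accurate.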
  • Hmm, in the old days we used to use empty slots in a 19" rack to slide in a pizza to keep it warm...

    With 2.2 GHz CPUs that all changes, every machine comes with a built in microwave.

    Anyone know what effect the CPU has on 802.11b? It might be tricky with both in the same box???

  • Actually, Tom's Hardware has a review of the 2.2 GHz Northwood and they compare it to the Athlon XP 2000+.

    Looks like they just posted it.

    Here's the link:
    http://www.tomshardware.com/cpu/02q1/020107/index.html
  • If I can't compile that horrendously large prog in 108 minutes instead of 120 minutes.
  • Toms Hardware has just put up their review [tomshardware.com] of the 2.2 GHz P4, and it seems it beats the Athlon XP '2000+'.

    Personally I get the distinct impression Intel is just toying with AMD; they've already demonstrated the Northwood core at 3 GHz, but if they were to release it right now they'd blow AMD away and lose their profit margins from 'early adopters'. It's in their interest to keep AMD going, in much the same way Microsoft kept Apple afloat. If there's a competitor, then you can't be branded a monopoly.

    At the moment, though, I'll just have to get by with my 850 MHz Celeron..
