Intel

Intel's 2.4GHz Pentium 4 Unleashed

EconolineCrush writes: "Intel has released a 2.4GHz version of its Pentium 4 processor, and The Tech Report does an excellent job comparing its performance with previous Pentium 4 processors and the latest in AMD's Athlon XP stable. There's more to this story than just another notch on the MHz pole, as the review showcases some new benchmarks in an already diverse set of tests, and shows the new P4 leveraging impressive performance from an RDRAM-based platform. Incidentally, the slack demand for RDRAM has it almost as cheap as DDR SDRAM."
This discussion has been archived. No new comments can be posted.

  • Please oh please let the next article read:

    "Average number of offspring has decreased to 2.4"
  • by Deag ( 250823 ) on Tuesday April 02, 2002 @05:45PM (#3273022)
    come up with only 243 pins?
    • Break it up, it's easy.

      The four corners make 6x6 pin "squares," that's 36 each, 144 total.

      The four rectangles that connect the corners are 6x14 pins (84 pins), times four make 336.

      144 + 336 = 480. Plus, one corner is keyed, and is missing two pins. Voila! 478.

      Now try counting the pins on a VAX 7000 CPU slot (the pins are in the system, not on the CPU) :o).
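
      A quick sketch checking the pin arithmetic above (the 6x6 corner blocks and 6x14 edge strips are just the breakdown described here, not an official Socket 478 floor plan):

        corners = 4 * (6 * 6)             # four 6x6 corner blocks   -> 144 pins
        edges   = 4 * (6 * 14)            # four 6x14 connecting strips -> 336 pins
        missing = 2                       # the keyed corner drops two pins
        print(corners + edges - missing)  # -> 478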
  • Yippee... (Score:5, Funny)

    by Eryq ( 313869 ) on Tuesday April 02, 2002 @05:46PM (#3273024) Homepage
    Now MSWord can bring up the Paperclip animation even faster...
  • It seems to me that as of late the chipmakers are like "..ohhh look, it's 100MHz faster than the chip we released last month.." I am impressed by the engineering that it takes to do this, don't get me wrong, but until the Sledgehammer or Clawhammer or Itanium come out, does this small clock-speed jump really count as news?
  • by vkg ( 158234 ) on Tuesday April 02, 2002 @05:48PM (#3273037) Homepage
    Interference with the processor...?

    I mean, wouldn't that just suck? Somebody walks into the room with a new Pentium and your network dies????!!!!!
    • it doesn't broadcast

      • Bullshit.

        Intel's P-IV (including 2.4 GHz) datasheet [intel.com] puts the current draw at 49.8 amps @ 1.5 volts. That means nearly 75 watts!

        Couple that amount of power with 478 waveguide-like pins to direct it, and you've got yourself a nice little white-noise broadcasting station. Just for kicks, I'd like to see the performance of an 802.11b PCI card trying to coexist with one of these!

        How long before some clueless induhvidual brings one of these (in a case with a window mod, thus defeating the Faraday-cage effect) to a LAN party? I give it a couple weeks.

        Let's see... P-IV @ 75 watts, vs. 802.11b @ about 1 watt? Which one do you think will win?

        The noise floor for 802.11b is going up a few steps, that's for damn sure.
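
        A quick back-of-the-envelope check of that wattage figure, using only the 49.8 A at 1.5 V numbers quoted from the datasheet above:

          current_a = 49.8               # amps, per the datasheet figure cited above
          voltage_v = 1.5                # volts
          print(current_a * voltage_v)   # -> 74.7 W, i.e. "nearly 75 watts"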

          • AFAIK there are no PCI 802.11b cards. There are PCMCIA 802.11b cards bundled with a PCI > PCMCIA adapter card. Since the PCMCIA cards already have their own Faraday shield and the antenna is outside the computer's chassis, I doubt that there is very much interference in either direction. Also, I don't think that any of those 478 pins actually carry any 2.4 GHz signals, and probably a third of them are power or ground pins.

            • There are several PCI cards out there. Most of them hold a stripped-down PCMCIA card and plug into a PCI board.

            More information here: Netgear [amazon.com] and 3COM [3com.com]. PCI cards are really useful when you don't want to rewire an office to provide someone connectivity at their desktop. OK, I'm just nitpicking. :)

          • by Raetsel ( 34442 )

            First, I never said the pins were carrying 2.4 GHz signals. I said they were "waveguide-like". They will likely facilitate the radiation of some of the ~75 watts dissipated inside the chip package. Simple physics: energy flows from source to sink -- there is less RF energy outside the package than inside, so some of it will leak out. Fact of life. Need to reduce / prevent interference? That's what the grounded metal case is for.

            Second, at 2.4 GHz a signal doesn't follow a wire (or a circuit board trace) like it does at 60 Hz. At 2.4 GHz a wire is more of a 'suggestion' than a 'command'. This is why (radar | microwave ovens | certain satellite communication systems) use waveguides instead of wires. It's also one of the reasons everything isn't running at the same clock speed.

            Third, one of the Ten Commandments of /. -- Thou shalt query Google. [google.com]

            None of these are PCMCIA > PCI adapters, though some of them look like they're using the same innards. I'm not even going to include all the 'Mini-PCI' cards being used in laptops these days. Yes, they all have some shielding. No, it's not as complete as a PCMCIA card -- if I even dare call that complete.

            PCI Cards are installed with the PCB facing in the general direction of the processor (in the ATX spec). I don't know the shielding capabilities of circuit board material, but it sure isn't a solid conductor -- and... many of your traces are exposed to the radiation inside the case. This is where I expect problems and performance degradation to have their roots.

            Perhaps you remember a few years ago when it was trendy to install shielding around your audio card for a greater Signal/Noise Ratio? I saw people use copper flashing (the stuff you use to keep your roof from leaking) to construct a box, doing a very nice soldering job, use stand-offs for installation... all to remove a little static. The whole trick was to construct a Faraday cage that would allow the ISA connector (remember those?) as little clearance as possible, without actually shorting it.

            We may see a resurgence of that technique.

            • I stand corrected on the PCI 802.11b cards.

              As far as waveguides and Faraday shields go, doesn't a waveguide have to have a diameter greater than or equal to the wavelength of the signal it carries (reasonable multiples and fractions may also work)? Similarly, doesn't an opening in a Faraday shield have to be larger than the wavelength of a signal for that signal to get through? Since the wavelength of a 2.4GHz signal is about 5 inches, I don't think it's likely that these processor pins will function as waveguides for it, nor is it likely that any 2.4GHz emissions that make it past the enormous heatsink and the motherboard's ground plane will get through the holes in those shields.

              If I am grossly wrong about any of this please correct me.


              • "Waveguide"

                "Faraday Cage"

                You're catching buzzwords and missing the point. The P-IV processor is packaged in materials not known for their radiation absorption. While the heat spreader is nickel-coated copper, the substrate itself is "Fiber-reinforced resin." (P-IV Datasheet, [intel.com] Page 55)

                Plastic.

                I have never seen a Pentium (MMX | Pro | II | III | IV) use a grounded heatsink, either.

                If you were harboring any illusions that Intel puts shielding in its processors, please check them at the door, thankyouverymuch. That's what the computer case is for.

                If you've ever looked at a Class B (that's home use!) shielded case, you'll see the (unused) external drive bays covered with metal. IBM used to put a very nice braided wire rope gasket on the joints of the PS/1 (among others). You'll also find similar leakage prevention in many rack-mount servers.

                Heck, the PS/1 was in the original Pentium days, when processors were running at 200 MHz -- that means a 1.5-meter (nearly 59-inch!) wavelength! All that shielding effort wasn't just for fun, you know.

                And, since I'm bothering to respond to all this, I might as well make a point about Faraday Cages:

                • Look at your microwave oven. Specifically, the shielding for the window -- or find one that has a window if yours doesn't. The holes are quite small. (The energy that oven puts out is at nearly the same frequency as this version of the P-IV, incidentally.)

                  Now, what if I were to cut a 3-inch hole in that window? It's easily smaller than the 5-inch / 12.5-cm wavelength. By your logic, no radiation will escape. Would you be willing to turn it on and stand directly in front of it for an extended length of time?

                  (Hint: not a good idea.)
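
                For reference, the free-space wavelengths being tossed around in this sub-thread all fall out of lambda = c / f; a quick sketch checking the ~5-inch and 1.5-meter figures quoted above:

                  c = 3.0e8                      # speed of light, m/s
                  for f_hz in (2.4e9, 200e6):    # today's 2.4 GHz parts vs. the old 200 MHz parts
                      wavelength_m = c / f_hz
                      print(f_hz, wavelength_m)  # -> ~0.125 m (about 5 in) and 1.5 m (about 59 in)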

                • While you've provided some interesting practical examples, please explain to me exactly where my misunderstanding about Faraday cages [physlink.com] and waveguides [stanford.edu] lies.

                  As far as I can tell, in order for a waveguide to be functional, it has to have a diameter that is a multiple of the wavelength (I say again, a processor pin won't cut it as a waveguide for 2.4GHz), and Faraday cages are generally effective at blocking wavelengths down to about 10x their aperture size (none of the shields on those 802.11b cards looked like they had gaps >.2 inches).

                  Could you please try a real explanation and not just anecdotes? If there's something I'm missing I really do want to understand it and I'm not just being argumentative.

    • You mean like when I am waving my cell phone and the mouse pointer moves on the screen? These things happen already. This might be a serious concern. I think that the processor might actually generate some noise on this frequency. The problem, though, is that the P4 doesn't work at such a high frequency anyway unless it's utilized at 100%. So most of the time, no problems, and if you want to test it, start kernel compiles. (At this speed, quite a few of them :))
    • How Wireless Networks Work [howstuffworks.com]

      spread spectrum devices are designed to work around interference at specific frequencies. Anyone know if the processor would mess up if not properly shielded?

      metric
    • The question is bandwidth - how much bandwidth of electronic interference does the chip radiate? Probably not a lot.

      Read up on the effects of narrow band transmitters on spread spectrum receivers and vice versa. Typically the frequency-hopping mechanism can avoid interference with narrow band transmitters, and narrow band transmitters typically receive low background noise when adjacent to spread spectrum transceivers. In summary, the two devices can co-exist on the same frequencies and pretty much not interfere with each other.

      This is probably why the USAF claims their AWACS network is unjammable...
  • pushing MHz (Score:2, Interesting)

    by recursiv ( 324497 )
    While astute computer users know that raw MHz does not automatically translate to application/game speed, the typical user does not.

    When AMD broke ahead of Intel in the MHz race, their marketing department was quick to capitalize on this with a media blitz that even included some TV commercials.

    However, now that Intel has once again taken the lead in the MHz race, AMD has retreated to marketing aimed at the knowledgeable and computer-savvy.

    Every unbiased hardware review page has said pretty much the same thing: clock cycle for clock cycle, the AMD is still faster. However, the average computer buyer is still tied down to the "more is better" idea.

    And honestly, that is something that is hard to refute. More RAM is better, bigger HDs are better, bigger monitors/screens are better, faster modems are better...why don't CPUs follow the same rule?

    The answer is a pretty complicated one and to explain that would require some basic knowledge that you just can't squeeze into a 30 second commercial. AMD has made noise about a marketing campaign that will educate the public; however, so far it has been just that: noise.

    • by doooras ( 543177 ) on Tuesday April 02, 2002 @05:53PM (#3273066)
      bigger monitors/screens are better, faster modems are better...why don't CPUs follow the same rule?


      I wouldn't want a 21" CPU

    • Re:pushing MHz (Score:4, Insightful)

      by Darth Maul ( 19860 ) on Tuesday April 02, 2002 @05:59PM (#3273112)
      Same reason people think the "24 valve" emblem on their car makes it "go faster".

      They don't even know what a valve is, let alone what the number of valves represents in engine design, but hey, 24 is more than 16.
      • Re:pushing MHz (Score:2, Insightful)

        by afidel ( 530433 )
        But is that a 24 valve V8 or a 24 valve V6? If it's a V6, then the 16 valve V8 (especially if it's a big block =) will kick its ass!
        • Re:pushing MHz (Score:4, Informative)

          by Zeinfeld ( 263942 ) on Tuesday April 02, 2002 @08:25PM (#3273924) Homepage
          But is that a 24 valve V8 or a 24 valve V6? If it's a V6, then the 16 valve V8 (especially if it's a big block =) will kick its ass!

          Not necessarily. The V6 GTI I bought for the wife creates more horsepower than the majority of US-made SUVs, which are typically based on engines that were originally designed in the 60s. Equally the V8 in my XK8 will easily outperform the V12 engine Jaguar used to use [and still do 20 mpg around town rather than 10].

          What really matters though is the chassis the engine goes in. For example, the GTI will nail any SUV in the street, even if you dropped the Jaguar engine into it. Heck, you could drop the engine out of a Ferrari F40 into a Ford Exploder and the Jag would beat it round any track. To go fast around a circuit, your brakes matter as much as your engine.

          It's pretty much the same when you get to MHz. A 2.4GHz processor will probably go faster than a 2.0GHz processor, all things being equal. However, how much faster is pretty variable, and all things are usually far from equal.

          Unless you have the motherboard and O/S design that will support the beast, you will probably notice about as much improvement from a 2.4GHz processor as from painting a go faster stripe on the box.

          Unfortunately most of the O/S in common use tend to spend a lot of time in unnecessary wait states. They ask a piece of hardware to do something, guess how long it will take, and poll for the result. This isn't the way it should be, but it only takes one badly written driver to stonk the whole machine.

          Of course, back in the days of real operating systems there were these asynchronous service traps...

          The bottleneck in UNIX and Windows is the GUI interface in both cases. The Windows GUI has lots of unnecessary blocking states. X-Windows falls foul of the lousy performance of interprocess communications on most modern UNIX boxes.

          • Equally the V8 in my XK8 will easily outperform the V12 engine Jaguar used to use

            Cool. But I would sure love to have a ride in the XKEE that R&T reviewed about 30 years ago... I've heard good stories about the Jag 3.4 to 4.2 6-cylinder engines, and the thought of two welded end to end is just too fun.

            Unfortunately most of the O/S in common use tend to spend a lot of time in unnecessary wait states.

            That is one of my favorite thought experiments I like to bring up when someone asks how well a dual-CPU system might perform. In general, most people would expect to get 20% to 80% over a single CPU, but in certain cases where the first CPU was stuck in a wait-state swamp, I believe you could see more than double the original performance. Of course, a better solution would be to add a cheap, dedicated microcontroller to stand on top of the polling, but a $2 savings to the card manufacturer is more important than $50 of CPU upgrade to the end user (see: winmodems).

            • Cool. But I would sure love to have a ride in the XKEE that R&T reviewed about 30 years ago... I've heard good stories about the Jag 3.4 to 4.2 6-cylinder engines, and the thought of two welded end to end is just too fun.

              I very much regret not buying one of those instead of my MGB. Although the MGB cost only a fraction of the cost of an XKE ($2K instead of $10K), I have since spent $10K restoring it; the XKE would not have cost much more.

              The main disadvantage of the older cats is that they are about as reliable as a Soyo motherboard overclocked to 3GHz in a sealed biscuit tin running a beta release of Windows 3.0.

              I like to bring up when someone asks how well a dual-CPU system might perform. In general, most people would expect to get 20% to 80% over a single CPU, but in certain cases where the first CPU was stuck in a wait-state swamp, I believe you could see more than double the original performance.

              Exactly, my twin processor 650MHz machine kicks the butt of most single processor machines when it comes to console work. It is not as hot for compilation but I have engineers to do programming for me these days.

      • Re:pushing MHz (Score:3, Insightful)

        by GTRacer ( 234395 )
        Well, in some cases, it can, by increasing the amount of fuel mix/exhaust that can be pushed through the cylinders. Given head design limitations and the need for distinct intake and exhaust valving, more smaller-diameter valves can be beneficial to throttle response, torque peaks and max RPM.

        GTRacer
        - It's true! It says so right here on this cereal box!

    • by pyrros ( 324803 )
      More RAM is better, bigger HDs are better, bigger monitors/screens are better, faster modems are better...why don't CPUs follow the same rule?

      They do. Faster CPU's are better.
    • Re:pushing MHz (Score:3, Insightful)

      by Steveftoth ( 78419 )
      For the past few years, about 4-6 by my estimation, the real bottleneck in all PC systems has been the HD. Most speed problems can easily be solved by getting a HD that spins @ double the speed. Of course this won't make your Quake game faster, or encode MP3s faster, but most of the time the perceived slowness in a computer is due to the HD being slow.
      RAM can help; in fact, I place RAM as the second thing that you should upgrade after a HD, mostly because you don't gain much after you double your RAM capacity in a PC. After about 400 megs of RAM, you really won't see too much improvement in normal usage. (No, editing 100 meg TIFFs in Photoshop/GIMP is not NORMAL, sorry if your camera generates those.)

      Of course, you can throw all these recommendations away if you don't use the PC in a 'normal' environment. Servers, crazy MP3 machines and video toasters won't benefit from the same upgrades as a normal PC.
      • Re:pushing MHz (Score:2, Interesting)

        by wik ( 10258 )
        I'd argue that in the future, it's going to go down even another level in the memory hierarchy: the network.

        I store a lot of my information on remote filesystems (or, yuck, access it through a web browser). How many people use their machines just for email (maybe stored on an IMAP server) or browsing the web? The CPU and even the disk are sitting on their thumbs here. I think that if I finally get one of those palmtop PCs, it's only going to be a remote display for something that is stored/running on another machine, just like how I use my laptop now. Sadly, there is no easy upgrade that will "double your network".

        In the (database) server market, you're going to find a horrible bottleneck at the memory system, outside the L2 (or L3) caches. Disks, fortunately, are an easy problem to solve. Just throw more spindles at the machine and make sure your database is balanced across them. The number of requests you have hitting the machine can hide the latency of each individual disk. The same sort of thing will not help the PC, since just about everything you do on the PC, to first order, is single-threaded and waiting for an IO to complete (e.g. loading the mozilla binary into memory).
      • For the past few years, about 4-6 by my estimation, the real bottleneck in all PC systems has been the HD. Most speed problems can easily be solved by getting a HD that spins @ double the speed.

        For all common PC usage scenarios, I completely disagree! I think you're almost completely wrong.

        The hard drive is not a performance bottleneck of any kind for desktop use, unless you're performing work like video capture/editing, or if you've got a serious RAM deficiency and you're constantly paging memory to/from disk.

        With any kind of modern hard drive, even 5400rpm ones, you've got ideal burst transfer rates of around 50MB/sec, and sustained transfer of around 25-30MB/sec. Even chopping those transfer rates in half to allow for real-world conditions, think about how much data you're moving back and forth from the hard drive. The answer is: not much. Even large applications are typically a few megs in size, and rarely greater than 10-20MB even including all the associated libraries that need to be loaded.

        Additionally, assuming you haven't got a RAM shortage, once the applications are launched, they STAY in RAM. So a slow hard drive would make your applications load more slowly (perhaps by a few seconds), but they'd run just as quickly once they're loaded.

        For most tasks, the hard drive is absolutely not the bottleneck. For a few tasks (games, rendering, scientific apps, kernel recompiles) it's the CPU. For games, it's a combination of the video card and the CPU.

        In a lot of underpowered consumer systems, a lack of RAM is the real killer. In this case, HDD speed *does* come into play since the swap file's constantly being thrashed, but if it's constantly thrashing your rig is gonna be slow even with a very fast HDD.

        The REAL bottleneck in an average desktop PC, though, is the user. Watch the CPU usage... unless you're running a SETI@HOME or something in the background, the CPU is idle about 99.9999% of the time. Most casual users would be unable to tell the difference between a 800mhz with a 5400rpm hdd, and 2.4ghz PC with a 10,000mhz SCSI hdd... aside from the noise, of course. ;-)
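
        A rough sketch of the load-time point above, using only the ballpark sizes and transfer rates from this comment (nothing here is a measured figure):

          app_size_mb    = 20            # large app plus libraries, per the estimate above
          sustained_mb_s = 25            # conservative sustained read for a 5400 rpm drive
          print(app_size_mb / sustained_mb_s)   # -> 0.8 seconds of pure transfer time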
        • with a 10,000mhz SCSI hdd...

          Wow, I didn't know that HDs could spin so fast!
    • Re:pushing MHz (Score:4, Insightful)

      by Sebastopol ( 189276 ) on Tuesday April 02, 2002 @07:59PM (#3273785) Homepage
      The answer is a pretty complicated one and to explain that would require some basic knowledge that you just can't squeeze into a 30 second commercial.

      You have essentially identified the root of many, many problems. For example, in my world, I personally consider these issues to be very important:

      1) Why don't people listen to Ralph Nader?
      2) Why do people listen to Britney Spears?
      3) Why do people eat Vitamin C and Echinacea in massive quantities?
      4) Why do some people believe Creationism belongs in public schools?
      5) Why is Prozac(tm) legal and marijuana illegal?

      The discussion required to analyze these issues lasts longer than 30 seconds, so instead:

      1) 97% of the voting bloc votes republicrat
      2) Britney Spears has sold millions of albums
      3) Herbal remedies run rampant w/nearly zero clinical support
      4) Evolution is marked for extermination by some boards of education
      5) ...i'll quit while i'm ahead...

      Anything that takes longer than 30 seconds to understand is far beyond the Oprah-fried brains of the masses.

      What makes us think the masses would care about facts?

      • I'll agree with most of your points, but...

        3) Why do people eat Vitamin C and Echinacea in massive quantities?

        There's real peer-reviewed science on the benefits of vitamin C. See, for instance, Khaw et al, Lancet 2001; 357 657-63. The authors of this article followed nearly 20000 people for four years, measuring their plasma ascorbic acid (i.e. vitamin C) level. Over four years, the mortality rate of the 20 percent with the highest levels was about half that of the 20 percent with the lowest levels. The probability of this result happening by chance is estimated to be less than 1/10000.

      • 1) People prefer not to have to think and learn anything. This is the primary reason why people are not active in education or government.

        2) People prefer not to have to think and learn anything. See #1. It's pretty well accepted that you'll be even dumber for having listened...nonetheless, zero effort was used to achieve this effect.

        3) Vitamin C is not an herbal remedy and has over 30 years of research which supports its effects. In fact, the only valid question still left that affects its use is the required dosage level. Having said that, most studies indicate that it's somewhat higher than what the FDA pushes. MD Anderson has even done cancer research which used vitamin C (amongst many others) which yielded a 5%-15% higher recovery rate than traditional cancer treatments. This was in the fervor of when the FDA had plans to make vitamins illegal save only for prescription. It's echinacea that has little to no historical support for treatment. Please don't get confused.

        4) Because they have a different view on life...and like most people, feel their view should be supported as well. This often has little to do with being right or wrong.

        5) Given your context, actually both are illegal. Having said that, both can be legal given a valid context and a prescription in hand.

        Anything that takes longer than 30 seconds to understand is far beyond the Oprah-fried brains of the masses.

        What makes us think the masses would care about facts?


        This is about the only thing we seem to agree on; however, I can't stress enough how valid it is. See my answers to points #1 and #2 to support your claim.
  • Another Article (Score:4, Informative)

    by WndrBr3d ( 219963 ) on Tuesday April 02, 2002 @05:50PM (#3273053) Homepage Journal
    Tomshardware [tomshardware.com] has also posted an article [tomshardware.com] today putting it up against the latest Athlon XP.
  • Looking at the benchmarks, although Intel may have the lead in MHz, the Athlon is still keeping up with, if not beating, the P4 in the benchmarks. That is where it counts.
  • by Medievalist ( 16032 ) on Tuesday April 02, 2002 @05:51PM (#3273058)
    Since they only tested with a single OS, and that OS was Windows XP (a fairly new release of a historically unstable operating system, probably rife with performance bottlenecks that are more apparent on some types of hardware than others), these benchmarks are principally useful to Microsoft Windows users.
    It'd be nice to see similar tests with a couple of Linux kernel variants (1.0.x, 2.2.x, 2.5.x) and some BSDs, Solaris, whatever. Just get some heterogeneity in there and see what difference OSes make; hardware vendors are famous for tuning their systems to meet benchmarks, after all.
    --Charlie
    • by MisterBlister ( 539957 ) on Tuesday April 02, 2002 @05:55PM (#3273084) Homepage
      Since they only tested with a single OS, and that OS was Windows XP (a fairly new release of a historically unstable operating system, probably rife with performance bottlenecks that are more apparent on some types of hardware than others), these benchmarks are principally useful to Microsoft Windows users.

      Since Microsoft Windows users are about 90% of the desktop computer using population and about 99.9% of the gaming population (as even Linux users who game tend to have Windows partitions because that's where all the games are) and these benchmarks are primarily focused on gaming...Why should they bother testing non-Windows platforms?

    • Some Linux Benches (Score:4, Informative)

      by TheMatt ( 541854 ) on Tuesday April 02, 2002 @06:03PM (#3273141) Homepage Journal
      There are some benches on *NIX flavors here: link [nthu.edu.tw].

      They aren't the most recent, but they effectively show that for us theoretical chemists, nothing beats P4+RDRAM+ifc for Gaussian98 (the timings are in minutes, not the sad seconds on most sites). Of course, more processors help, but the benchmarks looked at single chip+motherboard.

      • [quoting from their conclusions]
        The Intel Fortran Compiler is able to further optimize the binary of GAUSSIAN 98 compared to PGI Fortran, and invariably provide speed-up for AMD Athlons.

        I found that one particularly interesting.

        Do I understand correctly that using Intel's FORTRAN compiler under Linux provides a speed-up over the Portland Group's FORTRAN compilers for the AMD CPU?

        Sounds to me as if maybe AMD ought to put a few dollars into PGI and into the gcc effort, or are the tricks of the Intel FORTRAN compiler just too expensive to replicate?

        Either that, or Intel needs to put in a "go slow" branch when on the AMD CPU:)

    • Since they only tested with a single OS, and that OS was Windows XP (a fairly new release of a historically unstable operating system, probably rife with performance bottlenecks that are more apparent on some types of hardware than others), these benchmarks are principally useful to Microsoft Windows users.
      It'd be nice to see similar tests with a couple of Linux kernel variants (1.0.x, 2.2.x, 2.5.x) and some BSDs, Solaris, whatever. Just get some heterogeneity in there and see what difference OSes make; hardware vendors are famous for tuning their systems to meet benchmarks, after all.
      --Charlie

      XP is basically Windows 2000 (NT 5.1 vs NT 5.0). Win2k has proved itself to be extremely stable as an OS, and XP is likely no exception.

      Also, gamers usually use windows. It's just that way.
  • by Anonymous Coward
    I've got an AMD XP 1500+ and an Intel 1.13GHz laptop, and both are faster than I need by far. 800MHz to 1GHz is all that anyone needs for standard apps. Hell, if people would focus on improving the existing apps instead of adding more bloatware we'd need half of that. (My "all anyone needs" comment is too similar to the 640K comment from our hero Billy G.)

    Hell, most of the clients of my company have Pentium-class computers and access us via the web. They have no problems outside of bandwidth limitations. Speed is an insignificant issue.
  • A New Benchmark... (Score:3, Informative)

    by CajunArson ( 465943 ) on Tuesday April 02, 2002 @05:54PM (#3273070) Journal
    This is really interesting because it shows that Intel wasn't entirely stupid in choosing RDRAM. The P4 really needs the stuff (and the new 533 MHz FSB is really needed too). Meanwhile, AMD is going to be using HyperTransport to get DDR II to run properly with the chip (DDR 333 is not that great a performer because its 166 MHz base is not synched with the chip's 133 MHz base FSB).

    My rambling point is that the clock speed of the processor is rapidly becoming less of an issue in performance than the chip's ability to move data fast. So, should we start trying to talk about chips in terms of data throughput rates as a better one-line answer to "how fast is it?"

    I know that CPU speed is a very complex and tricky thing to measure, but sometimes it's nice to have a metric that can give you a snapshot of performance. Raw clock speed used to do that, but now maybe we need something different.

    • by taniwha ( 70410 )
      Actually, I'd disagree - the thing they need to do is get rid of the FSB and drive the RAM directly. This would get rid of latency across the bus and remove the serialising nature of having a FSB at all (that way a lot more of the parallelism within the CPU's load/store unit could be made apparent to the memory controller, which would actually improve performance of many-banked architectures like RDRAM).

      In many ways it's easy to go after MHz (in CPUs or memory transfer data rates); Intel's really good at that - it's something their marketing people can sell. But in the real world, average latency is today's memory performance killer.

  • Let's see, my Athlon900, Duron700, PII233, PII300, and P166 are all still chugging along. Does this mean that I am more obsolete than ever?

    I guess I am getting further and further behind the Joneses.

  • Correction (Score:4, Insightful)

    by ziggy_zero ( 462010 ) on Tuesday April 02, 2002 @05:55PM (#3273077)

    "Incidentally, the slack demand for RDRAM has it almost as cheap as DDR SDRAM."

    Correction: The increasing demand for DDR RAM has caused DDR RAM prices to rise dramatically in the past few months, and the prices are approaching those of RDRAM.
  • by cOdEgUru ( 181536 ) on Tuesday April 02, 2002 @05:57PM (#3273093) Homepage Journal
    Spend under $200 and buy a Pentium 4 1.6 Northwood with 512KB cache and overclock it to 2.4GHz with an Abit motherboard... as these guys [hexus.net] did.
  • [H]ocp review (Score:3, Informative)

    by beckett ( 27524 ) on Tuesday April 02, 2002 @05:57PM (#3273095) Homepage Journal
    [H]ardOCP also has an article [hardocp.com] on the P4 2.4 comparing it to the Athlon XP 2100+. Easy-to-understand graphs for those who shun reading, and it's not on fifteen different pages to generate hits.
  • by Sabalon ( 1684 ) on Tuesday April 02, 2002 @06:04PM (#3273146)
    Take the latest from both Intel and AMD

    Run standard stuff on it; AMD moves faster at a much lower MHz.

    Run stuff optimized for the P4 on it; Intel now has the advantage.

    Pay through the nose for Intel's latest and greatest.

    So... whenever one of them releases a chip, it comes down to: do you run something that is Intel-optimized where you would get the performance boost? Also, do you want Intel on Intel, which'll work with 99.9% of stuff out there, or do you want to save a bundle and get AMD on Via/AMD/AliMagic/Whatever and have some possible incompatibilities?
    • Pay through the nose for Intel's latest and greatest.

      Just a little searching ...
      Athlon 2100+ ... ~$241 (on pricewatch.com)
      Pentium 4 2.4GHz ... ~$583 (on pricegrabber.com)

    • AMD's new multiprocessor chipset is very stable, so much so that it pays to pay the $100 premium to get a dual-processor board with it - EVEN if you're only going to put one processor in it. It has turned the AMD Athlon platform from a flaky VIA hell-hole into something like the days of the Intel BX chipset - things just work.

      • nForce is seeming like a nice solid platform these days. I'm using an SiS 735 board here and it's NO trouble whatsoever... the AMD 761/762 (uni/SMP) northbridges are stable.

        If I hear ONE more person say "I'm not buying AMD 'cause VIA chipsets suck", I'm going on a killing spree.. damnit.
      • by T5 ( 308759 )
        Unless you're talking about the USB support, which is broken in the latest AMD 760MPX chipset. Most vendors are shipping a USB PCI card to make up for it, but for some that loss of a PCI slot is very painful.

      • Still bad memories of my ATI card and the AMD-on-AMD setup I had 3-4 years ago (the Shuttle HOT-603 board - the one they wouldn't link on their website for fear of reprisals from Intel).

        Personally I'm waiting for the fixed MPX to come out.

        Via - won't touch :)
  • Of course, my machines do just fine at 200-550MHz.
  • Time for .13 Athlon (Score:3, Informative)

    by jmv ( 93421 ) on Tuesday April 02, 2002 @06:07PM (#3273163) Homepage
    This shows that it's really time for AMD to release Athlon XPs at .13 micron before Intel gets too far ahead of them. From what I understood, .18 micron Athlons are stuck at PR 2100+.
  • by josquint ( 193951 ) on Tuesday April 02, 2002 @06:13PM (#3273202) Homepage
    ...reports of fatal data collisions are up 300% today, due to little 1's and 0's coming down off a 2400MHz processor slamming into 333MHz RAM and careening down to a 33MHz PCI bus... that's GOTTA hurt!

    :-)

  • At the risk of sounding like Bill Gates ("no one will ever need more than 640K of RAM..."), do that many people really care if the MHz line has been pushed forward another couple of yards? In the past it has seemed that the software industry has kept right up with the hardware companies. When they release a new video card they jump on it, a new processor, etc. Now it seems that the hardware companies have gotten so far ahead of the software industry that it's going to take years before they take advantage of this. The only people a processor like this will benefit are those doing serious computations in Photoshop, digital video or Mathematica, and those industry professionals aren't using Pentiums anyway. I'm not really sure what my point really is, just that it seems like this war between AMD and Intel is really pointless. No one is going to need or use this speed for years.
    • Yes, but if we get a few years down the road and don't have processors fast enough to handle our software, hardware developers will be in a crunch. Really, not much harm done by letting the hardware get ahead so we have the technology when we need it, not to mention getting it to work well, instead of having a quickly developed high-tech piece of crap when we suddenly need some extra speed.
    • M$ will find a way to make that shiny new P4 2.4GHz crawl along like a P100. A few more security bugs, a couple hundred more features in Office, a few more annoyances like Messenger, and we're there.
  • by Jethro ( 14165 ) on Tuesday April 02, 2002 @06:24PM (#3273262) Homepage
    I'm about to read the review, but I'm guessing the Intel CPU performed better. Otherwise the headline would have been "AMD Slams Intel Once Again!".


    In order to defeat the lameness filter, I will point out that MP Athlon boards are a lot cheaper than a few months ago, that I want one, and that it's about time to hit Pricewatch.
  • Wow! This is 100 times more MHz than the 486 machine running OpenBSD acting as my home firewall and wireless router (with 1 wireless and 2 Ethernet interfaces).

  • Seen better (Score:3, Interesting)

    by room101 ( 236520 ) on Tuesday April 02, 2002 @07:01PM (#3273501) Homepage
    I didn't really like this review because the number of variables wasn't reduced sufficiently. He compares the older P4s with DDR SDRAM to the new P4 with RDRAM.

    I still don't really know how the new and old P4s compare. For all I know, it might be the memory difference.

    I understand that you probably can't get the new P4s with DDR SDRAM, but he should have used RDRAM on the old ones for the comparison, not DDR SDRAM. Both would have been fine, so you could compare those as well.
  • by TheViffer ( 128272 ) on Tuesday April 02, 2002 @07:06PM (#3273527)
    a) 2.4 GHz
    b) 2.4 Megabit
    c) 2.4 ERA
    d) 2.4 Linux Kernel
    e) Article 2 Section 4 of the US Constitution
    f) 2.4 Cowboy Neals
  • It seems that the only thing the P4 can beat the Athlon on is anything that's memory bandwidth intensive . . . That's the difference between the Content Creation 2001/2002 suites that everyone seems to be completely baffled by - the new version includes apps that are bandwidth limited.

    I'd be interested to see the performance of the Athlon XP if it had access to the same amount of memory bandwidth as the P4. . . I'd be willing to put money on the Athlon coming out on top.

    So, is there a dual-channel DDR chipset for the Athlon? Give the thing 4.2GB/s of memory bandwidth, and watch it kick the P4's arse even more . . .

    himi
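
    For what it's worth, the 4.2GB/s figure above is roughly what dual-channel DDR266 (PC2100) works out to, assuming two 64-bit channels; a quick sketch of the arithmetic:

      bus_clock_hz        = 133e6   # 133 MHz memory bus
      transfers_per_clock = 2       # double data rate
      bytes_per_transfer  = 8       # 64-bit channel
      channels            = 2       # dual channel
      print(bus_clock_hz * transfers_per_clock * bytes_per_transfer * channels / 1e9)  # -> ~4.26 GB/s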
  • by tshak ( 173364 ) on Tuesday April 02, 2002 @08:16PM (#3273879) Homepage
    Why are we comparing a ~$600 chip (P4) to a ~$250 chip (Athlon)? Sure, it's fun for a little ego brawl to see who has the fastest chip on the block, but this has little practical information for the consumer. All this says for Intel is "Hey look, I can build a slightly faster chip for SSE2-optimized apps for 250% more!". I'm not impressed. It's not only the MHz that don't matter; the AMD "model numbers" should be irrelevant too. What really matters is price/performance. I'd rather see a ~$250 Athlon benchmarked against a ~$250 P4. Then simply mention that if you want the fastest P4 offering, you can plunk down $600 for it.

    We don't compare the MHz or model numbers between the GeForce and Radeon video cards - we only compare price and performance. The same should go for CPUs.
  • by Crag ( 18776 ) on Tuesday April 02, 2002 @08:35PM (#3273969)
    This may be redundant since I browse at 4, but I saw no mention in the entire article of the prices of the CPUs and their support hardware.

    Pricewatch doesn't list 2.4GHz P4s yet, but a P4 2.2 mb/cpu combo is $570, and the Athlon 2100 combo is under $300. The fastest Intel mb/cpu combo under $300 listed is 1.9GHz, which can NOT keep up with an Athlon 2100 setup.

    There's certainly more to a purchasing decision than price and performance, and I don't expect every article to cover every angle, but the disparity in price/performance ratios between the companies seems VERY significant to me.

    Perhaps this article is too targeted for gamers. Business and home users will be more concerned with economy, and professional high-performance users (server/workstation/research) will probably spring for dual processors if raw throughput is so important.

    In any case, I look forward to AMD's next moves.
  • Rambus for a reason (Score:2, Informative)

    by jbischof ( 139557 )
    We are all finally learning why Intel chose Rambus; they just maybe should have supported DDR first and weaned us off of DDR, and not the other way around. I remember when Rambus first came out and everyone was preaching DDR and saying that RAMBUS completely sucked. It is frustrating now to see the big effect it has at the higher GHz and watch Intel abandon it because of marketing and ignorance of the general populace.

    Well, let's add another technology to the long list of products that were better than many commonly used products, yet never got significant market share. (BeOS, Alpha processors, etc., etc.)

  • by HiyaPower ( 131263 ) on Tuesday April 02, 2002 @09:27PM (#3274252)
    For that kind of dough, I can roll a dual 2100+ system and run rings around it in most real-life tasks that would require this sort of processor speed (like video encoding).

    For the moment, Intel may even have the highest-performance, lower-priced processor (so as to exclude the Alphas, Itanics, etc.), but on a total price/performance basis, the AMD chips beat them hands down.
    • Of course, an old 486 found in the dumpster has the best possible price/performance, since its price is zero and its performance is non-zero. But nobody cares about price/performance. 99.999% of the market is willing to pay a substantial premium (over a free 486) to get a machine with modern performance, and, moreover, 80% of the market is willing to pay a premium for the Intel brand.
  • Many people are talking about the need for a new killer app to use all this speed. Some people say that killer app has already arrived in the form of video and other signal processing tasks. Ultimately I don't think that is how it works though. I think most of the processing power will just be absorbed by a much more diffuse and amorphous collection of tasks and requirements than just a single "killer app". For example:

    • Windows NT. Many people (and most gamers) are still running Win9x. For a variety of reasons, these people will want to migrate to NT/XP/Happy Meal in the future. That takes processing power, both directly and indirectly (e.g. NT uses more memory, which means the OS has to move around more memory, which means it needs a faster CPU, etcetera).
    • Improved human computer interaction and other "soft" areas such as localization and internationalization. For convenience I'm including things like 200dpi displays and input devices with very high sampling rates/throughput as well as sane error messages and effective automated troubleshooting -- think Clippy, or the IBM effort towards "self managing" systems (if I got the term right).
    • Increased focus on/awareness of security. It is nice if the computer prevents people from tampering with your data by verifying credentials at every step. It also means the computer has to verify credentials at every step.
    • Interpreted applications. Someone described Internet Explorer as an "advanced interpreter" on Slashdot the other day. That is a very accurate characterization. Think also of things like Flash, Java(script), and VB.
    • Bloat (or, what most of you guys would call bloat -- I don't think many of you could or would want to design their own fonts). Think of things like document templates, fonts (and complex font rendering technology) and desktop backgrounds (200dpi desktop images anyone?). Think also of the incorporation of "real world" quantities into software; things like measurements (pixels, inches, cm), "favorites" lists, ISP lists, stock media, etcetera.
    • Backward compatibility, both at the hardware and the software level. This includes thunking layers, virtual machines, emulators, and whatnot. Open source software, incidentally, can avoid some of the cost of backward compatibility, because when you change a piece of software, you can usually simply recompile software which depends on it. It is truly remarkable how much code any random application contains because of the requirement for binary backward compatibility.
    Obviously this is just a very fragmented list and there is a lot of overlap in the things I mentioned as well. Still, you have to ask: why is it that a 2 GHz machine can take anywhere from 15 to 30 seconds just to boot up? That is more than it used to take my CP/M Bondwell to start up WordStar. And that was over fifteen years ago. Just a thought.
  • by Jagasian ( 129329 ) on Tuesday April 02, 2002 @10:51PM (#3274661)
    Just a little competition and we have cheap ultra high performance CPUs! Back in the 80s, no one would dream of computer hardware with such performance.

    One monopoly in the OS market and we have restrictive bloated ultra expensive insecure operating systems! Back in the 80s, I wonder if this is what people were dreaming about...
  • So.... (Score:2, Funny)

    by Ziviyr ( 95582 )
    Who else counted the pins in the picture? :-)
