


What 1.7GHz Is Like

Beanie writes: "Today Intel announced their 1.7GHz Pentium 4. It's crazy to think about the fact that just one year ago we were breaking the 1GHz barrier and now we're almost up to 2GHz. AnandTech has a review of the Pentium 4 1.7GHz and they compare it to the recently released AMD Athlon 1.33GHz." And Otis_INF writes "Tweakers.net had the oppertunity to run some benchmarks on a system with an Intel Foster CPU on board, placed on an early i860 based board. The complete sneak preview (in english) is here. It smokes the P4 in some benchmarks."
  • by Anonymous Coward
    Hrmm... "Most consumers are idiots" is true. Most computer geeks aren't. And most computer geeks influence a lot of their idiot consumer friends on what to buy and when they buy it. The geeky kid in the store who reads slashdot all day, you know;

    AMD will survive.
  • i mean, since when did it become okay to rewrite what was said, giving it an entirely different meaning

    You know Murphy's Law? It's not what most people think.

    Oh, and Sturgeon said crud, not crap, although the meaning is pretty much captured.
  • ...is all the Mac users who for the past year+ have said that no one needs more than a 400-500MHz machine, and now they get OS X and realize that, because there's practically no hardware Quartz acceleration, every window resize forces the CPU to do all the AA, drop shadows, content updates, etc., and guess what... a 400-500MHz G4 can't handle it!

    in fact, a G4/733 can't handle it. no current Mac is fast enough to handle it. so now Mac users see just how underpowered a 500MHz G4 can be.

    - this is not a troll. I am typing this in OS X on a G4/400. I, however, am not someone who thinks MHz is worthless.
  • by crayz ( 1056 )
    I've used it on machines from a G3/400 to a G4/533, and it is slow on all of them. If you think the speed is fine, your standards are pathetically low.

    Try opening OmniWeb or IE - slower than opening IE on OS 9.

    Try opening a new window in the Finder or OmniWeb. Slower than opening a new window in the Finder or IE in OS 9.

    Try live resizing a window in OS X to near fullscreen - far slower than OS 9, or than Windows, which also does live resizing.

    Try moving or deleting large amounts of files in the Finder. Slower than OS 9 (although if you use the terminal it can be very fast).

    I've been using OS X full-time for the past three weeks. Don't try to tell me it runs fine. Oh yeah, "handle it" - well my 5200/75LC could "handle" Quake, it was just dog slow - just like OS X. I don't consider 1FPS or less on some window resizes in column view "handling it"
  • by crayz ( 1056 )
    When I have just booted into OS X, and have no apps or anything open(other than Apache, which probably is not serving any pages at the time), wtf is the PMT doing not giving the Finder 100% of the CPU for window resizing? What is it doing, saving 75% of the cycles in case I choose to do something else while resizing the window?

    Stop being such an idiot.
  • What a stupid name to use for their new motherboard chipset.

    As if any other names for chipsets would be any better. None of the modern chipset names tell anything to me...

    Okay, I'm a *NIX programmer. I've been using Linux since 1996. But if there's *one* thing that makes me cry mama, it's the kernel compiling.

    "Uh, so, does my machine have i39842309843 and i49284? I don't know, do you?"

    I'm not a hardware geek. "I just bought my new machine from the store." =) I know what Mostech 6510, VIC2, CIA and SID are, but all this modern hardware babble makes me puzzled. Really puzzled.

    (FWIW, it wasn't that bad. I got my kernel to compile back when I got this machine and it works pretty well.)

    Does anyone have "An Idiot's Guide to Latest Achievements in PC Hardware"? =)

  • The i860 was widely used for embedded applications. Its successor, the i960, is still available for Intelligent I/O (I2O) usage.

    The i960 was out a fair number of years before the i860. The i860 was a floating point monster for its day (and the first superscalar I ever used). Oki tried to sell a line of workstations based on it. In fact I think they sold well to NTT, but they didn't sell well in the USA. The only embedded use I know of was in SGI's Reality Engine as geometry engines.

    I had two in my office for about a year.

  • by booch ( 4157 ) <slashdot2010&craigbuchek,com> on Monday April 23, 2001 @10:34AM (#271017) Homepage
    What a stupid name to use for their new motherboard chipset. They had previously used the i860 name for a series of CPUs (which are not compatible with the x86 series). The i860 was widely used for embedded applications. Its successor, the i960, is still available for Intelligent I/O (I2O) usage. Searching for i860 on Intel's web site is really going to be confusing.
  • I got a fun little error message on page 2: "Cannot Connect to MySQL", along with a database server picture captioned "All your database are belong to us."
    Error pages are fun.

    Well done article.
  • Agreed. I have a 500 Celeron that is way faster than I need. The only time I wish it were faster is when I'm doing DivX encoding; then I could finish more than one movie a day.
    Other than that I could be running a 233 and I probably would never know the difference.
    Yes, gamers want the fastest. But damn, if you're into it to the point where you're getting the latest damn CPU every year, gaming is a pretty expensive hobby.
  • I think the problem is that even the current kernel for Linux (2.4.3) and the upcoming Windows XP are not going to take advantage of the longer pipelines and SSE2 instructions on the Pentium 4 out of the box just yet.

    It's only a few high-end applications and high-end games that will use the power of the Pentium 4, and even those are very uncommon nowadays.

    I think people are going to realize that when AMD goes to the Palomino Athlon core, there still will be no advantage to going to Pentium 4.
  • The Blade 100 uses a Sparc IIe, not a Mips.
  • There's Moore's Law as originally stated, and then there's Moore's Law, the marketing strategy of planned obsolescence.
  • by Augusto ( 12068 )
    but don't act like its a game on its own...)

    So, who cares ?!?! It's still a lot of fun, and very popular.
  • Why are there so many AMD zealots around? What the fuck do you think you're supporting? The goodness of mankind, because Intel is somehow evil? AMD is a publicly held corporation just like Intel, and they've made as many hardware mistakes as Intel has in the past (the K6-3, for example). "I switched to AMD because of the FDIV error" or some stupid bullshit just goes to show how easily led you are. If you're going to choose a chip, choose it because you're really getting the most for your dollar. Right now AMD gives you the most for your dollar. So pick it because of that, not because of a floating point bug only 1 in a thousand people ever ran into.
    Technically the P4 is a pretty good design for a processor and would in actuality be a really good processor if Intel had done some pre-release planning. Before the launch of the P4, developers should have been offered P4 reference boxes and a completed 5.0 version of their compiler at very reasonable (VERY LOW) prices. This would have given them lots of time to recompile and optimize their binaries, so when the P4 came out they could offer a P4 upgrade of their applications. This would have provided a better market for the P4 when it was launched. If I'm Joe GraphicsDude working on my aging P2 450 system and Adobe sends out an email saying I can get a P4-optimized set of their graphic design apps for a low low upgrade price, I can take that to Joe PriceWaryEmployer and ask for a small investment to get me more performance and thus more output. The Intel engineers had to stick a simpler FPU on the P4 to remain within their transistor budget, so the FPU performance of the P4 is lower than that of the Athlon. However, if some operations were redone to use SIMD instructions as opposed to FP instructions, they could bypass the crappiness of the FPU and get similar if not better FPU performance out of the P4.
    As explained so many times to so many stupid people, clock speed != performance. The fact that a 1.7GHz P4 competes with a 1.2GHz Athlon Tbird shouldn't really be a point of contention. The 2GHz P4 ought to easily best a 1.33GHz Tbird (assuming the gigahertz-to-gigahertz performance ratio is kept), and the P4 can easily be clocked up to 3GHz while the Athlon line will top out around 1.5GHz. I think the hardcore performance coming out of the P4 core won't be seen until Foster (Xeon) is released at the end of the quarter. The bigger cache with its ultra high bandwidth data channel to the processor will make it a powerhouse for performing lots of the same instructions (ray tracing, for example). The DP and MP versions of Foster will probably make it a killer in the graphic workstation market. One note to developers: do a better fucking job of threading your apps. The Xeon is going to have hardware support for SMT, and the deep pipeline of the P4 makes multiple threads work well, so write your apps to make better use of threading. I know some things are really hard to thread because you can only perform some functions once other functions have completed, but please try! Anyone with more than one processor in their system can attest to how rarely a program effectively uses both processors. The new Xeons will have lots of bandwidth and a deep pipeline; use them!
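The "thread your apps" advice above can be sketched in a few lines. This is a minimal illustration (Python, chosen purely for brevity; the `render_tile` function and its workload are invented) of farming independent work units out to a pool of worker threads:

```python
# Minimal sketch of splitting independent work across worker threads.
# (Illustrative only: in CPython, CPU-bound work like this is better
# served by multiprocessing because of the global interpreter lock;
# compiled languages map threads directly onto SMP/SMT hardware.)
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile):
    # Stand-in for an independent unit of work, e.g. one ray-traced tile.
    return sum(i * i for i in range(tile * 1000))

with ThreadPoolExecutor(max_workers=4) as pool:
    # Tiles have no dependencies on each other, so they can run in parallel.
    results = list(pool.map(render_tile, range(8)))
```

The hard part, as the comment says, is identifying units of work with no dependencies between them; anything that must wait on an earlier result cannot be farmed out this way.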
  • I'm not being zealous for any particular chipset or processor. Trying to attack my argument on grounds of being "typical and transparent" is ludicrous. How about some factual basis for why my argument isn't sound? You'd be hard pressed to come up with a decent argument, given your apparently infantile reasoning ability. The K6-3 was a major blunder for AMD because they lost money on its production. They had a low production volume, and the on-die L2 cache only added to the cost in silicon. Three cache layers, the slowest of which runs at the speed of the memory clock? That's not what I call a great idea. The K6-3's performance was NOT on par with that of a P3 in any sort of test you could run. They also timed the release of the K6-3 badly compared to the release of the Athlon. The K6-3 was a competitor for the P2, and a poor one at that, which was too little too late. The Athlon, however, was released concurrently with the P3 to act as a direct competitor for it. My friend's Athlon 500 tops my P3 500 (Katmai) in 3DMark by a couple points with really similar hardware, which made the Athlon a nice choice for people interested in the most bang for their buck. I think the P4 is almost as much of a mistake as the K6-3 was; it costs a lot and performance per clock isn't on par with competing chips. Your comment is all I expect anymore from the ten year olds frequenting slashdot nowadays.
  • by Jethro73 ( 14686 ) on Monday April 23, 2001 @09:56AM (#271026)
    On a related note, AMD is testing a new silicon [theregister.co.uk] that is said to help with the heat issues, which will help their own 1.7 GHz chip [theregister.co.uk].

  • For you trivia buffs out there, Windows NT was originally developed on the Intel i860 before porting to the i386. :-)

    Windows NT Historical Timeline [uwa.edu.au]

    July 1989 - The first bits of NT run for the first time on a system built by the NT team using the Intel i860 processor.

    January 2, 1990 - Bill Gates brings together NT's top designers to discuss the importance of running NT on Intel's 386+ processors and to choose a new RISC processor other than the Intel i860.

  • > One can't help but notice the overwhelming bias against Intel and towards AMD on slashdot, to the point of zealousness

    Yep, some of us are the kind of assholes who expect a product that costs twice as much to actually be better.

    > i.e. only noting the benchmarks where AMD is ahead but pretending not to see the benchmarks where Intel is ahead

    It's duly noted that the P4 creams the T-bird on Q3 Arena. If I ever set up a dedicated Q3 Arena box, I'll keep that in mind. But I'll also keep that 2x price for the processor and memory in mind, so price:performance might keep me from buying a P4 even for my dedicated Q3A box (in the unlikely event I ever build one).

    > but Intel will have much higher clock rates in the same price range, and ultimately, equivalent and/or better performance, particularly over the next 1 to 3 years, as Intel puts out CPUs with humongous clock rates

    Good for them! Maybe I'll buy one in 1 to 3 years, if your prediction pans out.

    More likely I'll be running a 64-bit AMD machine with several gigs of cheap open-architecture memory, though.

  • > the truth is if Intel shipped a computer with a heatsink that doubled as a Grillmaster, I would buy it.

    Problem is, it would only cook meat that you bought from the RAMSTEAK consortium.

  • > The only ones that are applicable for me are a) constant computing (folding@home) and b) gaming; both areas where the P4 excels.

    Actually, several of the review sites are showing that the AMD beats the Intel in about half the gaming tests.

    And that at about half the price, both for processor and for memory, based on what's currently listed on pricewatch.com [pricewatch.com].

    I'm also curious about folding@home. Does it run through a lot of memory? The review sites left me with the impression that the P4 gets its advantage -- when it does get one -- from the bandwidth performance of its horribly expensive RDRAM.

    If folding doesn't use a lot of memory, then the Athlon might actually win. (Either way, it probably wins at performance/price by a large margin.)

    If you happen to have access to both kinds of machine, I'm sure lots of people here would be interested in a folding benchmark, though.

  • what's the difference between a 486DX and a 486SX? intel intentionally fucked up the FPU on the 486SX! yeah, on the original 486SX's, there WAS an FPU, but it was disabled. thanks intel!

    Didn't they actually use the parts with defective FPUs as 486SXs? I would call it recycling. That is very common in the industry. The reason chips overclock so well is that they are mostly made for a high speed and the ones that only work at the slower speed are sold that way for less.

  • Current Stats [gamespy.com]

    And note that the Half-life stat includes CS Pete

  • Why won't AMD have 0.13 by the end of this year? In this business, half a year is a very long time.

    And if it isn't 0.13 micron technology, they probably will have access to other improvements. As was said before in this discussion, desktop PCs don't _need_ more speed. It's always handy, but for most tasks the things are fast enough.

    For servers more power is always useful. But then again, quality and reliability are too. I hate to think of the time a machine has to have multiple processors, just to keep it stable. For that's where we're heading, if we keep pushing

  • by Pengo ( 28814 ) on Monday April 23, 2001 @10:27AM (#271034) Journal

    I have a G4 466 (OSX), a dual-CPU Intel 800, an Origin 3000 class server (work, of course).. 4 CPU model.. an SGI Indy (R5000 180MHz), a Sun Blade (500 MHz Mips IIe), an Athlon 700 (Windows 2k), and a Sony Vaio 650MHz PIII laptop.

    I would say the only machines that don't perform in relation to their MHz rating are the SGI Origin 3000 and the Sony Vaio. The SGI is clearly much much much faster than my PIII with equivalent total MHz (~1600)... and the Sony Vaio laptop is much slower than my Apple G4 466.

    But, my Athlon 700 kicks the crap out of my Sun Blade workstation at just about everything I give it. My Dual PIII workstation absolutely blows away my G4 Workstation running OSX at just about everything I do. (Fair enough, my Dual 800 is SCSI, my G4 is not)... and funny enough, my G4 seems to be just a hair slower than my 500 mhz Sun blade workstation.

    Of course, there are no numbers or hard benchmarks; these are just my personal observations on equipment that my company and I own (it's my company) ;-) .. For the most part I believe that the MHz myth is a myth in itself. At the core you will find that the G4 and the Athlon are not that much different. The CISC/RISC architectures made a huge difference a while ago, but now it is all bus and bandwidth. (Hence SGI's performance with ccNUMA.)

    But again, without this myth alive, how would I be able to justify my 2-3x price tag in buying commercial RISC hardware over Intel? ;-) I use SGI kit because it's specialized for our needs in performance and scalability. Could I have done it on Intel with a better architecture? Probably.... ;-)

    Anyway, thats my take..

    geez.. and what a rant.

    Would you like a Python based alternative to PHP/ASP/JSP?
  • Eh... I Care, and so does everyone else.

    Actually, no, not everyone cares (at least not very much). You'll only see a negligible improvement in boot times from a faster CPU; at boot-up, the bottleneck is in the hard disk, not the CPU. As for making MP3s, in my experience the actual MP3 encoding process takes far less time than getting a nice, clean rip from the CD. Again, the bottleneck is not the CPU.

    Sure, compiling and game-playing will see an improvement from a faster CPU, but I personally don't care if a kernel takes 30 seconds less to compile, or that I can play quake at 80 fps rather than 70. I can understand that some people would care about that sort of thing, but not everyone does.

    "Any fool can make a rule, and any fool will mind it."
  • Actually, you're quite wrong. Speaking as a computer engineer (hardware, not software): you are right that RISC chips have fewer and simpler instructions. That means that you typically have to execute more instructions to do something than you would on a CISC chip. While a CISC chip might have a divide opcode, your typical RISC chip probably doesn't.

    However, you are completely wrong that a RISC chip does less per clock cycle. Given a Pentium and a PowerPC at the same clock speed, the PowerPC will average far more instructions per clock cycle (not that you can execute an instruction in one clock cycle to begin with; a typical instruction on any chip lasts several clock cycles, but we're talking about the average work done per clock cycle). Why is it that the RISC chip can do more? Well, since there are far fewer instructions and they are much more basic, you can (and do) execute several instructions in parallel. Now, you can't always do this, because one instruction might be a branch, and you can't necessarily know which way you will branch before you get to the branch test instruction, but in general you can execute several instructions in parallel. Of course, there are clever ways of solving the branch problem too, such as guessing which way it will go, evaluating those instructions, and then discarding the results if you're wrong. In any case, each clock cycle does effectively achieve more on a PowerPC chip than on an Intel chip.

    Now, at current clock speeds, with the PowerPC in the sub-GHz range and the P4 midway between 1 and 2 GHz, your P4 is going to beat your PowerPC anyway.

    As for the P3 and P4 being more RISC-like than the G4, you nearly made me spew chocolate milk out my nose. Get a good book on computer engineering and learn a bare minimum about the subject. Please, if you're going to make totally uninformed comments, put [troll] in the subject.
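The instructions-per-clock argument above boils down to simple arithmetic. A toy model (the IPC figures below are invented for illustration, not measurements of any real chip):

```python
# Effective throughput = clock rate x average instructions per clock (IPC).
# The IPC values here are hypothetical, chosen only to illustrate the point.
def effective_mips(clock_mhz, avg_ipc):
    """Millions of instructions retired per second."""
    return clock_mhz * avg_ipc

# A lower-clocked chip with a higher average IPC can match a
# higher-clocked chip with a lower IPC:
wide_risc = effective_mips(500, 2.0)   # 500 MHz at 2.0 IPC
fast_cisc = effective_mips(1000, 1.0)  # 1 GHz at 1.0 IPC
```

Both hypothetical parts retire the same number of instructions per second, which is the sense in which clock speed alone is not a performance figure.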
  • Well, I'll probably be accused of trolling, but... who cares about the MHz? For the regular PC user, or even gamers or big number crunchers, the MHz doesn't make that much difference. Especially as far as gamers are concerned, the graphics card is far more important. At this point, there isn't really much difference between a few hundred MHz.

    A nice gaming rig still needs a fast processor, or all the power of the graphics card goes to waste. You can put a GeForce3 in a P2-450, but I'll bet you'd see higher framerates with that same video card paired with a top of the line CPU that can run the game and feed the card all the geometry data and such that it needs.

    I do, of course, agree that these clock speed increases are largely unnecessary, at least until Doom 3 is released. Allegedly, counterstrike is the most popular online game now, and it uses a modified Quake 2 engine. Year old machines can run Quake 3 pretty well, so there's no need.

  • My spell check never went so fast!!!

    Next test, loading http://www.userfriendly.org/static.

  • Actually, there were working computers at the time. Stratus, a vendor of ultra-expensive fault-tolerant servers, has used m68k, i860, and PA-RISC processors in successive versions of their servers. The old Stratus XARs were RISC-based, and I believe that they were out in that timeframe. (It was really funny to have a machine running an i860 chip next to another machine with a i960-based network card.)

    Of course, these machines would've been completely unsuitable for development of NT, but they were there.
  • The tweakers.net article had it wrong. The Pentium 4 uses the i850 chipset [intel.com], according to links I followed starting at Intel's front page, then to the Pentium 4 page, and finally to the chipset page.

    The i860 and its successors were interesting chips. As I mentioned in a reply to a post underneath your original post, at the last company I worked for, we had old servers running an i860 as the CPU sitting next to modern servers with the much, much faster i960 chip running their network cards.

    Of course, it was even funnier that my roommate at the time was still using a i386 PC with 8 Megs of RAM. Every day he worked with a network card that had much more processing horsepower and RAM than his PC! I used to tease him about that all the time.
  • It has to calculate every possibility of where you might want to go today, so that when you've decided where you want to go today, it can act upon it within 30 seconds.

    -- Microsoft, where do you want to go today?

  • Having never used this chip, nor having ever owned a Pentium 4 system, I can tell you that this new 1.7 gigahertz chip from Intel isn't worth any money you spend on it. Why?
    • My 500 MHz G4 processor goes faster than it on floating-point tests. These include rendering spreadsheets in Photoshop, generating animations in Photoshop, and word-processing documents in Photoshop.
    • No application currently requires a Pentium 4, nor will it ever.
    • My fundamental right to overclock my computers until they explode is being violated. Intel has probably put in some clock-limiting circuitry. I want this processor to run at 2 GHz. I don't need that power, but I must run my systems as fast as possible.
    • Linux is not optimized for it. Yet.
    That is all.
  • Dear Joe,

    I have installed Windows 2000 Advanced Datacenter on my computer at home. It was highly optimized, but it was not open-source, so I had to delete it and perform a low-level format of my hard drive.

    Five nines reliability? More like nine fives. Microsoft is an evil empire, and their software only runs well 55.5555555% of the time. In fact, I would have to say that I reboot the Microsoft at least 555,555,555 times. Ha ha ha. That's funny.
  • The 1GHz of a year ago was a PIII which was comparable to an AMD Athlon. Today's "almost 2GHz" P4 comes in SLOWER than a 1.33GHz Athlon in most benchmarks, so it's hardly double the speed.

    The P4 may be a good chip one day, but it sure isn't there yet.

  • Which 'barrier' was that?

  • Check out 2cpu.com [2cpu.com] for some hot benchmarks. Interesting stuff on AMD's upcoming 760MP as well.

  • You've become too enthusiastic about benchmarks. These benchmarking suites have gone way too far. The only ones that are applicable for me are a) constant computing (folding@home) and b) gaming; both areas where the P4 excels.

    Now, I'm not an Intel jockey, and I was planning on building a 760MP AMD system later this year, but I will make my decision based on the release and limitation schedules of BOTH companies.

    I don't want to get caught at the top cycle of a platform, so, I will consider how high each processor will scale before leaving me in the dust.

  • We have to remember, though, that a P4 @2GHz is not the same as an Athlon at 2GHz -- or even a P3 at that speed. An early reminder of how clock speeds differ across processor boundaries were the Intelish Z80/808{0,5,8} processors, which took 4 clocks to do a memory fetch, as opposed to processors like the 6502 and the 680{0,9}, which took only one. This meant that the 6000-series processors could do about as much at 1 MHz as an 8000-series processor could do at 4 MHz.

    The situation is a bit more convoluted with the P4, but a 1.7GHz P4 is certainly not the same as a similar speed on a P3. We really should be dropping back to something like the Whetstone/Dhrystone benchmarks. Although they're flawed, at least they're slightly less misleading than the clock speed wars.

    On the other hand, I could just take my 2MHz 6809, wrap it in a divide-by-2048 clock package and proclaim the first 2GHz processor at under $50. (The Marketing department would love me!)
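The 6502-versus-8080-family arithmetic in the comment above is easy to check. A quick sketch, using the clocks-per-fetch figures the comment quotes:

```python
# Memory fetch throughput: clock rate divided by clocks needed per fetch.
def fetches_per_second(clock_hz, clocks_per_fetch):
    return clock_hz // clocks_per_fetch

# Figures as quoted above: Z80-style parts take 4 clocks per memory
# fetch, 6502-style parts take 1.
z80_4mhz = fetches_per_second(4_000_000, 4)
mos6502_1mhz = fetches_per_second(1_000_000, 1)
# Both manage 1,000,000 fetches per second despite a 4x clock-rate gap.
```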

  • (oops -- sorry. That should have been the first 4GHz processor for under $50.)

    Out of curiosity, I hunted down a copy of the Dhrystone benchmarks, and did some comparisons [telus.net]. Dhrystone-to-megahertz ratios go from a low of .02 for an Apple IIe on Dhrystone 1 to a high of 2.35 for my P3/450. (Other than my own benchmarks, the latest results on the table are a couple of years old.)

    In any case, the point is that Dhrystone-to-MHz ratings vary by a factor of about 100 when you go across CPU families (and compilers). Even for relatively recent CPUs the ratio is still 3:1. I think that this supports my contention that clock speed is a really bad way of gauging CPU performance.
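The factor-of-100 claim follows directly from the two ratios the comment quotes. A quick check (numbers taken from the comment itself, not independently verified):

```python
# Dhrystone-per-MHz ratios as quoted in the comment above.
ratios = {
    "Apple IIe (Dhrystone 1)": 0.02,
    "Pentium III/450": 2.35,
}

# Spread between the best and worst ratio across CPU families.
spread = max(ratios.values()) / min(ratios.values())
# spread comes out to roughly 117x: clock speed alone says very little
# about performance once you cross CPU families.
```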

  • I've talked to someone who ran the Windows XP beta, and he says that it's deathly slow. Soon enough, desktop PCs will need more speed, thanks to Microsoft.

    Of course, I won't be running XP, but with KDE/GNOME, I can enjoy the same "benefits" (speed-wise).

    Window Maker.
  • Just to clarify: the instruction set hasn't changed for the Pentium 4. It's still the same tired old x86 CISC ISA from 1980, with a few new added instructions (SSE2) that are of little use to most applications (and users).

  • From the linked article:

    This more perfect crystal structure exhibits reduced phonon-phonon and phonon-electron interactions which increases certain transport properties, such as thermal conductivity. It has been demonstrated in the laboratory that isotopically pure Si-28 has 60 per cent better room temperature thermal conductivity than natural silicon with its three isotopes.

    Hang on, if they have reduced electron-phonon interactions, haven't they increased the mobility and reduced the heat generated? If they have also reduced phonon-phonon interactions, doesn't that hinder the transfer of heat away from the active devices? Am I the only one who thinks this story almost got it right, but just missed it?

  • Did anyone else do a doubletake on this? Or did I just miss a motherboard chipset in the 8xx series that happened to have the same number as Intel's old 32-bit mil-spec CPU line?

    You may have missed it because it's not out yet--the i860 (codename "Colusa") is the chipset for the upcoming 1-2 proc P4 Xeon (codename "Foster"; actually, in an effort to be hugely confusing, the official name is now simply "Xeon") systems which should be released in a couple months or so. FYI, the 4+ proc chipset will be designated the i870.

    And yes, it is a bit odd that Intel is recycling this rather inauspicious brand number. I suppose not many in the industry have a long memory.

    (For more info on the first i860, Paul DeMone had an interesting article [realworldtech.com] at RWT comparing its ambitious but flawed design to Itanium and its potential pitfalls.)
  • There used to be an error page showing some cool cars (so you had something to look at while they fixed the servers); it used to show up when things really got fscked. Don't know if it's still in use.
  • Only 500MHz to go before I get myself a real microwaveable coffee warmer!
  • I remember reading (in the NY Times, I think) that Intel really is starting a price war with AMD.

    What's important is that chip prices are dictated by GHz, not by how fast the chip really is. So if Intel releases a 1.7 GHz chip for $400, they have set a maximum price limit for AMD's CPUs, which run at a lower clock frequency. AMD must then drop the price on their 1.4 GHz chips to $(400-X), since consumers will not pay more for a lower-GHz chip. End result: Intel can maintain a marginally higher profit margin on their 1.7 GHz chip while AMD's profit margin is severely eroded.

  • How soon until manufacturers need to start looking at via effects, trace-to-trace coupling, board materials, trace loss, etc more closely? Things are starting to get pretty fast. There is a point when standard PCB manufacturing techniques go out the window.
  • by selectspec ( 74651 ) on Monday April 23, 2001 @10:21AM (#271058)
    I agree and disagree. Your premise is correct: clock speed is only one factor in performance. The one area of improvement in the P4 is that the instruction set and the instruction pipeline have been vastly enhanced, so most of the features long implemented within, say, a SPARC pipeline are now in a Pentium. However, the P4 suffers from severe memory starvation (worse than a P3) due to the architecture of its caching. This keeps its actual performance down, with frankly zero increase over a P3 at roughly two-thirds the clock speed.

    But you are dead on with the practical point: who cares? As it currently stands, the P4 is a complete waste of money. For the PC, nobody needs a 1.7GHz chip. For a server, that clock speed would be handy, but only with about 1MB of onboard cache.

  • by selectspec ( 74651 ) on Monday April 23, 2001 @10:05AM (#271059)
    The real story [cnet.com] today is not the P4, but the prices. Intel is slashing prices big time, ahead of their .13 micron manufacturing process, which won't be operational until the end of this year. Basically, they are starting a price war with AMD, and it looks like it will be vicious. Why? PC manufacturers can read, and the verdict on the P4's real performance is frankly no good. The P4 has a long way to go before it can be considered an improvement. Of course, consumers are idiots and they buy CPUs based on clock speed alone. However, the PC market is hosed right now. By the time the PC market recovers, AMD will be there with its next-gen chips. This price war is something that Intel can afford. I wonder if AMD can afford it? AMD's manufacturing costs have always been more competitive than Intel's. However, a 50% price reduction has to sting, and AMD won't have .13 micron technology by the end of this year.
  • As blair1q commented, I also did a doubletake on the name. The i860 was a really kinky chip that did some things very fast, though it appeared to be too weird for most compilers to do a good job of letting C language tell you which part of the processor to run your stuff on.
  • I think I'd have to disagree that no application will ever require a P4....

    Death, taxes, and bloatware...

  • And AMD is sitting down there with the lower-priced, faster-performing chip, looking up at Intel and grinning.

    The 64 bit arena is where the real fortunes will be made and lost. Hope AMD doesn't misplay their hand there.

  • by bill.sheehan ( 93856 ) on Monday April 23, 2001 @10:54AM (#271063) Homepage
    The rational part of my mind dismisses this story as trivial. 1.7 GHz. Not far from 2 GHz. But I surely don't need to concern myself with that - my work box is 800 MHz and my big home box is an Athlon 1 GHz. All my software performs quickly and efficiently.

    The irrational part of my mind cries, "Look! 1.7 GHz! That's almost 2 GHz! Blazing speed! Raw, brute, merciless POWER! More!! MORE!!! I'm still not satisfied!!!"

    I can hold off the irrational only so long. When that magical clock hits two gigs, the irrational is going to sneak up behind my rational mind with an icepick.

    "Everything louder than everything else!" -- Meatloaf

  • by VAXman ( 96870 ) on Monday April 23, 2001 @10:33AM (#271064)
    And let me guess - you didn't even bother to read the article, did you? Because this issue is specifically addressed. Here's the relevant section:

    During our tests the Pentium 4 1.7GHz always operated at 1.7GHz and did not fall victim to any clock throttling because of heat. You shouldn't worry about the Pentium 4 dropping its clock speed because of heat unless you are running the processor without a heatsink/fan installed.
  • About a year and a half ago, HP was recruiting Computer Engineers at my school. One of the engineers who worked on the Foster core gave a presentation about the guts of it, and about its history. He told us that basically, because Intel had done such a poor job designing the current core, they went to HP and hired them to completely redesign it. The engineer told us that Intel would be using the Foster core, rather than the now-current one, for the next generation of their chips, from the Celeron-ish processors to the high-end models with lots of cache. Apparently, Intel has a lot more faith in this new design than their own, so maybe we can expect good things and fewer problems in the future.
  • who cares about the MHz?...the MHz doesn't make that much difference...At this point, there isn't really much difference between a few hundred MHz.

    I care about the MHz. With a faster CPU my computer will boot faster, compressing MP3s from the CDs I buy will take less time, compiling my programs will take less time, and as yet another bonus I'll get better framerates in my games.

    Let us not kid ourselves: Everyone wants faster computers.

  • by xant ( 99438 )
    Make sure to panic now. Get it out of the way.
  • Of course, we all really need the 1.33GHz Athlon to be faster in those all-important MS Office and other critical business applications!

    It should also be noted that the Pentium 4 scores better in 3DMark 2001, as well.

    It all boils down to this: The Athlon 1.33GHz may be faster in yesterday's apps in general...but why do we need the old apps to run so damn fast? They're still /more/ than playable/runnable on the Pentium 4 systems. The difference you get with the Pentium 4 is that future SSE2-coded applications will become mainstream shortly (AMD licensed it, as well) as a replacement for the x87 FPU, and performance will take off for future apps, where it is actually needed.

  • How do you explain the FlasK benchmarks from Tom's Hardware, then? A nearly 4x increase by adding SSE2...
  • Actually, to be completely anal, it states the number of transistors on an integrated circuit will double approximately every 18 months.

  • it's also true that "strong encryption" is a matter of crunching more numbers faster than the guys trying to crack your messages.

    No, that's completely wrong. The computation power required for encryption grows linearly or with n*log(n) with the length of the key, while for brute-force cracking it's exponential. "strong encryption" is almost exclusively a matter of finding and using algorithms that maintain this discrepancy.
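
    The asymmetry is easy to see with a toy cost model (my own sketch for illustration, with made-up function names; not anything from the thread):

```python
import math

# Toy model: the cost of *using* a cipher grows roughly n*log(n) in the
# key length n, while a brute-force attack must search a keyspace of 2**n.
def encrypt_cost(n_bits):
    # Legitimate use: roughly n * log2(n) units of work.
    return n_bits * math.log2(n_bits)

def crack_cost(n_bits):
    # Brute force: up to 2**n candidate keys to try.
    return 2 ** n_bits

# Doubling the key length roughly doubles the user's work,
# but it *squares* the attacker's.
for n in (64, 128):
    print(n, round(encrypt_cost(n)), crack_cost(n))
```

    So throwing faster hardware at the attacker barely moves the needle compared to simply using a longer key.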

  • Speed doesn't matter, it's what you do with it that counts.

    Oh yeah, I really hate the way Intel claims in their commercials that their faster processors enable faster Internet access. As if the processor has been the bottleneck these last few years.
  • P4s take up a lot more wafer real estate than Athlons. Add to that Intel's recent 68% drop in profits and you have the makings of a firesale. Surely, AMD's CPUs are a much better deal than Intel's, even at their new low prices. Intel is in the dumps unless and until they successfully move to a .13u process.

    The current P4 is a legacy CPU. It will have a different pin count by the time the leaves fall from the trees this year. Anyone who buys one now is either ignorant or works at a corporation with an IT department.


  • SGIs have very fast buses which is a huge help...

    Not only that, but their filesystem code can keep those buses filled with bits. Their filesystem overhead is the lowest you'll find anywhere.

    It's amazing how fast a machine goes when the OS gets out of the way...

  • It's hard to find how well Moore's law holds up for transistor count. I've only seen it referenced in terms of clock speed. Anybody have some info? Thanks, Mike.
  • Where the id says "More! Grunt! More!" The ego says "I am satisfied. All's right with the world." And the Superego says "Consume. Spend. Buy. Just do it!"

    Freudian struggle? You're more right than you could possibly realize :) The original poster's quote was from Tom Lehrer's "Smut":

    More, more, I'm still not satisfied!

    Stories of tortures

    Used by debauchers

    Lurid, licentious and vile

    Make me smile.

    Novels that pander

    To my taste for candor

    Give me a pleasure sublime.

    Let's face it I love slime!

  • 1.7GHz is probably not much different than 1GHz, or even 500MHz to the average user. The only real difference the average user will notice is when they get the bill. And of course there are bragging rights.
  • SGIs have very fast buses which is a huge help...

    Yep, I agree here, my nifty O2 has a bus that can do about 1.2GB/s...Intel is just now catching up with this machine in memory bandwidth (RDRAM)... The O2 was made in 1996. However, we really need to move away from bus technology (low-latency switching is where it's at) if we want to squash the bottlenecks in the pipeline.. Can anyone say XIO? Of course, I'm sure all the lucrative switching technologies like XIO have been bought by the big companies (and probably rightfully so).. Why does a PC in this age need that kind of performance anyway?
  • Agreed...XFS and GRIO...*drool* I'd like to see some other systems pump through a 1 GigaByte movie in ~50 seconds (uncompressed video here, at full frame rate), while not going above 30% load (keep in mind that we are playing a video, with sound, while this is happening; after all, this is real-world stuff...) At that point, the SCSI bus is the only bottleneck... It's really too bad to see SGI so undervalued on the market, and continuously so... My $0.02
  • As I, as well as the linked articles, pointed out, initial optimizations using SSE2 are not promising. In fact, even applications that have received a good deal of hand-tweaking to use all of the latest instructions run slower than the same app, sans SSE2 of course, on the Athlon 1.33GHz. And the gap would widen even more if, instead of wasting their time with SSE2, they just hand-optimized the program to run faster on the Athlon.
  • by startled ( 144833 ) on Monday April 23, 2001 @10:56AM (#271087)
    First off, I read a good portion of the reviews that I found linked from Blue's News:
    Source Magazine [sourcemagazine.com]
    Target PC [targetpc.com]
    Hardware Unlimited [hardware-unlimited.com]
    Tech Report [tech-report.com]
    Gamer's Depot [gamersdepot.com]

    What's the upshot? That even with each processor's "ideal" system (DDR on the Athlon, RAMBUS on the P4)-- well, the P4 kicks ass at Quake 3: Team Arena. I mean, it's really really good at Quake 3. So good, in fact, that-- well, you won't be running anything else, I hope?

    Because in almost every other app, the cheaper Athlon 1.2 equals or outperforms the P4. That even includes apps such as POVRay that did some early optimizations for the P4's extended instructions. I recommend reading the Tech Report's overview if you're interested in that; they have more details on exactly which instructions were used, and the current state of Intel's compilers for the chip.

    Keep in mind, of course, that the compilers are still a bit beta-ish-- sometimes they actually make the programs run slower. But they never appeared to actually make it faster than an Athlon 1.2.

    Debate what you will about future extensibility, and so on-- but unless you're going to be playing a whole lot of Quake, if you're looking for a new system you should grab one of those cheap Athlon CPU/Motherboard combos selling for $300 at Fry's.
  • Plus utter confusion when the user notices that it doesn't totally wipe out his neighbor's 1.33GHz AMD, and actually loses in some tests. The salesman told him that more GHz is always better!
  • Why are we going faster? Especially Intel? They need to concentrate on making a better quality chip, not a faster one.

    Remember that company that made inexpensive chips? They thought about making a new chip with a better FPU (among other features). It didn't need to be faster than the competition, just a better quality chip. They called it the Athlon.

    What's funny is that Intel thought they could compete with this better-quality chip by making a faster chip, which they released too early, and had to recall.

    Guess some people don't learn their lesson...
  • by FortKnox ( 169099 ) on Monday April 23, 2001 @09:51AM (#271092) Homepage Journal
    New /. poll:
    How long until the new 1.7GHz gets recalled
    - 1 month
    - 1 week
    - 3 days
    - When cowboy neal gets one
    - it's already recalled.

    I still find it humorous that they compare the 1.7 Intel to the 1.33 AMD...
  • I will tell you what 1.7GHz is like... since this came from Intel it is most likely like a 1.3GHz Thunderbird... =)
  • Moore's Law states that the number of transistors on a processor, not the clock speed, will double approximately every 18 months.

    True, but people have been applying it to raw speed for a couple of decades now, and the law has applied equally well in that manner. (I'm feeling too lazy to look it up at the moment, but I seem to recall that the notorious "Jargon File" updated their definition of Moore's Law to reflect the alternate application of it.)

    CPUs have been close to doubling every year and a half for quite some time now, so it should not be shocking to anybody that we have gone from 1 to 1.7 in a year. That was my point.
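
    The arithmetic checks out, too (my own back-of-the-envelope calculation, not from the post):

```python
import math

# If clocks went from 1.0 GHz to 1.7 GHz in one year, what doubling
# period does steady exponential growth imply?
growth_per_year = 1.7 / 1.0
doubling_period_years = math.log(2) / math.log(growth_per_year)
print(round(doubling_period_years, 2))  # about 1.31 years -- close to the 18-month rule
```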

  • The phrase "Moore's Law" is just a nice example of dry geek humor. An inventor at Intel once predicted that as chip technology develops, the number of transistors that would fit on a chip would probably double every 18 months.

    Since that time, a trend has emerged of chip speeds increasing at a rate of about 2x every year and a half. Since it has held up for so long (far longer than Moore could have expected, and applied in ways that he never intended), we jokingly refer to it as a physical Law of nature, named for the guy who first said it.

    Some people would cite this as an example of a "self-fulfilling prophecy" (everybody agrees that "Moore's Law" is a reasonable expectation, so that's what everybody shoots for to keep up with the competition). Whether that's true or not, I'll let others speculate about. "What if" debates give me a headache.

  • OF COURSE it is slower than OS 9.

    OS X is a multitasking operating system. The old Mac OS was not. That is why the old Mac OS was able to be so fast and responsive, even on really old hardware. Drag the mouse, and that's all your CPU was thinking about: dragging the mouse. Open an application, and you were forced to twiddle your thumbs while the application launched, but 100% of that machine was thinking about launching your application.

    For years, Apple users have been screaming that they wanted preemptive multitasking, even though almost all Macs are used as single-user machines. Well, now you've got it. Apple runs the Apache web server nearly as fast as any other UNIX, but guess what, there's a price for all that power: doing several things at once takes more effort than doing one thing at a time.

    Personally, I don't give a shit if IE or Omniweb takes an extra 10 seconds to launch, because now I can do other things while it is launching, instead of having my system be effectively dead to me while it loads an app from the hard drive.

    But if resizing windows quickly is more important to you than having lots of background processes while you work, then boot to OS 9.1 and stay there. You won't be able to run UNIX apps, and your box won't be much good as a server, but at least you can launch your web browser quickly, which seems to be what really matters to you. I'm not just brushing you off here, I am serious. It really sounds like OS X is not for you at this stage in its development.

  • by Golias ( 176380 ) on Monday April 23, 2001 @12:25PM (#271102)
    I'm sure you meant to rip on Intel a little with that comment, but the truth is if Intel shipped a computer with a heatsink that doubled as a Grillmaster, I would buy it.

    Mmmm... burgers.

  • by Golias ( 176380 ) on Monday April 23, 2001 @09:50AM (#271103)
    It's crazy to think about the fact that just one year ago we were breaking the 1GHz barrier and now we're almost up to 2GHz

    Was progress at the speed of Moore's Law always crazy, or did it just become so today?

  • 1.7GHz? That's really pushing the speed limit, I wonder how often it has to cut down to 850MHz because it gets too hot.
  • by RedWizzard ( 192002 ) on Monday April 23, 2001 @04:54PM (#271108)
    But they never appeared to actually make it faster than an Athlon 1.2.
    That's the interesting thing though. Look at the SYSMark2001 scores on Tom's Hardware [tomshardware.com] and the SYSMark2001 scores on AnandTech [anandtech.com]. The Athlon scores about the same in both (145), but in Tom's review the P4 performs very poorly (115 @ 1.5GHz and 124 @ 1.7GHz) and in Anand's review it scores very well (154 @ 1.5GHz and 167 @ 1.7GHz). That's a 35% difference between the two sites. So what's the difference? Tom used an Asus motherboard and Anand an Intel motherboard; other than that, not much. But then Quake and UT show much the same results on both sites. The lesson is: don't rely on one review site. Still, it seems that you'd want to be very careful if you're buying a P4 for anything other than games. Get the wrong motherboard or maybe the wrong BIOS settings and you'll suffer.
    unless you're going to be playing a whole lot of Quake, if you're looking for a new system you should grab one of those cheap Athlon CPU/Motherboard combos selling for $300 at Fry's.
    Yep, and that's still the case. Even with the radical price cuts, the 1.7GHz P4 is $350, which is still considerably more than the Athlon 1.33GHz. Of course, if you're into games you're generally better off upgrading your graphics card anyway.
  • by gtx ( 204552 )
    moore's law is not tied to clock speed, but rather transistor count.

    "I hope I don't make a mistake and manage to remain a virgin." - Britney Spears
  • you don't find it the least bit convenient that there were enough 486's with defective FPUs?

    "I hope I don't make a mistake and manage to remain a virgin." - Britney Spears
  • by gtx ( 204552 ) on Monday April 23, 2001 @02:09PM (#271113) Homepage
    well, quite bluntly, intel has fucked us over time and time again... let's go for a walk down 'intel memory lane' for a second...

    what's the difference between a 486DX processor and a 487 co-processor? the pin arrangement! that's right, intel realized it would make more money by selling a 486DX as a '487 math coprocessor' even though installing it disables your 486SX. (effectively making your '487 math coprocessor' the main processor)

    what's the difference between a 486DX and a 486SX? intel intentionally fucked up the FPU on the 486SX! yeah, on the original 486SX's, there WAS an FPU, but it was disabled. thanks intel!

    difference between the older celerons and their pentium 2 brothers? nothing! well, except for the fact that intel broke the l2 cache on them so they could sell value chips. the cache was THERE, you just couldn't use it. if you've ever opened a PII cartridge, you'll notice that it's a socket 370 chip on a slocket. surprise...

    the pentium series bugs. F00F!

    intel has driven me out of my mind. i'm an amd convert till they start fucking it up.

    "I hope I don't make a mistake and manage to remain a virgin." - Britney Spears
  • My fundamental right to overclock my computers until they explode is being violated. Intel has probably put in some clock-limiting circuitry. I want this processor to run at 2 GHz. I don't need that power, but I must run my systems as fast as possible.

    There is no such thing as clock limiting circuitry.

    The chip derives its clock from a frequency provided by the motherboard. There is no way it can "know" how fast it is running unless there is some sort of frequency generator in the processor itself, which is fraught with many problems, and will probably never happen.
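
    To illustrate what "derives its clock from the motherboard" means in practice (values are my own illustration, not from the post):

```python
# The core clock is the board-supplied bus frequency times a
# multiplier locked into the CPU; change the bus and the core follows.
fsb_mhz = 100        # base FSB clock on early Pentium 4 boards
multiplier = 17      # locked multiplier for the 1.7 GHz part
core_mhz = fsb_mhz * multiplier
print(core_mhz)      # 1700
```

    Overclockers raise `fsb_mhz`, since the multiplier is fixed at the factory.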

  • Keep your intellectual burps to yourself if you don't know what you're talking about.

    Well, since you seem to know so much about it, and how I am wrong, why don't you enlighten us?

    If you can offer nothing but criticism without explaining what indeed is wrong with my facts, then you are just trolling.

  • of id vs ego (and perhaps a dash of superego as well)

    Where the id says "More! Grunt! More!"
    The ego says "I am satisfied. All's right with the world."
    And the Superego says "Consume. Spend. Buy. Just do it!"

    You know what? Mac people have had to rationalize for far too long. We've had to settle for 400MHz!

    Argh! When will someone come to quench our thirst for raw power? Motorola? IBM? Apple?

    Do not bother to fight the irrational/id. The best you can do is placate it and compromise; promise it a *dual* 2GHz system!

    Geek dating! [bunnyhop.com]
  • crazy to think about the fact that just one year ago we were breaking the 1GHz barrier and now we're almost up to 2GHz.

    Yeah, it's just crazy to think that it's almost like every year to eighteen months processor power doubles. Someone should theorise a rule about it or something!

  • You knew it would happen: Kyle at [H]ard|OCP [hardocp.com] got his hands on one of these and overclocked it [hardocp.com]. And just look at the results [hardocp.com].
  • Fortunately, most college computer labs are filled with virgins that can be used for just this purpose :)
  • Well, I'll probably be accused of trolling, but... who cares about the MHz? For the regular PC user, or even gamers or big number crunchers, the MHz doesn't make that much difference. Especially as far as gamers are concerned, the graphics card is far more important. At this point, there isn't really much difference between a few hundred MHz.

    And of course, RISC chips like the G3, G4, etc. do far more per clock cycle than their Intel/AMD counterparts. At this point Intel's sort of saying "LOOK HOW FAST WE CAN MAKE THAT QUARTZ JIGGLE! OH YEAH!" Who cares, look at the practical side I guess.

    MHz isn't really a milestone anymore because it's not very significant anymore.


  • I think the /. crowd is most pissed off by the fact that Intel's technical decisions are made partly with a view to marketing. Take the "Internet architecture" BS a couple of years back: Intel's own engineers bitched that there is no way to optimize a CPU for the Internet, since net bandwidth, not the CPU, will always be the limiting factor. But we still saw all those idiotic "Intel gives you power for the Internet" commercials. Now we've got a chip that appears to be grossly overpriced for its performance, released now only because of "MHz marketing". I think the P4 has a good architecture which scales like crazy, but why was it released before it reached 2 GHz, when that scalability would be an asset and not a liability? The UID for CPUs didn't help Intel's image either - that pissed off the general public too.

    That said, assuming the P4 continues to play catch-up well, I'll buy one if it offers more value for my money.
  • It's crazy to think about the fact that just one year ago we were breaking the 1GHz barrier and now we're almost up to 2GHz.

    You said it! And it'll be hysteria-inducing when we go from almost 2 GHz to 4 in the next eighteen to twenty-four months!

    Yowza! I'll believe it when I see it!

  • While many people rejoice in the firesale prices on RAM, CPUs, disk drives, and so forth, this bodes not well for the PC industry. These prices are in response to a slowing market for personal computers. Most families that want computers have them, so there are fewer initial-purchase customers. Something like a 400MHz Celeron is just fine for web surfing, word processing, and e-mail, and that's what 95% of the PC users out there are doing, so that is really reducing upgrade sales. The average user is not playing a state-of-the-art first-person shooter, rendering frames for 3D animation, or compiling Linux kernels, so why upgrade?

    In response to this, Intel and AMD (and vendors of other computer components) have slashed their prices. In the short term, this will result in a boost to sales. In the long term, it's a disaster waiting to happen. Many users and businesses that would have waited to upgrade at higher prices will upgrade now due to the rock-bottom prices. As a result, the industry will see a lot of low profit sales in the near future. But what happens in a year or two? Will there be some compelling application that is going to compel the average user or business to splurge on a 4ghz system? I doubt it.

    In the worst-case scenario, this might lead to the failure of either AMD or Intel. Look at what the price wars did to the hard drive industry already. Where is Micropolis, Quantum, or Conner? Western Digital, once the preeminent IDE drive manufacturer, had losses of $7 to $10 million for its most recent quarter. In almost any case, it will be likely to result in higher prices and less competition. And that's probably not good for any of us.

  • If they can't compete then they should die. That's how things work.

    So you think that everyone should sell at a loss until the one with the deepest pockets is left as the sole supplier? Then they can raise their prices to whatever level they choose. That's not good for the consumer or the businesses.

  • Did anyone else do a doubletake on this? Or did I just miss a motherboard chipset in the 8xx series that happened to have the same number as Intel's old 32-bit mil-spec CPU line?

  • by UltraBot2K1 ( 320256 ) on Monday April 23, 2001 @09:53AM (#271165) Homepage Journal
    It's been said before, but Moore's Law states that the number of transistors on a processor, not the clock speed, will double approximately every 18 months.
  • by UltraBot2K1 ( 320256 ) on Monday April 23, 2001 @09:59AM (#271166) Homepage Journal
    In a related story, in an effort to promote their latest 1.7GHz P4, Intel has solicited the endorsement of former boxer George Foreman, and will be giving away a free drip tray and jar of grilling sauce with every P4 purchased.

    In a recent press conference, Intel stated: "Not only is the new Pentium 4 a technological breakthrough in terms of processing performance, but users can cook 4 hamburgers in under 10 minutes on its new larger-sized heatsink"
