Pentium IV Problems?

zottl writes: "German tech site has an article about various problems concerning Intel's Pentium IV. It says that the new processors will draw lots of power (66 watts for the 1.4 GHz version), need special copper-core coolers, might need radiation shields over the socket pins for EMC compliance, and will remain expensive for quite some time. It also says that the P4 will only get mass-market appeal with the introduction of the slimmed-down 0.13 micron version. Oh, and best of all, it seems to be slower for certain apps than a P3 at the same MHz. Seems like a repetition of the problems the P6 architecture had when the Pentium Pro was first introduced." Isn't this pretty much what they say about every generation of Intel chips when first released? Anyway, the article is in German, so you'll need to feed it to the fishy until translations crop up.
  • by Anonymous Coward
    This is hardly surprising.

    If Intel had designed this to be cheap and low-power at 0.18 um, it wouldn't be competitive at 0.13 um and below. They have to look forward.

    All x86 chips start out like this. The Pentium Pro did (at 0.5 um, now 0.18 and soon 0.13), the Athlon did (although not as much), and even the old Pentium did.
  • "Apple tried to transition the mass market in their TV ads to thinking in Gigaflops, a better indicator than MHz."


    GFLOPs is a *HORRIBLE* benchmark (at least without a great deal of documentation and explanation accompanying the numbers).

    (1) The vast majority of applications most people use are limited by integer performance, and not floating-point performance.

    (2) It's open to being used as an entirely theoretical benchmark (as Apple uses it) that doesn't reflect performance of an actual application running on an actual system. SPEC may have its problems, but at least it is made up of real applications running on real systems.

    (3) It's subject to double vs. single precision differences; again, Apple touts single-precision numbers and ignores double precision.

    (4) You not only have the standard benchmarking issues across platforms (compilers, L2 cache size, bus speed, memory size and speed, HD size, etc.);
    even on exactly the same system, you can get different GFLOPs numbers depending on what application you use to benchmark (or even things like matrix size in the same application on the same system).
    Don't believe me?
    See: s.html

    Just those 2 URLs by themselves should be enough to convince anyone that GFLOPs (or MFLOPs or TFLOPs) by itself is a worthless measure of performance.
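The matrix-size point in (4) is easy to demonstrate. Here is a rough sketch (assuming NumPy is available; the sizes and rep count are arbitrary) that times the same operation at several matrix sizes and watches the "GFLOPs" figure move on a single machine:

```python
import time
import numpy as np

def measured_gflops(n: int, reps: int = 10) -> float:
    """Time n x n matrix multiplies and report GFLOP/s.
    A dense matmul costs about 2*n**3 floating-point operations."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    dt = time.perf_counter() - t0
    return reps * 2 * n**3 / dt / 1e9

# Same machine, same code -- yet the GFLOPs number shifts with
# problem size as cache and blocking effects kick in:
for n in (64, 256, 512):
    print(n, round(measured_gflops(n), 2))
```

The absolute numbers depend entirely on the machine; the point is only that a single "GFLOPs" figure hides all of that.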
  • by Anonymous Coward
    I knew it! That is why you should run Linux instead!!!
    oh wait... err.
  • The reason pedophilia is frowned upon in today's world is because children need to reach a certain age before they become sexually mature. This is pretty much standard before most species on Earth. How often do you see a Bull try to have his way with a new born calf? It's been proven time and time again that when children are molested or engage in sexual activity, etc, before a certain age (probably 14 or 15) by someone much older than them, they turn out pretty fucked up. The lucky ones just have bad dreams, or something. The unlucky ones become psychotic and kill themselves. Hey, but if you want to have sex with little children, I guess it's nobody's right but the government's to stop you. Might I suggest NAMBLA?

    - Mike Hughes
  • Hmm, so when Intel switched from the socket to the slot, there really was a technical advantage to it, and they weren't just trying to play dirty against AMD?

    Seems pretty obvious, then, that going back to the socket was technically stupid, and therefore could only have been a political move.


  • the only reason I switched from the Pro to the PII was that it was cheaper and easier to get than a Pro. i would have bought something other than a PII if the alternatives didn't suck. they didn't deal with heat as well as Intel and none of them could do SMP. the PII became my standard because there wasn't a better choice.
  • How about the fact that AMD's yield during the K6 era was around 25%?

    That's a total rumor as far as I can tell; I have never seen that yield claim substantiated anywhere. I think in an article in Wired or some computer/game mag the president of AMD denied that was the yield and said it was definitely better than that.

    Yields are always "state secrets", same with newer fab processes, so we never really know what the yield is on any plant, company, chip or process.
  • What severe problems?

    Oh, say the recall of the 1.13 GHz P3? It couldn't pass the Linux kernel compile test for all its worth. It also required a special BIOS to load up special microcode that it needed to be even remotely stable. Note that it was quickly recalled for a re-tape and re-mask.

    MMX speed up 3D? I don't remember anyone marketing it as that. At best, it sped up multimedia, such as audio and some video functions, and codecs that used floating-point math weren't widely used either. MMX was integer-only when I checked the instruction set. IIRC, floating-point 3D became more widely used when the Pentium came out, and the 486DX's had an FPU too.

    IIRC, the Alpha didn't have SIMD instructions per se, but it did have most of the functions that MMX had, right from the start, circa 1991, due to having extensive byte-manipulation methods and 64-bit registers. I don't recall seeing SIMD add, subtract or multiply functions in the instruction set until MVI was introduced, and I have never used it. That mostly just added add, subtract and multiply, min, max, as well as a couple more byte-shuffling instructions.

    IIRC, Intel also had a hard time meeting demand for nearly entire quarters - a reason that Gateway and such lowered their resistance to AMD chips. AMD is definitely giving them a run for their money, and I will admit that neither company is perfect, and different chips are usually better at different things.
  • Uhm, see, "5" is where Intel switched from using numbers to using the name "Pentium". So, by that logic, "Pentium 5" should be "Pentium Pentium". And the rest follows from there.

    Now do you get it?

    Laugh, then.

  • My favorite source for those numbers is Chris Hare's CPU Electrical specs [] page. Looks like AMD's 65W maximum power figure is the top right now.

    Me, I want simple instructions on how to measure the power dissipation of the CPU's on *my* board. According to the dead 400W power supply, the draw is a bit more than it's supposed to be.

  • Maybe there is a "cause and effect" issue at play here. The main reason the code gets reoptimized is their market share, not the processor itself. MMX was a failure, and 3DNow! would never have seen the light of day if the market hadn't demanded a response. The whole technology push (incl. Screaming Sindy) is a waste of resources and better done on the graphics engines.
  • But damn does that 400HP lawnmower mulch well, and if you put it in fifth gear, you can take it out on the freeway. I mean, it's almost as cool as pouring hot grits down your pants.
  • MMX was actually copied directly by AMD; and they even called it MMX (over Intel's objections.) These instructions are integer SIMD instructions, and are most useful in 2D operations like compositing 2D images. Even though MMX is now a well-supported universal part of the instruction set, only fairly specialized programs use them, in my experience (games, Photoshop, and a few others).

    SSE (née KNI) is the Intel analogue to AMD's 3DNow! instruction set. These support floating-point operations and high-level floating-point functions (sqrt, trig). It is with these instructions that the AMD and Intel instruction sets have begun to diverge.

  • Have you ever wondered why AMD added 3DNow! to their chips?

    Maybe because the AMD chips SUCKED in FP operations? I had a K6-2 300, and in games it was simply blown out of the water by a Celeron 300A. I think the K6 was slightly faster in integer ops, though.

  • On what planet? Where I come from, it's called dumb.
  • Hehe - it's amazing how selective some people's memories are. Let's not forget how fussy the Athlon/Duron systems are about power supplies. It looks like Intel is about to face a similar issue.
  • If you're going for something that vague, why not go for SPECint95/price? MHz numbers are meaningless, after all... Witness the performance of G3's, G4's, Celerons, Durons, Pentiums, WinChips, and Athlons at similar levels of speed or performance, and you'll see that MHz is only vaguely tied to a chip's performance level...
  • Overclocking had to do with winding up something!
  • I installed 'better'... Linux.
  • Though I like your price/speed formula, I must object to the statement "Because FLOPs/s is a better judge of speed than MHz, in my experience", not because FLOPs is worse than MHz (it isn't), but because they are both USELESS. MHz is pointless (remember that comparison where an Alpha beat the pants off an Athlon at twice the clock speed). FLOPs are pointless too. Suppose we take a segment of code which is one million floating-point operations, all independent, and we ran the code on two processors.
    Both processors are 4-wide issue, 4-wide fetch, 4-wide retire (or 4-way superscalar for the Intel junkies). Also assume each always has enough functional units to execute the code (in other words, the code has been chosen specifically to run on the CPU). Now, one of these processors is in-order, and the other is out-of-order. If you ran the code on both processors at the same clock speed, it would take the same amount of time to finish. Therefore, the number of FLOPs you could use to describe each processor is the same.

    Now suppose you ran some real world code on both chips. Almost exhaustively floating point, but it actually DID something (i.e. had dependencies and branches and stuff). The out-of-order processor would probably finish 60% faster than the in-order one.

    FLOPs aren't a good measure of speed. I personally trust the SPEC benchmarks a bit more, since they have rules about how much your compiler is allowed to cheat in its score.
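The thought experiment above can be put in toy-model form. This is a deliberately simplistic sketch (issue width, op count, and the one-op-per-cycle latency are all assumptions) of why peak FLOPs says nothing about dependent code:

```python
import math

def cycles_independent(n_ops: int, width: int = 4) -> int:
    """Ideal case: n independent ops fill a width-wide machine
    every cycle, so they finish in ceil(n/width) cycles."""
    return math.ceil(n_ops / width)

def cycles_dependency_chain(n_ops: int) -> int:
    """Worst case: each op needs the previous result, so issue
    width is irrelevant and roughly one op completes per cycle."""
    return n_ops

ops = 1_000_000
print(cycles_independent(ops))       # 250000 cycles at "peak FLOPs"
print(cycles_dependency_chain(ops))  # 1000000 cycles: same chip, 4x slower
```

A real out-of-order core sits between the two extremes because it can hunt for independent work around the chain, which is exactly the 60% gap described above.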

    As a side note, can anyone find a version of this article (or one of similar subject matter) in English? The German translation just doesn't cut it for me.
  • I'm frozen solid when it comes to new Intel chips. I have an 866, and I'm scared to upgrade. Call me a wuss, but I'm gonna wait till the smoke clears before I enter that minefield.
  • I can't wait for Intel to add their new CPU feature to support P2P! It's called NetBurst.

  • If I see another retarded consumer user say that compaqs suck, I will scream.

    People please understand that HOME PCs that you get in one wrapped up plastic shell from BigStoreCo are going to be shit, regardless of brand.

    Compaq makes what I consider to currently be the world's best Intel-based servers, in terms of functionality, cost, and price/performance.

    Don't confuse Proliant with Presario.
  • > well, actually, the name appears to be "Pentium 4" all over the article.

    What happens after they get to Pentium 5? Will they start a sub-series of Pentium Pentium 2, 3, 4, 5, then Pentium Pentium Pentium 2, 3, ...?

    Did AMD copyright the number 6 or something?

  • Q. Wasn't the i860 a graphics processor at one time? I think it was later included as a motherboard co-processor, much like the x87, but never really did much. The only Intel graphics processors I know of are the i740 and the new i810 and i815 chipsets. Intel didn't do much with graphics until they bought Chips and Technologies in the mid-'90s.
  • By that logic, the PPC or the Alpha or even the 68000 should have displaced the x86 a long time ago.

    But every attempt by Intel to replace the x86 has failed. The x86 was a stopgap; it was to give the market something before the 8800 came out. The 8800 failed badly because the x86 was so popular. The 8800 was a much better solution, but it still failed. Intel tried three other times to replace the x86, but each time they failed due to lack of compatibility. With the x86's installed base, it won't be replaced.
  • I don't disagree with the New Jersey principle at all, but Intel has tried to replace x86 with the iAPX432 and the i860. (I can't recall the other attempt; I want to say the i960, but that wasn't a general-purpose processor.) The press releases and the company touted both as replacements for the even-then-aging x86, but since the customers wanted x86 they kept pushing the instruction set, even if they knew it was beginning to show its age. I wasn't blaming them at all for trying. But quite simply, they are not starting over, and never will. The failures of the previous attempts (since they were not x86 compatible) have bogged down the IA-64 architecture. It was never intended to have x86 instructions when it was created by HP. HP intended to have PA-RISC emulated in software and thought Intel would do the same with x86. Intel was unwilling to break with x86 and had everything redesigned to include x86 instructions where they didn't belong, in the next-generation chip. HP was willing to break with its PA-RISC series, so why couldn't Intel? Intel will never have a clean break. Again, because they know all the customers want is x86. They've done an admirable job of bringing it into today. All in all, though, I'm surprised they didn't start a Sledgehammer x86-64 project of their own.
  • by jmauro ( 32523 )
    Whether we like it or not, the stopgap known as x86 will be around pretty much forever, for pretty much the same reasons that whatever Microsoft does, they can never completely kill off DOS. It will always be needed for backwards compatibility. Why does the Transmeta chip emulate x86? All the software is already made for the x86 architecture. The IA-64, which you imply can fix all this (along with the PPC), will still be x86 compatible. And when did you last code in assembly, and not in a higher-level language like C, Java, etc.? Assembly in IA-64 will be next to impossible. As for true x86 chips coming to market, the last one was the Pentium. Since that time all chips have been RISC, even the Pentium Pro, Pentium 4, and Athlon. They've just had x86 translators on top to provide what the customer wants: x86.

    Sorry for the rant.
  • Hi, exactly how do you check a member's karma? I can only see your last few posts.


  • >Because FLOPs/s is a better judge of speed that Mhz, in my experience.

    Ah yes, but what about the all-important BOGOMIPS?! ;-)

  • I prefer the formula

    (Speed in MHz)/Price

    Although the power in watts is a good idea, anyone want to work all three into a usable formula?
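Folding all three in is straightforward to sketch in Python. The simple-product weighting and the price figures here are arbitrary assumptions for illustration (the wattages echo numbers quoted elsewhere in this discussion), not anything blessed by benchmarkers:

```python
def cpu_score(mhz: float, watts: float, price_usd: float) -> float:
    """Toy figure of merit: clock per watt per dollar. Higher is better.
    The equal weighting of the three factors is an arbitrary choice."""
    return mhz / (watts * price_usd)

# Hypothetical circa-2000 numbers, for illustration only:
print(round(cpu_score(600, 15.8, 200), 4))   # P3 600EB-ish -> 0.1899
print(round(cpu_score(1400, 66, 800), 4))    # P4 1.4 GHz-ish -> 0.0265
```

Any real ranking would also want a performance number better than raw MHz, as other posters point out.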
  • This is the reason why you won't see a socketed Alpha. Because of all the signaling they require it just generates way too much noise to be a socketed chip. Hence the Slot B cartridge which handles this issue nicely. Seems Intel is finding this out for the first time :-) I would be wary of any socketed PIV or Merced.

  • FWIW, that's the EV6 numbers. EV6 is on a .35 micron process. EV67 is .25 and EV68 is .18. EV69, well, below .18. :)

    The API UP1100 motherboard with soldered on 600MHz Alpha EV67 draws a total of around 90W. Works great with a 300 watt power supply.

    x86 is finally starting to run into the roadblocks that Alpha has had for a long time. This is actually good! Now things like 300W power supplies will be even cheaper. :)

    FYI:I work at API
  • It wouldn't surprise me if the 600MHz Alpha was cheaper than the 1.5GHz Intel.

    Oh, you can get the 600MHz Alpha today. And faster speeds soon.

  • Pass the pipe, dude... You can get an ATX form factor EV67 Alpha for under $3k. Show me a P4 system under $3k that doesn't require a new chassis and power supply design.

    Besides, you are comparing Celeron 1U's with AlphaServer GS series systems. Who's smokin' what?

  • Actually, Intel "fanatics" lambasted the power draw of the Athlon...

    And many, many people complained when the crappy Packard Bell 150-watt power supply in their old computer wouldn't boot their shiny new Athlon...

    The problem people have with the P4 isn't so much the wattage, it's the cooling requirements. My K7/500 with the "stock" heatsink is cold to the touch while running. Power dissipation isn't too big of a deal on it.

    The P4 requires a freakin' copper-core heatsink that needs a motherboard with support mounts to keep from cracking things. That isn't so good...
  • Actually, I think what people want more is a FP unit that isn't stack based....(ie: slow as hell)
  • > Interestingly, the P4 doesn't have outrageous cooling requirements because it runs hot, but because it is less tolerant of

    Hmm, an observation I was unaware of. If that's the case, they're gonna have one hell of a time trying to ramp that sucker to higher frequencies...
  • This isn't quite subtle enough to be a clever troll. The distinction between Watts and Volts is too obvious for anyone to really think you have them confused. Although, I must admit that the parenthetical qualifier "American" is a good attempt to supplement the initial effort. Keep trying.
  • I don't think I will ever buy another top-of-the-line chip from Intel again. With the whole Rambus thing and the recall of the P-III 1.13 GHz chips, I have lost most of my trust in this company. AMD is pushing out processors that are stable, less expensive, and perform as well as or better than their equivalent Pentium counterparts.
  • BabelFish, though quite an interesting idea, produces the most broken language I've ever read.

    For those of you with Mozilla, look under Edit->Translate. It is a link (sort of) to Gist-In-Time [] which in my opinion provides much better translations.

  • Right. It's called humor.
    No more e-mail address game - see my user info. Time for revenge.
    > By that logic, the PPC or the Alpha or even the 68000 should have displaced the x86 a long time ago.
    I agree that sure, they were better tech, but we're dealing with the New Jersey principle [] with regards to the x86: "It is good enough."

    > Intel tried 3 other times to replace the x86 but each time they failed due to lack of compatibility.

    What 3 times?

    Intel could just be bull-headed and say "After 2002, NO MORE x86". But they won't. They are too busy milking the industry for all it is worth. (Can't blame them, since they created it.)

    At some point you just have to make a clean break AND start pushing it. Does Intel even have a 64-bit cpu out on the market? After how many years of waiting?

    No flames. Just honest questions.
  • > but Intel has tried to replace x86 with the iAPX432 and the i860.

    Interesting. I have heard of the i860, but not the iAPX432. Will have to read the history on that one.

    Q. Wasn't the i860 a graphics processor at one time?

    > The press releases and company touted both as a replacement for the even then aging x86, but since the customers wanted the x86 they kept pushing the instruction set
    Yeah, the old catch-22. No new "killer app" for the new hardware.

    > But quite simply they are not starting over, and never will. Intel will never have a clean break. Again, because they know all the customers want is x86.
    You might be right, but I don't know if I would say never. In 10-20 years do you think we will still be running x86-compatible software? I think it's doubtful, but plausible.

    Maybe once Intel ships the IA-64 we'll start seeing more software developed for it ;-)

  • Um... I think that you are confusing amps and watts. Look on the back of any hairdryer and it will be 1275-1600 watts most likely.

    Now if it required 130 amps, I would stop tinkering around inside my computer.
  • Of course it's slower than a P3 at the same frequency. The chip was designed with a longer pipeline, which has allowed Intel to up the clock rate by doing less per tick. This was no surprise, as people have been discussing it since the architecture was announced.

    That said, they did design the ALU to work on both up and down swings of the clock, which means that integer operations could run up to twice the speed of the rest of the CPU.
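As a back-of-envelope illustration of that double-pumping (the 2x factor applies only to the simple ALUs, and this ignores everything else in the pipeline):

```python
def peak_simple_alu_ops(clock_hz: float, pump_factor: int = 2) -> float:
    """The P4's double-pumped ALUs complete simple integer ops on both
    clock edges, so peak throughput per ALU is 2x the core clock.
    This is a theoretical ceiling, not sustained performance."""
    return clock_hz * pump_factor

print(peak_simple_alu_ops(1.5e9) / 1e9)  # 3.0 (billion simple ops/s, peak, per ALU)
```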

    More detailed info can be found in this Anandtech article [].

    As for the power consumption, etc.: if people are willing to plug their video card into a wall socket for a few more frames/sec in Quake 3, then I don't suppose power consumption is that big of an issue. I guess you could mount two 250-watt supplies in your case if you were really worried about it.

  • Ok, I'm biased in that I like AMD's way of dealing with things better than Intel's, but frankly Intel has been pushing the 'trust us, we know what's best for you' line for years, and if they screw up it should count more... No one really expected AMD to do what it has done more recently (starting with 3DNow!), because of past screw-ups, and they have never claimed to simply know better than their customers. Their attempt is simply to make a great product that is highly competitive with what is offered by others...

    Though I also have to add I owned an early K6 (one of 3 different K6's I've owned) and I don't think that the 2/5-speed cache on the Athlon was a 'mess-up', but a choice based on what was available at the time.

    I do agree to compare, but I'll compare on a technical level beforehand and on a working level after I see one. So far what I've seen and heard is making me think the P4 makes a great specialty chip that can at times, in certain things, beat an Athlon. With my final opinion coming later, only after I can test both myself...
  • hear, hear. At least someone makes some sense. The world switching to some other platform (than x86) has so many ifs it makes me feel like I'm coding again...

    'if they can compete on price'
    'if they can compete in hardware (while keeping said price)'
    'if they can compete in software'
    'if they can get enough support for one platform above others'
    etc, etc, etc...

    Most of the world uses x86 because it's cheap, fast, and has a huge base of software stretching back years. Hence why companies like AMD and Intel make millions by producing what people want: x86 CPUs and supporting hardware.

    Heck, even most hardware fanatics realize that SCSI isn't for most people, so we use IDE. If we followed your example we'd all say 'IDE is a piece of garbage, replace it with something else like SCSI!'. But we don't. Why? Most people don't need it, and so it fits its niche.
  • They probably need all 50 machines, if they're running NT, just to do the job of 3 or 4 UNIX boxes.

    - A.P.

    "One World, one Web, one Program" - Microsoft promotional ad

  • Why not just buy an Alpha, Sparc, or PA/RISC machine -- they've been 64-bit for years now. Why on earth would you want to buy either AMD's *or* Intel's 64-bit CPUs, when there's:

    o little compiler support for either
    o no guarantee either will work *well* in any OS (not even just Linux)
    o no installed base to speak of -- no army of users reporting bugs, no hardware support

    I'd go with a company that at least has a history of making 64-bit chips (and, personally, I'd go with Alpha, if I really wanted/needed a 64-bit CPU -- which normal users really don't need anyway.)

    - A.P.

    "One World, one Web, one Program" - Microsoft promotional ad

  • All you need is some dry ice and some Fluorinert, and it'll run fine.

    Geez, what a bunch of wusses. Why, in my day... :)
    pb Reply or e-mail; don't vaguely moderate [].
  • Here's a formula I use for determining what CPU to get:

    (Speed in MHz)/(power in watts)

    Take the CPU with the highest number.
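Sketched in Python, using the P3 wattage figures quoted elsewhere in this discussion plus the article's P4 number (treat all of them as illustrative):

```python
def mhz_per_watt(mhz: float, watts: float) -> float:
    """The poster's figure of merit: clock delivered per watt burned."""
    return mhz / watts

candidates = {
    "P3 600EB": (600, 15.8),
    "P3 800EB": (800, 20.8),
    "P4 1400":  (1400, 66.0),   # wattage from the article summary
}
best = max(candidates, key=lambda k: mhz_per_watt(*candidates[k]))
print(best)  # the 800EB wins on this metric
```

Note the metric inherits all of MHz's flaws as a performance proxy, as other posters point out; it only tells you about efficiency of the clock, not of the work done.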
  • Good places to look for the latest CPU performance might be the RC5 statistics or the Performance Database Server []. Tasty performance numbers in terms I can digest are found there.
  • by RelliK ( 4466 )
    But try to explain that to the ignorant masses. I'm willing to bet that 90% of all the Joe Shmoes who are buying a computer this year will care about the 1GHz thingy. It'll speed up the internet. It'll fix my fridge. It'll make my runaway cat come back. etc.

  • While I agree with most of your post, I disagree about the Pentium Pro being a success. It wasn't. It was so expensive that it never made it to the masses. And it had almost the same performance as a plain Pentium. It had only 2 things that the Pentium didn't:

    1. Non-castrated motherboards. As you may or may not know, Intel limited the amount of memory VX and TX boards could cache to 64 MB, just to promote the PPro.

    2. 4-way SMP. Not that too many people used it.

    Back then the PPro was about the equivalent of today's Xeon. In fact the Xeon is a direct successor of the PPro. Just like its older brother, it offers no performance gains over the P3 and costs an arm and a leg. Oh, and not all Xeons can even do 4-way SMP.
  • So much for my 300 Watt halogen "TorchAire" lamps... Shoulda figured those imported lamps may not be 100% compatible w/ U.S. electrical standards.

    Seriously, 130W of power is nothing compared to, say, your average hair dryer, many of which run at about 10x that wattage. Or a typical microwave oven, operating in the 700W to 1000W range. Even your typical desktop computer system, including monitor, etc. consumes 600W to 800W if you're not running w/ any APM options enabled. (Remember, a 250W power supply provides 250W to the rest of the computer, but consumes some additional wattage itself. Although switching power supplies are fairly efficient, no power supply is 100% efficient.)
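That last point about supply efficiency is easy to put in numbers. The 70% figure below is an assumption typical of switching supplies of the era, not a spec:

```python
def wall_draw_watts(dc_output_w: float, efficiency: float = 0.70) -> float:
    """AC power pulled from the wall to deliver a given DC output.
    efficiency is assumed; real supplies vary by load and design."""
    return dc_output_w / efficiency

print(round(wall_draw_watts(250)))  # 357 -- a "250W" supply at full load
```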

  • The Mozilla-favoured translation service doesn't do much better:

    But new wheels do not run immediately so approximately, as the customer would have that gladly. They are more largely, still another little angular - however they turn more rapidly.

  • I believe that the Pentium II and the current Pentium III's all use the CPU core that was pioneered by the Pentium Pro.

    The only reason PPro's were subsequently phased out was their very high cost of manufacture, especially when you consider the 512 KB and 1 MB L2 cache on the CPU die.

    The current Pentium IIIEB has pretty much maxed out the P6 core; that's why the upcoming Pentium 4 will have an all-new CPU core design, one that won't be fully taken advantage of for some years. After all, when the Pentium II first came out in 1997, Windows 95 reported it as a "Pentium Pro" CPU, indicating that the PII used the P6 core. Windows 98 properly recognized it as the Pentium II, though.
  • If something sufficiently better is available, the x86 will vanish. The 8080A/8085/Z-80 chips disappeared from desktop computers after the introduction of the 8086/8088.
  • From Intel's Pentium III datasheets []

    450 MHz = 25.3 watts
    500 MHz = 28 watts
    600 MHz = 34.5 watts

    Those are (I believe) numbers for the SECC versions. The FC-PGA versions are a bit lower, as follows:

    533EB = 14 W
    600EB = 15.8 W
    800EB = 20.8 W

    All the way up to the non-existent 1.13 GHz P3, which draws 35.5 W
  • A pencil! Sure, it does hardly any work per second--but it also uses minimal power!
    Linux MAPI Server!
  • BAH, MHz comparisons have no meaning AT ALL unless you are comparing two exactly identical chips, say with one running at 700 MHz and the other at 900 MHz.

    For example, a 300 MHz MIPS R12000 used in SGI workstations and servers has an FPU faster than a 1 GHz Pentium 3!

  • ARRGGH! STOP! fuck fuck fuck fuck fuck fuck fuck
  • Yes, but AMD never said "you'll have to get a new-style ATX case where the multi-pound heatsink bolts on, since it will be too heavy for the processor connector to support"... that's when people start noticing...
  • The formula I use:

    eeny meeny miney mo...

  • I agree that Intel has some huge technical hurdles to overcome, but except for a few screw-ups from rushing to market, they have always produced good CPUs and chipsets. Yes, it will be expensive for a while; all new CPUs are. Yes, there will be some technical problems, but they will work those out. If they do not, then some other CPU manufacturer will be happy to step in and become the new standard.

    So rather than speculate and criticize, I would rather give them the benefit of the doubt and judge what they release.
  • It's interesting to think back to how things were when the P6 (Pentium Pro) was coming onto the scene:

    1. Windows and Word and Excel were not past the threshold where you stopped caring about CPU speed increases.
    2. 2D hardware acceleration was still fairly unusual, so the CPU was bogged down more than it should have been in GUI-oriented tasks.
    3. 3D games were becoming commonplace, and 90% of the execution time in a typical game was being spent in a software texture mapping loop.

    Here's how things are today:

    1. Windows and Word and Excel feel the same on a 200MHz Pentium and a 1GHz Athlon. They're not CPU bound at all. 2D graphics acceleration has helped a lot here.
    2. 2D hardware acceleration is standard on all machines, and has been for years.
    3. Software rendering is on the verge of extinction. Average cards like the Voodoo 2 are on the order of 500x faster than the best software renderer out there. Cards like the GeForce 2 are maybe 2000x.

    In general, CPU speed is not nearly the issue that it once was. Yeah, some games or applications feel slow, but that's usually because of either sloppy programming or a profile that's bound in other parts of the system. Adobe Acrobat Reader never seems to get any faster, even if you double the speed of the CPU. And you still get that damned hourglass or watch icon when opening tiny documents. So these crazy expensive CPUs are coming out, chips with multiple fans and huge heatsinks, CPUs that use 20x the power of ten years ago... and, quite frankly, nobody cares. Oh, the techno-geek fanboys care, because they'll plunk down $300 every six months just so they can get a card with even more unstable drivers, but everyone else quietly ignores them. Considering that most people only surf the web and play MP3s, a 1+ GHz chip is like a 400HP lawnmower.
  • Have you ever wondered why AMD added 3DNow! to their chips? It was largely because of the marketing hype Intel used to push more chips that were supposedly faster because of their multimedia extensions. MMX did absolutely nothing in its first generation; it was added only to give the illusion that Intel was trying to speed up their chips for the multimedia apps that were causing such an uproar.

    And have you ever wondered why Intel is the standard when better alternatives exist? It's all because of marketing and money. Intel markets its chips much better than any other manufacturer, and it offers them at a better-than-competitive price. But I've pondered time and time again what would happen if Intel focused more on chip design than on marketing. Would we start seeing more stable, faster CPUs?

    Seriously, Intel is having severe problems with their chips. That's not to say that other chip manufacturers don't have problems--I would bet they do. It just doesn't seem that the problems are as serious when I get my new CPU from AMD.
  • AMD's first 64-bit chip (Sledghammer) is essentially an x86 chip with some added register bits (a la 386 adding bits to make 32). Hence its backward compatable and everything, but carries along all the troubles of the x86.

    If you really want a 64-bit chip, Itanium is the way to go (or get an Alpha now :-). AMD's true 64-bit chip is yet to be announced, but most likely will be based on the Alpha IA (since they have made patent exhanges for Alpha technology), and be very different (binary incompatable) from Intels IA-64 architecture.

    The funny thing about this is that it's AMD who is going to be "squeezing" more (another 32 bits) out of the Intel x86 architecture in the next year or so :-)

  • Your chips are already 64 bits in all the places you need them. The bus is 64 bits, FP is 80 bits, and SSE is 128 bits. Almost no one needs 64-bit integer units, and unless you have more than 4GB of RAM, you don't need 64-bit memory addressing either. Maybe you're talking about getting a RISC-type machine as opposed to a 64-bit one? Remember, even the G4 is a 32-bit machine.
  • Oh, remember the recall of the 1.13 GHz P3? It couldn't pass the Linux kernel compile test for all it was worth.
    It also required a special BIOS to load up special microcode that it needed to be even remotely
    stable. Note that it was quickly recalled for a re-tape and re-mask.
    How about the fact that AMD's yield during the K6 era was around 25%? All chip companies occasionally have yield problems. It's not necessarily demonstrative of the quality of future chips. And unless you got burned by the 1.13GHz recall, you've got no problems. I don't hear too many people complaining about the stability of the 900-1000MHz PIIIs.

    MMX speed up 3D? I don't remember anyone marketing it as that; at best, it sped up multimedia, such as audio and some video functions, and codecs that used floating point math weren't widely used either. MMX was only integer when I checked the instruction set. IIRC, floating point 3D became used more widely when the Pentium came out, as well as the fact that the 486DX's had them too.
    MMX was designed to speed up multimedia, but mainly to compete with the 3D cards that were coming out. If you look at MMX, it actually does help for the rendering part of the pipeline. That's exactly why it is useless now: 3D cards these days handle that part of the pipeline. And floating point 3D was not at all common until the Pentium MMX and the first wave of 3D cards. Quake, Duke Nukem 3D, Doom, Wolfenstein: they all used fixed-point math. If you read any of the game programming docs from the era, you'll notice that it wasn't until after the PII's release that they stopped teaching how to do fixed-point math.
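    To make the fixed-point point concrete, here's a rough sketch (my own illustration, not code from any of the games named above) of the 16.16 format that era's engines were built on:

```python
# A minimal 16.16 fixed-point sketch: 16 integer bits, 16 fraction
# bits, all arithmetic done in plain integer registers -- no FPU needed.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in 16.16 format

def to_fixed(x):
    # Convert a float to 16.16 fixed point.
    return int(round(x * ONE))

def to_float(f):
    # Convert 16.16 fixed point back to a float.
    return f / ONE

def fmul(a, b):
    # The raw product carries 32 fraction bits; shift to renormalize.
    return (a * b) >> FRAC_BITS

def fdiv(a, b):
    # Pre-shift the numerator so the quotient keeps its fraction bits.
    return (a << FRAC_BITS) // b
```

    So 1.5 * 2.0 becomes fmul(98304, 131072), which comes out to 196608, i.e. 3.0, with no floating point anywhere in sight.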

    IIRC, Alpha didn't have SIMD instructions per se, but it did have most of the functions that MMX
    had, right from the start, circa 1991, due to having extensive byte manipulation methods and having
    64 bit registers. I don't recall seeing SIMD add, subtract or multiply functions in the instruction set
    until MVI was introduced; I have never used it. That mostly only added add, subtract and multiply,
    min, max as well as a couple more byte shuffling instructions.
    However, MVI does exist, and it is designed to allow the Alpha to perform better in multimedia software. (According to Digital's press release.) The point is that MMX wasn't a dumb idea. Every other chip company is implementing similar instructions.

    IIRC, Intel also had a hard time meeting demands for nearly entire quarters - a reason that Gateway
    and such lowered their resistance to AMD chips. AMD is definitely giving them a run for the money,
    and I will admit that neither company is perfect, and different chips are usually better at different tasks.
    Which quarters? Remember, AMD had a hard time meeting demands for more than a year during the K6 era.
  • Considering that P4 will probably equal P3 in vector ops/clock (3D stuff) I care about clock rate. Seriously though, a 1.5GHz P4 is probably as fast as a 600MHz Alpha in floating point. I don't care if it's less efficient, it is FASTER.
  • What severe problems? If you're talking about the P4 being slower than a similarly clocked P3, remember that the P2 was slower than a similarly clocked PPro. Or that a Pentium was slower than a 486 for most code out at the time. If you're talking about heat, remember that the .25 micron Athlons were massive heat-machines, and even the old P2 300MHz chewed up nearly 40 watts. If you're talking about manufacturing, remember that AMD at several points wasn't even able to meet the demand for 2-something million chips. If you can tell me what's so bad about Intel chips, aside from manufacturing glitches common to all companies, then you have a point.

    MMX wasn't just marketing. It genuinely sped up 3D. However, Intel didn't count on 3D accelerators coming along. MMX was designed in the days of the ViRGE and fixed-point 3D engines. Who would have thought that some day 3D accelerators would have fill-rates in excess of 1.6 gigatexels? Intel certainly didn't, and appropriately, they designed MMX to speed up integer calculations. However, the design wouldn't allow you to concurrently run floating point, and thus, developers started to use fp/3d acceleration instead of fixed-point/MMX. However, MMX is SIMD and SIMD is a good idea. If you doubt it, please explain why everything from Digital's Alpha to Motorola's G4 has SIMD instructions.
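    For the curious, the SIMD idea is simple enough to fake in software. This sketch (my own, not actual MMX code) does four independent byte adds packed into one 32-bit integer, which is the trick MMX implements in hardware on wider registers, with extras like saturating arithmetic:

```python
def packed_add_u8(x, y):
    # Add four unsigned bytes packed into 32-bit integers, lane by
    # lane, without letting a carry spill into the neighboring byte.
    low  = (x & 0x7F7F7F7F) + (y & 0x7F7F7F7F)  # add low 7 bits per lane
    high = (x ^ y) & 0x80808080                 # top bit of each lane via XOR
    return (low ^ high) & 0xFFFFFFFF
```

    packed_add_u8(0x01020304, 0x01010101) gives 0x02030405: four adds for the price of one, which is the whole appeal.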

    Intel has the power to push a standard. There is nothing wrong with them using that power (remember, AMD has a cross-license with Intel. They could use SSE anytime they wanted). They also make some of the best chips available. For a lot of things, a PIII whoops an Athlon's ass. If it doesn't, don't use it. However, a lot of people find the PIII better and for those people, it is stupid to say that Intel's chips aren't great.
  • True. Sorry 'bout that. The magazine was Boot, Atiq Riza denied the 25% rumor. I must have remembered it backwards (the issue is two years old). However, the yields WERE very low at that time.
  • Actually, it's a balance of three things. First, it's market share. Which chip has the most units shipped. MMX had this, 3DNow! didn't. Second, it's marketing power. Intel has this, AMD doesn't. Third, it's technical feasibility. SSE had this, MMX didn't. If any one of the three things is severely lacking (for example, MMX had market share, and it had volume, but was bad technically) then the standard won't succeed.
  • I'm running a 200-watt lightbulb off my standard American socket. Seriously though, the only thing this is going to require is a 400-watt power supply.
  • The fact that there was almost no performance increase from a 700MHz to 850MHz Athlon due to the slower cache is not wrong?
  • I was surprised to hear about the P4, as I wouldn't have believed Intel could have squeezed another one out of the old IA32 architecture. I'm hanging out waiting for the 64 bit chips from AMD and Intel. My next purchasing decision will be which of the 64 bit chips is best for my needs. Hopefully the advent of Itanium and Sledgehammer will also drive the 64 bit SPARC and Alpha chip prices down, giving me a wide variety of choices (All of which will run Linux so I don't really care which I go to.)
  • I'm actually more hopeful that the coming of the IA64 and Sledgehammer will drive the prices of the other 64 bit chips down. My number of choices will have about doubled -- I hear they're working toward 64 bit PPC in the same timeframe.

    However, Intel's been working with all the big names in the industry to make sure heavily optimizing compilers are there by the time Itanium hits the streets. It may suck in Windows or for 32 bit code, but it sounds like it blazes on native 64 bit code that's been optimized for it. This I gathered from talking to the SGI guys at the last Colorado Linux Info Quest. They've had Linux booting on the pre-release Itaniums and emulators for ages now.

  • Athlon, Duron,
    better look, the heat's on,
    Simmer up now.

    Alpha's beta,
    it's a hot po-tata,
    Heat sinks up to the stove.

    So Intel, oh well,
    might make your case a burnin' hell,
    But you know they can do better.

    If data is food,
    don't glom an' be rude,
    but the second it's done
    marks the winner.

    So with the yields, the shields,
    got radiation fields,
    Gonna nuke that box 'til it glows.

    A chip that fries, well
    my oh my, your data's all done
    in a flash.

    So work hard, yo,
    gotta save yourself some dough,
    For the heat, on the street,
    they all want your cold hard cash.

    (Anyone for SMP-on-a-chip?)

  • The Pentium Pro comparison is indeed appropriate, though the real competition wasn't AMD so much as it was PowerPC. If you remember, during the PowerPC's peak in about 1994/5, there was a huge amount of FUD about the P6 (as it was then known). The big line was that the P6 wouldn't execute 16 bit code as fast as the Pentium. This was true, actually, but the P6 core was able to scale everywhere. As we all know, the P6 came out, followed by the Pentium MMX, and the Pentium II, and Windows 95 came out, and the PC vs. Mac argument has almost been forgotten.

    I recently came across a Mac advocacy site which hadn't been updated in about five years, and there was tons of FUD about how the Pentium Pro was so slow and would never make it. It's hysterical to look back at that, because the Pentium Pro was so successful. I have a sneaking suspicion that the anti-Pentium 4 articles are going to look as foolish in five years.

    Moral of the story: Never underestimate Intel (unless you're talking about IA-64 :-) )
  • Here's the formula I use for determining what CPU to get:

    0*(speed_in_MHz/power_in_watts) + 1/(execution_time_for_my_applications)

    Take the CPU with the highest number.
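    Spelled out as (tongue-in-cheek) code, with made-up numbers purely for illustration:

```python
# The parent's formula, verbatim: the MHz-per-watt term is multiplied
# by zero, so only measured execution time on your own workload matters.
def cpu_score(speed_in_mhz, power_in_watts, execution_time_for_my_applications):
    return 0 * (speed_in_mhz / power_in_watts) + 1 / execution_time_for_my_applications

# Hypothetical timings: the higher-clocked, hungrier chip still loses
# if it runs your applications more slowly.
candidates = {
    "1.4 GHz chip": cpu_score(1400, 66, 12.0),
    "1.0 GHz chip": cpu_score(1000, 30, 10.5),
}
best = max(candidates, key=candidates.get)  # take the highest number
```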

  • But you need around 1GHz to decompress MPEG4s...
  • Aren't both AltaVista's Babel Fish and Alis Technologies' service powered by Systran Translation software?

    As a side note, the link in the article points to The Babel Fish Corporation(TM), a whole other deal entirely.

  • I pity you, o poor american cripples who are dependent on someone's translation to read german article... According to the statistics here, only 5.9% of the internet population's native language is German. Do you mean to say with your above quote that everyone in the world should be able to speak every language in existence just so we can read everything in the world without a translator? I think we all know that's a pretty ridiculous proposal.

    Not that I defend the American tendency to be monolingual and proud of it, but I believe CmdrTaco made the following assumptions:
    1) The article is in German, which most of the slashdot readership doesn't speak, and
    2) The people reading slashdot speak English, or else they'd have a pretty hard time understanding what all the news was about, unless of course, they were using babelfish to translate English in to their native language... maybe even German?

    Given those two assumptions, it would be completely logical to conclude that most of the slashdot crowd will need a translation of some sort, one which babelfish can provide.
  • Earlier today, major computer makers committed to making laptops using the Pentium 4 processor. Compaq is considering bulking up their laptop designs to make room for the 1 pound heat sink, and to allow car batteries to be placed in the computer. Dell on the other hand is thinking of shrinking their laptop case by removing the heat sink and submerging the motherboard in nitrogen-cooled Fluorinert, and removing the battery cases and instead shipping an electrical generator with the machine. IBM was still considering a method of cooling the chip and providing power that would not involve people breaking their backs.
  • (Speed in MHz)/(power in watts)

    Just out of curiosity, why do you care at all about the watts? For a laptop I can maybe see, but for a desktop system it is totally irrelevant.


  • Unless you use more than 4 GB of RAM or monstrous databases, don't expect to gain from 64 bit. The vast majority of users would gain much more with a decent RISC architecture over x86. Replacing the stack-based floating point with modern RISC could double or quadruple the floating point performance. (The Alpha 21264 claims four times the fp performance of the P6's).

    And you may not want to get too excited about IA64 either. It's widely believed that Merced will be too slow for production use. And even the next gen design, McKinley, will have a lot of trouble performing until there are compilers that can generate optimal code for it. Right now, they're performing worse than they would on RISC architectures. And even worse than the x86 for normal stuff. Which is really bad considering the whole point of EPIC was to go better than RISC. Intel and HP still have a lot more work to do before they can convince people that IA64 was a good idea. Things are definitely not going as planned.

    It's because Intel has been so long overdue with Merced that they've extended the life of their x86 series. And they've realized that Merced will execute x86 instructions far slower than a native IA32 processor. And IA64 will be expensive for many years. It'll probably be in the price range of the UltraSPARCs and Alphas. Unless Intel can push the mainstream market to switch to Merced, they can't generate the volume production to push the chip prices into the PC market. They'll have a tough time doing that, and Sledgehammer will make it much tougher.
  • by Anonymous Coward on Sunday September 17, 2000 @06:43AM (#773605)
    I'm not going to say that it has anything to really do with x86 in particular but I know the architecture and it is the most antiquated POS out there. Intel clearly has to put more effort into producing processors than say, Compaq or Motorola; that translates into more expensive processors and less performance. Moore's law isn't still in effect unless you look at Hz, the actual performance isn't going up that fast.

    I think the best thing would be for us to dump the old architecture. If you're not going to get the continuous speed improvements and dirt cheap costs (although AMD and Intel have been driving them down) at least you should get a clean and easy to program architecture. Hopefully in 5 years IA-64 and PowerPC will be where it's at, since I'm on Linux and staroffice is going GPL I won't have anything I use regularly that I won't have code to. I also think AMD should wake up and smell the coffee, they need to look at how much they have invested in trying to emulate Pentiums, honestly, I think they should work a deal with Sun or Motorola or somebody and move their 64-bit plans to something sane.

    Let x86 die.

  • by Anonymous Coward on Sunday September 17, 2000 @07:41AM (#773606)
    --It seems to me there has been a lot of whining about P4 power consumption. This isn't meant as a flame, but the P4 is supposed to be a high performance microprocessor. Honestly, 66W is not a lot in the performance arena. Take a look at the power consumption of the alpha 21264 @ 550Mhz... 100W. or/literature/21264ds.pdf []

    I think a little perspective is required before jumping on these "P4-consumes-too-much-power/generates-too-much-heat" bandwagons.

    -just my opinion.
  • by Detritus ( 11846 ) on Sunday September 17, 2000 @07:00AM (#773607) Homepage
    It's EMC (electromagnetic compatibility), not ECC (error correcting code).
  • by Jah-Wren Ryel ( 80510 ) on Sunday September 17, 2000 @07:00AM (#773608)
    Intel made a lot of changes to the architecture in the P4, the primary goal was scalability, not efficiency. So, it is no surprise that they are slower than a P3 at the same clock frequency. But the P4 is expected to scale to the 3-4GHz range, while it is doubtful that the P3 will ever even make it to 1.4GHz. So, Intel has given a little at the low end as the cost for being able to go much, much faster in the long run.
  • I know Intel's out there working with everyone to make sure compilers, apps and OSes are available for their product. I haven't heard a peep out of AMD. If they're not careful here, they might lose the industry back to Intel.
  • by ahg ( 134088 ) on Sunday September 17, 2000 @10:32AM (#773610)
    It's easy to see why Intel is taking the road to higher clock speeds even at the risk of (initially) slower performance. It's all too readily evident that joe-sumer (average joe consumer) is hooked on having "more megahertz" in their box.

    Intel still dominates the mass market, and I'm sure it's an important lead for them to maintain. Given the recent 1.1 GHz P3 debacle - it's reasonable to assume that their marketing people have told them that they must stay on top of the GHz war even if it's at the cost of better technology.

    While it may take some time for the P4 to hit the mass market, that's where it's headed in the long term. (The P4 is not like the PPro that was only intended for use in high end workstations)

    Apple tried to transition the mass market in their TV ads to thinking in Gigaflops, a better indicator than MHz. See any store ads pitching Gigaflops recently? Neither have I.

  • by be-fan ( 61476 ) on Sunday September 17, 2000 @08:13AM (#773611)
    So AMD's total screwup of the K5 and original K6, and its mess-ups with the 2/5-clocked cache on the previous Athlons, didn't make you lose trust in THAT company? Seriously though, people who base their opinions of a company on one or two debacles are stupid. Wait for the P4 to come out. See how it reviews. Try one out. Ask your friends how they like theirs. Compare it with an Athlon. Buy whichever you like best.
  • by NoWhere Man ( 68627 ) on Sunday September 17, 2000 @08:58AM (#773612) Homepage
    Intel has been having a hard time with chips lately. I don't trust any PIII beyond the 850 mark (okay, a lot of ppl are prolly going to flame me for that, but that is a personal feeling). The PIII process is out of date and should be retired and the P4 is just being shoved in our faces like an empty promise.
    Intel needs to take a step back and create a new product that'll help em take back the market. And yes I think AMD should take the market for awhile, they have not only earned it but deserve it. This is not to say that I think Intel should crash and burn, quite frankly, the race for speed has forced the 2 companies to develop new designs and think differently than previously.
    But Intel just seems to be thinking like Homer Simpson.
    "I don't know Herb, people are afraid of new things. You should have just taken an existing product and added a clock or something.."

    Maybe Intel's recent problems with the 1.13 GHz PIII will help them realize that they need to concentrate more on the products for awhile, rather than the market and $$$.
    One can only hope.
  • by VAXman ( 96870 ) on Sunday September 17, 2000 @07:38AM (#773613)
    The 500 MHz Pentium III draws about 30 Watts.

    What's funny is that the Athlon also draws 60 watts. That chip was released over a year ago, but nobody thought that 60 watts was a lot until the Pentium 4 was about to come out.

    Also, Willamette is expected to be a stop-gap to get the Pentium 4 in the marketplace, while the Northwood is going to be the real deal. That's going to be at 0.13 and there is even expected to be a laptop version, so the power is going to be much less. Much like Pentium Pro, where only about one million parts were shipped, while the Pentium II shipped umpteen millions of parts.
  • by Mike1024 ( 184871 ) on Sunday September 17, 2000 @07:12AM (#773614)

    How about:

    ([Speed in MHz]/(([Highest speed in contention]+[Lowest speed in contention])/2)) - ([Price]/(([Highest price in contention]+[Lowest price in contention])/2)) - ([Power in watts]/(([Highest wattage in contention]+[Lowest wattage in contention])/2))

    Basically, each property is reduced to a number denoting its value relative to the average of all the processors under consideration, then the numbers are added or subtracted, depending on whether each property should be high or low. The processor with the highest number would be the best.

    To find the best value for money, a far easier formula to use would be:

    [Price]/[Speed in MHz]

    That would give you the pounds-per-megahertz value for each chip. Personally, I'd sooner judge it with:

    [Price]/[Speed in FLOPs/s (Floating point operations per second)]

    Because FLOPs/s is a better judge of speed than MHz, in my experience.
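    Here's a rough sketch of the scoring scheme above (my own interpretation; I've read "added and subtracted" as meaning speed counts for a chip while price and power count against it, since lower is better for both):

```python
def normalized(value, field):
    # Reduce one property to its value relative to the midpoint of the
    # field, i.e. value / ((highest + lowest) / 2), as described above.
    return value / ((max(field) + min(field)) / 2)

def best_cpu(cpus):
    # cpus: name -> (speed_mhz, price, watts). Speed adds to a chip's
    # score; price and power subtract from it.
    speeds = [s for s, _, _ in cpus.values()]
    prices = [p for _, p, _ in cpus.values()]
    watts  = [w for _, _, w in cpus.values()]
    scores = {name: normalized(s, speeds)
                    - normalized(p, prices)
                    - normalized(w, watts)
              for name, (s, p, w) in cpus.items()}
    return max(scores, key=scores.get)  # highest score wins
```

    With made-up contenders like {"A": (1000 MHz, $200, 30 W), "B": (1400 MHz, $800, 66 W)}, the cheap cool-running chip wins despite the clock deficit.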


    ...another comment from Michael Tandy.

  • by Mr Z ( 6791 ) on Sunday September 17, 2000 @10:59AM (#773615) Homepage Journal

    Folks, get it right. Moore's Law simply states that the number of transistors on a chip doubles every N months, where N = 24 in the first statement of the "law", and was revised shortly thereafter to N = 18 .

    Typically, performance scales with number of transistors, but that is not always true! There are three main reasons performance does go up roughly by the same ratio as the number of transistors:

    • Some of those transistors can be used for new functions. For example, additional functional units (such as the three-way issue pipeline on PentiumPro/PentiumII vs. the U-pipe and V-pipe on Pentium vs. the single-issue pipe on 486). This is a direct application of transistors to performance, but it only addresses computation-bottlenecked applications. Additionally, some of those transistors can be used to build wider pathways on the chip, leading to improved bandwidth to help bandwidth-starved applications.
    • Smaller transistors switch faster, and so can operate at higher clock rates. This has the dual effect of increasing the number of computations per second (again helping compute-bottlenecked applications), as well as increasing bandwidth--at least on the die. Going off-chip can still be a bottleneck. That brings us to the third bullet:
    • Smaller transistors can be used to build a bigger cache, so that the clock rate and on-chip bandwidth benefits can be used to greater effect.

    Sounds great, but what's bad?

    Well, one big thing that is not addressed by faster transistors is latency. As transistors get smaller and the wires that connect them get smaller, communication between transistors starts to become the true bottleneck. In the "Good Old Days", you could send a signal anywhere on the die in a single cycle, and you could treat a wire as an instantaneous link. In these smaller technologies, though, transport time for signals burns a significant portion of the time for any computation. This is why pipelines get deeper and deeper with each generation. Essentially, you can only make effective use of all of those transistors if you can minimize the amount of communication between them, and that's what pipelining is all about. Unfortunately, this limits how much you can speed up many applications, especially general-purpose compute problems.

    Newer architectures address latency problems by exposing their pipeline (see EPIC or VLIW), or providing extensive resources for dealing with it. The Alpha CPUs, for instance, have an aggressive cache and reorder buffer that allow many pending cache misses to be serviced while non-dependent instructions are executed happily. (IIRC, the 21264 allows up to 4 hits under miss in the cache -- that is, you can have up to four misses outstanding and still take hits in the cache and allow instruction execution to proceed. I don't have Hennessy and Patterson handy to check though.) The reason this is even conceivable is that the Alpha provides a huge bank of architecturally-visible registers, and an even larger bank of rename registers for rescheduling code. Since compiled code spends most of its time moving data between registers, the architecture can easily determine which instructions are dependent on each other and very effectively hide the latency of the pipeline by reordering instructions and renaming registers.

    In contrast, the x86's highly bizarre and rather small register file creates a huge bottleneck to reordering, since the compiler ends up spilling many intermediate values to the stack or other memory locations. As a result, the CPU can't use register names to determine instruction dependencies as often, and so it cannot aggressively reorder instructions. As a result, it cannot hide the latency in the pipeline as effectively, and gets bitten with poor performance. All those transistors sit idle more often. (This, BTW, is why the Alpha can beat the Athlon on some apps, despite a 2x clock-speed advantage on the Athlon's part.)

    There are plenty of other reasons why x86 can't keep up performance-wise, but this is not the forum to discuss them. Just remember, x86 is keeping up with Moore's Law just fine. Don't expect its performance to keep scaling at the same rate.

  • by be-fan ( 61476 ) on Sunday September 17, 2000 @06:44AM (#773616)
    It seems that the pundits spend most of their time doubting Intel, while Intel becomes the de-facto standard with their new chips. Take the Pentium. Soon, everyone (AMD & Cyrix) moved to the superscalar design. Intel added MMX, and AMD and Cyrix added 3DNow!. People originally thought that the PII would be a failure (it's slower than a PPro at the same clock-speed) but it became THE high-end standard for years. People thought that the Pentium wouldn't make it because it ran 486-optimized code slower than a 486. Instead, people just reoptimized their code. All these chips had quirks. Just like the P4 has quirks. However, the software industry will work around these quirks, just like they have for all the other Intel chips.

1 Angstrom: measure of computer anxiety = 1000 nail-bytes