Intel

Where Oh Where Is The Pentium 4? 109

Othello writes: "Sharky managed to dig up some insider info on why we aren't going to see the Pentium 4 this month. There are chipset problems with ICH2 that are causing the delay. The processor should be out around week 48, they say, which is late November or early December."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Basically the UN just did some little statistical sampling about population density and growth rates for the earth and then arbitrarily said that in fact the earth had 6 billion people on it.
  • Why *g*libc? GNU has nothing to do with that.
    Any conforming C library (including glibc or MS) has this limitation.
    C programmers will one day be as valuable as Fortran programmers in late 1999...

    Time to update your resumes and claim some "Epoch Problem Consultant" title.

    ____________________
  • by Hadean ( 32319 ) < ... <at> <gmail.com>> on Tuesday October 10, 2000 @06:31AM (#717464)
    HardOCP [hardocp.com] had a link on their page yesterday pointing to TurboTech and a pretty damaging, but generally truthful, editorial... If you are an Intel fan, well, skip this message :)

    http://www.turbotech.ch/articles2000/001001-intels_darkening-01.html [turbotech.ch]

    HardOCP [hardocp.com] quoted this part, which is a pretty good one:

    September 2000 - Intel's blackest month in countless years. Following the official withdrawal of the 1.13 GHz PIII in the last days of August, in the wake of this face-loss it also became pretty obvious that Intel will have to ditch the grandiose plans of breathing new life into the dying P6 core with a 200 MHz FSB, a 0.13 micron process, and larger on-die L2 caches. It seems the Coppermine core (the last and most advanced modification of the half a decade old P6 core, introduced in the Pentium Pro 150 MHz in the mid-90s) simply won't be able to go much further.

  • Brian Boitano do?
  • Athlon don't support SMP, so don't hold your breath
  • The only truely revolutionary changes that Intel has ever done is the 386 and the Pentium. Every other design that they did was a simple upgrade to already existing chips.

    You're kidding, right? What about the P6 core? O-O-O is not a trivial change. The P4 is a complete redesign of the core that implements concepts like the trace cache that are relatively recent in the computer architecture research community.

    Yes, of course Intel has released incremental changes. If they didn't we'd all be complaining about Intel not advancing their technology. Honestly, I'm getting tired of seeing the same old Intel and Microsoft bashing with little or no useful argument to back it up.

    --

  • My calendar has the year grouped into 12 units called "months". Each unit has between 28 and 31 days. The units have unique names that people can reference. As people become familiar with the names and order of the months, they can quickly determine what time of year is referenced when the name appears. Seems like a better scheme than the "week" calendar: "My birthday is on week 16, day 2 of 1964."

  • It has nothing to do with accurate time, or hardware or resolution. It was just a decision made to store time as a number of clock ticks in a 32 bit location and have all apis honor that.

    Once a machine boots up and reads the time from the CMOS clock, it then counts clock ticks, which are usually generated by a programmable timer/counter chip on the motherboard. Every time the timer/counter generates an interrupt, the time counter location in memory is incremented. The timer/counter chip holds a lot more information that can be read by a programmatic call (such as the microseconds returned by the gettimeofday() call); some counter/timers will even return sub-microsecond timing. However, this is irrelevant to the agreed-upon seconds tick used by all Unix APIs, which is a 32-bit location. It is very easy to implement a 64-bit counter, or 128-bit, or anything else, but it is extremely hard to get everybody to agree to it, short of universal acceptance of a new spec.

    One fly in the ointment is that it is very hard to read two 32-bit locations to make one 64-bit word (or two 16 bits for one 32, or two 8 bits for one 16, etc.) when you are reading two consecutive locations and an interrupt might occur between the two reads. One could try a mutex, but the cost is too high for such a small operation that could happen at a high frequency. It is left to the reader as an exercise as to how to do this.

    Now, in the two-32-bits-for-64-bits situation, this is not so bad: if your algorithm is not quite correct, you could only be off by about 68 years, once every 68 years. Also, mutexes can cause priority inversion, and your Mars exploration vehicle could get into a brain lock.
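    One classic answer to that exercise (a sketch of my own, not from the post, assuming a single writer -- the timer interrupt -- and any number of readers): sample the high word, then the low word, then the high word again, and retry if the high word moved underneath you.

```c
#include <stdint.h>

/* The 64-bit tick count, kept as two 32-bit words because the
 * hardware can only read/write 32 bits atomically. */
static volatile uint32_t ticks_hi, ticks_lo;

/* Timer interrupt handler: the only writer. Nothing preempts it,
 * so it can update the two halves one at a time. */
void timer_isr(void)
{
    if (++ticks_lo == 0)    /* low word just wrapped to zero... */
        ++ticks_hi;         /* ...so carry into the high word   */
}

/* Lock-free consistent read: if an interrupt carried into the
 * high word between our two reads of it, the two samples of
 * ticks_hi differ and we simply retry. No mutex, so no
 * priority inversion either. */
uint64_t read_ticks(void)
{
    uint32_t hi, lo;
    do {
        hi = ticks_hi;
        lo = ticks_lo;
    } while (hi != ticks_hi);
    return ((uint64_t)hi << 32) | lo;
}
```

    (On a real kernel you would also want compiler/memory barriers; this sketch only shows the retry idea.)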

  • Use the post-Higgins accent, you know, the one you hear right before, "Come on! Move your bloomin' auss!"
  • 2-Regular people do not need 64bit CPUs, nor will they for another half decade at least.

    Sorry, but that's just silly. What people do and don't "need" is beside the point. I seriously doubt that most people "need" a processor much faster than 250-300 MHz for their serious computing needs; that's plenty fast enough to handle the web-surfing, word processing, spreadsheets, etc. that make up most people's daily computing use. But those people are going ahead and buying much faster processors because they're available at a reasonable price point. If Intel had put some serious effort into developing a 64 bit processor instead of continually extending the life of the PPro core, it too would be available at a reasonable price point and people would buy it. Would they really need all of the big advantages that you can get from a 64 bit processor? Of course not, but they could very well be getting more processing muscle for their dollar than they are today, and that would be a good thing.

  • I don't own any 500 horsepower cars, but I do care what speed the top end processors run at, and for a very good reason. The highest speeds available today determine which lower speed processors are still supported and sold.

    I prefer to run the low end of the AMD chips, which is now near the 650-700 point. This cheap alternative would not be available at prices below $100 per chip if the high end didn't drive the market.

    Do I really NEED a 650 Duron to run my day-to-day stuff? No, but at $65, I can afford one that will still serve my needs for years to come. While your needs may be met by your current systems, please do not infer from a single point of data that everyone's needs fit the same pattern.

    As for x86 "losing the battle": why did Apple feel the need to offer TWO CPUs for the price of one in their high-end models? Are Mac users that influenced by a simple MHz number? Perhaps "real benchmarks" are not equivalent to "real world" performance, as any CS undergrad will tell you.

    What I do know is this: for my computer business, I can offer my clients the luxury of purchasing a system now with a 650 processor, and the guarantee of being able to swap in a 1200 processor if their future needs demand it. Apple isn't at that point, and the PC world is only due to the high-end CPU wars.

  • Athlon don't support SMP, so don't hold your breath

    I held my breath and went to www.amdzone.com et voila :

    AMD Demonstrates Dual Athlon System

    Date: Tuesday October 10, 2000

    SAN JOSE, CA -- OCTOBER 10, 2000 -- AMD today reached a new milestone with the first public demonstration of a multiprocessor computer designed specifically to work with AMD processors. The demonstration, at the 2000 Microprocessor Forum, consisted of a computer powered by dual AMD Athlon(tm) processors, the AMD-760(tm) MP chipset, and next-generation Double Data Rate (DDR) memory. The multiprocessing computer demonstration featured 3D Studio Max, a professional 3D design and modeling application capable of increasing system performance by using both processors simultaneously.


  • Seriously, all of the major recent Intel chipset problems were with RDRAM. There's the infamous i820E problem with the third RDRAM chip not getting registered (which was after the initial, pre-RDRAM i820 went bust). And now this, when the ICH2 is coupled with the i850 and i860 MTHs, which use... starts with R, you know this...... RDRAM! Right!
    I think the time for the NVidia DDR chipset is NOW. Let's stop this half-assed hardware engineering and pre-alpha lithography which the Intel staff is undertaking.


    It's not that simple. Both RDRAM and DDR require tighter tolerances on the chipset and motherboard levels for the same obvious reason--the more bandwidth you want to transfer, the less your tolerance for noise and defects, and the greater the danger of crosstalk. Period. This is simply a fact of life if you want the benefits of high-speed DRAM without taking the (in my opinion inevitable) step of ditching the current system of expandable commodity RAM on the motherboard in favor of a system which ties hardwired or embedded DRAM to the MPU.

    Now, you can argue that RDRAM makes the problem worse by trying to cram the same amount of bandwidth as PC1600 DDR into a thinner bus. And you can argue that dealing with a new memory communications protocol has led to more bugs. I'm not going to argue with you there. On the other hand, RDRAM proponents would counter that RDRAM lessens these problems by switching to a packet-based protocol to cut down on interference.

    And you can argue--as you did--that mighty Intel has run into myriad problems trying to implement RDRAM for their PC chipsets, and that their results to date--the i820 and i840--have been lackluster at best. But you can also point out that Intel has (for marketechture reasons) made this switch on a processor designed for use with SDRAM only. You can point out that they've (also foolishly) decided to have the switch coincide with a switch to hub-based memory management--which, incidentally, appears to be the source of the erratum in the i850, not the use of RDRAM. You might also want to note that Intel's first efforts with SDRAM chipsets, while not as star-crossed as the i820/i840, were nowhere near as efficient, stable or refined as their 4th-generation BX chipset or the 5th-gen i815.

    Finally, it's worth noticing that there are exactly zero currently available DDR chipsets, bug-free or otherwise, with which to compare Intel's RDRAM record. Of course the main reason for this is politics--only after a year of Intel floundering with the RDRAM protocol did the industry finally coalesce behind DDR. And of course this fact will be changing quite soon, within the month in all likelihood. The fact that several working and apparently stable DDR chipsets are on the verge of being released, from chipset designers less accomplished than Intel, means that DDR cannot be as difficult to implement as RDRAM-backers have long argued it would be. Still--and this is very important--no one has claimed it was easy. Indeed, DDR chipsets were by all accounts much more difficult to get working than SDRAM chipsets, and even now many are reportedly quite finicky when working with DDR made by different manufacturers. Furthermore, all DDR motherboards due for release in the near future are, like all of Intel's RDRAM boards, six-layer. This improves stability and reduces crosstalk at the expense of extra engineering effort and manufacturing cost; SDRAM chipsets tend to be quite stable with just 4 layers.

    I dunno where that leaves the ease-of-implementation balance. It appears that it's on the side of DDR, but it's still a bit premature to say so conclusively. In any case, the idea that whipping up a DDR chipset to replace RDRAM is child's play is absolutely false.

    On the other hand, Intel already does have a DDR chipset for the P4--or rather, for Foster, the "P4 Xeon" due out in the beginning of the year. Now, this chipset could be modified for use with the normal P4 with little problem, but there are two big reasons it won't be:

    1) Cost: In order to fill Foster's massive FSB, Intel's new chipset uses, IIRC, dual-channel double-wide DDR. Contrary to what you may have heard, this means a much, much higher cost than the dual-channel RDRAM bus on the i850. The reason is that RDRAM's one unambiguous advantage over DDR is in using fewer pins; having two channels and doubling the bus width means multiplying the already quite large number of DDR pins by 4, which in turn means motherboards which are mucho expensive, even if two sticks of DDR is cheaper than two sticks of RDRAM.

    2) Legal obligations. Intel is under contract with Rambus not to promote any other next-gen DRAM standard for its mainstream desktop line until 2003. That means that, unless they want their asses sued off (and we all know if there's one thing Rambus is good at, it's ass-sue-offing), the best Intel can do for the next couple years is license the P4 bus and allow 3rd-party chipset makers like VIA, ALi and, as you mentioned, perhaps even nvidia make DDR chipsets for the P4. Actually the contract specifically states only that Rambus has the option to revoke Intel's RDRAM license if they promote a DDR chipset for their desktop chips, so the big question is, would Intel risk losing their Rambus license, especially when there is no cheap DDR solution which can take full advantage of the P4's 3.2GB/s FSB? On the other hand, would Rambus risk revoking Intel's RDRAM license and thus taking themselves out of the PC DRAM industry possibly for good??

    I dunno. Frankly I'm just hoping Infineon succeeds in overturning some of Rambus' RDRAM patents with prior art as they're seeking to do. (I'd be shocked if they didn't succeed in showing prior art for Rambus' "patents" on SDRAM and DDR.)
  • As soon as the business market figures out that AMD is making good products, and that AMD chips are not the second-rate, unstable things that they used to be, I think they will take a larger cut of Intel's market share than they are taking now.

    AMD's not going to get a lot of Intel's business until they get some more workstation-class OEMs producing machines with the Athlons in them. I'm picking out 30 new machines today at work, and I'm not looking at any Athlons. Why? Dell doesn't make any Athlon machines. Gateway doesn't have any business machines, only their home-market Select series. Micron? Nope...

    I'll buy them, I'm happy with my Duron 600 at home, and if they're cheaper, it's a no brainer. But AMD is going to have to break Intel's stranglehold on the business OEM before I can buy them!
    ---

  • Actually... no. In January 2038 the clock rolls over: the signed 32-bit counter wraps, so the date suddenly jumps from 2038 back to December 1901.

    A lot of software doesn't handle that change terribly well, though some can. The EOF is ^D; unfortunately I don't have an ASCII table handy to tell you what the hex for ^D is :( It's definitely not 9999, etc. ('twould be a 32-bit int, though, so 32 1's translated into decimal)

  • I will take this time to note that there are a number of non-Intel architectures out there already that are 64-bit.

  • This is a good article, but it doesn't mention the i815. True, it's still "forcefully bundled" like the 810 with slow integrated video and 1993-era quality sound, but it's a usable chipset (AGP4x, ATA100, 133MHz FSB, SDRAM) with no stability issues to date.
  • eh, you mean *COBOL* programmers in late 1999... I suppose...
  • That 9999 thing is an old database/pre-Y2K problem. Sometimes you use 99... as a marker in DBs to say it's not a valid number (basically like a NULL, really). The thing was that in 1999, some DB fields containing the year in two digits would contain 99 -- you get the message. If the app using the DB interpreted that as an invalid or unset field, then *poff*.
  • There is also a story at the Register about the current exec shuffle that Intel is doing related to the P4. Sounds more and more like panic to me given its recent set of fiascos...

    Why do you people call it P4? So, when the next version (Pentium V) gets out, you will call it P5?
    Oh, wait, P5 is already reserved for the Pentium I!
    Could we at least use the Roman numeral, like P-IV or something?
  • Of course, more serious and more robust operating systems, including VMS and Windows NT, both manage to go to some ungodly high year, on 32-bit machines (and have an earlier starting year, and finer clock resolution, to boot). Unix is unique among operating systems in demanding that users go to 64-bit machines just to get the year correct.
  • You cannot logically make the statement that AMD is cheaper without making relative comparisons for the new chips to their new chips.
  • Oh yeah. Or some sort of prehistoric computer language anyway, from a time when storage was an issue.
    Nowadays, memory is cheap and we align everything on 8 bytes to gain some microseconds...

    ____________________
  • Rumored price info here [theregister.co.uk].
  • Oh yes, the Athlon DOES support (Alpha-style) SMP; there just aren't any SMP chipsets developed for it yet.
  • That's highly dubious. Isn't this more a function of the CMOS than of anything else? I believe that most of the problems with fine-resolution timing were related to the CMOS only allowing for 1-second accuracy in timing and date information.
  • Only if you play the '+/- six months' game. The 486 was delayed that long due to a problem with their fab yield.

    The 80286 was on time.
  • Anyone remember the Pentium III, Pentium II and the Pentium without a number? They all were discovered to have some bug after they were launched.

    Hyep. Remember, kids...at Intel, quality is job 0.99999999999998.
  • Most every Unix vendor's machines are 64-bit. It's just that Unix's date format is based on 32-bit libraries. SPARCs, Alphas, MIPS, POWERs, PA-RISCs -- they're all 64-bit chips.

    And not to sound too much like whoever got blamed for hyping up the Y2K "problem", but isn't the year 2000 a bit early to worry about 2038, anyhow?
  • for the cold winter months up north. Intel's just doing their part to help out with your heating bill :)

    c.

  • Will the Athlon already dominate the high-end market by that point? I think it's already happening.

    The Athlon's not going near the high end until VIA or AMD can release an SMP chipset. Until then it'll just be a gamer, home user, desktop user, and possibly economy server CPU. Which isn't bad. I just bought an Athlon system last week. Which, incidentally, won't recognize its Sound Blaster Live card while running Red Hat 7 (so if anyone here wants to give me some advice, I'd much appreciate it, hint hint :)
  • It is going to dominate the high end market huh?

    That must be why DELL (read: Major Portion of PC market) must be marketing Intel processors in their systems still.

    Just yesterday I got some more junk mail from Dell containing items like the 1 GHz Pentium, etc. etc.

    I scoured the article and could not find any mention of AMD processors, very strange.

    My conclusion, based on this and several other factors, is that saying Intel is too late to be relevant is like saying that since Microsoft is late to market, it's too late for them to be relevant *cough*

    It underestimates what years of monopoly, patents, deeeep pockets, and being in bed with Microsoft can do for your longevity at this point.

    Jeremy
  • About an hour ago the press release went out: AMD demonstrated a dual Athlon workstation at Microprocessor Forum. Everyone knows they've been planning it, but now it's official.

    Of course, it doesn't matter much until you can buy one in stores. Volume SMP-capable MB's should be available Q1 2001 -- not that far away.

    And the Athlon's chipsets have been SMP-capable for a while (they *are* based on the DEC Alpha's interconnect); it's just that no one has put the energy behind building SMP motherboards from them until now.

    Happy day!

    --Lenny
  • I'm just curious, but what does the architecture matter to someone besides those that write compilers or assemblers? I mean, to C, C++, Java, and others, aren't they all basically the same? yeah, performance varies, but to joe user, what difference does a different chip mean if the benchmarks and real world performance are about identical?
  • 1-The P4 isn't really an incremental step forward, as it's a completely new architecture -- the first since the Pentium Pro. That said, in terms of performance it will be barely as fast as a 1GHz P3 or a 1GHz Athlon when it debuts at 1.4 and 1.5GHz. Despite the delays, the P4 is still very much a rushed product. The P4 is moot already to everyone but investors. Intel will be moving from 423-pin packaging to 478-pin packaging with the 0.13um Northwood that will replace the current P4. That is the real next-generation CPU Intel wants to release. 2-Regular people do not need 64-bit CPUs, nor will they for another half decade at least.
  • by Anonymous Coward

    /me rips out AFCArchVile's spleen.

  • The actual reason for the delay is that Intel and George Foreman have teamed up to provide a way to remove all that heat coming from the P4. Intel is just days away from announcing that the P4s will ship with a George Foreman grill attached. Combined with the refrig units that overclockers keep handy, the P4 will be the ultimate home kitchen. The P4/George Foreman grill will save time b/c now you can cook and play (insert game here) simultaneously.
  • It certainly appears that AMD is poised to overtake Intel in the PC arena. IMHO, Intel is coasting on their reputation, fab capacity, and product line breadth, at the moment. They are not the CPU performance or value leaders. The introduction of Mustang and Athlon SMP around the end of this year will chip at Intel's last stronghold.

    But will it matter?

    We speak of the post-PC era. I don't expect to see the PC go away. I'd rather expect it to look more like the end of the mainframe era of a decade or two ago. The mainframes didn't go away, they even kept growing their market. But the wild growth was in the PCs.

    Now in the post-PC era, expect to see the PC market growing, just not wildly. Knowing exactly what will be the wild growth area is what will make some people VERY rich.

    But Intel's product breadth, particularly ownership of the StrongARM, is going to help them more than AMD's CPU leadership will. IMHO, AMD may well have won a Pyrrhic victory. The big question will be how they are poised to play in the post-PC era.
  • The reason 2038 is a problem for Unix machines is that Unix time turns into 9999... (I forget how many 9's), which in most systems is how you denote the end of a file. So imagine having an internal system date which corresponds to the EOF marker; you have a large problem on your hands. Remember, Unix time has been counting since 1970.
  • True, true... It -is- a pretty good board, but then, you're forced to pay extra money to get things you don't want (how many actually want the integrated graphics in these boards??)... at least they offered an AGP slot this time... You know there's a problem, though, when M/B makers are still pumping out BX boards (with ATA100, etc. on them)... Intel needs to make such a board again.
  • Use a non-x86 alternative such as SPARC, Alpha, or MIPS. All three happen to outperform an equivalently clocked Pentium.
  • And now this, when the ICH2 is coupled with the i850 and i860 MTHs

    Emphasis mine. Actually, it's MCH. Memory Controller Hub. An MTH was used when you wanted to use PC100 sdram in a RIMM slot.

  • How do you have a P4 if they aren't even released yet?
  • by HiyaPower ( 131263 ) on Tuesday October 10, 2000 @05:45AM (#717509)
    There is also a story [theregister.co.uk] at the Register about the current exec shuffle that Intel is doing related to the P4. Sounds more and more like panic to me given its recent set of fiascos...
  • P4 was supposed to be next gen a long while ago then came IA64 which turns out to be worthless.

    Now where's that damn link?

  • If they aren't in time for Christmas just ask Santa, because his little elves don't have problems with supply and demand. (There are what like 10 billion people? Give or take a few billion)
  • The suggested price and the possible wholesale price?
  • by AFCArchvile ( 221494 ) on Tuesday October 10, 2000 @05:44AM (#717513)
    Oh where, oh where is the Pentium 4?
    Oh where, oh where could it be?
    For I'm placing my order for a Pentium 4
    And I'm crying, for it I'll not see.

    Oh where, oh where is the Pentium 4?

    Oh where, oh where has it been?
    This song will turn me to a Karma Whore
    And AC's will rip out my spleen!

  • by Flounder ( 42112 ) on Tuesday October 10, 2000 @05:49AM (#717514)
    Could the P4 arrive too late? Will the Athlon already dominate the high-end market by that point? I think it's already happening.

    The only thing that will save the P4 at this point is a huge marketing blitz by Intel. And even that sometimes fizzles (remember the "Enhanced for Pentium 3" web sites? The only one around I've seen is intel.com).

    Now, if only Motorola/Apple would finally get off their asses and make the high-clock-speed G4s the chip is capable of. Dual CPU is tres-cool, but only if the OS fully supports it, and OS X won't be out in full release for a while.

  • But wouldn't that then make the new model the Pentium 3.9999999999999998 ?
  • Well, great.
    Anyone remember the Pentium III, Pentium II and the Pentium without a number? They all were discovered to have some bug after they were launched.
    Is it just me, or could we jolly well wait a bit longer (I mean EVEN longer) for yet another faster processor to spend money on, and then be sure we get one that's not been thrown onto the market in a hurry?
  • by ch-chuck ( 9622 ) on Tuesday October 10, 2000 @05:50AM (#717517) Homepage
    harnessing the complexity of it all.

    What would Gordon Moore, Robert Noyce, Ted Hoff and Federico Faggin do?
  • P4 was supposed to be next gen a long while ago then came IA64 which turns out to be worthless.

    How exactly is IA64 worthless? A 64-bit architecture is necessary eventually, unless you don't want a Unix machine after 2038.
  • Also, unless you want to constantly play tricks like date windowing, you have to increase the number of bits used to store the parameter. And unless you get millisecond-level timing, you have problems.
  • to Intel, IMO.

    While I don't doubt the missing chipsets...I think Intel has far worse problems on the horizon. (I haven't seen anyone come up with the new compilers that are going to be required for the P4...is there news there?)

  • Huh? Don't forget there are lots of english accents around the world. Around these parts, we say "been" as "bean" which rhymes with "spleen" and "seen".

    Anyway, I thought it was a marvelous job! Brava!
  • According to the Register article at http://www.theregister.co.uk/content/1/12491.html, it is delayed for many reasons -- not the least of which is that motherboard firms are delaying shipments of products to support Good Ol' Wilma.

    Also, not many people are REALLY happy about the fact that these new processors are going to have to be cooled by fans that are hard to implement -- 25 dollars for said fan.
  • The CMOS has nothing to do with it. It only stores the time when the machine reboots.
  • My, that looked a whole lot better when I hit "Preview". Somewhere between the preview and the submit I seem to have lost a few words (not to mention a tag). Hungry gremlins?
  • Then why don't they have apps, or even a well-documented method, for calculating the exact time down to 1 millionth of a second?
  • All the heatsinks made for the P4 so far are aluminium with a single run-of-the-mill fan, not 2-lb copper monsters.
  • Flashback a couple years: Could the K7 arrive too late? With the Pentium II already dominating the high-end market, and the poor performance of the K6 series, AMD doesn't have a chance.

    It doesn't make any sense to extrapolate, except to FUD Intel and for the fact that being an AMD fanboy earns points on Slashdot. Sure, Intel is currently having problems with the P6 core at the end of its life cycle, but that's what the P4 is supposedly going to fix, and the chip is designed to get the clock speed up, up, up (because that's what sells chips).

    The "high end" market goes where the performance is. Intel could fall on it's ass and so could AMD. What's more likely is that they will both stay within the same price/performance band in the near future, Intel will continue to keep the big OEM contracts that have made them rich, and AMD will continue to keep the loyalty of it's fans.
  • Your starting premise is not quite correct. Motorola can clock the current G4 at around 700 MHz, but due to the 4-stage pipeline this will not buy you anything, which is the reason it's still "stuck" at 500. The redesign will solve some of those issues, so your analysis needs 700 as a starting point, not 500, and you will end up with something around 1.4 GHz. IBM has G3s at 750+ ready to go.
  • You cannot logically make the statement that AMD is cheaper without making relative comparisons for the new chips to their new chips.

    Well... to paraphrase the old Greek saw, "Beware how picky you act, for the Gods may be pickier!"

    You cannot logically deduce that AMD will be -- the original poster was predicting, after all -- but given recent history you can logically induce the relative pricings. Logicians do study inductive logic as well these days, you know.

    Not everything is Aristotelian (or Platonic).

    Please infer plentiful smiley faces; this post is tongue-in-cheek.
  • Moore's Law was repealed on Tuesday when the Supreme Court ruled...
  • Werd to that. Both Athlons and Durons are SMP-capable; we're just waiting for AMD to release their 760MP chipset this December for boards to be available in January/February 2001. There's nothing to get alarmed about: AMD never said it would happen before then, and so far they seem to be keeping their promises, so I don't see why the 760MP won't be on track.
  • How can anyone here say what the "average" user needs inside their computer? Saying no one needs a 500 MHz or higher-speed processor is asinine; if average users don't need it, how come power users do? Blender will render a 3D scene on a 300 MHz P2, so why do you need anything faster? Intel is in the business of making money; yes, that's right, they sell a product, not try to survive on ad revenues or VC. There would be little to no point in producing chips if they couldn't produce them in bulk: the more chips produced, the cheaper each chip gets. You might have room to complain if they increased the clock speed by 1 or 2 MHz per chip version instead of 50. Last year I bought a P3 500 for almost 300 dollars; now the same chip can be found for under 200. With each stepping of a processor released, all the older versions get that much cheaper. I don't give a shit about the newest processor. I am interested in the one that's fast enough for my needs.
  • Complexity theory seems to be another word for chaos theory. In chaos theory, the first assumption one makes is that the system is not perfectly chaotic; thus it is 'merely' complex. The second condition is that the initial conditions are well known. The genetic algorithm is a bottom-up approach to the same conditions: given an initial condition and a well-known ideal goal, find the best solution. Thus, the algorithm creates order from chaos through chaos. I would go so far as to say that complexity, chaos, and the genetic algorithm are all subsets of the same discipline.
  • Hold on a second- isn't it true that the reason the PIII outperforms the P4 clock for clock is because of the absurdly long pipeline in the P4's design?

    What if I want my uberG4 for doing _work_ rather than spinning tiny wheels really fast? It's as if you don't understand how different PPC chips are and have been, from x86. They have always been register-rich and loaded with cache compared with x86- let's have more of that. Let's have an uberG4 in which the cache will fit, say, Quake III :) then the pentium people can boast of their higher clocks all they want while their computers spin their tiny wheels constantly loading stuff into their teeny 'high speed zones', and the uberG4 will be about TORQUE and will slowly overtake the pentiums for good.

    (Yes, I know that quake framerates are hard to come by on MacOS- one word, ATI- no, two words, ATI and Doom: Quake III is not inherently a super complicated program. The reissue of Doom looks to be a _lot_ more demanding. The difference could well favor the G4- having a compiler use loads of huge registers to speed things is dead simple compared to the twisted arcanity that will be necessary for P4 and beyond)

  • From my point of view the increments are too incremental.

    When I bought my 486, slowest was a 25sx - the fastest at the time was a DX2 66 - nearly three times the speed.

    Now I appear to have a choice between a Celery 500-ish and a 1 GHz P3 - the P3 isn't even twice as fast, and only maintains the lead it does because the Celery is artificially crippled by Intel. Even then you've still got to try hard to find the 1 GHz P3, because they are rare outside the major manufacturers. For the typical user the graphics card is far more important (e.g. a 500 Celery with a GeForce will kill a P3 with an ATI).

    I used to pay 20% more on the system price for a 50% increase in clock speed - about 33 MHz. Now that 20% will get me about 50-100 MHz, or about a 10% increase in clock speed - barely noticeable.

    My advice: buy the best price/MHz chip you can get and a damn fast graphics card. Change the chip next year; you'll still pay less overall and average out at about the same speed.
  • The Pentium vs. Athlon battles of recent months have made a couple of things apparent.

    The first is that no one cares, except for the same kind of person who insists on owning cars with 500+ horsepower. I currently do high-end software development, including lots of 3D graphics work, on two machines. One of them is a 333 MHz Pentium II, the other is a 400 MHz Pentium II. This is hardcore stuff, involving several compilers and some high-end languages that don't normally get used. According to benchmarks, my machines are about 29-35% of the speed of the top-of-the-line machines. And in all honesty I have zero complaints about speed. Both of my computers are zippy. I suppose I could try to slow them down by pointlessly including extra headers everywhere, but why? "My computer is twice as fast at handling unnecessary crap as your computer" is not impressive.

    The second thing is that it's obvious the x86 architecture is fighting a losing battle. When Apple claimed that the PowerPC processors were equivalent to Pentiums of double the clock speed, everyone pooh-poohed them. Then, according to real benchmarks like the one mentioned on Slashdot yesterday (in a story about the Pentium III), it turns out that a 500 MHz G4 really is equivalent to a 1 GHz Athlon, and only 12% slower than a 1.1 GHz Pentium III. And the G4 uses much less power, making it a realistic choice for notebooks. I am not saying that Macs are better than PCs. I am saying that the current high-end PowerPC chips are making the Intel vs. AMD battle look pretty ridiculous. Who cares if Chevy puts an 800 HP engine in one car and then Pontiac outdoes them with 810 HP? Everyone is happy with inexpensive, reliable 150 HP cars that don't need to be handled with kid gloves.
  • by Anonymous Coward
    Intel has a compiler that generates and optimizes native P4 code, as well as P3/P2/MMX; there's a free 30-day trial of it on their site. It's also a very good compiler for a lot of things, beating Visual C++ on most things, and it leaves gcc eating the proverbial dust.
  • By the time they're shipping, AMD will have better prices for faster CPUs.

    People still can't get 1 GHz PIIIs.
  • by AFCArchvile ( 221494 ) on Tuesday October 10, 2000 @05:53AM (#717539)
    Seriously, all of the major recent Intel chipset problems were with RDRAM. There's the infamous i820E problem with the third RDRAM chip not getting registered (which was after the initial, pre-RDRAM i820 went bust). And now this, when the ICH2 is coupled with the i850 and i860 chipsets, which use... starts with R, you know this...... RDRAM! Right!

    I think the time for the NVidia DDR chipset is NOW. Let's stop this half-assed hardware engineering and pre-alpha lithography which the Intel staff is undertaking.

  • This is not news. It's perfectly normal. If I remember correctly, the last chip Intel released when they *said* they would was the 486.
  • Yeah, so instead of waiting on our butts for the overpriced and overhyped Pentium 4, I think it's time people moved on to the other processors that are out there, like Athlons and G4s. I've been running MacOS X for about a week and a half now and I'm very impressed. I'd love to see a FreeBSD port to this arch, and I'm really looking forward to actually being able to test-drive Linux on it.
  • We've got a few P4 machines at my office, because we're writing software for Intel to use as demos at some kind of upcoming show (I'm not on that project, myself, but the guys across the hall from me are). So, I don't know about any problems with the machines, they seem to run fine...

    Of course, the sorts of demos Intel's requested are pretty bogus... Full screen video, voice recognition, some networking stuff, none of which has much to do with the power of the processor...
  • They rhyme if you use a "My Fair Lady"-ish, proper English accent. Try it. (no, not out loud!)
  • If I remember correctly, the Earth's population passed 6 billion late last year. October maybe.
  • But I doubt he has a chip fabrication plant.
  • I see the delay to production of the Pentium 4 as a non-harmful event. Those of you out there who have to have the biggest and fastest, ok -- the Pentium 4 may be just what you need. Just like a balding 40-year-old going through a mid-life crisis needs a red convertible Jaguar.

    But really, at this point the tech market is lagging, software has fallen far, far behind top-of-the-line processors, and we (as consumers, mind you) don't need more power except to fulfill some non-productive urge. And there is no doubt that the P-4 is targeted at base-level consumers and not folks rendering high-end graphics and animations.

    Oh, and I'll put money on the P-4 being less stable than previous generation processors.
  • No, those people don't matter..
    What you really need to ask is:

    What would Brian Boitano do?
  • It seems everything that Intel is doing lately is FAILING miserably. The high-profile problems keep cropping up (i.e. the 820 chipset, the 1.13 GHz PIII, etc.). They are full of promises about faster hardware, but they don't seem to deliver very well. AMD is getting great yields and gaining market share; I think Intel is running scared and is releasing things (either chips or press releases talking about chips) way faster than it should. As soon as the business market figures out that AMD is making good products, and that AMD chips are not the second-rate, unstable things they used to be, I think they will take an even larger cut of Intel's market share than they are now.

    We all should be overjoyed at this newfound competition in the processor market; it has brought prices down and speeds up more quickly than would have happened if Intel were still the dominant force it used to be. However, it also means that errata that used to be fixed before release are shipping with chips, because they want the chips on the market faster. Their attitude is usually "oh, we'll fix that in the next stepping." I am glad they are delaying the release so they can fix the problem, rather than just releasing the chip as-is, as they have done in the past.

    Enigma
    .sigless



  • It doesn't really matter when Intel releases the P4. My next box is going to be an SMP Athlon. Now, if some manufacturer would actually *release* an SMP Athlon board before 2003.....
  • I for one think it's a good move on Intel's part. Instead of shipping a crappy product, they found the bug and appear to be exterminating it from the chip. After incidents like RH 7.0, it pleases me to see a company fixing the problem (even if they're a little late to admit it). I'd rather wait another month or two for them to get it right than spend a couple weeks dealing with returns and their help desk.
  • Hang on, I'll go count.
  • by MarNuke ( 34221 )
    Anyone that's been in the computer field for more than a few weeks knows this. And for every clueless bastard that's been saying through their fat face "why does anyone need an XXX MHz system, all they need is a YY MHz system," there are a few people who realize that a fast system today means you don't have to buy a new one every X years.

    Think about this: 3k for a wickedly mean machine, 2k for a good computer, 500 bucks for "all that someone really needs." A year from now that 500-buck machine will become some sysadmin's mail hub. The 2k machine will be his desktop for the next two years, then serve as his file server for three, then a router for three more years. The hot machine will be the guy's devel machine for a year, his desktop for two, his file server for four, and a router for another four years.

    Which has more value? I think the 3k machine. That's why when I do buy a system, I buy two steps down from the best thing I can get, keep it easy to upgrade (two 256 MB DIMMs instead of four 128s), and understand that next year it will cost less.

  • It's clear you'd rather listen to marketdrones than actual information or check it out for yourself. The G4 was shown to be faster at some very specific things; it's not faster in general, and it's certainly not as fast as a 1 GHz K7.

    I'm not talking about Apple's marketing nonsense, but what various hardware web sites are reporting, including one linked to on Slashdot this past Monday. Also note that they're talking about the CPU in the G4. The snail ads from 1998 were about the CPU in the G3.
  • If I remember rightly, the P4 systems Intel was demoing had fan/heatsink combos that weighed one pound, and the chip had four holes so that the heatsink/fan could be bolted directly to the motherboard.
  • My friends and I are already doing this. Basically you get a calendar and people try to guess which week the Pentium 4 might come out. $1 per week you guess. Whoever guesses the right week gets the money from the pool. In case Intel never releases the damned thing, you go and buy pizza or something for everyone. And if a couple people pick the same week, you can split it, etc...
    This thing has already been delayed, I personally have the weeks of January as my main choices.
  • It took me about five minutes to read that whole thing. I'd suggest that you submit that as an editorial somewhere.

    (still trying to adjust eyes from reading 12-point Times New Roman for five minutes straight!)

  • No, it's a problem because it rolls over.

    It has nothing to do with EOF.

    (sigh)
  • Why do you think they called it a Pentium, rather than a 586? They tried to add 486 and 100 on one of the (then) new chips, and got 585.9997

    -
  • Do we really need the next generation of Intel's overpriced processor/heat plate, or is this merely an attempt by Intel at Yet Another Incremental Upgrade (YAIU)?

    Now that I have your attention, and before you call me flamebait, hear me out.

    The only truly revolutionary changes Intel has ever made were the 386 and the Pentium. Every other design was a simple upgrade to an already existing chip.

    Call me crazy, but I think that Intel should have released a 64 bit chip about 5 years ago. Now that would have been as revolutionary as going from the 286 to the 386, or from the 486 to the Pentium.

    Why didn't they? Because they had already paid a fortune developing the existing technology and they wanted to milk it for another few years. Intel is a big believer in the incremental upgrade. They want users to pay a premium for basically the same chip three or four times while spending the money that should be going to research into the "Intel Inside" marketing campaign.

    By this time Intel should have been releasing chips with 128-bit buses. Instead we are stuck with a chip whose basic design hasn't changed in many years. And the 64-bit chip is delayed yet again. Is it really going to be out at the end of next year? Sadly not.

    *sighs*

    I guess I just don't understand how to make money.
  • Even the G4+ (enhanced version) won't be able to clock much higher than 1.2 GHz unless they shrink the process and use SOI. Moving from a 4-stage pipelined design to a 7-stage one will boost MHz about 25%. On a present 500 MHz part (.22 micron; copper; no SOI) that alone would result in a 125 MHz boost - a 625 MHz top speed. Moving to a .18 micron process we get a slight 20% relative increase, topping the figure out at 750 MHz. Depending on whether Motorola rolls out the G4+ with SOI or not, you can expect a top speed of 1 GHz if Motorola releases the G4+ with copper/.18/SOI. The only ways they will attain higher speeds after that are to: 1) produce chips on .15-.13 micron or lower, 2) redesign the processor again to deepen the pipeline. The latter is very, very unlikely. The former is inevitable. Although, according to the Motorola PowerPC Roadmap(tm), the G5 will have an "extensible architecture" and a "new pipeline". This is probably to combat their low MHz yields. Time will tell.
    --

"How do I love thee? My accumulator overflows."
