Pentium III 1.13GHz: The Real Story

NoWhere Man writes: "Tom's Hardware has posted their dealings with the new PIII 1.13GHz processor. Apparently, without a special board with a new BIOS from Intel, it will not even run correctly. Any motherboard that lacks the special microcode update for this very processor will ultimately fail. The review has some interesting facts about the processor as well."
  • maybe the bugs in the OS will be where the bugs in the processor are... and we might be able to have a version of windoze that doesn't crash when I have InterNut Exploder and Nutscrape open at the same time

  • >(10 RPM * 5-way radial symmetry * 60 seconds per minute = 3000 FPS)

    you made a mistake there, you have to DIVIDE by 60. :)

    10 * 5 / 60 = less than 1 FPS

  • I think the whole article is just a troll. It simply doesn't add up, from a technical standpoint or a business standpoint. Seems like WHBT, WHL, HAND.
  • Actually, sockets have a huge advantage in cooling over slots. It is mostly because of the direct connection of the pins to the board. The pins help dissipate heat, as well as the heat sink being directly seated to the top of the core.

    As far as size, you can get a larger heat sink on a slot, but that still doesn't make up for the natural cooling properties of a socket, unless the difference in size is massive.

  • As Ferdinand Porsche found out during the development of the original Beetle, it only requires 20 horsepower to get a car moving 60 mph (a car with the rolling resistance and aerodynamic profile of the Beetle, which originally shipped with a 36hp engine).

    Why bother, when you can build an electric car that gets 4 miles per gallon []?

    Think about it. Then click the damned link.


  • Part of the reason the performance enhancement isn't there for small-time stuff is that programmers have had little motivation to add it; SMP workstations (as opposed to servers running server processes that either fork or work in conjunction with other, separate processes and hence have 'native', systemic parallelism) have traditionally been really, really expensive, and the parallelization work has only been done on applications that need as much speed improvement as possible and where people were willing to pay for it, like 3D visualization stuff. Everything else hasn't seen the work because the hardware has been too scarce or too expensive, or some other cost/benefit analysis made parallelization unpopular.

    What I'm saying is that if computer manufacturers would instead start coming out with dirt-cheap consumer-priced SMP systems, this would have the added benefit of motivating programmers to consider parallelization in their applications more closely, upping the benefit to SMP, enabling greater demand, etc. A classic feedback loop.

    With the right thread support in the OS and the application, there's no reason a 4-way 300MHz Celeron system couldn't clean the clock of a 1GHz CPU at a fraction of the cost of the higher-clocked CPU. Imagine sitting down to your 8-way 300MHz Celeron system.

    It's also possible that the boffins at Intel might get their heads on straight and start coming up with the goodies to give us segmentation. If they can't give us virtualization, segmentation+SMP might be even better.

    (Before you get all wet and hard to light, yes, I *know* that there are no 4-way Celery boards, and yes, I know that beyond 2-way SMP normally uses Xeon CPUs. But it's not like Intel couldn't come out with a 128K on-die cache "xeon celeron" if they wanted to, and yes, I know that you don't just add up SMP MHz like I did.)
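    The feedback-loop argument above rests on CPU-bound work actually splitting across processors. A minimal sketch of that idea (a toy workload, not a benchmark of any real SMP box):

    ```python
    import multiprocessing as mp
    import time

    def crunch(n):
        """A CPU-bound stand-in task: sum of squares below n."""
        return sum(i * i for i in range(n))

    def run_serial(jobs):
        # One processor works the whole queue.
        return [crunch(n) for n in jobs]

    def run_parallel(jobs):
        # Each job can land on a separate core, which is where a
        # cheap 4-way box earns its keep on parallelizable work.
        with mp.Pool() as pool:
            return pool.map(crunch, jobs)

    if __name__ == "__main__":
        jobs = [200_000] * 8
        t0 = time.perf_counter()
        serial = run_serial(jobs)
        t1 = time.perf_counter()
        parallel = run_parallel(jobs)
        t2 = time.perf_counter()
        assert serial == parallel
        print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")
    ```

    The catch, as the poster notes, is that nothing here happens for free: the application has to be written as independent jobs before extra processors help at all.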

  • I think the reason they went with a Slot 1 design is that you can put a larger heatsink on that cartridge. Simple as that.
  • by mlk ( 18543 )
    at work I use a P166 with 16 MB of RAM running MS Office. And guess what, it's fine, and works great. At home I use a dual PII 466, and it's a little on the slow side (but the bottleneck in everyday work is HD & mem; I need more than 128MB). This whole "I don't need it, but I do" line is utter bollocks. Some people will need it. Some people will need more than just a faster proc, some people will WANT faster, but some will not. I've worked in places using 386's; why the hell should they upgrade? mlk
  • That's not the point. Instead of going for an unstable 1133-MHz processor just because of the clock speed, try perhaps an 800-850 MHz. Most applications you'd care to throw at the slower processor would probably gain *NO* benefit from the extra MHz. (See Tom's list of applications, which covers most computer users, I think, except maybe researchers or audio/video editors.)

    Plus, as you decrease clock speed from the absolute high-end of CPUs, price/performance dramatically increases.

    -$0.02 from Andy
  • Oh, please. Sure there are scientific and entertainment applications that still require more power. And servers, you can't put enough grunt into a shared server. But of 50-odd people that work where I work only 3 (including myself) would regularly max-out a PC regardless of its power, with maybe another 6 requiring PCs faster than the ones on their desk. We've probably got 20 people that could do everything they want on a 32MB 486, with the rest (50 -20 -6 -3 = 21) probably never needing anything more powerful than a Pentium II.
  • I'm curious: what can you do that would warrant a 1GHz processor? It must take _lots_ of operations on just a little data? It seems to me that just about everything will be bound by the need to get stuff into/out of the processor when you're running the processor that fast. With a multiplier of about 8 (mentioned in Tom's article), you'll need to have a pretty good hit rate on the cache if you have much data.

    I'm not familiar with the applications you mentioned. These must involve doing the same thing over and over to a fairly small data set? Maybe a larger cache might be more to the point, if it would let you run code and data in the cache? I wonder if something like the AltiVec unit in a G4 would adapt well to this sort of thing? I think that the Motorola CPUs come with larger caches than the Intel CPUs. And I know that the SPARCs do, but I think their cost-benefit ratio is worse than Apple's.

    All my stuff is large data, with a reasonable number of operations on each element (linear regression, non-linear regression, etc.), so I've never really thought about this before. I seem to need a faster harddrive and faster RAM _way_ more than I need a faster CPU.

    I wonder if you have considered SMP? Would this be more cost effective than a single, fast processor for your kind of use? I should think that running multiple cellular automata might parallelize well, at least.
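    The hit-rate worry above is easy to put in numbers. A quick sketch of the standard average-access-time formula (the latency figures are illustrative assumptions, not measurements of any real P6 core):

    ```python
    def avg_access_cycles(hit_rate, cache_cycles=3, memory_cycles=24):
        """Average memory access cost in core cycles.

        With a core-to-bus multiplier around 8, a miss that takes a
        few bus cycles costs a couple dozen core cycles, so a small
        drop in hit rate swamps the gain from a faster core.
        """
        return hit_rate * cache_cycles + (1 - hit_rate) * memory_cycles

    for hr in (0.99, 0.95, 0.90):
        print(f"hit rate {hr:.0%}: {avg_access_cycles(hr):.2f} cycles/access")
    ```

    Going from a 99% to a 90% hit rate roughly doubles the average access cost under these assumptions, which is why "just crank the clock" stops paying off on data-heavy workloads.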


  • Why do I get the feeling that the market for this CPU is going to be largely based on buyers who will pay the $990 for the CPU and nosebleed prices for the RDRAM in the name of raw speed yet will never think about getting a U2W or Ultra160 setup because "it's expensive"?

  • Microcode is a sub-assembly language that's very dependent on the processor. It defines the processor's instruction set. The instructions are like ENABLE ALU A, SET ALU A INPUT TO DATA BUS, GATE ALU OUTPUT TO ADDRESS BUS, stuff like that. The instructions vary wildly from processor to processor and sometimes from processor model to processor model. But if you can manipulate the microcode you can completely redefine the instruction set.

    An example would be someone at MIT (I think it was MIT) who hacked the DEC KL-10 Model B microcode to make the machine compatible with the IBM 370. Another example would be the ITS microcode for the DEC KS-10, which completely restructured the pager to make it use ITS-style paging instead of the DECSYSTEM-20 paging. They also added several new instructions via microcode update, and redefined the bus I/O instructions to make I/O work in a manner more compatible with their operating system (ITS).

    Certain models of the PDP-11 were microcoded, the LISPMs were microcoded, all of the large VAXen were microcoded, but I did not know PCs were microcoded. Or has Intel redefined microcode? Microcode defines the instruction set; it is not just a bunch of FEATURE X ON/OFF type stuff. It's much more important (and more a pain in the ass to debug!) than they make it sound.
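    The point that microcode *defines* the instruction set, and that patching it redefines the machine, can be sketched with a toy interpreter (entirely hypothetical micro-op names; this is not Intel's or DEC's actual microcode format):

    ```python
    # A toy micro-coded machine: each architectural instruction expands
    # into a sequence of micro-ops that gate values between internal units.
    MICROCODE = {
        "ADD": ("LOAD_A", "LOAD_B", "ALU_ADD", "STORE"),
        "SUB": ("LOAD_A", "LOAD_B", "ALU_SUB", "STORE"),
    }

    def execute(instr, a, b):
        """Run one architectural instruction by stepping its micro-ops."""
        latch = {}
        for uop in MICROCODE[instr]:
            if uop == "LOAD_A":
                latch["x"] = a
            elif uop == "LOAD_B":
                latch["y"] = b
            elif uop == "ALU_ADD":
                latch["out"] = latch["x"] + latch["y"]
            elif uop == "ALU_SUB":
                latch["out"] = latch["x"] - latch["y"]
            elif uop == "STORE":
                return latch["out"]

    assert execute("ADD", 2, 3) == 5

    # A microcode "update" redefines the instruction set in place, the
    # way the KL-10 hack made a DEC machine speak IBM 370:
    MICROCODE["ADD"] = MICROCODE["SUB"]
    assert execute("ADD", 2, 3) == -1
    ```

    Which is also why a buggy microcode table is such a pain to debug: every architectural instruction built on it misbehaves at once.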
  • > I can't think of a single actual Microsoft (...) program that requires a 300MHz CPU, much less a 1GHz.

    Oh come on. The damned acid-trip paperclip in Office takes, like, 30% of your CPU cycles.

    The higher the CPU speed (and, more importantly, factor in the number of CPU cycles it takes to execute the average instruction), the faster the parts of Office that you actually need will work.

    Along the same lines, if I wanted to, I could take a 5 horsepower Briggs and Stratton lawnmower engine and make it power my car. It would work just fine, but it would be about as useful as Windows 95 on a 386.

    Even if M$ software were efficient, incremental upgrades in speed make it possible to do things that we couldn't do a few years ago. A few years ago, arguably, you didn't need anything more than a 486. 486 machines don't generally play MP3s very well.

    More power means more new uses.
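    The parenthetical above is doing real work: raw throughput is clock rate divided by average cycles per instruction, so clock alone doesn't decide the race. A one-liner makes the point (the CPI values are made-up illustrations):

    ```python
    def instructions_per_second(clock_hz, cpi):
        """Raw throughput: clock rate over average cycles per instruction."""
        return clock_hz / cpi

    # A 1GHz part averaging 2 cycles/instruction out-runs a 1.13GHz part
    # averaging 3 -- the megahertz number alone doesn't tell the story.
    assert instructions_per_second(1.0e9, 2) > instructions_per_second(1.13e9, 3)
    ```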

  • you're right, he's on crack. the real thing that's going to hurt them is when superscalar fails and VLIW wins out. as for Rambus, technically it is slightly superior (according to Tom's Hardware)
  • Tom Pabst. The Jon Katz of Hardware!
  • Maybe this is slightly offtopic, but to get an idea of what goes on inside Intel, check out FaceIntel [] (former and current employees of Intel). If even one tenth of what you see on that website is true, then it's no wonder that Intel is doing badly in the marketplace.

    Remember the back cover of The Dilbert Principle: "Employees are the ninth most valuable asset of the company...carbon paper came in eighth."

    I don't buy Intel now. My new box is powered by an AMD K6-2. And when I save up enough $$, I'll upgrade to an Athlon. But no P-III for me, thanks.

  • I will agree with the fact that more non-big-company consumers (namely Joe Schmo with his PC et al) are getting frustrated and shying away from Intel. However, I do not agree that it will be enough to oust them and give AMD enough room to start sitting on their laurels just yet, simply because Intel will not have to convince us to buy it; they do have companies willing to buy the 1.13 GHz along with the special board and the special drivers if need be.

    We have seen time and time again, and although we'd like to secretly deny it, that these products are not made for us (the trickle-down consumer). So Intel will be able to afford their "paper releases" for at least another three years. Yes, I know I sound like a pessimist, but don't you think that if little "trickle-down consumers" Mary Jane and Billy Bob could have companies that huge shifting gears because we were unsatisfied, then Microsoft, Intel, Apple, etc. would stop leading us around by our pockets and start giving us quality first instead of "hand me your wallet now and I'll patch it up later"?

    Nuff Respec'

    7D3 CPE
  • Intel needs to catch up to Apple. Did you see how the dualie G4 smoked that 1GHz PIII?
  • yes, we need that kind of speed.

    I am currently spending serious amounts of CPU time running Frau. mp3 encoding. it's very slow, and to encode a whole album takes well over an hour (well over).

    it's mostly CPU-bound, and when I moved from a lowly p2/450 to a k7-800 (an o/c 700 tbird), I shaved 10's of minutes off each song's encode time

    so tell me again that pure compute power isn't needed by the masses?


  • Oh, very good.

    With a well written suite of applications over half of the staff where I work could probably get by on an XT. Certainly a 68000-based system like the Amiga 500 would be able to cope. It's only the bloated apps with the cute crap that means people would consider anything less than a Pentium "slow". No-one should ever need more than 640k...

  • "Let's face it all they have been doing any more is shrinking die sizes by going to smaller processes and adding instructions"

    Strange, that sounds like exactly what they've always been doing - in fact, that is what they've always been doing, since the 8086 - adding instructions, shrinking die sizes, and optimizing CPU speed internally (they haven't stopped doing that either, considering Willamette's internal RISC-like architecture).

    Perhaps I've missed something basic, but you seem to imply that there was a time when Intel was somehow doing more than just shrinking die and adding instructions .. if so, what was it? 286/386 protected mode was probably the only major addition ever in the entire line.

  • What do you get when you combine an egomaniac with a paranoid schizophrenic? I dunno, but it smokes french cigs and wants you to touch his monkey [].

    Tommy claims that the Pentium-3 1.13GHz is unstable, and he can't get benchmarks to run. Why?
    Because the Pentium-3 demolishes Athlon, and costs less. So he made up this little story. Ach!

    As you can see, some other hardware sites had NO problem running the 1.13GHz Pentium-3: Sharky Extreme [] and FiringSquad [], for instance.

    They even ran it on 440BX and VIA boards! Firing Squad OVERCLOCKED it. But Tommy's was broken, really, and it must be a SCANDAL for Intel.

    Here's a scandal for you--AMD's stock price is going to cross Intel's this week, heading the wrong direction! No wonder Dr. Tommy is having problems!

  • I know this chip needs 1.75 volts to run correctly. Maybe the instability is related to that, and the microcode update enables that...

    This chip is more or less an overclocked P III 800 or 900. I have a 700E running at 1085 @ 1.7 volts...

    This Fall Intel puts out a cC0 stepping which should allow CuMines to clear into the 1.2-1.3 GHz range.


  • It is also a misconception that motion blur is an ideal solution for smooth displays.

    In real life, if you see a fast-moving object then your eyes can track it, and the object does not appear blurred at all. However, if your eyes follow a fast-moving object on a cinema display, then it will appear blurred.

    Thus, as the previous poster showed, if you want accurate representation of fast moving objects then high frame rates are a must. There are cinema systems around that use much higher frame rates than the usual 24FPS.

    I'm just waiting for a video card and monitor that can do 1600*1200 at 150FPS. ;)
  • by Yhcrana ( 88366 ) on Monday July 31, 2000 @04:57PM (#890385) Homepage
    Tom may be wrong here, but when I worked inside Intel they were all worried about it also. A couple of the bosses in my department told me that Intel was having some problems with their stock and outstanding shares... never followed it up, but I wish I would have bought into the company when it split that August 1997.

    But you must admit AMD is getting the best of Intel, simply because Intel has stretched itself too far and isn't innovating any more. Let's face it, all they have been doing any more is shrinking die sizes by going to smaller processes and adding instructions. We need to simplify again and go back to a RISC processor and away from making the chip better by adding instructions to it. SSE is a crock; MMX was good, but mainly a marketing ploy. AMD's 3DNow technology isn't much better, but at least they don't use that as the reason for raising the price of their CPUs.

    And how about naming a CPU "Coppermine" when it is still using aluminum interconnects? The Thunderbird and Duron are using copper. As it stands, AMD has greater potential in the future at the least cost: their Dresden plant, .18 micron process, copper interconnects, and a much better yield on their chips than Intel could ever dream of. Oh yeah, and they don't have to deal with Rambus.


  • Well, consider this. What about stuff like voice recognition? Given the correct software, people could be immensely more productive if they could have a computerized assistant to do stuff for them. If there were software that would provide the equivalent of an assistant (archive this file, send this document to this person, take these notes, get info about this topic, etc.), then businesses would immediately buy the fastest computers available if that was what was needed to run this software (because speech recognition and AI are pretty big CPU hogs). That's what this increase in power amounts to.

    As for the "cute crap," that offends me. Sure, you can buy a perfectly functional Toyota, but given the option any sane man would buy a Boxster with tons of chrome trim, no? Aesthetics count for a lot to most people. Some people manage without them (anyone who still uses TWM, for example), but to most people, working in a nice computing GUI is just like having nice office furniture. Sure, you can do without it, but it's so much nicer on the eyes.
  • You bring up an interesting point, one that is very important for SMP computers and OSs. You see, there are only a few reasons why BeOS is better than Linux at SMP. One is that it was designed for it, but in the end, that isn't the biggest reason. The major reason is the way the BeOS API is designed. It is designed in such a way that it is not only easy for an application designer to use multiple threads, it is often EASIER.

    For example, take the protocol used to allow direct access to the screen. Conventional wisdom dictates that an application locks the buffer, draws to it, and then unlocks it. However, in BeOS, the system sends messages whenever the state of a buffer changes. The application must respond to this change inside a function that may not run for more than 3 seconds. Thus, the application designer is forced to use a separate rendering thread to update the window. Not only does this improve performance (since the surface isn't being constantly locked and unlocked), it allows the load of the application to be distributed between multiple processors.

    Of course, this API dictates a lot of policy, which is quite unUNIXish. However, it really is the most effective way to end up with a good SMP system. I see the GTK and Qt people missing a tremendous opportunity in what they are doing. GTK and Qt are entire OS-level APIs in their own right, and thus they have the opportunity to force developers to use threads (and do other desirable things, like support application scripting). However, not only do GTK and Qt NOT force developers to use threads, in general they are (especially GTK) pretty thread-unfriendly. Developers are almost by definition lazy. Even if SMP machines were very common, it still wouldn't be as much of an incentive as an API that made it easier for them to use threads than not.

    (BTW, the Be API is not at all hard. It is just designed in a way that makes threads a more desirable choice than no threads. Even with the extensive threading it is still easier to program for than most other APIs. I say this because there is no reason to scare any programmers away from the OS. ;)
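    The pattern described above (the system posts state-change messages, and a dedicated render thread consumes them instead of the app locking the buffer) can be sketched like this. This is a toy illustration with hypothetical names, not the actual Be API:

    ```python
    import queue
    import threading

    class Window:
        """Toy Be-style window: state-change messages go into a queue,
        and a dedicated render thread drains it, so drawing never
        blocks the main (message-dispatch) thread and can run on a
        second processor."""

        def __init__(self):
            self.messages = queue.Queue()
            self.frames_drawn = 0
            self.render_thread = threading.Thread(target=self._render_loop)
            self.render_thread.start()

        def post(self, msg):
            # Called by the "system" whenever the buffer state changes.
            self.messages.put(msg)

        def _render_loop(self):
            while True:
                msg = self.messages.get()
                if msg is None:          # shutdown sentinel
                    break
                self.frames_drawn += 1   # stand-in for redrawing the buffer

        def close(self):
            self.messages.put(None)
            self.render_thread.join()

    window = Window()
    for _ in range(3):
        window.post("BUFFER_CHANGED")
    window.close()
    assert window.frames_drawn == 3
    ```

    Because the API hands you the message queue and the deadline, the second thread is the path of least resistance, which is exactly the incentive the poster says GTK and Qt fail to create.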
  • Heh... I remember when I came to college with a 3-year-old 486; even when I bought it, it was obsolete. I had been running DOS on it (didn't really know about Linux then). I even managed to get the internet working with it. It took a few tricks, but it worked... slowly... so, grudgingly, I installed Windows 95 on it. The internet was faster, but of course, the rest of my computer wasn't... so it averaged out... and eventually I got a faster CPU, a PII 266. I dunno, I always just use what I need. Right now, I have a PIII 500 MHz, and I really don't foresee a time when I'll need much faster unless I start doing some serious number crunching on this baby. I guess some people just got to have the latest and greatest. =)
  • Actually (IIRC) Intel settled out of court so fast it barely had time to make the news.

    "The axiom 'An honest man has nothing to fear from the police'

  • ON THE HIGH END?? You must be kidding. Out of the narrow selection of AMD motherboards, they all have stability problems, and their performance over Pentiums is negligible.

    Cites, please? Do you have anything to back up this claim?

    If you are having stability problems with an Athlon system, it's probably because you failed to follow AMD's guidelines. The Athlon, particularly early ones (like mine), is finicky about the hardware it works with. They are particularly sensitive to the power supply, which is why AMD has a list of recommended power supplies [] on their web site. A UPS with a power conditioner is a smart investment for any high-end system. (I use an APC [] Smart-UPS 650.) According to the guys I buy my hardware from, something like 90% of the stability problems with Athlon systems that they see come from people using an out-of-spec power supply. Most of the rest of the problems come from using marginal memory.

    My primary system is a FIC SD11 with an Athlon 550. I got it within the first two weeks of it hitting the market. It's almost a year old now, and I have had ZERO stability problems with it. It runs 24x7; the only time I ever have to reboot it is when I switch into 95 to do some gaming. This box primarily ran NT4 Workstation up until May, when I switched it over to RedHat 6.1. It was rock-solid even under NT, and has been just as stable under Linux.

    It makes NO sense whatsoever to buy a top-of-the-line CPU and MoBo and stick it in a bargain-basement case with a cheesy power supply and no-name RAM. Spend the extra money and get server-grade memory and power. Likewise, if you ignore the manufacturer's guidelines and use out-of-spec parts, you have no right to be pissed at them when your substandard components crash the system.

    Before you build an Athlon system, do yourself a favor and RTFM [] first. You'll save yourself a lot of aggravation.

    "The axiom 'An honest man has nothing to fear from the police'

  • And a 600 MHz G4 still works better...
  • Two words: voice recognition.
  • You are right, there will always be a need for more CPU power.

    There is a temporary reprieve from that law, however, since your computer is no longer the bottleneck to performance. The internet is.

    If there is some breakthrough that actually brings gigabit-all-the-way connections into the mass market like 56K modems are today, then we'll see CPUs becoming important again.

    Given enough bandwidth, we could see lots of uses for more CPU power. Virtual Reality, AI, Super-Duper-Uber-Hi-Res-Hi-Fi-256-bit-audio movies over the internet, etc. will all need lots of CPU.

    P.S. Switch to "Plain Old Text" when posting or remember to use BRs or Ps.
  • by MillMan ( 85400 ) on Monday July 31, 2000 @06:38PM (#890401)
    Saying Intel isn't innovating anymore is more a bad choice of words than it is incorrect. Obviously you've seen Intel is working on some interesting things; however, the original poster is referring to what they actually bring to market. In the end, that's all that matters.

    It's easy to say this now, but you could see Intel starting to falter a few years ago. What wasn't so easy to see was the emergence of AMD.

    Intel has been around a long time. They made good stuff for a long time. They made huge dough. The shareholders were happy. But shareholders always want more. Profit margins have to keep increasing. They can only increase to a point until the rubber band snaps. Their high prices, the RAMBUS fiasco, and others point to this. Eventually you really piss off the customer.

    There are other reasons too. The older a company gets, the more bureaucratic it becomes. A Very Bad Thing in this industry. Intel is also very engineering "top heavy". Too many engineers who have been around for too long, all thinking they know exactly how it should be done. This can stifle innovation very badly.

    Intel will have to go through some sort of rebirth eventually, something like what IBM went through. They haven't hit bottom yet, though. I love to see Intel suffer, but I don't want them to go away: AMD needs competition. There is no reason AMD can't turn into Intel in a few short years. They're just another corporation, answering to a group of shareholders who are no different than any other.
  • by Anonymous Coward on Monday July 31, 2000 @06:39PM (#890403)
    Read the Sharky's article. Skip the garbage and go straight to the hardware specs. Sharky got Intel's VC820 board -- which Tom didn't get. He tested (or tried to test) the 1133MHz chip with other boards (BX, i820, i840, Apollo133) and it didn't work. *That* is what Tom is saying. The mysterious 1133MHz CPU requires a brand new board to work properly. Also note that Sharky didn't test any game in 32 bit color. Oh I can play Quake 3 at 640x480 at 150 FPS. Great! But useless. I want to play Quake 3 at 1024x768 (or above!) and 32 bit color. The 1GHz+ CPU doesn't help here. At this resolution even 800MHz is enough to max out GeForce 2. And in general, at high resolutions the performance is limited by the video card.
  • Remember he was the first one to point out that Rambus performed worse than SDRAM, when everyone else was claiming it was great but just too expensive.

    Intel are now part of an industry lawsuit fighting Rambus patents, and have announced a complete U-turn: they will support SDRAM and DDR for the P4.

    I'll give Tom the benefit of the doubt on this one - let's wait a while, and I'm sure the truth will come out about the microcode update.
  • by Pink Daisy ( 212796 ) on Monday July 31, 2000 @07:12PM (#890413) Homepage
    We all know exactly what Tom thinks about Intel.

    Intel could release the "Jesus Processor" that would save our souls and send us to heaven eternally if we just asked. Tom would say it was a cult so they could get our money, and would lead to mass suicide.
    Intel could release the "Olympic Processor" that ran faster, harder, higher and broke every single record. Tom would say it was just the doping.
    Intel could release the "World Peace Processor" that automatically altered documents from world leaders, causing world peace. Tom would say Intel was spying on everyone and abusing the information to generate massive profits.
    Intel could release the "3rd World Processor" that cost five cents, ran off sand, had built in voice input in every spoken language and had a holographic display built in so you wouldn't have to buy expensive peripherals. Tom would say they were trying to create a monopoly for their lousy video cards in the lucrative market of people who can't afford monitors.

    I'm not saying Intel has any of this stuff; obviously they don't, but Tom's comments about Intel are neither surprising nor credible.

  • Thomas Pabst wrote:
    It requires the whooping core voltage of 1.75 V by default. Normal Coppermines' only require 1.65 V. This increases the power hunger of that CPU over the Giga-Pentium III (1.7 V) significantly by already at least 3%, plus the 13% required by the higher clock speed, summing up to over 16 % more power hunger.

    He added voltage and current together to get power?
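    For reference, dynamic CMOS power scales roughly as C·V²·f, so the two increases multiply rather than add. A quick check with the quoted figures (assuming core voltage and clock are the only changes, capacitance held constant):

    ```python
    def power_ratio(v_new, v_old, f_new, f_old):
        """Dynamic power scales as V^2 * f (switched capacitance constant)."""
        return (v_new / v_old) ** 2 * (f_new / f_old)

    # The 1133MHz part at 1.75V vs. the 1GHz part at 1.70V:
    ratio = power_ratio(1.75, 1.70, 1133, 1000)
    print(f"{(ratio - 1) * 100:.0f}% more power")  # roughly 20%, not 16%
    ```

    The ~6% from the voltage squared times the ~13% from the clock lands around 20%, so simply summing percentages understates the increase.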
  • by Black Parrot ( 19622 ) on Monday July 31, 2000 @05:00PM (#890415)
    > Is there even a market for 1GHz+ right now?

    Yes, but it may not be the mass market.

    I, for example, need all the Hz I can get because I play around with CPU-intensive tasks like running genetic algorithms, training simulated neural networks, and running cellular automata on game-sized maps.

    For a slightly broader market, analysts were saying a few months back that NT users upgrading to W2K should upgrade by 300 MHz at the same time if they want to keep their current performance level, so that will put a number of people up in the ballpark of 1GHz.

    But for most people, I share your doubts about the need, at least until the next generation of bloatware makes 1GHz absolutely essential.

    For now, even kooks like me sometimes buy less than the top of the line, since the performance/price ratio improves so much. If I bought today I would probably only buy 800 MHz, CPU hog though I be.

    I'm certainly not going to buy extra gidgets to stick on my computer so I can run 1.13 GHz instead of 1 GHz.
  • ...since, this being Intel, you won't be able to actually *buy* one of these chips until next year some time.

    It's all about marketing.

    - A.P.

    "One World, one Web, one Program" - Microsoft promotional ad

  • This Anandtech article [] clearly shows this chip running just fine on the BX and Via 133A platforms.

    Sharky Extreme [] chose to use the VC820 board, however they mentioned nothing about these "problems" that so far only Tom has found.

    The NDA was just lifted today, folks. Don't think Tom is the first and last word, and don't think he has an exclusive here.

    I don't trust him. Cough, cough, GeForce benchmarking...

    I heard Tom is a bad doctor.


  • Well, let's think about it: did we see Celery-based systems in stores right after its release? I think it was a few months before I saw those in stores.

    How about this: Intel limiting their Celery chips to the 66 MHz bus. I can't believe that, what a crock. I'm sorry, but I get kinda tired of being told how I should run my CPU by the company I buy it from.

    Yes, if I remember right, the AMD CPUs do have a smaller cache and a lower multiplier, but look at it like this: AMD also has a faster bus speed (admittedly it is DDR-based, if I remember right), more L1 cache, an L2 cache that doesn't mirror the L1 cache like Intel chips do, higher yields on their CPUs, and can actually make market...


  • > Paper releases seem to be all that Intel can do anymore.

    Yeah, it's starting to look like AMD has so little competition on the high end that I'm afraid they might start resting on their laurels.

  • Voice recognition in an office environment is not a good idea - the place is loud enough as it is. The silent keyboard is the office worker's friend.

    And by "cute crap" I meant the 10,000 Word and Excel "features" that the average office worker never uses. I didn't mean eye candy - I run the Litestep shell on my Win9x box with associated visual effects and other cool stuff I enjoy. (lighter, faster and better looking than the Explorer shell...)

  • I read Tom Pabst's article and frankly, I have this feeling he may have gotten some bad parts.

    The reason I say this is because Anandtech got the 1,130 MHz Pentium IIIEB working using a Slot 1 motherboard that uses the VIA Apollo Pro 133A chipset with good stability--and the performance was quite good, only limited by the somewhat slow memory management chipset.

    It'll be interesting to see when will Intel ship the 1,130 MHz PIIIEB on FC-PGA format, though.
  • The argument that "nobody needs a faster computer, anyway" was rubbish 20 years ago and it's rubbish now. As a programmer, every time I compile something I feel the need for more CPU speed, and if compilation starts to become I/O bound there's always more optimizations that can be done to soak up that time.

    Want something considerably more mainstream? Digital video editing, which is going to take off like crazy over the next few years, as people realise that you can use it to produce watchable movies instead of the unbearable tripe that is an unedited amateur video. It's going to be a hardware manufacturer's dream, because it places huge loads on CPU (compression), memory, and I/O.

    (Of course, edit capabilities just bring us back to what you could do with Super 8 movies decades ago, but anyway...)

  • > I'm curious: what can you do that would warrent a 1GHz processor? It must take _lots_ of operations on just a little data?

    Yes, all involve many iterations. The data can be either small or large, depending on the problem you are running.

    And you're right about cache. Right now I'm running with 1 MB of L3 cache (on the motherboard). I hope the next system I build will have a big cache plus PC133 DDR to fill it up as fast as possible whenever I do have a miss.

    > I wonder if you have considered SMP?

    Yes, SMP or even multiple machines is ideal for some of this work. Still, I'd rather have a twin 1 GHz system than a twin 500 MHz system!

  • by full_tide ( 136848 ) on Monday July 31, 2000 @05:04PM (#890442)
    Actually, that whole duron heatsink thing is fluff. People were simply using socket 370 heatsinks and assuming they would work. AMD has a very diverse list of recommended socket A heatsinks []. If you use one of those, you will NOT experience these problems (or void your warranty).

    To quote amzone [] who put it bluntly, but accurately (of this tweaktown article []):
    If you are cracking your Duron then you are doing something wrong. Most likely you are using a non socket A heatsink, and you are using too much force to put it on. It is very important that the heatsink is designed for your CPU. This article is full of so much misleading information that I can not believe it. I suppose this is what happens on sites that just spit out as many articles as they can write in a day, don't do any research, and then spam news sites to get posts about it. It is ridiculous. Spacers will only redirect heat back into the CPU die, taking off the support pads is a bad idea, not using a correct heatsink is playing with fire, and there is no defective packaging going on, that is ridiculous, what is going on is a string of websites not knowing what they are doing and screwing up their CPUs and then crying about it, like it is AMDs fault, and that is a joke.

    ~full tide~
    "Linux is only free if your time has no value."
  • You're wrong.

    The Thunderbirds have been out more than long enough to get them into stores, and their L2 cache runs at full clock speed.
  • I have had much more luck pumping my machine full of fast RAM rather than jumping up the CPU every 8 months. At 512 MB, there is a VERY noticeable increase (for me, YMMV of course) in stability with apps like Netscape that are known to crash. I got better performance out of Quake, I think, but I haven't clocked it. Overall, my machine works better. I keep thinking that if I had spent the money on a new CPU, and kept it at 128 MB of RAM, I would just have seen Netscape load faster between constant crashes.

    Bottom line, which has been said here already, is that it's all about marketing and "prick waving": look at us, we have the fastest CPU.

    Everyone who asks me what to upgrade, I tell them: "Aim for 500 MHz, and spend the extra dough on RAM, a fast HD, and a good video card." I agree that I just don't see the need for that much speed when good RAM and good video make all the difference, IMHO.
  • in stability with apps like Netscape, that are known to crash.

    Netscape crashes 1/2 as often at 256 than at 128, half again at 512, and half again at a gig. It's nearly stable there. Only problem is: Do you know how much this gig of ECC EDO SDRAM cost me?!?!
  • All my stuff is large data, with a reasonable number of operations on each element (linear regression, non-linear regression, etc.), so I've never really thought about this before. I seem to need a faster hard drive and faster RAM _way_ more than I need a faster CPU.

    That's interesting. I've noticed the same thing. I train neural networks on a training set of 400,000 input/output vectors and even after heavy optimization (the next step would be assembly), I still only get ~100 mflops from my Athlon 500. Looks a lot like a memory bandwidth problem. Right now it takes ~10 hours to complete training... I'd sure like to get a 5 GHz CPU/1 GHz FSB!
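    A rough sanity check of that "memory bandwidth problem" hunch (illustrative only; the 800 MB/s PC100 figure and the one-double-per-FLOP traffic pattern are assumptions, not measurements):

```python
# Back-of-envelope: if every floating-point operation must stream one
# 8-byte double from main memory (working set misses cache), the FLOP
# rate is capped by memory bandwidth, not by the CPU clock.

def bandwidth_limited_mflops(mem_bandwidth_mb_s, bytes_per_flop):
    """Upper bound on MFLOPS when main memory is the bottleneck."""
    return mem_bandwidth_mb_s / bytes_per_flop

# Assumed ~800 MB/s peak for PC100 SDRAM, 8 bytes fetched per FLOP:
print(bandwidth_limited_mflops(800, 8))  # -> 100.0
```

    That the bound lands right around the ~100 MFLOPS observed is at least consistent with the training loop being bandwidth-bound rather than compute-bound.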
  • It is hard to tell the difference (at boot time) between 95 on a 486 and Win98SE on a PIII 700.

    So, run Windows 95B on a PIII-700. Not only will it be a hell of a lot faster than 98SE, it also won't have the stupid "Active Desktop".

    ("Active Desktop", of course, is just a very nice way of saying that not only can you crash Explorer, but you can also crash Internet Explorer, all without ever having to dial up your ISP.)

  • What point was I trying to make? Ah! Yes... use the machine for what it was built for, and not for what is standard now. I've seen people complain that their "older" PC got unbearably slow because they installed Windows 98 and kept installing new software all the time... *sigh* Normally those people just buy a new machine, like nice little consumers ought to do.

    Of course. Your ten year old 386 is exactly the same computer as it was when you bought it. It's still every bit as fast as it ever was.

    Your perceptions of what a computer should be have changed since then. Not only do you want a bigger and (arguably not) better operating system than Windows 3.1, but you're now trying to play MP3s, video clips, video games, not to mention opening fat and inefficient programs.

    At the time your 386 was new, a little video window the size of a postage stamp and playing 5 frames per second was high-tech video.

    Nowadays, thanks to ever-faster processors, many new computers ship with DVD players. (And don't get into a semantic argument that most of the processing occurs in the DVD decoder; I know that too, but I use it as an illustration anyway.)

    How long ago was it that Bill Gates said we'd never need anything more than 640k of RAM?

  • by Beta7 ( 91128 ) on Monday July 31, 2000 @08:30PM (#890464)
    The thunderbird and Duron are using copper.
    Not exactly. Thunderbirds are made at AMD's Dresden fab (copper) and Austin fab (aluminum). All Durons are made at the Austin fab, which only does aluminum right now. The Dresden fab is the copper fab and is reserved for high-end chips.
  • >You are either dealing with a buggy OS
    Red Hat 6.2
    >some buggy apps
    GNOME, Netscape, Enlightenment, EFM.... Also a LOT of VMWare use.
    >some seriously stupid user behavior
  • Faster CPUs are of little use to me. After a few tests, I've determined that memory bandwidth alone is what is holding my image processing back. Yeah, raw processor speed helps, but when you can't get the data to it fast enough, adding more is useless. I've gone to using prefetching and wavefront optimizations to get my processing speed up to a reasonable level. Even then, a 2x speedup in main memory throughput would double my video frame rate.
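    To make the "2x throughput = 2x frame rate" claim concrete, here is a sketch under the assumption that the processing is purely bandwidth-bound (the 500 MB/s baseline and ten passes per pixel are made-up numbers for illustration):

```python
# If image processing is purely memory-bandwidth-bound, frame rate
# scales linearly with memory throughput.

def frames_per_second(bandwidth_bytes_s, bytes_touched_per_frame):
    """Frames per second when memory traffic is the only limit."""
    return bandwidth_bytes_s / bytes_touched_per_frame

frame_bytes = 640 * 480 * 3   # one 24-bit frame, ~0.9 MB
passes = 10                   # assumed reads/writes of each pixel per frame

base = frames_per_second(500e6, frame_bytes * passes)
doubled = frames_per_second(1000e6, frame_bytes * passes)
print(doubled / base)  # -> 2.0: doubling throughput doubles the frame rate
```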
  • by Yhcrana ( 88366 ) on Monday July 31, 2000 @05:14PM (#890470) Homepage
    Well, I think the most important thing for any consumer to do is read multiple sites. I tend to find that if I approach Tom's articles from an unbiased point of view and simply ignore the crap he sometimes puts out, he gets me the information I need, usually ahead of most of the other sites.

    First and foremost, we must not simply take any one site at its word. I read multiple sites and get as much information as I can, then make my own decisions based on intelligence rather than on what a site told me.


  • AMD's 3DNow technology isn't much better, but at least they don't use that as the reason for raising the price of their CPUs. How else is AMD supposed to compete in the x86 market, with Intel releasing SSE instructions etc.?

    They needed a gimmick, and I am impressed they came up with the engineering resources to create more than hype/vaporware.

    AMD has put their money into their processors (where their mouth is), not into hype and busting Intel's balls.

    Take me, your average programmer: I keep up with the technical issues for the most part, and... I can't think of too many times where I have been sold any hype about AMD processors, but nearly every campaign from Intel is hype/shit.

    That said, how fair is it to judge AMD on the same scale as Intel when they have been playing catch-up the entire time and still managing to stay afloat?

    Look at the Athlon: it's not even native x86, it just emulates it.

    It may be like a raging bull with power consumption, but let's at least give AMD the chance to become king of the hill and then see where they take us.

    I just don't think there has been a lot of room to innovate and stay on top of the Wintel world. How can you just go back to RISC processing and stay in the market?

    Simply said, if you want to be king of the money pile you will be running a Windows platform for a while, no two ways about it.

    That's where the money is, so I think in order for AMD to become a real innovator they need their time on top before they will be able to 'push' something like RISC and strong-arm the rest of the PC world into it... they would just be laughed at right now, and we all know Intel isn't going to.


    If you think education is expensive, try ignorance.
  • Reading Tom yapping about how no one needs more than 1 GHz drives me nuts. I remember hearing that bullshit argument several years ago: "What could you possibly need more than a 386/33 for? There's nothing that needs more power than that!" I went from 450 to 667 and the change was very, very obvious to me in all sorts of applications, and just generally in the environment. Flight Simulator 2000, however, still begs for more power (yes, yes, partly because it's an evil empire [] product, but also because it has an enormously complex environment simulating a gigantic swath of the world). Several simulators, such as GP3, have physics models that push the limits of the processors. New innovative interfaces using 3D techniques and modelling, overlays, etc., beg for more processing power. Don't EVER say that no one has any need for it, unless you accept a myopic vision as the truth just because you're the one saying it.

    Having said that, you know Tom's purely pissing on Intel (presumably because he didn't get the microcode: WHO DO INTEL THINK THEY ARE? Don't they realize the importance of Tom Pabst?!) when you read: "A 3D modeler? Well, moving wire frame models around is again limited by the 3D chip and the scene rendering is done faster with an Athlon processor at less clock speed anyway." in reference to why no one needs more than 1 GHz. Okay, firstly, the wireframe model is only accelerated by the video hardware if you have a GeForce (2) or a professional video card: does everyone have one of those? Secondly, the rendering is begging for every iota of processing power you can give it. An array of 64 Xeons: it's still begging for more. To simply jump over this and say that an Athlon does it better completely defeats this whole "no one needs more than 1 GHz" bogus article. In six months I'd love it if everyone remembered Tom's wisdom on this.
  • The way it is now, I cannot change the order of execution. However, I have made sure to balance the multiply and add operations. As for my cache, it runs at half speed; there's no (non-Thunderbird) Athlon with a full-speed cache. I'd like to have this info confirmed, but I think double values bypass the L1 cache (what about L2?).
  • It works alright for me, but that's W2K Pro, with every patch I've found. On a semi-related topic, there's a memory leak in W2K's TCP/IP stack, which I am too unknowledgeable to diagnose further.
  • He didn't mention current at all; where are you getting that from? He's summing two measurements of 'hunger' to give a final total in 'hunger'. This, though, does seem a little bit strange, since hunger is usually subjective.
  • Hey, that sounds like Slashdot!
  • by Yhcrana ( 88366 ) on Monday July 31, 2000 @04:39PM (#890508) Homepage
    Paper releases seem to be all that Intel can do anymore. All they do with this is apply the simple overclocking techniques most of us have been using for years to overclock the Celeron and the AMD Thunderbird CPUs.

    From what I have been seeing from Intel, I don't see much of a future for them: releasing a chip in Slot 1 format when they are obviously trying to move to the flip-chip socket format. This simply seems like a reason for you to have to go out and buy a new CPU sooner.

    With the Rambus fiasco, the 64-bit CPU fiasco, this, and the i820/i810 problems, I find that Intel needs to sit back and take the marketing department out of the driver's seat. AMD has the right idea: release products that are reliable and available to the general public.

    Oh, and btw, if you want to claim that AMD sucks because of the GeForce problems that were occurring, that was a driver issue and not a CPU issue. I do, however, agree that AMD needs to add better sealant to their Duron and new Thunderbird chips, as Anandtech [] discusses in their web news section: if you apply a heatsink just a little wrong, it will crack the die of the chip. Other than that, AMD has Intel by the short hairs, and Intel isn't capable of doing anything about it right now.


  • I've noticed a trend in processors. Many people are doing fine with the 300-450 MHz processors purchased two years ago. Do we need all this speed? I know no one who owns more than 800 MHz, and most people I know have around 500 MHz. Is there even a market for 1 GHz+ right now?

    Sometimes you by Force overwhelmed are.
  • Yeah, but you have a Pitt address ... what do you do in the lab, sweep up?

    (some good-natured ribbing from a CMU CS alumnus)
  • Yes, BillG is pretty clever. He certainly has marketing sense. And I don't know what spurred demand for the 386, but I do remember multimedia being the reason we bought a 486. Either way, it's a moot point. People were saying nobody would use this power, and somebody did.
  • Sometimes I think /. is the home of people who don't get it. Sure, given current software, there's not a need for the power. But, as I remember it, Word processing, and email have worked fine since the 286 days. However, something always comes around to use additional power. The GUI word processors demanded more power. Who knows, maybe when chips get fast enough to do really good voice recognition without slowing to a crawl people will upgrade to P5 4GHz's so they can use the new voice powered user interfaces. Telling the what to do certainly makes things easier for these people, right?
  • ...I immediately read "Pentium III 1133" as the Pentium III 'leet

  • I don't know what the exact limitations are, but as I remember it, there have been many times before in processor history where people said "silicon just can't go any faster." However, everytime the barrier was about to be hit, somebody came up with a way around it.
  • True, but only to some extent. I, for one, feel much more constrained by my CPU than by my internet (DSL). I tend to be a big 3D/audio/video person, and for me, renderings are still too damn slow. Maybe I have no patience...
  • The performance increase over PIII is considerable for the SPECviewperf benchmarks. On most of the benchmarks the Athlon is 10-20% faster than a PIII with RDRAM, and as much as 25% faster if the PIII has 133MHz SDRAM (the same as the Athlon). A 20% performance advantage is not negligible, and the fact that it's also cheaper doesn't exactly hurt.
  • V2 = V1 * (1 + DeltaV)
    I2 = I1 * (1 + DeltaI)

    P2 = V2 * I2
       = V1 * I1 * (1 + DeltaV + DeltaI + DeltaV * DeltaI)

    Neglecting the higher-order term, for
    DeltaV, DeltaI << 1:

    P2 = P1 * (1 + DeltaV + DeltaI)

    And since that higher order term is positive, Tom's statement that 3% and 13% sum to "over 16%" makes sense; the exact answer would be 16.39%.

    When you've got no calculator handy, knowing that 1.03 * 1.13 is about 1.16 isn't a bad thing, especially if you dump more digits in there.
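    A quick numeric check of that approximation, using Tom's 3% voltage and 13% current figures:

```python
# P2 = P1 * (1 + dV) * (1 + dI); the first-order estimate drops dV*dI.

dV, dI = 0.03, 0.13
exact = (1 + dV) * (1 + dI) - 1   # includes the dV*dI cross term
approx = dV + dI                  # first-order: just sum the increases

print(round(exact * 100, 2))   # -> 16.39 (percent)
print(round(approx * 100, 2))  # -> 16.0  (percent)
```

    So "over 16%" is right, and the neglected cross term only contributes 0.39 of a percentage point.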

  • They're up, but not at
    You can access it through this IP until Domain name changes take effect.
    Here's some info on it from
  • I've encountered a use for faster processers at my work: making maps. This actually includes two problems.

    First, there is the map data. My job is to work on and improve that data. Guess what? When you choose an alteration on a large set of roads, it takes _forever_. When you want to render a whole state of roads in great detail, it takes _forever_.

    Second, there's making custom maps. A customer calls up and says "I want every location that is within 45 minutes driving distance from this location". Well, you probably guessed. It takes _forever_. With maps and similar problems, you start running up against exponential computation problems.

    So, you asked, and I gave you a very specific example. The more general answer is probably non-3D workstations: anything that requires crunching through lots of numbers or databases.

  • Cross browser testing in web development.

    The question is what's wrong in his setup that makes them crash? I have IE 5.5 and NS 4.73 open together, day after day, without issues.

  • >Look at the Athlon, its not even native x86 it just emulates it.

    If you want to get real technical, the PPro core (P-II, III, etc) emulates most of the x86 instruction set, too... the AMD happens to do it a little bit smarter, since it was redesigned from scratch (after fab processes had come a long way since 1994). When you know you will have the area and ability to make something, your design can be a lot less constrained.

    As for the power consumption, if you have any extra 700 MHz 21264s to get rid of because they consume too much power, just toss 'em my way ;-)

  • Well, the fastest CPU you can buy in a machine these days would be AMD's Athlon at 1 GHz; Intel's GHz PIII is nearly impossible to get your hands on. Personally, I'm fine with my Celeron 300A, which I only clock to 464 for playing games and watching DVDs. I'd much rather spend my money on a new hard drive, memory, or even a new video card before I drop a load of cash to get a new CPU/MB.
  • Isn't the irony beautiful? I can just imagine some guy back in the 386 days saying:
    "But of the 50-odd people that work where I work, only 3 (including myself) would regularly max out a PC regardless of power, with maybe another 6 requiring PCs faster than the ones on their desk. We've probably got 20 people that could do everything they want on a 1 MB 8086, with the rest (50-20-6-3=21) probably never needing anything more powerful than a 286."
    People have said that forever. GUIs necessitated the 486 and Pentium for regular business desktops. Multimedia, 3D, etc. required home users to own PIIs (btw, the home market is quite huge and very influential). Something will come along. My first guess is probably a voice user interface. Coupled with a mouse, a voice interface really could make web browsing accessible to a much larger group of people. Instead of icons, etc., you could say "PC, dictate an email for my daughter." You can do that to some extent now, but you still need to train it, etc. For a really fluid voice UI you need a lot of horsepower for AI algorithms that can understand the nuances of speech, figure out how to adapt to the same command said in a different manner, and so on. Or maybe something else entirely will come out; you never know. However, the most dangerous thing you can do is be satisfied with what exists, and be closed to new concepts, because that way you really miss out on what COULD be.
  • How is a 1133 MHz processor unstable? If you have any proof that there are serious problems then please tell me, but as far as I can see, the high-performance PIIIs/Athlons have been quite stable, especially the GHz PIIIs, which produce about the same heat as an 800/850 MHz Athlon. My point isn't "there really is a use in buying 1133 instead of 1000" but "there really is a use for CPU power continuing to grow." Right now, I admit, it is pretty silly because the increases in clock speed are so small, but a lot of people say that processors are fast enough, the busses are too slow, etc. To those people I'm saying "no, processors aren't fast enough" and "the limitations of the bus depend on what you're doing." Sure, saying that there really isn't a use in going 1133 vs. 1000 is a sane thing, but saying "do we really need more power?" is just stupid.
  • uh, i don't know if this occurred to anyone else... but tom's hardware doesn't exactly like intel. they show way too much favoritism to AMD.

    now i'm not saying AMD is bad...and for that matter intel still hasn't shipped their 1GHz chip while AMD has got them in stores.

    like you tell all those sluts: Intel needs to put up or shut up.

  • by katmaikni ( 132932 ) <nextmail AT mail DOT com> on Monday July 31, 2000 @04:45PM (#890546)
    Anandtech [] has a review of the Pentium III 1.13 GHz in which he used motherboards based on the VIA Apollo Pro 133 chipset, which Tom Pabst said did not work with this processor. Anand did not say anything about the microcode, nor did he use the motherboard that Tom said did not arrive.
  • Well, SMP only works up to a point. If the stuff you're doing is easily parallelizable then it works great, but otherwise it really doesn't. For actual desktop use, SMP is great if the OS you use handles it really well (ahem, BeOS), but if you're running an OS that really doesn't (ahem, MacOS) then you don't see much of a performance enhancement. In the end, clock speed really is the best way to go: doubling the clock speed will always give you a bigger payoff than doubling the number of CPUs.
  • by be-fan ( 61476 ) on Monday July 31, 2000 @04:47PM (#890551)
    I'm tired of hearing all these people saying we don't need more power. It all depends on what you use. There will always be those who need more power, and not just in a "Tim Allen-esque testosterone induced" way, but genuinely. People have been saying that nobody needs more power ever since the 386 days. Even Intel used to say that the 386 wasn't really meant for consumer space, it was a server/workstation chip. Yet always, some clever dude found a use for that power. Back in the 386/486 days it was multimedia and video. Just when the Pentiums seemed fast enough, those crazy gaming guys came up with 3D, which needed a lot more proc power. I think 3D will carry processors until the 50+GHz region, at which point somebody will find something else to use the proc for. Even then there will be morons saying "oh, is there really a USE for this 100GHz proc?"
  • I agree with you entirely (except the part about programmers; even on 50 GHz procs I want to see those programmers slaving away for every last clock cycle ;) However, I think that if I said EVERYONE can use more power, I'd get 500 responses from people still using 386s saying I was an idiot.
  • The reason why stuff isn't always clustered or multi-processored is that some things just aren't parallelizable. Say you're doing, I don't know, a very large domino simulation; the thing would run at the same speed on a 500-way computer as on a 1-way computer. As for 3D, it will always need more power. The NVIDIA guys said it pretty well, to the effect of "with 3D you can't ever have enough power because you're trying to model reality." Then he went on to say something about making a tree "so real that it couldn't possibly exist in real life, but is believable because of how well rendered it is."
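    That "some things just aren't parallelizable" point is Amdahl's law; a minimal sketch (the serial fractions below are hypothetical):

```python
# Amdahl's law: with serial fraction s, the ideal speedup on n CPUs is
#   1 / (s + (1 - s) / n)

def amdahl_speedup(serial_fraction, n_cpus):
    """Ideal speedup on n_cpus given the fraction of work that is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

# A fully serial job (like the domino simulation) gains nothing:
print(amdahl_speedup(1.0, 500))             # -> 1.0
# Even a 95%-parallel job tops out around 20x on a 500-way box:
print(round(amdahl_speedup(0.05, 500), 1))  # -> 19.3
```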
  • I've got a PC with 128 MB of RAM, a 600 MHz CPU, and a 7200 rpm disk drive, but it's really no faster than my older P200 system with 64 MB and a 7200 rpm disk.

    What? 'How can this be' I hear you cry!

    Well. The CPU isn't the bottleneck. Hasn't been for quite a long time now.

    Bus speeds, memory speed, and above all disk speeds are what limit the overall speed of systems these days.

    I don't see a huge point in having a 1GHz CPU which can access data in 1ns if the data to and from disk takes 8ms (8,000,000 x slower). Sure, adding memory gives buffering, but there's a diminishing return on doing that.

    You go on and get all excited over 1GHz CPUs. I'll get all excited when a new long term low cost storage device is created which can handle the I/O from said CPUs.
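    The arithmetic on that CPU-vs-disk gap is worth spelling out (assuming 1 ns per cycle at 1 GHz and ~8 ms per random disk access):

```python
# How many CPU cycles fit in one disk seek?

seek_time_s = 8e-3    # ~8 ms average disk access
cycle_time_s = 1e-9   # 1 ns per cycle at 1 GHz

print(round(seek_time_s / cycle_time_s))  # -> 8000000
```

    Eight million cycles idled away per seek is why RAM buffering only goes so far.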
  • by badmonkey ( 29600 ) on Monday July 31, 2000 @05:55PM (#890561) Journal
    Reading the reviews of this part, it's obviously just a really overclocked part (yes, I know all processors are created equal, some just run reliably faster than others). They are really pushing the die to get this processor to run at 1133 MHz, using two of the mainstays of overclocking: a LARGE heatsink (did you see the pics?) with two fans, and increasing the voltage the processor runs at.
    Anyone could have done that stuff, but the microcode issue Tom talks about is just weird/suspicious... unless it's a multiplier problem, the same processor that runs (underclocked) at 850 MHz fine with a given microcode should run fine at its full rated speed with the same microcode version. Do you think they are deactivating processor features to save heat, as Tom seems to insinuate? I'm eager to hear from the rest of the slashdotters on this...

    Either way, the first two issues show that Intel is really struggling to get product out the door at or above 1 GHz... they should have just sanely clocked these puppies and sold them as gig models instead of going to crazy lengths to get an imaginary victory in the CPU wars.
  • I missed the part where you could actually buy and use a real operating system on those things.
  • But you must admit AMD is getting the best of Intel, simply because Intel has stretched itself too far and isn't innovating any more.

    You're wrong. They're still innovating. I know this for a fact, because the lab I work in at CMU has entered into a partnership with Intel to work on some pretty cutting-edge stuff. You'll notice that processors are starting to slip behind Moore's Law predictions. If this idea works, it'll get us back onto the doubling curve and maybe even beyond. Now, this is a university project which is just getting started up as some grad student's thesis, so it'll be at least a decade till it hits your motherboard, but Intel is not just doing the same old thing.

    I know I'm being vague, but I can't exactly talk about the details. I don't know how much of the project (if any) is public knowledge; it's certainly not posted on our group's webpage. A few measly karma points aren't enough reward for me to risk getting in trouble for talking too much.
  • I agree with the sentiment, but you seem to imply that some tasks can't benefit from more speed.

    There is no general task that can't be done better with more power. You don't need to go around dreaming up new things for computers to do, almost everybody who writes a program throws out features because the computers wouldn't be fast enough to run them.

    Sure, you can have "adequate" tools that work with current hardware, but if you don't see how all of them could be better, it's a failure of imagination.

    3D will carry processors far beyond the 50 GHz region. Virtual reality is an obvious bottomless computation pit, you can always do better with more.

    A few other computational bottomless pits (there are many more):
    -physical simulation
    -genetic algorithms
    -natural language processing

    Even in a thing like word processing, consider how much more computational power is needed to give consistently good advice on things like grammar and spelling. Yeah, the current software is pretty bad, but I don't believe good software can be written for this enhancement without more speed.

    Perhaps most important of all is freeing up programmer time. The less you have to worry about conserving resources, the more you can get done. It's a shame when programmers waste resources inappropriately, but having computers fast enough to let you just hack up a quick Perl script, rather than having to write optimized C and assembly, can make for incredible increases in productivity. Another example is being able to emulate old programs rather than having to rewrite them.

    Despite rumors to the contrary, I am not a turnip.
  • I'm not sure if it's the engineering or what, but EVERYBODY seems to be having challenges with their silicon these days. Moto can't get the G4 over the 500 MHz mark, and Intel can't supply new CPUs, period.

    So why not dump more of the marketing/manufacturing/research into consumer level SMP? It's a winner for Intel as they can sell more CPUs ("What? You only have 8-way SMP?"), and presumably a push into consumer level SMP will be a push to software vendors to make their programs take advantage of SMP, which many don't do (well) now.

    Yes, I realize that 2-way systems are cheap (I'm running a dual 650e system now) and have limits to their performance advantage relative to faster clock, but I keep waiting for SMP to hit mainstream..
  • Here is an article [] where they actually got the chip running. I didn't read the whole article; you get kind of tired of reading specs and benchmark results after a while.

    But if I remember correctly, Tom's Hardware said something about how it truly doesn't offer any real benefit unless you are running your 3D games at 640x480; much above that, the GPU (video card) limits your framerate. And it has been shown for years that to run a business application (Word, Excel, Access, or much of anything else) a simple 400 MHz CPU will do you fine.


  • by HamNRye ( 20218 ) on Monday July 31, 2000 @04:48PM (#890578) Homepage
    This is really not surprising; the processor wars have had their casualties in the marketplace. With the competition driving hardware prices down, you would think that this would be the best time to buy/build a new PC. But with the problems encountered with the high-end PIII, and the Slot A, no wait, Socket A, no wait, Fonzie says "Aaaayy..", this is undoubtedly the worst time.

    Both companies are far more interested in getting a product out the door, and not interested in getting a working product out the door. The result of this could be interesting to watch.

    My guess is that the end result will just add to a growing contempt for the multipurpose PC, adding to the appeal of small embedded devices. (Unless the small embedded devices try to go the same route...) As we know, all of these races to be first usually sound the death knell for at least one of the companies involved, if not both.

    End analysis? My next PC purchase will be after the market calms down, which may be never. If all else fails, we might have to go with "OSH", Open Source Hardware...

    "My only hope is that they don't breed..."
    -Said about "pet" penguins that have escaped or been abandoned by their owners.
  • I don't get how Intel thinks they can still act like a monopoly. Granted, they still have a larger market share, but they have had inferior technology for a generation and a half or so. Given that, do they really think that they can pull tricks like requiring an Intel mobo? They're gonna have problems if people a) can't overclock and b) have to buy Intel for everything, because it's going to cost more and not be as good. Plus, the gubmint might have something to say about it at some point. In addition, if it doesn't work, that's a major problem (duh), since AMD is faster, better, and cheaper (a la NASA).
  • There will always be a market for faster processors because there will always be people who just have to have the fastest processor around.

    Seriously though, it seems to me that this will have to level off eventually. Except for scientific applications, most computers (i.e., the home market) don't need all that power. Sure, games will always be pushing that envelope, but even then going much faster doesn't seem to have much use (does it really matter if Quake runs at 150 fps or 300 fps when your monitor only scans at 120 Hz anyway...?)

  • Yeah, but (correct me if I'm wrong) the GHz Athlon in stores is using a 1/3 L2 cache divider, so any Athlons past 700 MHz quickly lose their edge to Intel's chips. I'm assuming that the mainstream (Compaq, etc.) machines aren't using the Thunderbird-based Athlon, as I don't see any machines in stores based on the Duron, which was released at the same time.

If in any problem you find yourself doing an immense amount of work, the answer can be obtained by simple inspection.