The Dual 1GHz Pentium III Myth

Sander Sassen writes: "HardwareCentral has the latest on the dual 1GHz Pentium III controversy. Here's a blurb: 'The 1 GHz Intel Pentium III seems to be the subject of much controversy, as many claims have been made about its inability to run in a dual CPU configuration. HardwareCentral has been following the discussion closely and decided to put an end to all the rumors and get a couple of GigaHertz Pentium IIIs and a dual CPU motherboard and find out what exactly is the truth of the matter.'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Everyone knows the Pentium architecture has support for multiple processors. How many people know, however, that they are limited to only TWO processors? It's the truth.

    Athlons, on the other hand, can support up to 8 (well, if the motherboard guys ever figure out that configuration...) What else is good about multiple Athlons? Their bus. If you've got a 133MHz bus shared between 2 Pentium IIIs, each chip effectively gets 66.5MHz of bus speed. Hook 8 Athlons up to a yet-to-be-built motherboard with a 400MHz bus based on the DEC EV6 architecture and you get 400MHz for EVERY CPU simultaneously.

    All this information is backed up on numerous sites: www.tomshardware.com, Ars Technica, etc.
  • by Anonymous Coward
    Even worse than the fact that there's only a 133MHz bus under there is that, thanks to the Intel bus architecture, it's *SHARED* between the 2 processors. The Athlon EV6 bus technology, on the other hand, will give out 400MHz to each of 8 chips (Pentiums are limited to 2) simultaneously... if only the motherboard makers would get to building one...

  • Just imagine what you could do with a (working) pair of 1GHz PentiumIIIs

    Fry a steak? Heck, I don't have enough to keep my little Athlon/550 busy. It spits back kernels to me in 4+ minutes. Hardly enough time to fix a sandwich. Besides, you know the 1G P3's will start at least $1500. I've got a list of better things to have for $3k.

    Any bets on whether we'll see 1G P3's or Dual-Athlon systems out first?
  • Nah. If he knew this he would be doing it. I've never had much patience with the "Everybody else must know better so I just won't say anything" approach, and when I have that much to say about something it's generally because I know it. Have you written software to time-and-space antialias video footage to cut down video noise? I have. The concept is proven. What failed me was 'REALbasic', the testbed: it wouldn't handle large quicktime files at all. I can see I will have to put up demos: oh well, Y-A-thing to do.

    John didn't get to be "JOHN CARMACK" and inspire all this rabid loyalty by not ever learning anything from people. He's about 27 million times the coder I am, that's plain. I am simply saying that he absolutely hasn't yet clued in to what 'motion blur' needs to be as a persistent effect, and this is simply because the technology is making things possible that _nobody_ did before in gaming.

    I suspect that if he even reads this thread again at all, he'll be curious enough to try the idea, will observe that (sure enough) there's no obvious effect, and will begin tuning in on the details to see if there is any subtle effect like I'm describing. Q3's models and graphics are more than detailed enough to take advantage of such an effect. The result should be a greater sense of fluidity and intention in the motions onscreen, and it could even enhance the ability to read motion cues and 'intention movements' off the screen, enhancing rather than distracting from gameplay.
  • OK, working off totally anecdotal evidence, with no numbers to back me up (at least I admit it...)

    The largest speed increase I ever had was from a 486/33 to a PPro200. That huge jump aside, what I find works best is, all other things being not-sucky (ie, not 16MB RAM, not 100mhz, etc), a switch from IDE to SCSI; it brings huge speed increases. My dual P3-500 box without its SCSI loses to my P200 for general use with SCSI (of course, now I run the SCSI on the dual P3).

    That being said, lots of memory is helpful, and of course, processors do speed things up. But after, oh, about 600mhz and 256MB RAM, the bottleneck in the modern PC (remember, this whole thing is about PCs, not your incredibly powerful SGI rendering box) is the HDD.

  • Just finished encoding "Swingers" into Divx;
    How do you do that???
  • I can't quite figure out what this has to do with Divx though. Oh well.
  • That's great. So explain the quad Pentium Pro 200.
    That's a Pentium, thus the word Pentium.

    Or a quad Xeon. Uhh, Xeon is a Pentium II/III with
    hot cache.

    I've never read a /. thread with so many people talking completely and anonymously out of their collective asses.

    All your information may be backed up on numerous sites, but it's wrong.

  • Uhhh, you spelled sufficiently wrong in your sig?
    Or is this a limey spelling? :P

    Just tryin to help, I like that quote as well.

  • by skroz ( 7870 )
    There are plenty of applications for which faster processors are needed. High-end 3D rendering, CAD, engineering analysis, real-time number crunching... there are plenty of applications for which fast processors on inexpensive systems are desirable. And yes, $1200 for a single processor can be considered relatively cheap to people that need that kind of processing power. The next step up would be either a large cluster of cheaper machines, or a large supercomputer. Both of which are very expensive, and not necessary for some of the applications listed above. The home user doesn't need it, you're right. But don't believe that just because YOU don't need it, no one else does.
  • by skroz ( 7870 )
    True, but you pay an even higher premium...

    Octane with dual R12k, SSE graphics -- ~$35k
    Estimated total system cost for dual 1ghz PIII ~$7000 - $10000

    The octane will be faster, no doubt, but not three times as fast. The PIII system is a bargain.

    And don't get me started with sun equipment... sun workstations are just PCs with different processors in 'em. UltraSparc just ain't that cool.
  • It has to do more with the way the EV6 addresses those processors.
    Highspeed ASCII drawing was to be placed here...
    This makes the EV6 bus harder to implement, but the advantage is that no CPU needs to wait to access resources. I tried to draw it for you, but it was a nightmare to make it look right, so instead here is a link to the whitepaper on the topic. Page 6 [amd.com]
    Time flies like an arrow;
  • by Baggio ( 8432 )
    Is a codec by that name, not the CC fiasco.

    Time flies like an arrow;
  • Almost every PC-related problem I see is RAM RAM RAM RAM RAM RAM RAM

    No it's effing not! POV takes forever to render a frame? Ray-tracing is a lot of floating-point and not much I/O. More CPU power would speed it WAY up. Your Quake3 framerate sucks? If you have a modern 3D card, it's probably because the CPU can't feed vertices to the card fast enough. Want to watch a DVD? A faster CPU would make it smoother. (assuming you don't have hardware MPEG decode)


  • You're assuming that the Athlon can be run as a multi-processor machine, which it currently can't.

    Intel's got the x86 smp market sewn up right now, but it'll only be a matter of months until smp K7's are available. So THEY say anyways... a whole lotta potential there...

  • by Synic ( 14430 )

    your website owns, Jeff K!

    (hi Lowtax!)
  • WRONG. I have a dual P2-450 with one gig of RAM. The night after I received my 4 sticks of 256MB RAM (I was drooling), I fired up SoundForge and loaded a live Phish DAT onto my hard drive. I decided to run some audio applications to manipulate the sound; after about 10-15 minutes of hard-drive crunching, maxed-out CPU, the mouse locking up on screen, etc. (you get my point), I got an out of memory error.

    This sounds like a job for (Swhoooosh!)

    BeOS man!

  • The pages are gone (Mon Apr 3 19:24:27 EDT 2000); when you pull them up (try 2/ instead of 1/ in the URL) they are blank. Looks like the story was missing from Slashdot too (oops, it's back on Slashdot now). I wonder who got on their butt about this one (Intel, maybe).

  • that's why crusher.dm2 and the Q3 equivalent are most often benchmarked.
  • but the question remains: does Karnov benefit from fullscreen antialiasing?
  • Bah, people who need that kind of power don't fool themselves with toys; they buy -real- workstations such as SGI/MIPS, RS/6000, Sun/SPARC or Compaq/Alpha.

    GHz has no meaning.

    A 300MHz MIPS R12000 has better FPU performance than any existing Intel CPU at any clock speed so far.

  • > Examples of apps that are compute bound.
    > 1) 3D, games, rendering, modeling, you name it.

    Usually video card bound

    > 2) Any kind of realtime graphics.

    What does this term mean? Did you make it up?

    > 3) Photoshop type apps depending on whether you
    > use filters more often or just edits to large
    > files.

    Cache size and speed dependent

    > 4) Compiling.

    Cache and I/O bound

    > 5) Audio editing
    > 6) Real-time video editing. (What, I have to
    > wait 2 minutes to render the changes!)

    Usually depends upon the FPU, which is largely neglected in commodity chips.

    It's not that nothing is CPU bound. It's just very rarely the bottleneck.
  • It's just that the current Athlon has its L2 cache off chip, which is why the access is slow. The interesting thing is that even so, the benchmarks are pretty evenly divided between the 1GHz Athlon and the 1GHz PIII. Needless to say AMD is adding on-chip L2 to the Athlon, as well as to their "low end" Spitfire, which is going to be an AWESOME CPU (even Dell appears to be giving up their "Intel only" position because the Spitfire will blow away Intel's Celeron so badly). With AMD having now won over ALL TEN of the top PC manufacturers, they're obviously doing something right!

  • Quoted from their "conclusion":

    "We had not troubles operating the same setup at 1066MHz, although very unstable [...]"

    If "very unstable" doesn't count as trouble, I don't know what does.
  • they aren't that fast. I've got a 9000 class *server* and it's slower than my dual 500 Abit BP6. PA-RISC stuff is sloow.
  • moron. even on my dual 500 with 768MB RAM the system still can't do everything i want it to. like seamless audio/video in the background, compiling in separate windows, java stuff etc etc. more ram will only take you so far. then you need CPU horsepower.
  • - Did they use dual 866s overclocked or real dual 1000s?
    - What exactly did they do to cool this beast? Most of the article was about modifying the slocket. No information about cooling other than it was a "powerful" solution.

    Also, I've never even heard of Iwill, but they're mentioned numerous times in this article and twice right now on the hardwarecentral front page.

    Anyway, if they really did use OC'd 866s, they didn't prove anything, and this article is worthless.
  • I Have a Dual-500mhz machine, and i max out both cpu's all the time.

    Ever considered buying more RAM? If you're maxing out a dual-500MHz machine, I would hope you've got at least 4 Gig of RAM on that baby. If you have less than 1 Gig, you're starving it, man.

    I'm just mentioning this due to my observation that a lot of friends who maxed out found that problems like that went away when they upgraded from a measly 128Meg or 256Meg to some reasonable number like 512Meg.

  • Sounds like a little bit of RAM might help. But I'd look at the disk access - do you have dual hard drives and some level of RAID to increase the access? It takes a lot of time to read those files, and the usual rule of thumb is that doing it in RAM takes 1/1000th the time of doing it on disk. First see if just plopping in 128megs of RAM helps, then investigate getting better drives. The CPU is only part of the system.

  • My quad Pentium Pro does it in 2 1/2.

    All you crazies and your new-fangled processor thingies....
  • You will never get 200+ fps in UT and Q3, no matter how fast the cpu is. And dual wouldn't help UT anyway, it isn't multi threaded. You would be fill rate limited, unless you found a way to get your GeForce in a resolution like 150x113 or so, which I doubt.
  • However, even if the EV6 bus can handle 400MHz, no Athlon runs it at that frequency. Go check out the specs on the Athlon: 200MHz bus.
  • It looks like they restored the site from a backup... that is several days old. I write for HWC, and if you look, the main site is up. But even a review I did the other day is missing from the front page.
    "they aren't that fast. I've got a 9000 class *server* and it's slower than my dual 500 Abit BP6. PA-RISC stuff is sloow."

    Servers generally aren't designed with raw processing power in mind. Maximum I/O is the design goal and the processor is just one piece of the puzzle.

    Case in point: where I work, we have an RS/6000 F30. It only has a lowly 166MHz PowerPC, but for database operations, it kicks the shit out of our IBM Netfinity PII-400.

    I have a feeling that that "slow" PA-RISC HP9000 would make your BP6 look silly for similar operations.
  • >Compiling is CPU bound, not I/O bound.

    >An extra Gig or 2 isn't going to speed up the compile when you have 4,000 files and a few million lines of code.

    Well, the link is usually very memory intensive. Having enough memory to cache all common header files makes a big difference.
    But the difference between one Gig & two? Probably close to zero.
  • >> ... cutting down times in compiling code ...
    >> I Have a Dual-500mhz machine, and i max out both cpu's all the time.

    > Ever considered buying more RAM? If you're maxing out a dual-500MHz machine, I would hope you've got at least 4 Gig of RAM on that baby.

    Compiling is CPU bound, not I/O bound.

    An extra Gig or 2 isn't going to speed up the compile when you have 4,000 files and a few million lines of code.
  • > For the most part apps are still computer bound, EXCEPT in server space. Thats why the Xeon is still chugging along at 550MHz. Examples of apps that are compute bound.

    The technical terms are CPU bound and I/O bound.

    Nice summary, BTW.
  • > Well, the link is usually very memory intensive.

    It takes 25 mins to do a compile, and about a minute to link (P2-400 w/ 128 Megs). The debug .exe is about 10 Megs.

    > Having enough memory to cache all common header files makes a big difference.
    Now that is something we haven't tested. I'll buy that. Now if only my manager would ;)
  • Try 600-800MHz. This was the i820 and RDRAM. They even said that the comparison systems were overclocked to 133, meaning that when run in their rated mode, they would go even slower, making the performance benefit even more pronounced.

    The whole point of RDRAM was to allow deeply pipelined, concurrent memory accesses from PCI peripherals, 4x AGP and 2 to 4 800+MHz CPUs.

    Each RDRAM module can independently handle a memory access, and each RDRAM channel can independently transfer data (on a simple 16-bit bus). You have tons of bandwidth, just tons of associated latency.

    In a multi-CPU configuration, you're going to get memory contention and thus latency, no matter what you do, so the added latency of RDRAM is hidden in such configurations.

    The problem comes about in single-CPU and 2x AGP (or any minimally utilized peripheral) configurations. Here the added latency really shows, and you can potentially get slower memory access than standard SDRAM.

    In short, the memory was doing what it was designed to do.
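The latency-hiding argument above can be sketched with a toy model. This is not HardwareCentral's data; the cycle counts and the two memory profiles below are invented purely to illustrate the tradeoff between a high-latency/fast-transfer channel (RDRAM-like) and a low-latency/slow-transfer one (SDRAM-like):

```python
# Toy model of latency hiding on a pipelined memory channel.
# All cycle counts are made up for illustration.

def total_cycles(accesses, latency, transfer, requesters):
    """Cycles to complete `accesses` requests on a pipelined channel.

    With one requester each access pays the full latency; with several
    independent requesters keeping the pipeline full, a new transfer can
    complete every `transfer` cycles after an initial latency.
    """
    if requesters == 1:
        return accesses * (latency + transfer)   # serialized: latency exposed
    return latency + accesses * transfer         # pipelined: latency hidden

rdram_like = dict(latency=40, transfer=4)   # high latency, fast transfers
sdram_like = dict(latency=10, transfer=8)   # low latency, slower transfers

for n in (1, 4):
    print(n,
          total_cycles(100, requesters=n, **rdram_like),
          total_cycles(100, requesters=n, **sdram_like))
# With 1 requester the SDRAM-like channel wins (1800 vs 4400 cycles);
# with concurrent requesters the RDRAM-like channel wins (440 vs 810).
```

Which matches the comment's point: in a multi-CPU, busy-peripheral box the latency disappears into the pipeline, while a lightly loaded single-CPU machine eats it on every access.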
  • Ah, that's what takes up all their time, so that they can't release any XFree86 4.0/DRI drivers, I'm sure! It's nice to figure these things out.
  • Yes, but that's because they tried to pull out functionality that the chip already had.

    They didn't add any functionality to the chip itself.

  • Motion blur is a hack. Anti-aliasing is too.

    If (when) GPUs are capable of doing 1800x1200 at 150fps, I don't think anyone's going to care about this kind of visual trickery.

    If I was at Nvidia, I would be laughing my head off at the crap hardware 3dfx puts out, but pitying the consumers tricked into buying it.
  • I have a dual P2-450 machine... unfortunately the motherboard has something wrong with it... so... it's only a P2-450 machine... and I have had no problems with just a single P2-450. I don't get this need for power. There isn't a single program that most people use that would actually make getting a dual PIII GHz machine worth it. I can see a high, high-end server, but that wouldn't have just two... it would have around 4-8 processors. The cost would be astronomical, and though it's fun to drool about, I don't see the point. One would be far better off buying quality components for a computer rather than massive amounts of processing power... unless you are in some mad dash with the Seti@home project or something... HEY... that's a great idea!!!
  • i got 2 Celeron 366MHz CPUs (PPGA) overclocked to 533MHz with NO EXTRA COOLING except for the standard heatsink + fan, nothing special, and it is ROCK SOLID. i ran seti@home for 4 days solid and not one hang, crash, WinNT blue screen, NOTHING. either you guys SUCK or u got fucked up CPUs. btw, motherboard is an Abit BP6. THEY ROCK!!! :)
  • I can buy a 1 Ghz Athlon.
  • ... if AMD finally made SMP-capable Athlons.

    I guess there are Alpha-based EV6 SMP boards available, aren't there?
  • You measure system speed only in terms of 3D game performance, and your reasoning is faulty. Just because RDRAM has lousy performance doesn't prove that memory bandwidth is not a bottleneck.

    There are lots of applications where CPU bus and memory bandwidth are the bottlenecks. Why do you think some high-end servers provide 4-way interleaved access to their SDRAM main memory?

  • They are NOT.
    Real 1GHz PIIIs are not yet available.
  • Are you referring to the PIII erratum [hardwarecentral.com] that caused multiple PIII systems to have potential conflicts when accessing memory simultaneously? A BIOS update to fix that came out middle of 1999. Issue resolved, old news, not a problem anymore.
  • and then finding the money....
    MicroBerto hears three glaring letters off in the distance...

    Mike Roberto (roberto@soul.apk.net [mailto]) - AOL IM: MicroBerto
  • nah, it's not the smart people who need a fast machine for their work. it's the brain-dead managers who see the higher numbers and insist on getting that for their new server, or it's the users who don't realize that MHz isn't the only number to look at and have too much money to burn and want to have "The Best" even if they don't need it...
  • yes, but that's not why Intel is in such a rush to get them out.. Intel would _much_ prefer making people pay more for their chips..
    way too many people are just blinded by the one most prominate number, and know nothing about buss speed and cache and ram speed and so forth..
  • Good luck building a dual-1GHz PIII machine when you can't even buy ONE of the things. What's the hardware equivalent of "vaporware"?
  • At least show a little taste when you try to piss off the MPAA!!!
  • From the Article

    Furthermore, most dual CPU motherboards are used in workstation or server configurations, which would make meeting these temperature requirements an absolute necessity. Instability or system crashes are not an option in these configurations.

    When is a system crash ever an option?
  • Keep in mind that the 1GHz PIII is not Intel's flagship; their PIII Xeons are. I'm sure for double the price of a PIII at any clock speed they'll let you use SMP as far as the design specs allow.
  • Sorry for the offtopic but that's a GREAT sig!

    I can hear it now...
    "...THEY'RE GEEKY, THEY'RE LINUS AND THE pain-in-Bill's-butt-PAIN." ta-da-da-DA, dunt.

    Hmmmm, time for a new sig.


    "It's not enough to be on the right track -- you have to be moving faster than the train." -- Rod Davis, Editor of Seahorse Mag.
  • Well, the Athlon supports 8 processors. The Pentium supports 2. It's in the chip, not the motherboard, where that is determined.

    Not exactly. Case in point: the Abit BP6, able to overcome many processor limitations that would otherwise inhibit dual-processor operation, in this case with Celerons.

    Regardless of the INTENT of the processor manufacturer's limitations, a properly designed MB can overcome this with a good chipset and other elements of good design.

    AFAIK, with the PII-to-Xeon socket adapters, it is possible to run more than 2 PII/PIIIs in a multi-processor MB.

    Can anyone confirm or refute this from authoritative testing or experience?


    "It's not enough to be on the right track -- you have to be moving faster than the train." -- Rod Davis, Editor of Seahorse Mag.
  • by BillYak ( 119143 )
    Thank God I'll be able to get 200+ fps in Unreal Tournament and Quake 3 Arena, because you all know I can tell the difference in every frame past 100 a second :)

    Seriously, that's some amazing speed.

  • Well, first off, my system performance improved a lot more by shuffling off IDE and moving to a fast SCSI HDD than from the jump from 350 to 700 MHz.

    What are you doing with your computer? I'm pretty sure that 350+IDE->700+IDE will make your kernel compile faster, games run better, and so on, more than 350+IDE->350+fast SCSI will.

    Of course with a 700 MHz CPU you need SCSI to boost your system even further, though UDMA66 should work pretty well also.

  • Homeworld, baby. It's all about Homeworld, squads of ships and motion blur.

    - - - - -
  • There isn't a single program that most people use that would actually make getting a dual PIII GHz machine worth it.

    Quake III Arena, baby! It's all about framerate! -- and perhaps ping times...

  • Maybe you should switch to a Voodoo3. I can get 60+fps@1024x768 with my Athlon 500MHz. If you don't like UT then don't play it. Personally I love UT and don't play much Q3, but I think there is plenty of room for Q3/UT/Half-Life/etc. without getting nasty about it.

    Of course if you were playing in WinXX then your GeForce would be quite respectable, but it still has a ways to go in Linux. I built a system for a friend with a GeForce (Win98) and tried it out in my system. I decided that for now I would stick with the V3 because of better driver support (at the time at least) and because I could buy 2 of them (which I did) for the same money as the GeForce. I figured by the time Nvidia gets the drivers up to speed I could upgrade again.

  • High-MHz processors are not originally intended for the consumer market; they're aimed at the business market. I work at an engineering lab, and when we buy computers we buy the fastest box they have with all the RAM they can cram into it. It's people like US that demand 1GHz CPUs. The people with ancient P3-600 boxes from 6 months ago will likely all get 1GHz boxes and the 600s will be tossed over to sales (with 512M of RAM, no less) as useless old technology.
  • I run a pair of 466MHz Celerons on an Abit BP6 motherboard, totally saturating the CPUs with SETI-like stuff, full-time, over the past four months, running Redhat 6.1, and have never had a single crash, failure, or any other problem. Trust me, I keep these machines at 100% utilization.
    This doesn't mean there isn't a problem, but I certainly haven't seen one.
  • >> Yes I've forgotten a few things, but the market for more CPU power is clearly more important than the market for higher bandwidth

    hmmm... 1000 mhz at 100 mhz bus...


    1........1..........READ DATA FROM RAM
    2...................PROCESS DATA (3 cycles) 1/3
    3...................PROCESS DATA (cont.) 2/3
    4...................PROCESS DATA (cont.) 3/3 done
    5--------------------ho hum, waiting for data bus
    6--------------------ho hum, waiting for data bus
    7--------------------ho hum, waiting for data bus
    8--------------------ho hum, waiting for data bus
    9--------------------ho hum, waiting for data bus
    10.......2..........WRITE DATA TO RAM
    11..................READ DATA FROM RAM
    12..................PROCESS DATA (3 cycles) 1/3
    Oversimplified, not taking cache into account, but you get the point, Bus Speed Matters.
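The timeline above can be reduced to a one-liner. This is a toy calculation with made-up workload numbers (3 compute cycles and 2 bus transactions per loop, chosen only to match the diagram's spirit), not a real CPU model:

```python
# Toy model of a fast core on a slow bus (illustrative numbers only).
# Each bus transaction occupies one bus clock = cpu_mhz / bus_mhz CPU cycles.

def effective_utilization(cpu_mhz, bus_mhz, compute_cycles, bus_ops):
    """Fraction of CPU cycles spent computing rather than waiting on the bus."""
    cycles_per_bus_op = cpu_mhz / bus_mhz     # CPU cycles burned per bus clock
    total = compute_cycles + bus_ops * cycles_per_bus_op
    return compute_cycles / total

# 3 compute cycles per iteration, one read + one write on the bus:
print(effective_utilization(1000, 100, 3, 2))   # 3/23, roughly 13% busy
print(effective_utilization(1000, 400, 3, 2))   # 400MHz bus: 3/8 = 37.5% busy
```

More bus per CPU means fewer dead cycles, which is exactly the point about splitting a 133MHz bus between two chips versus the EV6 giving each CPU its own link.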
  • Why make faster CPUs?? Do you like giving Micro$oft an excuse to eat up _more_ clock cycles? Fatter hardware = fatter software!!
  • The sooner they get the high-end chips out, the sooner the low-end chips drop in price. No one can afford the 1GHz chips, but the 500s are cheaper as a result.
  • Sorry kid- Carmack is wrong ;) (hehe- maybe I can give B1FF there a coronary!)

    Or not _wrong_, exactly, he's just got slightly incorrect expectations. 3dfx is NOT helping him understand these issues, because naturally they want him to produce demos that are way over-the-top, like effects sequences in The Matrix or something.

    Think of it as _time_ antialiasing. You don't antialias by having some big shocking Gaussian Blur that goes out for 10 pixels: it's very local, and the effect is to make angled edges appear more clearly angled (less stair-stepped, and not fuzzed). I do have a page that gets into this w.r.t fonts: it's this page [airwindows.com].

    By the same token, if you can see visible blurring happening in action, the motion blur is _already_ too much. What 3dfx need is not to arrange for Quake to send 27 frames to make a smooth blurry contrail behind everything (no matter how nice that may look in screenshots)- what's needed is to buffer a _single_ frame and average it with the current frame at a variable blend ratio. 50/50 might even be overly strong though it would maximize the time-based antialiasing- 40/60 could be better. Screenshots might show a tiny video echo effect on the fastest moving objects, but the most significant effect at typically high frame rates would be the softening of edges only in the specific directions of movement. To demo this you'd want something like a guy dressed in black and white pinstripes- the desired effect would be that the guy wouldn't blur out, but you would have a much clearer subliminal sense of his exact movements- something it would take a gamer or multimedia geek to pick up on.

    Running at sufficiently high framerates (say, over 80), the ideal density _would_ be 50/50, because as the speed is pushed, differences between frames become smaller. The end result would be strictly confined to the softening of just such edges that are moving the most, which would highlight exactly what types of motion are happening.

    Serendipitously, this type of motion blur (being not a fancy cinematic 'blur' that you don't see ALL THE TIME anyhow, but time antialiasing) would be just as suited to the driver as the planned spatial antialiasing- and just as suitable for application to ALL extant games that can be played on a 3dfx card. Again, all that's needed is to buffer one frame and average it with the current frame. It's not supposed to be a lightsaber blur, and 3dfx is foolish to emphasize this concept.

    Time-based antialiasing is just as effective as spatial antialiasing but people don't know what to look for, partly because nobody seems to be bothering to even try doing it properly! It'll be just the one frame buffered- or possibly two. For the purposes of 3dfx, clearly using a single extra buffer and setting a blend ratio is the way to go. For serious video, a better approach is this: think of the frames as one pixel _deep_ and treat the antialiasing as a sphere centering on each pixel. The most weight would be on the pixel being tested, the pixels that are directly left or right or up or down or earlier or later would have less of a weighting, and so on: a pixel that's one pixel up _and_ left _and_ a frame earlier would have the least weighting. I've had very good results experimenting with code that averages the frames directly before and after a frame, then doubles the size of the immediate frame and dithercopies it down to antialias it and averages it with the time shifted frames. The goal was to cut video noise, and this approach was very effective at doing so without causing lots of stupid blur. (anyone doing an open source video editing program might try this- I need to finish up my version if possible and gpl it ;) )

    The long and the short of it is that 3dfx should offer this temporal antialiasing as an extra driver feature, and that Carmack shouldn't have to do anything to enable it- and the last thing you want is to dump lots of frames in for a nice pretty blur effect. As Carmack quite rightly says you can do that easily in the program anyhow and don't need special interaction with the card- the only thing he's not getting (and this is because 3dfx are pointedly not suggesting he try it) is that time-based antialiasing really isn't his problem, and is an entirely undramatic effect, to be sensed rather than seen and marvelled at. And marvelling at effects might sell more 3D cards ;)

    Again, the desired effects are practical rather than strictly visual in a 3D game. Racing and flying games would definitely be more exciting with such a driver feature, but in a 3D game it would be a matter of quickly sensing when a player you're chasing is turning or slowing- or whether you are dodging a flying projectile thoroughly enough. These things would be conveyed through the subtle softening of the lines perpendicular to motion- giving you more information than just the straight frames. (I really need to render up some demos of this...)
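The single-buffered blend described above fits in a few lines. This is a hypothetical sketch, not 3dfx driver code: frames are modeled as flat lists of grayscale values, and the 40/60 ratio is the one suggested in the comment, not a measured optimum:

```python
# Sketch of single-frame temporal antialiasing: keep one previous frame
# and blend it with the current frame at a fixed ratio.

def blend_frames(prev, cur, prev_weight=0.4):
    """Weighted per-pixel average of the previous and current frame."""
    w = prev_weight
    return [w * p + (1.0 - w) * c for p, c in zip(prev, cur)]

def render_stream(frames, prev_weight=0.4):
    """Yield displayed frames: first passes through, the rest are blends."""
    prev = None
    for cur in frames:
        shown = cur if prev is None else blend_frames(prev, cur, prev_weight)
        yield shown
        prev = cur   # buffer the incoming frame, per the description above

# A hard edge moving one pixel between frames:
frames = [[0, 0, 255, 255], [0, 255, 255, 255]]
print(list(render_stream(frames))[1])   # [0.0, 153.0, 255.0, 255.0]
```

Only the pixel that changed picks up an intermediate value; static pixels are untouched. That is the claimed effect: softening confined to the direction of motion, not a screen-wide blur.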

  • There's a big difference between antialiasing the scene and making the result the framebuffer, and saving the scaled-up version. Basically it could be done like this:

    draw frame at 2X
    dithercopy or otherwise antialias to a screen-size buffer
    display blend of buffer and the last frame's saved buffer
    copy current frame's buffer to last frame's buffer, repeat

    This also uses less display RAM. However, it is inferior to this:

    draw frame at 2X
    make blend of buffer and the last frame's saved buffer, also at 2X
    dithercopy or otherwise antialias the blend to a screen-size buffer
    copy screen buffer to screen, repeat

    The reason the second is so much better than the first (despite using a lot more RAM) is that subtle movements are a lot more likely to show up at the scaled-up level (this does assume that there is a buffer that gets drawn to and antialiased down, which I think is a safe assumption). Storing only the screen buffer means that for many pixels there would be no change from frame to frame- storing the larger 'aliased' buffer and averaging at that size makes it far more likely that there will be significant differences. These will be averaged, and then antialiased by either dithercopying to a smaller size, or some 3dfx approach that is effectively the same thing in practice (there are only so many ways to do this). The result would be the effect I describe, of a softening of lines perpendicular to movement, but it would take on a subtlety and delicacy quite comparable to film.

    Which is of course what 3dfx really want...
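Here is a toy 1-D version of why the second ordering wins. Because blending and averaging are both linear, the two pipelines only differ once you account for the stored screen buffer being quantized to integer pixel values, which is where per-frame downsampling throws away sub-pixel motion; all the numbers are invented for illustration:

```python
# Toy 1-D model of the two pipelines above, with integer-quantized buffers.

def downsample(hi):
    """Average adjacent 2X samples down to screen size (keeps fractions)."""
    return [(hi[i] + hi[i + 1]) / 2 for i in range(0, len(hi), 2)]

def quantize(buf):
    """Truncate to integer pixel values, as a stored framebuffer would."""
    return [int(v) for v in buf]

def blend(a, b):
    """50/50 average of two buffers."""
    return [(x + y) / 2 for x, y in zip(a, b)]

prev_2x = [200, 201]   # frame N-1 at 2X resolution
cur_2x  = [201, 202]   # frame N: everything shifted slightly

# Pipeline 1: antialias each frame to a quantized screen buffer, then blend.
p1 = quantize(blend(quantize(downsample(prev_2x)), quantize(downsample(cur_2x))))

# Pipeline 2: blend at 2X first, then antialias down and quantize once.
p2 = quantize(downsample(blend(prev_2x, cur_2x)))

print(p1, p2)   # [200] [201] -- only the 2X-first blend registers the movement
```

The screen-size pipeline rounds each frame before blending and loses the half-pixel difference; blending at 2X keeps it, which is the "subtle movements show up at the scaled-up level" point.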

  • Hmm, that's funny. I thought the "king of floating point" was what made *Intel* absolutely superior to everyone else.

    I concede that the cache speed is a problem for AMD, although in most benchmarks it seems to affect things perhaps less than optimizing for Intel chips does, and maybe makes the Athlon comparable in speed to the PIII; I have run into situations on my K6 where programs run horribly because of the cache. However, people need to start writing code with fewer cache misses where possible! Smaller is still better, a lot of the time. But AMD is working on that anyhow, just like Intel is working on actually releasing 1GHz chips in any measurable quantity.

    Also, Intel has major problems with (guess what?) overclockability, high power consumption, not producing reliable chips in quantity, not selling them for reasonable (market?) prices, and high operating temps! When you're pushing the chips this hard, they're *all* going to have these problems. I admit that the Athlon is a beast, but it's also faster, clock-for-clock, than the PIII core, which explains the extra transistors.

    As to the future: Intel will have their new (slower for x86!) next generation architecture, while AMD will have... copper interconnects? Faster cache speeds? Even faster 64bit x86-compatible chips? Well, we'll see what the future brings, but I know who I'm rooting for.

    And if you really want overclockability, low power consumption, good operating temps, etc., etc., don't look to fast chips from Intel *or* AMD, but rather wait for Transmeta or get a PPC chip or something.
    pb Reply or e-mail; don't vaguely moderate.
  • The link is just as mythical as the Dual 1Ghz Pentium III.
  • by RelliK ( 4466 )
    I'm tired of all you people saying that faster procs are useless because of other bottlenecks.

    I'm also tired of people spouting such nonsense. The fact of the matter is that the CPU bus is not a bottleneck (it was proven in a multitude of tests running 3D games on 66 and 100 MHz bus systems -- the performance difference was less than 1%) and the memory bandwidth is not a bottleneck (it was proven in benchmarks comparing RDRAM to regular PC100 SDRAM). However, for 3D games, the CPU is the bottleneck at low resolutions. At higher resolutions (1024x768 and up), the video card becomes the bottleneck. (However, at any resolution the 3D card speed is much more important than the CPU speed.) That is why a lowly Celeron equipped with a GeForce DDR will beat a 1GHz machine with RDRAM and, say, an ATI card any time.


  • By "3D games" I assume you mean Quake and Unreal and the like. Not all 3D games are so dependent on the video card. For example, fire up Falcon 4 on the two hypothetical machines you have there in campaign mode. Falcon beats up on the video card a lot, but it's also insanely CPU-intensive, so the machine with the faster processor will most likely win.
  • (FLAME)
    JEEZ! Did you even READ the original post? Let me quote the relevant portion for you; try to engage your brain before responding.
    I got news for you, in most non-server environments the proc is still the biggest bottle neck
    Hmm..."non-server". Let's say it again together..."non-server." Now, let's read the relevant part of YOUR post:
    The computer in question serves files...
    Tell me again how your response was relevant to the original post.

    Pardon me while I mock this thread.
    Post: "Women don't have testicles."

    Response: "That's not true! I have testicles!"
    Me: "Idiot!"
    Since your feeble mind isn't going to catch the analogy, I think I'll just sit back and enjoy my self-imposed feelings of superiority.
    Now, where did I put those hot dogs?

  • > even on my dual 500 with 768MB RAM the system still cant do everything i want it to. like seamless audio/video in the background, compiling on seperate windows, java stuff etc etc. more ram will only take you so far.

    While multiple processes increase your CPU load, they strain your ability to feed the CPU with data more or less in proportion. Your problem may not be the amount of RAM, but you certainly have a problem with memory bandwidth on any x86 ever made.

    And the above analysis is before factoring in cache. Any one of the things you name is likely to put a heavy load on your cache, and when you try to do several at a time, you effectively run (several-1) of them without a cache. Or maybe all of them, if you get into a cache thrashing mode.

    And it's those cache misses that make a CPU stall out for multiple cycles at a time.
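    A toy model of that cache-thrashing mode (the direct-mapped cache here is invented for illustration and is far smaller than any real x86 cache):

```python
def count_misses(accesses, num_lines=4):
    """Toy direct-mapped cache: each address maps to slot
    (address % num_lines); a miss evicts whatever was there."""
    lines = [None] * num_lines
    misses = 0
    for addr in accesses:
        slot = addr % num_lines
        if lines[slot] != addr:
            misses += 1
            lines[slot] = addr
    return misses

# One process looping over its working set: cold misses only.
single = [0, 1, 2, 3] * 10

# Two time-sliced processes whose working sets map to the same lines:
# every access evicts the other process's data, so every access misses.
interleaved = [a for pair in zip([0, 1, 2, 3] * 10, [4, 5, 6, 7] * 10)
               for a in pair]
```

    The single stream misses 4 times out of 40 accesses; the interleaved streams miss on all 80, which is the "(several-1) of them run without a cache" effect in miniature.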

  • I've been a sysadmin in a graphics department that has several dual-CPU machines. The dual-processing capability doesn't give one the option to multitask when rendering. Of course, I perhaps might have been able to set it up so that it would, but for the most part, 3DSMax rendering ate both processors for lunch. Granted, they were nearly twice as fast as the nearest single-processor machine in the office, but during rendering, both procs maxed at 100% utilization.

  • I'm not sure about 3dsmax either. I'm not working with it currently, otherwise, I'd probably go check! ;)

    I'll bet it does offer similar functionality to Lightwave, but I never really had a need for that type of functionality. My need was really, "Get as much processor time as humanly possible." We even resorted to stealing the CAD machines at night to do rendering for us.

    The big problem that we had rendering was that bridges have a shitload of polygons in them. ;) Throw a few cars on the bridge and fly around it a couple of times and rendering all of a sudden takes 20 minutes per frame on a dual Xeon 450 with half a gig of RAM!

  • That said, the average joe MS user is far more interested in faster bandwidth. Most of that computer time is spent online (chat rooms, net games, pr0n, etc.) Joe doesn't care about 1Ghz, except perhaps as "my processor is faster than yours".

    So who has a budget for these things and actually wants them, if not the technically incompetent? High-end servers? They'd rather have a fatter pipe. These processors are only useful for high-end workstations. Maybe you care, but most of us don't use or need that.

    Of course, I wish my processor was faster than yours... :)

  • > It isn't like any normal consumers can afford
    > one of those chips anyways.

    Well I sure can't. But 1 GHz chips I can't afford means remaindered or price-slashed 700 MHz chips that maybe I can afford. Not that you need a fast machine to run Linux - my Linux box is a P5-100 - but if you're running an MS OS, damn buddy, think how fast Turbo C v.2 is gonna run on one of those! Go Intel & AMD!

    Yours WDK - WKiernan@concentric.net

  • "just a little bit of rewiring needed. ;-) "

    The rewiring is to make the motherboard support the chip. The chip supports SMP fine, no rewiring needed.
  • _

    I just can't wait to bolt some waveguide to the system clock and use it to cook my food - coming soon to a pc near you - the latest and greatest in pc cases, with a built in microwave - just open the convienient door, slip in that slice of cold pizza, and by the time your code is compiled, your food is hot...and just think, no more burnt popcorn - you can nuke it and have it right in front of you so you don't forget about it, just click a button on your desktop and it stops cooking (not recommended during a q3 fragfest)

    of course, I guess that means we will need a bigger power supply....
  • well, won't work _reliably_ under stress, that is.

    if you run the well-known BX chipset, and both cpu's are run to near saturation for longish periods of time (more than a few weeks, usually), the system WILL lock up.

    I found this while stress testing my dual cpu BX boards (asus, tyan and abit). I ran 2 instances of seti@home on each cpu (yes, 4 processes on a dual system). ALL my BX systems hung eventually. even the ones with extra fans on the bx chip and heat compound under the sink.

    my guess is that when the BX was released, intel felt it was "good enough" and that no one would totally saturate their CPUs. certainly NT wouldn't - and I bet linux 2.0 wouldn't either. not until linux 2.2 would both cpu's be used to this level.

    so I can't believe intel anymore when they lie about their own products. dual 1ghz - gimme a break; they can't even make ONE ghz work under normal consumer conditions!


  • There's hardly a single program that most people use that would actually make getting a dual 1GHz PIII machine worth it. I can't speak for all developers, but for myself, cutting down compile times, and being able to multitask and do something else while the code is compiling, without slowdowns, is absolutely crucial for productivity.
    I have a dual 500MHz machine, and I max out both CPUs all the time. Having twice the power would almost double my productivity. As for most users: anything that is multi-threaded and CPU-intensive will take full advantage of these CPUs. I believe that apps like 3D Studio Max are multi-threaded and can take advantage of SMP. My artist friend has to take whole afternoons off while his animations are rendering because he can't use his system for other things while it is rendering. He has a single CPU. If he had a dual-CPU system, I'm sure he could be doing other things (Photoshop, email, etc.) while his images rendered. Just my 2 cents (3 in Canada).
  • Geek: I can download 5 billion pictures in 7 seconds.

    Homer: But I want them now.
  • by Chris Johnson ( 580 ) on Monday April 03, 2000 @08:53PM (#1153791) Homepage Journal
    Actually, the poster child for (good) motion blur isn't FPSes at all. It's racing games and flightsims. Having the nearfield flickering semirandomly loses the sense of great speed. Even a small amount of general purpose motion blur would help this quite a bit. I concede that's Not Your Problem :)
  • by Effugas ( 2378 ) on Tuesday April 04, 2000 @12:50AM (#1153792) Homepage
    [WARNING: The contents of this post are slightly off topic, until you consider A) this is a subject regarding excessive CPU power that nonetheless consistently gets overrun, and B) this site has been overrun and I can't comment on the actual contents of it.]

    [WARNING 2: This is one of the more bitter posts I've made to Slashdot. You've been warned.]

    And so it was so ordered, after legions upon legions of sites fell to That Which Was The Slashdot Horde:

    If the content of the web page is not dependent on the identity of the user, then the content of the web page MUST not be generated specifically for that user.

    Yes, that's an IETF must, damnit :-)

    This isn't a complicated concept, folks. If each user gets a very different page(think search engine), then you dynamically generate the new content live. If each user only gets a slightly different page...well, gee, dynamically generate that slight difference, but leave static everything else.

    If you're dependent on the user, change the page for each user. If you're dependent on some local index of news, then change the page each time the local index of news changes. If you're dependent on an angel coming down and teaching you to code the goddamn meaning of life in Perl, *THEN CHANGE THE PAGE WHEN SOME GLOWING HALOED CREATURE WALTZES IN YOUR STUDIO*, but for *CRYING OUT LOUD* don't regenerate your page every time I try to read some godforsaken article!

    It's simple stuff like this that make me feel like I have a moral obligation to be a Comp Sci major. Grrrr.

    One other point...ya wanna talk overcommitment? The Linux kernel lists are going nuts about the reasonably rare situations that can arise when the OS allows processes to overcommit memory, on the probabilistic assumption that not all processes will actually use the memory they request. What to do when the memory committed actually gets used? Should the OS die, so that the processes may live? Which processes does the OS kill to keep itself alive? There's a lot of argument about how to deal with overcommitment on the OS level, and I'll leave that fight to the experts.

    But lemme tell ya, just view the Slashdot Victim of the Day to find web pages that deal with overcommitment. Since these sites aren't too likely to change their entire codebases all that soon, may I suggest that spitting out Database Errors *might* not be the most graceful method of expressing degradation of resources?

    In other words, faced with the choice of fewer ad impressions and fewer readers vs. temporarily switching to a cached copy of the page which is 99.9% accurate, might it not be nice to have, built in as a core element of Apache's mod_perl, something along the lines of, "Run this script to generate this page UNLESS we're getting hammered; in that case, use mod_rewrite to change the URL to a static equivalent of our now thoroughly overloaded page"?
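    A sketch of that idea in plain Python (the threshold, the function names, and passing the load in as an argument are all invented for illustration; a real deployment would wire this into mod_perl and mod_rewrite):

```python
def serve(generate_page, cached_copy, current_load, threshold=5.0):
    """Run the dynamic page script unless the box is getting hammered;
    above the (made-up) load threshold, fall back to a slightly stale
    static copy instead of a database error."""
    if current_load > threshold:
        return cached_copy   # 99.9% accurate beats 100% down
    return generate_page()

# In practice current_load might come from os.getloadavg()[0].
quiet = serve(lambda: "<fresh page>", "<cached page>", current_load=0.4)
slammed = serve(lambda: "<fresh page>", "<cached page>", current_load=20.0)
```

    Under light load the script runs; under a Slashdotting the reader gets the cached copy rather than a DB error.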

    Ahhhh. I might actually be able to view pages about Gigahertz SMP :-)

    Yours Truly,

    Dan Kaminsky
    DoxPara Research

    P.S. Irony #1235235: It's taking me forever to finally get this comment posted :-)
  • by Signal 11 ( 7608 ) on Monday April 03, 2000 @03:31PM (#1153793)
    Well, first off, my system performance improved a lot more from shuffling off IDE and moving to a fast SCSI HDD than from the jump from 350 to 700 MHz.

    Secondly, it has always been true that you need to properly spec your system based on what you plan to use it for. If it's going to be serving static webpages, you want lots of memory and a PCI architecture that can support multiple concurrent bus masters (to handle the high bandwidth). CPU speed is less important than cache size as well in this configuration.

    If you're wanting to run Quake, you want a smaller amount of RAM but it needs to be low latency (say, 6ns) and have a high bandwidth (PC133). You'll want a fast processor and a faster video card. HDD is unimportant for quake, as is the bussing architecture (PCI, ISA - not the FSB).

    But don't tell me there's a single metric for measuring system performance... that's a lie, as much as saying that average access time is "the" metric for hard drives. I'd disagree; it's the track-to-track latency and areal density that I happened to use to spec my system. Not that those are "the" metric, but they're the two I used (as well as internal transfer rate, of course), which is directly proportional to the RPM the HDD is rated for!

  • by TimButterfield ( 16686 ) on Monday April 03, 2000 @12:36PM (#1153794) Homepage
    just a little bit of rewiring needed. ;-)

    To recap, the article has 6 pages on modifying the Iwill Slocket IIs (albeit with graphics).

    I like the item mentioned in the conclusion [hardwarecentral.com] better: "a new revision of the Slocket II is currently in the works that will support FCPGA SMP out of the box, making the configuration of a dual CPU system a matter of plugging the CPUs in the slockets, no soldering required."

  • by Black Parrot ( 19622 ) on Monday April 03, 2000 @02:49PM (#1153795)
    > Granted that it is a royal pain in the butt

    Yet still less trouble than actually rounding up two 1GHz processors.

  • by iCEBaLM ( 34905 ) <icebalmNO@SPAMicebalm.com> on Monday April 03, 2000 @02:57PM (#1153796)
    DB Error: Unable to get author information!

    Can't read the article :(

    -- iCEBaLM
  • by Anonymous Covard ( 140827 ) on Monday April 03, 2000 @02:38PM (#1153797)
    Nearly 30 replies and nobody has made the obligatory Beowulf Cluster comment?
  • by tesserae ( 156984 ) on Monday April 03, 2000 @06:22PM (#1153798)
    It's not clear that they've proven that the 1GHz part supports SMP: what they did was take several of Intel's conflicting statements and specifications, select from that information the portions which were consistent with their argument that the 1GHz processor was actually identical to the 866MHz part, and overclocked the 866 and called it a 1GHz...

    They may or may not have been right, but in any case they certainly did not run a pair of factory 1GHz CPUs in an SMP configuration. I'll grant that the core is almost certainly the same; but even if they are correct in their contention that the stepping is the same for the two parts, it is a trivial matter for Intel to change the packaging to render the 1GHz version incapable of supporting SMP -- don't bond the SMP pin to the die, and they've done it!

    The big question from my point of view is this: why would Intel say their flagship processor won't support SMP? This isn't like the case with the Celeron, where they clearly wanted people to buy the more-expensive processor instead of the cheapie... so why don't they want people to buy their most expensive product in pairs? The Celerons were too cheap, and the early FC-PGA Coppermines (the 500E and 550E) were just too overclockable; it makes sense that Intel would want to disable SMP for them, and so they played hide-the-SMP-pin. It also appears they've gone further with the CeleronII (AKA Coppermine128), and simply not bonded the SMP pin to the die -- again, a pricing issue... But why the 1GHz part? It's hard to buy Sassen's argument that it's just heat -- that applies to single-CPU systems as well as to duals.

    So I don't think the controversy is over; it has just gotten more complicated, is all.


  • by karlm ( 158591 ) on Monday April 03, 2000 @02:52PM (#1153799) Homepage
    I despise Intel's dominance as much as anyone, but claims of AMD's absolute superiority don't hold much water. AMD is currently king of floating point for the x86 architecture, however, overclockability, power consumption, cache speed, operating temps all leave something to be desired.

    I'm not sure about the 1GHz machines, but AMD was having real problems with their caches, at least when the 800 MHz Athlons came out. They ended up setting the cache multipliers at 3/8. If you're doing hard-core rendering and simulation, you'll want an Athlon, but you'll also want to be running an SMP machine. Unfortunately, there aren't a lot of SMP Athlon machines out there.

    If, on the other hand, you're looking at gaming, I believe the video card is the current bottleneck in a high-performance gaming system. The Athlon/PIII decision becomes a matter of personal preference.

    Database manipulation and general OS tasks would seem to be where the PIII would shine, given that its cache multiplier is 1. That whole cache-multiplier problem is going to be a real big problem for AMD if they don't get it figured out soon.


    I'm a slacker? You're the one who waited until now to just sit around.

  • by Anonymous Coward on Monday April 03, 2000 @12:34PM (#1153800)
    Two 1 GHz processors on one motherboard seem like an incredible source of bus contention. How fast is the bus these CPUs are connected to? The CPUs are at 10 times the bus speed! (afaik)
  • by Mike Miller ( 28248 ) <mikem@computer.org> on Monday April 03, 2000 @03:09PM (#1153801) Homepage
    No, the reason that AMD doesn't have a DP or MP system out is not because they 'overclock' the Athlons. It is because they have the EV6 front-side bus from DEC, which is a point-to-point protocol. This means that you need a much more complex chipset with a lot more pins to do even dual-processor configurations.

    Intel, on the other hand, has a shared front-side bus, so all the CPUs can see each other's traffic, which results in a substantially less complex chipset with a lot fewer pins. Probably lower performance (depending on several factors), but certainly a cheaper solution for dual and 4-way SMP machines.
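    An idealized back-of-the-envelope sketch of the bandwidth difference between the two designs (the flat per-cycle model and the example numbers are mine; protocol overhead, contention, and chipset limits are all ignored):

```python
def shared_bus_per_cpu(bus_mhz, bus_width_bytes, n_cpus):
    """Intel-style shared FSB: one bus whose bandwidth is split
    among however many CPUs hang off it (MB/s per CPU)."""
    return bus_mhz * bus_width_bytes / n_cpus

def point_to_point_per_cpu(link_mhz, link_width_bytes):
    """EV6-style point-to-point: each CPU has its own link to the
    chipset, so per-CPU bandwidth doesn't shrink as CPUs are added."""
    return link_mhz * link_width_bytes

# Two PIIIs sharing a 133 MHz, 8-byte FSB vs. Athlons each with a
# 200 MHz effective, 8-byte EV6 link:
fsb_share = shared_bus_per_cpu(133, 8, 2)
ev6_share = point_to_point_per_cpu(200, 8)
```

    The trade-off is exactly the one described above: the point-to-point design keeps its per-CPU bandwidth as you add processors, at the cost of a chipset pin for every wire of every link.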

    - Mike

  • by molog ( 110171 ) on Monday April 03, 2000 @12:28PM (#1153802) Homepage Journal
    According to the article, with a hefty cooling system and some modifications you can get it done. Granted, it is a royal pain in the butt by their own admission, but the fact is you can do it.

    So Linus, what are we doing tonight?

  • by scrye ( 169108 ) on Monday April 03, 2000 @12:35PM (#1153803) Homepage
    Do dual Athlon motherboards exist? And at what cost?
    I do not believe there is one out there yet, but here [slota.com], near the bottom, it says that Tyan is making a board to be released Q4 2000, codename Dolphin, that is supposedly dual-CPU capable.
  • by be-fan ( 61476 ) on Monday April 03, 2000 @03:12PM (#1153804)
    I'm tired of all you people saying that faster procs are useless because of other bottlenecks. I got news for you: in most non-server environments the proc is still the biggest bottleneck. To tell the truth, I enjoyed the 50% boost from 200 to 300 MHz much more than the 50% boost from a 66 to a 100 MHz bus. For the most part apps are still compute bound, EXCEPT in server space. That's why the Xeon is still chugging along at 550 MHz. Examples of apps that are compute bound:
    1) 3D: games, rendering, modeling, you name it.
    2) Any kind of realtime graphics.
    3) Photoshop-type apps, depending on whether you use filters more often or just edits to large files.
    4) Compiling.
    5) Audio editing.
    6) Real-time video editing. (What, I have to wait 2 minutes to render the changes!)
    Things not compute bound:
    1) Serving webpages, files, etc.
    2) Working with large Photoshop files.
    3) Some types of scientific computing where data crunching is high volume, low workload.
    Yes, I've forgotten a few things, but the market for more CPU power is clearly more important than the market for higher bandwidth.
  • by John Carmack ( 101025 ) on Monday April 03, 2000 @06:22PM (#1153805)
    A GeForce should be able to run Q3 at 200 fps at 400x300 (r_mode 1) or possibly even 512x384 resolution if the CPU were fast enough. A dual Willamette at the end of this year will probably do it.

    We currently see 100+ fps timedemos at 640x480 with either a 1GHz processor or dual 800s, and that isn't completely fill-rate limited. DDR GeForce cards are really, really fast.

    Yes, it is almost completely pointless.

    The only reasonable argument for super high framerates is to do multi frame composited motion blur, but it turns out that it isn't all that impressive.

    I did a set of offline renderings of running Q3 at 1000 fps and blending down to 60 fps for display. Looked at individually, the screenshots were AWESOME, with characters blurring through their animations and gibs streaking off the screen, but when they were played at 60hz, nobody could tell the difference even side by side.
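    The offline blend he describes amounts to averaging each group of high-rate sub-frames down to one display frame. A sketch of that accumulation, assuming grayscale frames as flat arrays (my simplification, not id's actual tooling):

```python
import numpy as np

def blend_down(subframes, group):
    """Average every `group` consecutive high-rate frames into one
    display frame; e.g. 1000 rendered fps -> 60 displayed fps would
    use a group of about 16."""
    n = (len(subframes) // group) * group          # drop any remainder
    stacked = np.stack(subframes[:n])
    return stacked.reshape(-1, group, *stacked.shape[1:]).mean(axis=1)

# A bright pixel moving one position per sub-frame smears into a
# streak: each display frame spreads its energy over its whole path.
frames = [np.eye(4)[i % 4] * 255 for i in range(8)]
display = blend_down(frames, group=4)
```

    Each display frame ends up with the moving pixel's brightness spread evenly along the path it covered, which is exactly the blurred-gib effect described in the screenshots.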

    Motion blur is more important at 24hz movie speeds, but at higher monitor retrace rates it really doesn't matter much.

    There are some poster-child cases for it, like a spinning wagon wheel, but for most aspects of a FPS, realistic motion blur isn't noticeable.

    Exaggerated motion blur (light sabers, etc.) is a separate issue, and doesn't require ultra-high framerates.

    There are still plenty of things we can usefully burn faster cpu's on...

    John Carmack

Evolution is a million line computer program falling into place by accident.