
The Dual 1GHz Pentium III Myth
Sander Sassen writes: "HardwareCentral has the latest on the dual 1GHz Pentium III controversy. Here's a blurb: 'The 1 GHz Intel Pentium III seems to be the subject of much controversy, as many claims have been made about its inability to run in a dual CPU configuration. HardwareCentral has been following the discussion closely and decided to put an end to all the rumors and get a couple of GigaHertz Pentium IIIs and a dual CPU motherboard and find out what exactly is the truth of the matter.'"
The Deeper Truth (Score:1)
Athlons, on the other hand, can support up to 8 (well, if the motherboard guys ever figure out that configuration...). What else is good about multiple Athlons? Their bus. If you've got a 133MHz bus with 2 Pentium IIIs, each chip gets about 66.5MHz worth of bus cycles. Hook 8 Athlons up to a yet-to-be-built motherboard with a 400MHz bus based on the DEC EV6 architecture and you get 400MHz for EVERY CPU simultaneously.
E.
All this information is backed up on numerous sites: www.tomshardware.com, Ars Technica, etc.
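To make the sharing arithmetic explicit, here's a toy calculation (illustrative only; a real shared bus arbitrates per transaction rather than splitting the clock, and chipset/memory bandwidth still caps the total):

#include <stdio.h>

int main(void) {
    /* Shared front-side bus: both CPUs arbitrate for the same 133MHz bus,
     * so each one sees roughly half the bus cycles on average. */
    double shared_bus_mhz = 133.0;
    int cpus = 2;
    printf("shared FSB, %d CPUs: ~%.1f MHz of bus cycles each\n",
           cpus, shared_bus_mhz / cpus);

    /* EV6-style point-to-point: each CPU gets its own link to the chipset,
     * so adding CPUs doesn't dilute the per-CPU link rate. */
    double ev6_link_mhz = 400.0;  /* the hypothetical 400MHz bus above */
    for (int n = 2; n <= 8; n *= 2)
        printf("EV6-style, %d CPUs: %.0f MHz per CPU link\n", n, ev6_link_mhz);
    return 0;
}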
Re:Bus bottleneck, worse than you know (Score:1)
E.
Re:Dual Processors (Score:1)
Fry a steak? Heck, I don't have enough to keep my little Athlon/550 busy. It spits back kernels to me in 4+ minutes. Hardly enough time to fix a sandwich. Besides, you know the 1G P3's will start at least $1500. I've got a list of better things to have for $3k.
Any bets on whether we'll see 1G P3's or Dual-Athlon systems out first?
Re:To elucidate: (Score:1)
Re:Geez,when did slashdot become news of sysadmins (Score:1)
The largest speed increase I ever had was from a 486/33 to a PPro200. That huge jump aside, what I find works best, all other things being not-sucky (i.e., not 16MB RAM, not 100MHz, etc.), is that a switch from IDE to SCSI brings huge speed increases. My dual P3-500 box without its SCSI loses to my P200 with SCSI for general use (of course, now I run the SCSI on the dual P3).
That being said, lots of memory is helpful, and of course, processors do speed things up. But after, oh, about 600mhz and 256MB RAM, the bottleneck in the modern PC (remember, this whole thing is about PCs, not your incredibly powerful SGI rendering box) is the HDD.
Re:Ta DA (Score:1)
Re:Ta DA (Score:1)
Re:The Deeper Truth (Score:1)
That's a Pentium, thus the word Pentium.
Or a quad Xeon. Uhh, a Xeon is a Pentium II/III with hot cache.
I've never read a
All your information may be backed up on numerous sites, but it's wrong.
kabloie
Re:Geez,when did slashdot become news of sysadmins (Score:1)
Or is this a limey spelling?
Just tryin to help, I like that quote as well.
kabloie
Re:Why? (Score:1)
Re:Why? (Score:1)
Octane with dual R12k, SSE graphics -- ~$35k
Estimated total system cost for dual 1ghz PIII ~$7000 - $10000
The Octane will be faster, no doubt, but not three times as fast. The PIII system is a bargain.
And don't get me started on Sun equipment... Sun workstations are just PCs with different processors in 'em. UltraSPARC just ain't that cool.
Re:You are totally wrong. (Score:1)
Time flies like an arrow;
DivX (Score:1)
Time flies like an arrow;
It's not all about the RAM (Score:1)
No it's effing not! POV takes forever to render a frame? Ray-tracing is a lot of floating-point and not much I/O. More CPU power would speed it WAY up. Your Quake3 framerate sucks? If you have a modern 3D card, it's probably because the CPU can't feed vertices to the card fast enough. Want to watch a DVD? A faster CPU would make it smoother. (assuming you don't have hardware MPEG decode)
But if the machine had ADEQUATE RAM IT WOULDN'T HAVE TO FUCKING PAGE AT ALL!!! GET IT?
THERE ARE A LOT OF TASKS THAT REQUIRE FAST NUMBER-CRUNCHING, AND THEY WOULD BE FASTER WITH A FASTER CPU, DON'T YOU FUCKING GET IT?
Re:Dual Athlons (Score:1)
Intel's got the x86 smp market sewn up right now, but it'll only be a matter of months until smp K7's are available. So THEY say anyways... a whole lotta potential there...
Re:Wow (Score:1)
your website owns, Jeff K!
(hi Lowtax!)
Re: what's the point? (Score:1)
This sounds like a job for (Swhoooosh!)
BeOS man!
The pages are gone! (Score:1)
CONSPIRACY!
Re:Wow (Score:1)
Re:Wow (Score:1)
Re:Why? (Score:1)
GHz alone has no meaning.
A 300MHz MIPS R12000 has better FPU performance than any existing Intel CPU at any clock speed so far.
Re:Geez,when did slashdot become news of sysadmins (Score:1)
> 1) 3D, games, rendering, modeling, you name it.
Usually video card bound
> 2) Any kind of realtime graphics.
What does this term mean? Did you make it up?
> 3) Photoshop type apps depending on whether you
> use filters more often or just edits to large
> files.
Cache size and speed dependent
> 4) Compiling.
Cache and I/O bound
> 5) Audio editing
> 6) Real-time video editing. (What, I have to
> wait 2 minutes to render the changes!)
Usually depends on the FPU, which is largely ignored in commodity chips.
It's not that nothing is CPU bound. It's just very rarely the bottleneck.
No problem with cache (Score:1)
No problems? (Score:1)
"We had not troubles operating the same setup at 1066MHz, although very unstable [...]"
If "very unstable" doesn't count as trouble, I don't know what does.
Re:whats the point? (Score:1)
Re:Geez,when did slashdot become news of sysadmins (Score:1)
Some explanation required... (Score:1)
- What exactly did they do to cool this beast? Most of the article was about modifying the slocket. No information about cooling other than it was a "powerful" solution.
Also, I've never even heard of Iwill, but they're mentioned numerous times in this article and twice right now on the hardwarecentral front page.
Anyway, if they really did use OC'd 866s, they didn't prove anything, and this article is worthless.
Could it be in need of RAM, not GHz? (Score:1)
Ever considered buying more RAM? If you're maxing out a dual-500MHz machine, I would hope you've got at least 4 Gig of RAM on that baby. If you have less than 1 Gig, you're starving it, man.
I'm just mentioning this because I've noticed that a lot of friends who maxed out their machines found that problems like that went away when they upgraded from a measly 128Meg or 256Meg to some reasonable number like 512Meg.
Re:Could it be in need of RAM, not GHz? (Score:1)
4 minutes for a kernel? (Score:1)
All you crazies and your new-fangled processor thingies....
Re:Wow (Score:1)
Re:Bus bottleneck, worse than you know (Score:1)
The Article Wasn't Removed (Score:1)
Re:whats the point? (Score:1)
Servers generally aren't designed with raw processing power in mind. Maximum I/O is the design goal and the processor is just one piece of the puzzle.
Case in point: where I work, we have an RS/6000 F30. It only has a lowly 166MHz PowerPC, but for database operations, it kicks the shit out of our IBM Netfinity PII400.
I have a feeling that that "slow" PA-RISC HP 9000 would make your BP6 look silly for similar operations.
Re:Could it be in need of RAM, not GHz? (Score:1)
>An extra Gig or 2 isn't going to speed up the compile when you have 4,000 files and a few million lines of code.
Well, the link is usually very memory intensive. Having enough memory to cache all common header files makes a big difference.
But the difference between one Gig & two? Probably close to zero.
rbb
Re:Could it be in need of RAM, not GHz? (Score:1)
>> I Have a Dual-500mhz machine, and i max out both cpu's all the time.
> Ever considered buying more RAM? If you're maxing out a dual-500MHz machine, I would hope you've got at least 4 Gig of RAM on that baby.
Compiling is CPU bound, not I/O bound.
An extra Gig or 2 isn't going to speed up the compile when you have 4,000 files and a few million lines of code.
Re:Geez,when did slashdot become news of sysadmins (Score:1)
The technical terms are CPU bound and I/O bound.
Nice summary, BTW.
Re:Could it be in need of RAM, not GHz? (Score:1)
It takes 25 mins to do a compile, and about a minute to link (P2-400 w/ 128 Megs). The debug
> Having enough memory to cache all common header files makes a big difference.
Now that is something we haven't tested. I'll buy that. Now if only my manager would
Re:Bus bottleneck (Score:1)
The whole point of RDRAM was to allow deeply pipelined, concurrent memory accesses from PCI peripherals, 4x AGP, and 2 to 4 800+MHz CPUs.
Each RDRAM module can independently handle a memory access, and each RDRAM channel can independently transfer data (on a simple 16-bit bus). You have tons of bandwidth, just tons of associated latency.
In a multi-CPU configuration, you're going to get memory contention and thus latency no matter what you do, so the added latency of RDRAM is hidden in such configurations.
The problem comes about in single-CPU and 2x AGP (or any minimally utilized peripheral) configurations. Here the added latency really shows, and you can potentially get slower memory access than with standard SDRAM.
In short, the memory was doing what it was designed to do.
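A toy model of that latency-hiding argument (made-up numbers, not real RDRAM or SDRAM timings; the point is only that concurrency hides latency):

#include <stdio.h>

/* Toy model of a pipelined memory channel: every access pays LATENCY ns
 * before data moves, plus TRANSFER ns of actual transfer. With one
 * requester the latency is fully exposed; with several concurrent
 * requesters the channel overlaps one access's latency with another's
 * transfer, so the effective cost per access approaches TRANSFER. */
#define LATENCY  60.0   /* ns, made-up figure */
#define TRANSFER 20.0   /* ns, made-up figure */

int main(void) {
    for (int requesters = 1; requesters <= 4; requesters++)
        printf("%d concurrent requester(s): ~%.0f ns effective per access\n",
               requesters, LATENCY / requesters + TRANSFER);
    return 0;
}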
-Michael
Re:Wow (Score:1)
Re:AMD rocks (Score:1)
They didn't add any functionality to the chip itself.
-T
Re:Wow (Score:1)
If (when) GPUs are capable of doing 1800x1200 at 150fps, I don't think anyone's going to care about this kind of visual trickery.
If I were at Nvidia, I would be laughing my head off at the crap hardware 3dfx puts out, but pitying the consumers tricked into buying it.
whats the point? (Score:1)
Re:forget dual 1ghz; even dual 450mhz won't work (Score:1)
Re:AMD rocks (Score:1)
Re:Bus bottleneck, worse than you know (Score:1)
I guess there are Alpha-based EV6 SMP boards available, aren't there?
Re:Agreed (Score:1)
There are lots of applications where CPU bus and memory bandwidth are the bottlenecks. Why do you think some high-end servers provide 4-way interleaved access to their SDRAM main memory?
Re:Some explanation required... (Score:1)
Real 1GHz PIIIs are not yet available.
Re:The Deeper Truth (Score:1)
Re:Controversy over (Score:1)
MicroBerto hears three glaring letters off in the distance...
AMD
Mike Roberto (roberto@soul.apk.net [mailto]) - AOL IM: MicroBerto
Re:Why? (Score:1)
Re:Why? (Score:1)
Way too many people are just blinded by the one most prominent number, and know nothing about bus speed and cache and RAM speed and so forth...
Irrelevant! (Score:1)
No kidding! (Score:1)
Huh? (Score:1)
Furthermore, most dual CPU motherboards are used in workstation or server configurations, which would make meeting these temperature requirements an absolute necessity. Instability or system crashes are not an option in these configurations.
When is a system crash ever an option?
Re:Controversy over (Score:1)
I can hear it now...
"...THEY'RE GEEKY, THEY'RE LINUS AND THE pain-in-Bill's-butt-PAIN." ta-da-da-DA, dunt.
Hmmmm, time for a new sig.
TangoChaz
"It's not enough to be on the right track -- you have to be moving faster than the train." -- Rod Davis, Editor of Seahorse Mag.
Re:AMD rocks (Score:1)
Not exactly. Case in point - the Abit BP6, able to overcome many processor limitations that would otherwise inhibit dual processor functions, in this case with the Celeron.
Regardless of the INTENT of the processor manufacturer's limitations, any properly designed MB can overcome this with a good chipset and other elements of good design.
AFAIK, with the PII to Xeon socket adapters, it is possible to run more than 2 PII/PIIIs in a multi-processor MB.
Can anyone confirm or refute this from authoritative testing or experience?
TangoChaz
"It's not enough to be on the right track -- you have to be moving faster than the train." -- Rod Davis, Editor of Seahorse Mag.
Wow (Score:1)
Seriously, that's some amazing speed.
Re:Geez,when did slashdot become news of sysadmins (Score:1)
What are you doing with your computer? I'm pretty sure that going from 350+IDE to 700+IDE will make your kernel compile faster, your games run better, and so on, more than going from 350+IDE to 350+fast SCSI.
Of course, with a 700MHz CPU you need SCSI to boost your system even further, though UDMA66 should work pretty well too.
_________________________
Re:Wow (Score:1)
- - - - -
Re:whats the point? (Score:1)
Quake III Arena, baby! It's all about framerate! -- and perhaps ping times...
Re:Wow (Score:1)
Of course, if you were playing in WinXX then your GeForce would be quite respectable, but it still has a ways to go in Linux. I built a system for a friend with a GeForce (Win98) and tried it out in my system. I decided that for now I would stick with the V3 because of better driver support (at the time, at least) and because I could buy two of them (which I did) for the same money as the GeForce. I figured that by the time Nvidia gets the drivers up to speed I could upgrade again.
Re:Why? (Score:1)
Re:forget dual 1ghz; even dual 450mhz won't work (Score:1)
This doesn't mean there isn't a problem, but I certainly haven't seen one.
Re:Geez,when did slashdot become news of sysadmins (Score:1)
hmmm... 1000 mhz at 100 mhz bus...
CPU | DATA BUS | ACTION
 1  |    1     | READ DATA FROM RAM
 2  |          | PROCESS DATA (3 cycles) 1/3
 3  |          | PROCESS DATA (cont.)    2/3
 4  |          | PROCESS DATA (cont.)    3/3 done
 5  |          | ho hum, waiting for data bus
 6  |          | ho hum, waiting for data bus
 7  |          | ho hum, waiting for data bus
 8  |          | ho hum, waiting for data bus
 9  |          | ho hum, waiting for data bus
10  |    2     | WRITE DATA TO RAM
11  |          | READ DATA FROM RAM
12  |          | PROCESS DATA (3 cycles)
Oversimplified, not taking cache into account, but you get the point, Bus Speed Matters.
Re:Geez,when did slashdot become news of sysadmins (Score:1)
Re:Why? (Score:1)
To elucidate: (Score:2)
Or not _wrong_, exactly, he's just got slightly incorrect expectations. 3dfx is NOT helping him understand these issues, because naturally they want him to produce demos that are way over-the-top, like effects sequences in The Matrix or something.
Think of it as _time_ antialiasing. You don't antialias by having some big shocking Gaussian Blur that goes out for 10 pixels: it's very local, and the effect is to make angled edges appear more clearly angled (less stair-stepped, and not fuzzed). I do have a page that gets into this w.r.t fonts: it's this page [airwindows.com].
By the same token, if you can see visible blurring happening in action, the motion blur is _already_ too much. What 3dfx need is not to arrange for Quake to send 27 frames to make a smooth blurry contrail behind everything (no matter how nice that may look in screenshots)- what's needed is to buffer a _single_ frame and average it with the current frame at a variable blend ratio. 50/50 might even be overly strong though it would maximize the time-based antialiasing- 40/60 could be better. Screenshots might show a tiny video echo effect on the fastest moving objects, but the most significant effect at typically high frame rates would be the softening of edges only in the specific directions of movement. To demo this you'd want something like a guy dressed in black and white pinstripes- the desired effect would be that the guy wouldn't blur out, but you would have a much clearer subliminal sense of his exact movements- something it would take a gamer or multimedia geek to pick up on.
Running at sufficiently high framerates (say, over 80), the ideal density _would_ be 50/50, because as the speed is pushed, differences between frames become smaller. The end result would be strictly confined to the softening of just such edges that are moving the most, which would highlight exactly what types of motion are happening.
Serendipitously, this type of motion blur (being not a fancy cinematic 'blur' that you don't see ALL THE TIME anyhow, but time antialiasing) would be just as suited to the driver as the planned spatial antialiasing- and just as suitable for application to ALL extant games that can be played on a 3dfx card. Again, all that's needed is to buffer one frame and average it with the current frame. It's not supposed to be a lightsaber blur, and 3dfx is foolish to emphasize this concept.
Time-based antialiasing is just as effective as spatial antialiasing but people don't know what to look for, partly because nobody seems to be bothering to even try doing it properly! It'll be just the one frame buffered- or possibly two. For the purposes of 3dfx, clearly using a single extra buffer and setting a blend ratio is the way to go. For serious video, a better approach is this: think of the frames as one pixel _deep_ and treat the antialiasing as a sphere centering on each pixel. The most weight would be on the pixel being tested, the pixels that are directly left or right or up or down or earlier or later would have less of a weighting, and so on: a pixel that's one pixel up _and_ left _and_ a frame earlier would have the least weighting. I've had very good results experimenting with code that averages the frames directly before and after a frame, then doubles the size of the immediate frame and dithercopies it down to antialias it and averages it with the time shifted frames. The goal was to cut video noise, and this approach was very effective at doing so without causing lots of stupid blur. (anyone doing an open source video editing program might try this- I need to finish up my version if possible and gpl it ;) )
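(For the curious, a bare-bones sketch of that kind of sphere-weighted space+time average - not the poster's code, just grayscale buffers with illustrative weights:)

#include <stdlib.h>

/* prev, cur, next: three consecutive grayscale frames (w*h bytes each).
 * Each output pixel is a weighted mean over a 3x3x3 neighborhood, with the
 * weight falling off with distance from the center pixel. The weights and
 * the grayscale-only layout are my own simplifications. */
void spacetime_average(const unsigned char *prev, const unsigned char *cur,
                       const unsigned char *next, unsigned char *out,
                       int w, int h)
{
    const unsigned char *frame[3] = { prev, cur, next };
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            double sum = 0.0, wsum = 0.0;
            for (int t = -1; t <= 1; t++)
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int dist = abs(t) + abs(dy) + abs(dx);   /* 0..3 */
                        double wgt = 1.0 / (1 << dist);          /* 1, 1/2, 1/4, 1/8 */
                        sum  += wgt * frame[t + 1][(y + dy) * w + (x + dx)];
                        wsum += wgt;
                    }
            out[y * w + x] = (unsigned char)(sum / wsum + 0.5);
        }
}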
The long and the short of it is that 3dfx should offer this time-based antialiasing, like the planned spatial antialiasing, as an extra driver feature, and that Carmack shouldn't have to do anything to enable it- and the last thing you want is to dump lots of frames in for a nice pretty blur effect. As Carmack quite rightly says, you can do that easily in the program anyhow and don't need special interaction with the card- the only thing he's not getting (and this is because 3dfx are pointedly not suggesting he try it) is that time-based antialiasing really isn't his problem, and is an entirely undramatic effect, to be sensed rather than seen and marvelled at. And marvelling at effects might sell more 3D cards ;)
Again, the desired effects are practical rather than strictly visual in a 3D game. Racing and flying games would definitely be more exciting with such a driver feature, but in a 3D game it would be a matter of quickly sensing when a player you're chasing is turning or slowing- or whether you are dodging a flying projectile thoroughly enough. These things would be conveyed through the subtle softening of the lines perpendicular to motion- giving you more information than just the straight frames. (I really need to render up some demos of this...)
Tech detail (Score:2)
draw frame at 2X
dithercopy or otherwise antialias to a screen-size buffer
display blend of buffer and the last frame's saved buffer
copy current frame's buffer to last frame's buffer, repeat
This also uses less display RAM. However, it is inferior to this:
draw frame at 2X
make blend of buffer and the last frame's saved buffer, also at 2X
dithercopy or otherwise antialias the blend to a screen-size buffer
copy screen buffer to screen, repeat
The reason the second is so much better than the first (despite using a lot more RAM) is that subtle movements are a lot more likely to show up at the scaled-up level (this does assume that there is a buffer that gets drawn to and antialiased down, which I think is a safe assumption). Storing only the screen buffer means that for many pixels there would be no change from frame to frame- storing the larger 'aliased' buffer and averaging at that size makes it far more likely that there will be significant differences. These will be averaged, and then antialiased by either dithercopying to a smaller size, or some 3dfx approach that is effectively the same thing in practice (there are only so many ways to do this). The result would be the effect I describe, of a softening of lines perpendicular to movement, but it would take on a subtlety and delicacy quite comparable to film.
Which is of course what 3dfx really want...
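For concreteness, a rough CPU-side sketch of that second pipeline (my own simplifications: grayscale byte buffers, a fixed 50/50 blend, and a 2x2 box filter standing in for the dithercopy):

#include <string.h>

/* big: current frame rendered at 2X (bw x bh). prev_big: last frame's raw
 * 2X render. screen: the final bw/2 x bh/2 output. Blend the two 2X frames,
 * box-filter the blend down to screen size, then save this frame's raw 2X
 * render for next time. */
void present_blended(const unsigned char *big, unsigned char *prev_big,
                     unsigned char *screen, int bw, int bh)
{
    int sw = bw / 2;
    for (int y = 0; y < bh / 2; y++)
        for (int x = 0; x < sw; x++) {
            int sum = 0;
            for (int dy = 0; dy < 2; dy++)
                for (int dx = 0; dx < 2; dx++) {
                    int i = (2 * y + dy) * bw + (2 * x + dx);
                    sum += (big[i] + prev_big[i]) / 2;   /* 50/50 time blend */
                }
            screen[y * sw + x] = (unsigned char)(sum / 4); /* spatial AA */
        }
    /* remember this frame's raw 2X render for the next frame's blend */
    memcpy(prev_big, big, (size_t)bw * (size_t)bh);
}

A 40/60 blend is just a matter of weighting big[] and prev_big[] differently in the inner sum.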
Re:AMD rocks (Score:2)
I concede that the cache speed is a problem for AMD, although in most benchmarks it seems to affect things perhaps less than optimizing for Intel chips does, and maybe still leaves the Athlon comparable in speed to the PIII. I have run into situations on my K6 where programs run horribly because of the cache. However, people need to start writing code with fewer cache misses where possible! Smaller is still better, a lot of the time. But AMD is working on that anyhow, just like Intel is working on actually releasing 1GHz chips in any measurable quantity.
Also, Intel has major problems with (guess what?) overclockability, high power consumption, not producing reliable chips in quantity, not selling them for reasonable (market?) prices, and high operating temps! When you're pushing the chips this hard, they're *all* going to have these problems. I admit that the Athlon is a beast, but it's also faster, clock-for-clock, than the PIII core, which explains the extra transistors.
As to the future: Intel will have their new (slower for x86!) next generation architecture, while AMD will have... copper interconnects? Faster cache speeds? Even faster 64bit x86-compatible chips? Well, we'll see what the future brings, but I know who I'm rooting for.
And if you really want overclockability, low power consumption, good operating temps, etc., etc., don't look to fast chips from Intel *or* AMD, but rather wait for Transmeta or get a PPC chip or something.
---
pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
The Dual 1GHz Pentium III Myth Myth (Score:2)
Agreed (Score:2)
I'm also tired of people spreading such nonsense. The fact of the matter is that the CPU bus is not a bottleneck (it was proven in a multitude of tests running 3D games on 66 and 100MHz bus systems -- the performance difference was less than 1%) and the memory bandwidth is not a bottleneck (it was proven in benchmarks comparing RDRAM to regular PC100 SDRAM). However, for 3D games, the CPU is the bottleneck at low resolutions. At higher resolutions (1024x768 and up), the video card becomes the bottleneck. (However, at any resolution the 3D card speed is much more important than the CPU speed.) That is why a lowly Celeron equipped with a GeForce DDR will kick a 1GHz machine with RDRAM and, say, an ATI card any time.
___
Re:Agreed (Score:2)
Re:Geez,when did slashdot become news of sysadmins (Score:2)
JEEZ! Did you even READ the original post? Let me quote the relevant portion for you; try to engage your brain before responding.
I got news for you, in most non-server environments the proc is still the biggest bottle neck
Hmm..."non-server". Let's say it again together..."non-server." Now, let's read the relevant part of YOUR post:
The computer in question serves files...
Tell me again how your response was relevant to the original post.
Pardon me while I mock this thread. Since your feeble mind isn't going to catch the analogy, I think I'll just sit back and enjoy my self-imposed feelings of superiority.
(/FLAME)
Now, where did I put those hot dogs?
Re:Geez,when did slashdot become news of sysadmins (Score:2)
While multiple processes add to your CPU load, they also strain your ability to feed the CPU with data, more or less in proportion. Your problem may not be the amount of RAM, but you certainly have a problem with memory bandwidth on any x86 ever made.
And the above analysis is before factoring in cache. Any one of the things you name is likely to put a heavy load on your cache, and when you try to do several at a time, you effectively run (several-1) of them without a cache. Or maybe all of them, if you get into a cache thrashing mode.
And it's those cache misses that make a CPU stall out for multiple cycles at a time.
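A crude way to see that cost (illustrative only; the sizes are guesses that would need tuning per machine, and hardware prefetch will soften the gap, but the trend should show):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Do the same number of memory touches over a working set that fits in
 * cache and over one that doesn't, and compare the timings. */
#define SMALL   (64 * 1024)        /* likely fits in L2 */
#define LARGE   (8 * 1024 * 1024)  /* likely doesn't */
#define TOUCHES (64 * 1024 * 1024)

static double walk(unsigned char *buf, size_t len) {
    volatile unsigned long sum = 0;
    size_t lines = len / 64;
    clock_t t0 = clock();
    for (size_t i = 0; i < TOUCHES; i++)
        sum += buf[(i % lines) * 64];   /* hop one cache line at a time */
    (void)sum;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    unsigned char *small = calloc(SMALL, 1), *large = calloc(LARGE, 1);
    if (!small || !large) return 1;
    printf("%5d KB working set: %.2fs\n", SMALL / 1024, walk(small, SMALL));
    printf("%5d KB working set: %.2fs\n", LARGE / 1024, walk(large, LARGE));
    free(small);
    free(large);
    return 0;
}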
--
2 CPU's and 3DSMax (Score:2)
Re:2 CPU's and 3DSMax (Score:2)
I'll bet it does offer similar functionality to Lightwave, but I never really had a need for that type of functionality. My need was really, "Get as much processor time as humanly possible." We even resorted to stealing the CAD machines at night to do rendering for us.
The big problem that we had rendering was that bridges have a shitload of polygons in them.
But who is the target market? (Score:2)
So who has a budget for these things and actually wants them, if not the technically incompetent? High-end servers? They would rather have a fatter pipe. These processors are only useful for high-end workstations. Maybe you care, but most of us don't use or need that.
Of course, I wish my processor was faster than yours...
-Ted
Re:Why? (Score:2)
> It isn't like any normal consumers can afford
> one of those chips anyways.
Well I sure can't. But 1 GHz chips I can't afford means remaindered or price-slashed 700 MHz chips that maybe I can afford. Not that you need a fast machine to run Linux - my Linux box is a P5-100 - but if you're running an MS OS, damn buddy, think how fast Turbo C v.2 is gonna run on one of those! Go Intel & AMD!
Yours WDK - WKiernan@concentric.net
Re:Feasible, but ... (Score:2)
The rewiring is to make the motherboard support the chip. The chip supports SMP fine, no rewiring needed.
Re:Why? (Score:2)
Y?
I just can't wait to bolt some waveguide to the system clock and use it to cook my food - coming soon to a pc near you - the latest and greatest in pc cases, with a built in microwave - just open the convenient door, slip in that slice of cold pizza, and by the time your code is compiled, your food is hot...and just think, no more burnt popcorn - you can nuke it and have it right in front of you so you don't forget about it, just click a button on your desktop and it stops cooking (not recommended during a q3 fragfest)
of course, I guess that means we will need a bigger power supply....
forget dual 1ghz; even dual 450mhz won't work (Score:2)
if you run the well-known BX chipset, and both cpu's are run to near saturation for longish periods of time (more than a few weeks, usually), the system WILL lock up.
I found this while stress testing my dual cpu BX boards (asus, tyan and abit). I ran 2 instances of seti@home on each cpu (yes, 4 processes on a dual system). ALL my BX systems hung eventually. even the ones with extra fans on the bx chip and heat compound under the heatsink.
my guess is that when the BX was released, intel felt it was "good enough" and that no one would totally saturate their CPUs. certainly NT wouldn't - and I bet linux 2.0 wouldn't either. not until linux 2.2 would both cpu's be used to this level.
so I can't believe intel anymore when they lie about their own products. dual 1ghz - gimme a break; they can't even make ONE ghz work under normal consumer conditions!
--
Re:whats the point? (Score:2)
I have a dual-500MHz machine, and I max out both CPUs all the time. Having twice the power would almost double my productivity. As for most users - anything that is multi-threaded and CPU-intensive will take full advantage of these CPUs. I believe that apps like 3D Max are multi-threaded and can take advantage of SMP. My artist friend has to take whole afternoons off while his animations are rendering because he can't use his system for other things while it is rendering. He has a single CPU. If he had a dual-CPU system, I'm sure he could be doing other things (Photoshop, email, etc.) while his images rendered. Just my 2 cents (3 in Canada).
Re:Think of the possibilities for pr0n! (Score:2)
Homer: But I want them now.
Re:Wow (Score:3)
The Slashdot Law of Site Design (Score:3)
[WARNING 2: This is one of the more bitter posts I've made to Slashdot. You've been warned.]
And so it was so ordered, after legions upon legions of sites fell to That Which Was The Slashdot Horde:
If the content of the web page is not dependent on the identity of the user, then the content of the web page MUST NOT be generated specifically for that user.
Yes, that's an IETF must, damnit
This isn't a complicated concept, folks. If each user gets a very different page (think search engine), then you dynamically generate the new content live. If each user only gets a slightly different page...well, gee, dynamically generate that slight difference, but leave everything else static.
If you're dependent on the user, change the page for each user. If you're dependent on some local index of news, then change the page each time the local index of news changes. If you're dependent on an angel coming down and teaching you to code the goddamn meaning of life in Perl, *THEN CHANGE THE PAGE WHEN SOME GLOWING HALOED CREATURE WALTZES IN YOUR STUDIO*, but for *CRYING OUT LOUD* don't regenerate your page every time I try to read some godforsaken article!
It's simple stuff like this that make me feel like I have a moral obligation to be a Comp Sci major. Grrrr.
One other point...ya wanna talk overcommitment? The Linux kernel lists are going nuts about the reasonably rare situations that can arise when the OS allows processes to overcommit memory, on the probabilistic assumption that not all processes will actually use the memory they request. What to do when the memory committed actually becomes used? Should the OS die, so that the processes may live? Which processes does the OS kill to keep itself alive? There's a lot of argument about how to deal with overcommitment on the OS level, and I'll leave that fight to the experts.
But lemme tell ya, just view the Slashdot Victim of the Day to find web pages that deal with overcommitment. Since these sites aren't too likely to change their entire codebases all that soon, may I suggest that spitting out Database Errors *might* not be the most graceful way of expressing degradation of resources?
In other words, faced with the choice of fewer ad impressions and fewer readers vs. temporarily switching to a cached copy of the page which is 99.9% accurate, might it not be nice to have built into Apache's mod_perl, as a core element, something along the lines of, "Run this script to generate this page UNLESS we're getting hammered; in that case, use mod_rewrite to change the URL to a static equivalent of our now thoroughly overloaded page"?
Ahhhh. I might actually be able to view pages about Gigahertz SMP.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
P.S. Irony #1235235: It's taking me forever to finally get this comment posted
Re:Geez,when did slashdot become news of sysadmins (Score:3)
Secondly, it has always been true that you need to properly spec your system based on what you plan to use it for. If it's going to be serving static webpages, you want lots of memory and a PCI architecture that can support multiple concurrent bus masters (to handle the high bandwidth). CPU speed is less important than cache size in this configuration as well.
If you want to run Quake, you want a smaller amount of RAM, but it needs to be low latency (say, 6ns) and high bandwidth (PC133). You'll want a fast processor and a faster video card. The HDD is unimportant for Quake, as is the bus architecture (PCI, ISA - not the FSB).
But don't tell me there's a single metric for measuring system performance... that's a lie, as much as saying that average access time for hard drives is "the" metric. I'd disagree; it's the track-to-track latency and areal density that I happen to have used to spec my system. Not that they're "the" metric, but they're the two I used (as well as internal transfer rate, of course), which is directly proportional to the RPM the HDD is rated for!
Feasible, but ... (Score:3)
To recap, the article has 6 pages on modifying the Iwill Slocket IIs (albeit with graphics).
I like the item mentioned in the conclusion [hardwarecentral.com] better: "a new revision of the Slocket II is currently in the works that will support FCPGA SMP out of the box, making the configuration of a dual CPU system a matter of plugging the CPUs in the slockets, no soldering required."
Re:Controversy over (Score:3)
Yet still less trouble than actually rounding up two 1GHz processors.
--
Slashdotted already... (Score:3)
Can't read the article.
-- iCEBaLM
I'm so disappointed (Score:3)
Re:Controversy over (Score:3)
They may or may not have been right, but in any case they certainly did not run a pair of factory 1GHz CPUs in an SMP configuration. I'll grant that the core is almost certainly the same; but even if they are correct in their contention that the stepping is the same for the two parts, it is a trivial matter for Intel to change the packaging to render the 1GHz version incapable of supporting SMP -- don't bond the SMP pin to the die, and they've done it!
The big question from my point of view is this: why would Intel say their flagship processor won't support SMP? This isn't like the case with the Celeron, where they clearly wanted people to buy the more-expensive processor instead of the cheapie... so why don't they want people to buy their most expensive product in pairs? The Celerons were too cheap, and the early FC-PGA Coppermines (the 500E and 550E) were just too overclockable; it makes sense that Intel would want to disable SMP for them, and so they played hide-the-SMP-pin. It also appears they've gone further with the CeleronII (AKA Coppermine128), and simply not bonded the SMP pin to the die -- again, a pricing issue... But why the 1GHz part? It's hard to buy Sassen's argument that it's just heat -- that applies to single-CPU systems as well as to duals.
So I don't think the controversy is over; it has just gotten more complicated, is all.
---
Re:AMD rocks (Score:3)
I'm not sure about the 1GHz machines, but AMD was having real problems with their caches, at least when the 800MHz Athlons came out. They ended up setting the cache multipliers at 3/8. If you're doing hard-core rendering and simulation, you'll want an Athlon, but you'll also want to be running an SMP machine. Unfortunately, there aren't a lot of SMP Athlon machines out there.
If, on the other hand, you're looking at gaming, I believe the video card is the current bottleneck in a high-performance gaming system. The Athlon/PIII decision becomes a matter of personal preference.
Database manipulation and general OS tasks would seem to be where the PIII would shine, given that its cache multiplier is 1. That whole cache multiplier problem is going to be a real big problem for AMD if they don't get it figured out soon.
Karl
I'm a slacker? You're the one who waited until now to just sit around.
Bus bottleneck (Score:4)
Re:Dual Athlons (Score:4)
Intel, on the other hand, has a shared front-side bus, so all the CPUs can see each other's traffic, which results in a substantially less complex chipset with a lot fewer pins. Probably lower performance (depending on several factors), but certainly a cheaper solution for dual and 4-way SMP machines.
- Mike
Controversy over (Score:4)
Molog
So Linus, what are we doing tonight?
Re:Dual Athlons (Score:4)
I do not believe there is one out there yet, but here [slota.com], near the bottom, it says that Tyan is making a board to be released Q4 2000, codenamed Dolphin, that is supposedly dual-CPU capable.
Geez,when did slashdot become news of sysadmins? (Score:5)
1) 3D, games, rendering, modeling, you name it.
2) Any kind of realtime graphics.
3) Photoshop type apps depending on whether you use filters more often or just edits to large files.
4) Compiling.
5) Audio editing
6) Real-time video editing. (What, I have to wait 2 minutes to render the changes!)
Things not compute bound
1) Serving webpages, files, etc.
2) Working with large photoshop files.
3) Some types of scientific computing where data crunching is high volume, low workload.
Yes, I've forgotten a few things, but the market for more CPU power is clearly more important than the market for higher bandwidth.
Re:Wow (Score:5)
We currently see 100+ fps timedemos at 640x480 with either a 1ghz processor or dual 800's, and that isn't completely fill rate limited. DDR GeForce cards are really, really fast.
Yes, it is almost completely pointless.
The only reasonable argument for super high framerates is to do multi frame composited motion blur, but it turns out that it isn't all that impressive.
I did a set of offline renderings of running Q3 at 1000 fps and blending down to 60 fps for display. Looked at individually, the screenshots were AWESOME, with characters blurring through their animations and gibs streaking off the screen, but when they were played at 60hz, nobody could tell the difference even side by side.
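(For illustration only - this isn't the actual test harness described above, but the offline blend-down amounts to roughly this per displayed frame, on grayscale buffers:)

/* Average every SAMPLES consecutive high-rate frames into one displayed
 * frame (e.g. ~16 samples to go from ~1000 fps capture to 60 fps output). */
enum { SAMPLES = 16 };

void blend_down(const unsigned char *frames[SAMPLES],
                unsigned char *out, int npixels)
{
    for (int i = 0; i < npixels; i++) {
        unsigned sum = 0;
        for (int s = 0; s < SAMPLES; s++)
            sum += frames[s][i];
        out[i] = (unsigned char)(sum / SAMPLES);
    }
}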
Motion blur is more important at 24hz movie speeds, but at higher monitor retrace rates it really doesn't matter much.
There are some poster-child cases for it, like a spinning wagon wheel, but for most aspects of an FPS, realistic motion blur isn't noticeable.
Exaggerated motion blur (light sabers, etc) is a separate issue, and doesn't require ultra-high framerates.
There are still plenty of things we can usefully burn faster cpu's on...
John Carmack