
Intel Cancels 800 MHz Xeon

goingware writes: "This article at C|Net tells how Intel canceled plans to produce an 800 MHz Xeon. They had feedback from major OEMs telling them they wanted fewer speedbumps with larger incremental improvements. I think that's a positive step, actually. I know from doing performance analysis of software that simply giving a processor a speedbump doesn't win the end-user that much; it's mainly done for marketing reasons, because the performance of real systems these days is limited so much by memory access times and other factors. It would be better if manufacturers concentrated on engineering improvements that would result in real performance gains rather than just notching up the clock speed."
This discussion has been archived. No new comments can be posted.
  • As you may or may not know... what Intel OFTEN does is simply switch their production line to produce the next higher chip in line, and then STAMP them with a lower clockspeed. Thus I've been able to buy a P233MMX sold as a P166 (even WITHOUT MMX!!!), a P2-333 stamped as a P2-266 and even a P2-450 stamped as a P2-350. I have NO DOUBT that Intel would have continued to use this strategy, seeing as they don't even lose money on it (only one production line needed), if this announcement had not been made. And I too think it's a positive step. This means that Intel for once might actually be able to focus on developing a BETTER CPU rather than just a faster CPU. Does ANYONE doubt that Intel has made no significant advances in technology since the jump from 286 -> 386? (OK, so integrating MMX into the chip rather than letting the vendors supply it as software MIGHT be considered a technological advance, but only a small one.) If you have evidence to counter that point, I can give you even more that says otherwise. I think and HOPE that Intel will stop the race to have the FASTEST clockspeed, and instead start the race to build/design the best CPU. Right now they're being beaten in innovation by Transmeta and in speed by AMD, so they really should focus on new areas that would make them more appealing to us, the consumers, and I for one will not buy a new CPU simply because there's a few more MHz in it.
  • 10.5 times - in the latest 700 MHz Celeron.

    This is getting so ridiculous; with a 10.5 multiplier the CPU is totally starving for data, but Intel is controlled by the marketing drones, and they said "we will differentiate the Celeron and Pentium III, and the Celeron shall not have a bus faster than 66 MHz".
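    The starvation arithmetic behind that complaint is simple - core clock is just bus speed times multiplier, so the bigger the multiplier, the more core cycles pass for every bus cycle (the "66 MHz" bus is really 66.67 MHz):

```python
# Core clock = front-side bus speed x multiplier. The bus is the CPU's
# only path to memory, so a large multiplier means many idle core
# cycles for every single bus cycle.
def core_clock(bus_mhz, multiplier):
    return bus_mhz * multiplier

celeron = core_clock(66.67, 10.5)    # ~700 MHz on a 66 MHz bus
pentium3 = core_clock(133.33, 5.5)   # ~733 MHz on a 133 MHz bus
```

Same ballpark core speed, but the Celeron's memory interface runs at half the rate, which is exactly the marketing-driven differentiation the comment describes.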
  • Oh, but don't you see? All the GL screensavers will run a little bit smoother on the NT servers with the 800MHz proc even when 25 people are logged on using Citrix (you laugh but I have seen a Citrix server with the GL screensavers activated!) jars
  • I had to re-read this summary about 3 times before I realized that "speedbumps" weren't problems (as in "we hit a speedbump") but feature enhancements (as in "we bumped up the speed").
  • The Abit KA7-100 supports memory interleaving with 133MHz DIMMs. Haven't had a chance to try it because I sent the motherboard back (don't ask!), but there is support for it out there. How popular it will become, only time will tell.
  • by / ( 33804 )
    Sure you can. You just have to shake it really fast. Like a tuning fork, but with less resonance.
  • I see a lot of posts here about improving bandwidth, blah blah blah....What the gurus who standardize this sheit need to do is ADD more friggen IRQ lines and DMA's!!! Breaking away from IRQ SHARING would be like the holy friggen grail!
  • I agree with the television example (which isn't yours originally, and you should at least try to provide a citation or at least disclaim authorship)

    Sorry, I didn't mean to give the impression that this was my idea. I first had it explained to me by a local TV account exec about 20 years ago. She didn't come up with the idea either, she was just explaining what everybody in the business knew, but few consumers/viewers understood.

    I disagree with its utility as an analogy. Customers are actually directly consuming CPUs...

    I don't think this is really true. Very few people do what I did, and buy a motherboard, case, disks, memory, and a CPU, and put them together themselves. They buy a complete system, often with some services attached. They are, at best, indirect consumers.

    Perhaps a better analogy is Thiokol. I doubt many automobile purchasers think of themselves as Thiokol customers, and Thiokol certainly doesn't think of drivers as their customers. GM, Ford, DCX and the other automakers who put airbags in their vehicles are the customers for Thiokol's "automotive inflators".

    In any case, the point is still valid: I'm not an Intel customer. The vendor I bought my last three CPUs from is not a major Intel customer. Compaq, Dell, Gateway and a few other system builders are the customers about which Intel should care.


  • I agree with the television example (which isn't yours originally, and you should at least try to provide a citation or at least disclaim authorship), but I disagree with its utility as an analogy. Customers are actually directly consuming CPUs, whereas tv-watchers are only actively consuming the program and try their best to avoid consuming the advertising. If Intel raises the prices on its chips, then that will directly affect consumers' ability to purchase those chips, whereas if stations raise prices on advertising spots, that only indirectly affects consumers (by affecting stations' revenues and subsequent ability to produce programs).
  • How about all the DX2 "upgrades" that got sold? Increase your computer's speed by 50%, drop that old DX-33 for a DX2-50...
  • As others have said, Xeons are used mostly in servers. The people who buy these are less likely to be impressed by a small increment in clock speed. No one is claiming that Intel are about to change their tactics on desktop stuff, especially when the press is so full of hype on the 1GHz milestone.
  • I am really happy to see this. The CPU wars have been out of hand for years, with minuscule and meaningless speed bumps on a 3-month schedule since at least '96. As a consumer, I'd prefer this alternative method, too. Bump up the speed every six months instead. And make it somewhere between 100 and 250 MHz. More if you can. If it's going to be 25MHz or 50MHz, why bother?

    Now, if Motorola and IBM would just get off their asses and start producing faster G3/G4 chips, that would be a bonus. Shit, when was the last speed bump for Macs? Last November, I think... *sigh*
  • I know Xeons are generally only used in server apps and not on the desktop. They also cost a heck of a lot more. Can anybody explain to me what's so 'good' about them that justifies the cost...
  • YES!!

    I have seen it in the mainframes in the late '70s, the minis in the late '80s -- and always from MS ;-)

    I went to an open house just after the mainframes started allowing 16 meg workspaces, and the guest speaker was talking about a programmer who, the week before, had found that 16 meg was a limit and complained it was too small. He was reading an entire file into the 16 meg space, processing it, then writing it back -- a main file that was shared by all processes! Because it was faster!
  • The differences in officially supported speeds for like CPU designs in the Intel world are a matter of manufacturing efficiency more than anything else. Basically what happens is that they take a design spec and start to bake wafers. As each stepping comes out of the oven, a bunch of performance, reliability and heat tests are applied to the batch. When no more than a given % of a stepping batch fails the family tests at a given speed, the family is labeled with that bench speed. Better wafers pass higher speed tests, and therefore higher official speeds are attached to that family. As the process becomes more refined - better tolerances, cleaner lithography, cleaner materials, etc. - each family is better able to withstand higher speed tests and so is certified at those higher speeds. It takes a different physical architecture of the CPU to make the larger jumps in speed before the chips again begin to fail at higher and higher speeds. So basically what it comes down to is that the difference between 700 and 733, or any other two fairly close speed grades, is that the manufacturing/baking process has been improved and the family is able to pass a higher QA standard. Normally the process works like this: take a basic design, make some chips, sell them, invest some money in improving the process, speed up the chips, and continue until the cost of investing in the process is greater than the benefit realized. At that point you make subtle changes in the physical design, apply whatever manufacturing process improvements to that variation, and continue in your quest for manufacturing process improvement.
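    That binning loop can be sketched in a few lines - test each batch down a ladder of speeds and certify it at the fastest one whose failure rate stays under the QA cutoff. The speeds, failure rates and 5% cutoff below are invented for illustration, not Intel's actual thresholds:

```python
# Each stepping batch gets tested at a ladder of speeds; the batch
# is certified at the fastest speed where enough parts pass QA.
def certify_speed(fail_rate_at, speeds_mhz, max_fail_rate=0.05):
    for speed in sorted(speeds_mhz, reverse=True):
        if fail_rate_at(speed) <= max_fail_rate:
            return speed
    return None  # batch fails QA even at the slowest bin

# Hypothetical batch: failure rate climbs with test speed.
fail_rates = {600: 0.01, 650: 0.02, 700: 0.04, 733: 0.20}
print(certify_speed(fail_rates.get, fail_rates))  # 700
```

Process improvements shift the whole failure-rate curve down, which is exactly why the same design slowly climbs through 700, 733, and so on.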
  • "3D accelerator and RAM manufacturers should be working on methods to bring down production costs so these things can become more affordable."

    Seems reasonable..

    "Bus speeds need to be enhanced. USB is getting there, but it's not enough...and IDE sucks - and we really need faster hard-disks."

    Hmm.. 1) USB is designed for slow-speed peripherals, and is meant to replace serial connections.. No speed problem there. 2) IDE sucks, not because of the bandwidth (no way 2 devices will ever fill PIO4 -- 16.6MB/s, let alone 100!), but because of the various contention issues (only one IDE device may speak at a time, etc). Think SCSI. Even at the "slow" 20MB/s of SCSI-1, it outperforms ATA100. 3) Yes, SCSI HDs are faster...

    "Cache sizes and access speeds need to become a lot higher."

    Not size, but efficiency. Your PII could have 512KB of cache, but my Athlon with 256KB of cache will kick its ass, because I'm not duplicating the L1 cache in the L2 cache (exclusive vs. inclusive), and the associativity of my L2 cache is higher (4-way vs. 2-way).
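    A sketch of that exclusive-vs-inclusive point - an inclusive L2 holds a copy of every L1 line, so the distinct data the pair can hold is just the L2 size, while an exclusive L2 adds to the L1. The 128KB Athlon L1 and 32KB PII L1 figures here are assumptions for illustration:

```python
def distinct_capacity_kb(l1_kb, l2_kb, policy):
    """Distinct data an L1+L2 pair can hold, in KB."""
    if policy == "exclusive":    # L2 never duplicates L1 lines
        return l1_kb + l2_kb
    elif policy == "inclusive":  # every L1 line also lives in L2
        return l2_kb
    raise ValueError(policy)

print(distinct_capacity_kb(128, 256, "exclusive"))  # 384
print(distinct_capacity_kb(32, 512, "inclusive"))   # 512
```

Raw distinct capacity is only part of the argument, of course - the associativity difference the comment mentions matters at least as much.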
  • by Mr. Potato ( 174670 ) on Thursday July 06, 2000 @02:00AM (#954705)
    This seems to be a good thing for computer sellers, rather than computer buyers.

    If the cpu speeds increase in less frequent steps, then the cpu prices will also decrease in less frequent steps (IMHO).

    This seems more like an attempt by Intel to "throttle down" the market. Also it makes their marketing campaigns look better if they can demonstrate these artificially-contrived "big improvement steps". Which in turn leads to higher prices asked by Intel, for those big steps.

    This seems to be a subtle money grab from the consumer...

  • Um... part of the reason that Intel hasn't been able to keep up with demand is because of the low yields for the higher-speed chips. Xeons come with 512KB, 1MB or 2MB caches; if Intel's having trouble with the 256KB cache on standard PIIIs, I don't think they're going to have an easier time with the Xeons.
  • > I've been told that this is the only reason AMD has been able to get the foothold into the market that they have.

    It is indeed a big reason, but certainly not the only reason, the other important one being that AMD has been selling equally good processors at a better price.

    And sometimes better processors at a better price. I've lost the link, but just a day or two ago I saw something comparing the Duron 700 to the Celeron 700, and the Duron won on almost every test. And that on top of being 25% cheaper.

  • by Betcour ( 50623 ) on Thursday July 06, 2000 @02:25AM (#954708)
    That's because the 700 MHz is a 100 MHz-bus CPU, while the 733 MHz is on a 133 MHz bus... so those two CPUs are for different platforms and are really two different product lines. The next step after the 700 is not 750; after the 733, it is 800.
  • True 'nuff. But that's never stopped Intel before, has it? ;)
  • by Chris Frost ( 159 ) on Thursday July 06, 2000 @03:08AM (#954710) Homepage
    The Indy (as did the Indigo2, both out in 1993) had 400MB/s memory bandwidth and 266MB/s GIO-64 bandwidth. The R4k Indigo (circa '91, I think) had these same limits. So if an earlier post is right that the P3 has 400MB/s of bandwidth, SGI had that about a decade ago.

    The O2 (which you can get for under $1500 now) has 2GB/s of memory bandwidth, and the Octane has its switch at around 1.6GB/s (memory bandwidth is dependent on CPU clock). The Origin's memory bandwidth scales linearly with the number of nodes you have (each node having up to two processors and its own RAM). So when the Pentium Pro had just started coming out, with what Intel pushed as its really fast L2 cache -> CPU bandwidth (1.2GB/s), there were computers from SGI with faster system memory (though more latency, of course). Interesting.
  • I apologize, I did mean to say "one of the big reasons...". Thanks for catching me on that one!

  • This seems to be a good thing for computer sellers...

    Someone remarked earlier that Intel needs to listen to their customers. Since computer sellers are Intel's customers (not us poor computer buyers) I'd say Intel has indeed caught on.

    Understanding the identity of the actual customer is an important problem frequently overlooked. For example, the customer for a TV program is not the couch potato in front of the screen. The mass of couch potatoes is, in fact, the product. The customer is the ad buyer who wants to put something in front of that target demographic.

    Intel, as the article described, listened to Compaq, and a few other big buyers of this one tiny little part of their servers, and is concentrating on meeting their customers' needs.

    So yes, this is a very good thing for computer sellers, which is exactly what Intel intended.


  • I still can't figure out what it could possibly be worth to enhance a chip's speed beyond the capabilities of the system it's in. If you want a margin, your 1GHz chip should still get at least an 800MHz system. That is a 20% margin - A LOT.
    In short, instead of all them fast CHIPS, we need to work on fast SYSTEMS.
    That means:
    • 3D accelerator and RAM manufacturers should be working on methods to bring down production costs so these things can become more affordable.
    • Bus speeds need to be enhanced. USB is getting there, but it's not enough...and IDE sucks - and we really need faster hard-disks.
    • Cache sizes and access speeds need to become a lot higher.

    All these things mean that with our current chipsets we have the potential for much faster computers...where are they?

    PS. That beer/mdma guy is like Barney, everybody wants him dead.

  • by evilj ( 94895 ) on Thursday July 06, 2000 @02:31AM (#954714)
    It is interesting that mainframes, whilst having less CPU power than a Pentium, can still outperform it on IO-intensive tasks.

    Nowadays, we have ultra-fast x86 CPUs, but chipsets that hold them back. I used to have a 486 motherboard that did memory interleaving to speed up memory accesses. I suppose when we had 70ns SIMMs it was more important, and it was also cheaper to implement extra memory buses due to the lower pin count on a SIMM compared to a DIMM.

    Anyway, it would make sense for the current x86 chipsets to have interleaving, although with SDRAM burst reads, it might be difficult to get the timing right. Maybe that's what has prevented it in commodity chipsets? Otherwise, I suppose you could have 4-way word interleaving thus:

    DIMM1: words 0, 4, 8, ...
    DIMM2: words 1, 5, 9, ...
    DIMM3: words 2, 6, 10, ...
    DIMM4: words 3, 7, 11, ...

    You would still need some really low-latency memory in between the main memory and the processor, and I guess the cost is another barrier to use in commodity chipsets.

    It's interesting to note that the Alpha architecture has up to 8 times the memory bandwidth of the Athlon (5.2GB/s vs. 600MB/s), or 12 times the Pentium's 400MB/s (although such an Alpha machine can cost $13,000) - check out /ws/alpha_21264.html [] for more in-depth information.
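    The interleaving scheme sketched above boils down to a word-address-modulo-banks mapping, so a sequential burst touches each DIMM in turn and every module gets several word-times to recover between its accesses. A minimal illustration:

```python
# 4-way word interleaving: word address modulo 4 picks the DIMM,
# so a sequential burst cycles through DIMM1..DIMM4 and each module
# only sees every fourth access.
def bank_for_word(word_addr, n_banks=4):
    return word_addr % n_banks

burst = [bank_for_word(w) for w in range(8)]
print(burst)  # [0, 1, 2, 3, 0, 1, 2, 3]
```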


  • Typical applications that you and I use every day (Xwindows, Xemacs, Scheme, you know) don't care about memory bandwidth. Because interleaving, after all, improves bandwidth: where once you could get 1 byte per transfer over the bus (in your example), now you get four bytes --- a stunning four-fold improvement in bandwidth!! In fact, it is widely acknowledged that the bandwidth "problem" is really an issue of economics, not technology. Lay some more fiber, interleave some more, get more bandwidth.

    No, most applications care about LATENCY. And this is a technological problem, not an economic one. This is because they are accessing memory in a somewhat random fashion: A[0], A[500], A[25], ..., A[1]. Notice that we use A[0] and A[1], but there's a long interval in between. Hence, this example is bounded by memory latency, not memory bandwidth.

    Way back when, Apple introduced two essentially equivalent computers: the Centris 650 and the Quadra 650. Both had 25MHz 68040 processors, etc., but the Quadra had 2-way interleaving. Twice as much bandwidth, DUDE! It performed about 2-4% faster.
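    The latency-vs-bandwidth point can be put in toy-model terms. The 100-cycle startup latency and 4 bytes/cycle transfer rate below are invented round numbers, just to show the shape of the problem:

```python
# Toy model: each dependent, random access pays the full startup
# latency; a streaming access amortizes it over a long burst.
LATENCY = 100        # cycles to start any memory transfer (assumed)
BYTES_PER_CYCLE = 4  # bus transfer rate once started (assumed)

def random_reads(n_accesses, bytes_each=4):
    # Every access is independent, so each pays full latency.
    return n_accesses * (LATENCY + bytes_each / BYTES_PER_CYCLE)

def streaming_read(total_bytes):
    # One latency hit, then the bus streams at full bandwidth.
    return LATENCY + total_bytes / BYTES_PER_CYCLE

print(random_reads(1000))    # 101000.0 cycles for 4000 bytes
print(streaming_read(4000))  # 1100.0 cycles for the same 4000 bytes
```

Doubling the bandwidth barely dents the random-access number, which is why the Quadra's 2-way interleave bought only a few percent on real workloads.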
  • by jht ( 5006 ) on Thursday July 06, 2000 @03:28AM (#954716) Homepage Journal
    Mainframes don't just have faster memory - in fact, the DRAM used may even be slower. It's the overall I/O, the speed at which the mainframe talks to peripherals and storage, the speed and caching in the storage systems, and the ability of the mainframe to do all these things at full blast simultaneously. The speed advantage to a mainframe, as you indicate, isn't one of CPU power per se - it's the ability the mainframe has to walk and chew gum whilst simultaneously rubbing its tummy, so to speak.

    The memory difference isn't just one of memory interleaving (many boards do that now), or the memory-side bus. All PC processors get their speed from outrageous multipliers - which accounts for a couple of things in today's systems:

    1: The tight code loops of a lot of benchmarks operate mainly from cache - creating way-high scores.

    2: There isn't that much difference between a 1 GHz processor and a 600 MHz processor in real-world usage. Some things will be faster, but many more virtually unaffected.

    Mainframe buses don't have the bandwidth restrictions that PC buses have. And when you think about it, we have 10x multipliers on PC processors, but the bus has only improved 4x since the 486 and the glory days of the 33 MHz bus. Most servers need faster I/O buses, not faster processors. When 64-bit 2x PCI is commonplace (or something better), and the FSB hits 250 MHz, and the operating systems finally become worthy of all that horsepower is when the PC will really start to make a dent in the mainframe's world. Until then, there's a reason why a mainframe will cost you hundreds of thousands of dollars, and a PC server will cost (at most) tens of thousands. PC servers are neat, and they do a pretty good job at what they are designed for - but it ain't no mainframe.

    -Josh Turiel
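    The bus arithmetic in that comment is easy to check: peak bus bandwidth is just width times clock, so the 64-bit 66MHz ("2x") PCI it mentions is four times plain 32-bit/33MHz PCI, ignoring protocol overhead:

```python
def bus_mb_per_s(width_bits, clock_mhz):
    # Peak bandwidth = (width in bytes) x clock, ignoring overhead.
    return width_bits / 8 * clock_mhz

print(bus_mb_per_s(32, 33))  # 132.0 MB/s -- plain PCI
print(bus_mb_per_s(64, 66))  # 528.0 MB/s -- 64-bit/66MHz PCI
```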
  • A grad student here at WUSTL just gave his master's thesis defence on optical interconnects. He compared the performance of using some very fancy (and risky) cache prefetching optimizations to using an optical interconnect between the CPU and the system memory. Essentially, he cut the effective lag of system memory to that of the L2 cache. Imagine having a 100% cache hit rate, w/out appreciably increasing the access time. Drool. That, and he also boosted the bandwidth to unbelievable speeds for chip to chip, and eliminated the crosstalk/RF problems (of course, what do you expect using fibre?). I just can't wait until the whole thing gets cheap enough to put inside my own box.
  • And this doesn't include the older big machines either like Onyx, the Challenge series (which had somewhere around a 1 or 2GB/s transfer rate), Crimson, etc.

    And then of course Sun does well too, though I don't know as much about their machines (and someone already mentioned Alphas).
  • The majority of a modern processor's speed comes from massive pipelining and predictive techniques. While a high-bandwidth data connection to system RAM helps some (RDRAM), what is really important is low latency.

    When the processor executes all of its instructions and has to fetch the next set from system RAM, it 'stalls', and just sits there spinning its gears until the RAM gets back to it. The same thing happens if the processor needs some data value from the RAM. While the system RAM may be very fast, it is very slow compared to the processor, and it can take up to a hundred clock cycles for the RAM to reply. That entire time, the processor is doing jack squat.

    In order to keep the processor busy, modern processors have caches. The L1 cache is a (usually small) cache that lives on the same chunk of silicon as the processor. It is incredibly fast, and can be considered to have a latency of one clock cycle or less. Whenever the processor requests something (data or an instruction), it tries to fetch it from the L1 cache first. If it is there, all is happy. Otherwise, there is a cache miss, and the processor has to look elsewhere for the data. When it gets said data, the L1 cache is also loaded with many nearby chunks of data, in the hope that what the processor wants next will be near what it wants now.

    In the event of an L1 cache miss, the L2 cache is checked. The L2 cache is not on the same silicon, but it is still inside the same package. The L2 cache is much larger than the L1 cache, but also much slower. It has a latency of around 2 to 10 clock cycles. Just like the L1 cache, when there is a miss, it fetches in more than what is needed.

    If there is an L2 cache miss, then you may very rarely go to an L3 cache. Your system (probably) does not have an L3 cache. Next in line is therefore the system RAM. And, if you are incredibly unlucky, what you need is not in the RAM, but has been swapped out, and you have to wait an eternity for it to be paged back in.

    Given all of this, one can surmise that if you are going to make a processor perform better at the same clock rate, then you can:

    1) Get more L1 cache.
    2) Get more L2 cache.
    3) Get more (or get a) L3 cache.
    4) Get more system RAM.

    Xeons perform so much better than the ordinary processors, especially when used in a multitasking-intensive environment, because they have gobs of cache. They have 1-2 MB of L2 cache, as opposed to at most 256KB of cache on the standard processors. If you run something that routinely has cache misses, you may only be actually getting 10% performance out of your CPU. Xeons don't miss nearly as often. Of course, cache memory is very expensive (especially L1), and can drive up the price a lot. Xeons also give some performance benefits in SMP (as they were designed for it, rather than just supporting it). 8-way SMP with fast processors that have gobs of cache is very expensive, and people tend to put such beasts of machines to heavy work, like dishing up web pages to millions of people, rather than running the screen saver.
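    That whole hierarchy can be folded into one average-access-time number. The hit rates and latencies below are invented (but in the comment's stated ranges), just to show why a bigger L2 that catches more misses pays off on cache-hostile workloads:

```python
# Walk the hierarchy: each level is (hit_rate_at_this_level, latency).
# Expected cost = sum over levels of P(reach level) * P(hit) * latency.
def avg_access_cycles(levels):
    total, p_reach = 0.0, 1.0
    for hit_rate, latency in levels:
        total += p_reach * hit_rate * latency
        p_reach *= (1.0 - hit_rate)
    return total

# Assumed numbers: L1 hits 95% at 1 cycle, L2 catches 90% (or, for a
# Xeon-like big L2, 99%) of the rest at 8 cycles, RAM takes 100 cycles.
small_cache = [(0.95, 1), (0.90, 8), (1.0, 100)]
big_cache   = [(0.95, 1), (0.99, 8), (1.0, 100)]

print(avg_access_cycles(small_cache))  # ~1.81 cycles per access
print(avg_access_cycles(big_cache))    # ~1.40 cycles per access
```

The absolute numbers are made up, but the shape is the point: almost all of the average cost comes from the rare trips to RAM, so shaving the L2 miss rate moves the needle far more than the raw clock speed does.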
  • There's the small problem of physics. It's a lot easier to get fast speeds at the micron level than at the centimetre level. For a long time the bus was stuck at 66MHz while designers struggled with RF interference between the tracks on the motherboard.

    Cache size is a tradeoff of cost vs. performance on one level. On the other hand, having too large a cache can decrease performance in tight loops, etc. Remember that mean memory access time is c + (1-h)m, where c is cache access time, h the hit ratio and m memory access time. If you've got software that's already optimised for high hit ratios on smaller caches (as most modern software is), increasing the cache size from 2 MB to 4 MB will likely increase cache access time a lot more than the not-as-dramatically-increased hit ratio can compensate for, so your mean access time actually increases.
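    Plugging numbers into that c + (1-h)m formula shows the effect. The figures below are invented to illustrate: a bigger cache whose access time rises while its hit ratio barely improves:

```python
# Mean access time per the comment's formula: c + (1 - h) * m
def mean_access_ns(c, h, m):
    return c + (1.0 - h) * m

m = 60.0  # main-memory access time in ns (assumed)

# 2 MB cache: fast access, already-high hit ratio
print(mean_access_ns(c=5.0, h=0.97, m=m))   # 6.8 ns
# 4 MB cache: hit ratio barely improves, access time rises
print(mean_access_ns(c=7.0, h=0.975, m=m))  # 8.5 ns -- slower on average
```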

  • This means that speedbumps are a thing of the past. With the upcoming Transmeta Crusoe, that should put the heat on Intel's tail.
  • SMP

    Wow, I thought this idea was never going to come up. I couldn't agree more, actually. Have you ever used a PIII-600, then sat down in front of a quad P-150? Despite the fact that the PIII costs so much more today than a handful of old Pentiums, it seems like the quad runs SO much faster.

    The problem is that many operating systems, let alone applications, are not at all optimized for SMP systems. Heck, win95/98 won't even let you try SMP, and a huge portion of the world uses it. And yes, win2k/NT supports it, but few apps truly benefit from it by itself. The real speed-ups come when you can finally give that CPU-hog Windows Explorer kernel thread its own dang processor. And how nice multitasking gets with SMP.

    And I can't wait for the next big Linux Kernel release, did you see all the nice SMP support they put into it? *grin*

    So what I want to see is further development and support of SMP-friendly hardware (mobo's, vidcards), operating systems, and applications.
  • I've always been amazed by the shortsightedness and selfishness of people who think that raising the MHz bar every month or two is a bad thing. They usually think it's a bad thing precisely because they're the type of person who wants to have the top-of-the-line processor, and when the speed bump comes they no longer have a supreme God Box. That's the reality of the situation, because NO ONE LOSES when processor speeds increase.

    After all, processors are commodities whose values are expected to fall fairly quickly; they're not investments, they're tools, tools which are always being improved upon. No one ever says, "I wish they would stop making racing cars faster. I mean, it's absurd adding more horsepower and speed, when 120MPH is fast enough for anyone. They should just race around with that as the top speed, or they should invent new engine technologies to push performance up, they shouldn't tweak existing engine technologies to go faster." That's how absurd the argument that processor speeds should be limited is: it makes no sense, at least for the consumer. No one's use of Photoshop or Quake 3 or Premiere or compiling the Linux kernel was ever slowed down by monthly speedups in processor speed. You're no longer the fastest, but unless you're a complete jerk who has to compensate for small penis size or low self esteem by engaging in a Geek pissing contest about who has the fastest processor, it doesn't hurt you. And if you are that jerk, get a fucking life and a clue because there's more to life than bragging rights and there's more to a fast/useful system than just the CPU. And BTW, now that cache is integrated on-die in all new processor families, speed scales pretty linearly and reasonably with incremental MHz increases; the only exception is the dumbass Celeron on its starved 66MHz bus; but the P!!! at 133MHz FSB is good, and the Athlon or Duron with 200MHz effective EV6 bus isn't being starved for data and won't be for some time, as long as you have 133MHz or greater memory and a decent disk subsystem. Which isn't unreasonable, since PC-133 memory is about as cheap as PC-100 and can often be pushed up to 143MHz and sometimes 150MHz if you buy from the best manufacturers (look at the memory comparisons on Anandtech).

    The only people faster processor speeds hurt is Intel and the big OEMs, and we should lose no sleep over that. AMD has, in part, made its progress based on not just better design but faster processors; being the first out at most major speed grades has given them prestige they wouldn't have had if they'd stuck to slower speed grades. Pushing the MHz envelope has helped AMD's reputation. Keeping up with AMD has hurt Intel's pocketbook. And the large OEMs like Compaq and Dell and HP, they get hurt because they buy massive quantities of CPUs ahead of time, for building crappy integrated systems on bulk assembly lines--which is good for smaller screwdriver shops, who buy fewer CPUs at a time and buy less far in advance, thus helping the small guy relative to the big corporations. Who can complain about that? Especially on Slashdot, where we mostly like the underdog and the little guy rather than the corporate empire.

    Plus, in the long run, speed bumps are great for us consumers. I'm still using an old 400MHz box, ancient by today's standards, but I look at the long term and realize that every speed grade I fall behind is one I will gain when I can afford to upgrade. I think it's wonderful that early next year I will be able to afford a 1GHz or faster Athlon. I cannot complain in the least, and without the incremental speed bumps we end users would never be able to afford such a beast so soon.

    The OEMs saying nay to an 800MHz Xeon is good for them, but bad for consumers. It helps them maintain value for their slower MHz Xeon systems, but does nothing for the advancement of consumer interests. Not that Xeons have much use for the average consumer, since only wealthy consumers and, more likely, businesses have use for the Xeon's big selling point, 4-way SMP. But businesses and Net companies who could have used the extra horsepower would have benefitted a little by the speed bump. The one caveat is that I doubt the relatively old Intel bus could handle 4-way SMP at 800MHz very efficiently, anyway, but that's a bus issue and not strictly a processor issue, since the other selling point of the Xeon was always its integrated L2 even when other P!!!s didn't have L2 cache integration. Also, the advantage for consumers would be the lowering in price of slower Xeons. At any rate, processor speed increases are nothing for users to complain about, now that L2 is integrated and bus speeds on high end processors are reasonable and the main bottlenecks are memory and disk subsystems, which aren't the responsibility of Intel or AMD. Make Seagate make faster hard disks, and make Micron hurry up with DDR-SDRAM, but it's just plain silly to complain about faster processors and lower prices.
  • "I wish they would stop making racing cars faster. I mean, it's absurd adding more horsepower and speed, when 120MPH is fast enough for anyone"

    Actually, race cars have been hobbled for the past 10-15 years or so. When races got up in the 250 mile an hour area, the cars could significantly outperform human reaction time, so no one was willing to pass except under ideal conditions. As a result car races became (more) boring. So they slapped airflow restrictors on the engines to slow them down.

  • I've been running my main development system on a Pentium Pro/200 for about 4 years, and resisted doing an upgrade through *3* independent releases of new processors from Intel - the Pentium II, the Pentium III, and the Xeon (4 if you count the Celeron, but by that point I was too disgusted at Intel to even bother working out what a Celeron is...)

    I recently decided that my compile times were just too long, and started looking for options - I chose an AMD 750. Mostly because AMD have a good chip, it works well, floating point is really nice, and it was very inexpensive.

    Sure, AMD do the speedbump marketing bit - they have to, or Intel would eat their lunch. But at the lower price, playing speedbump catchup with AMD is at least a little more reasonable.

    I think my AMD 750 system will keep me happy for a year or two, and hopefully by that time Transmeta or some other such company will have released a more cost-effective processor that can support another one of my personal policies for buying computers, which is to buy one for whatever function needs filled.

    I don't try to cram all the functionality I need into one big mega-computer - that has proven too unstable and a pain in the butt to administer. Instead, I opt for getting cheaper secondary/tertiary computers for whatever function.

    I have a laptop for email, correspondence, and general r&d work on code. I have my aforementioned AMD 750 for major development work (music software, telephony, etc). I have a Mac G4 for all of my audio needs - this recently replaced dedicated hardware sequencers for the job. I have a cheap old-skool Pentium for bookkeeping and print serving on the 'net. I have a couple of Pentium boxes (cheap and generic) for Linux development - no GUI required, and they sit idle most of the time. I have a fast P2/400 for my Linux web and mail server and it's idle most of the time as well.

    All of this has been quite successful from the standpoint of low-cost, low-administration, low-risk if something goes wrong, and I hope that things change with the CPU mfr's to make cheaper computing a lot more feasible. These superfast chips with super-$ price tags are not what we need...
  • SGI used interleaved memory to achieve high bandwidth. For example, the Indigo2 and Indy had banks of four 72-pin SIMMs and two-way interleaving. Even the O2 requires two 288-bit DIMMs to be added at a time. The Challenge/Onyx class machines could be configured with anywhere from 1-way to 8-way interleaving, depending on how many memory boards they had.

    In contrast even `high end' Intel server motherboards like the L440GX+ Lancewood are expanded one DIMM at a time and have only 4 DIMM slots. Compare that to the Indy, Indigo2 and O2 which had 8, 12 and 8 memory slots respectively. The Challenge class machines could have dozens of memory slots.

    Would PC server consumers choose higher bandwidth or lower cost?
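    A toy sketch of what interleaving buys you (this is a generic low-order-interleave model, not SGI's actual memory controller, and the line size is an assumption): consecutive cache lines rotate across banks, so sequential accesses can overlap each bank's busy time.

```python
# Generic low-order memory interleaving model (illustrative only, not
# SGI's actual controller): consecutive cache lines map to successive
# banks, so back-to-back fetches can proceed in parallel.

LINE_SIZE = 32  # bytes per cache line (assumed)

def bank_for(addr, n_banks):
    """Bank that services the cache line containing byte address addr."""
    return (addr // LINE_SIZE) % n_banks

# With 4-way interleaving, four consecutive lines hit four different
# banks, so their access latencies can overlap:
hits = [bank_for(a, 4) for a in range(0, 4 * LINE_SIZE, LINE_SIZE)]
print(hits)  # [0, 1, 2, 3]
```

    Under this model, adding memory boards in interleave-friendly groups raises sustained bandwidth, not just capacity, which is the point of the SIMM/DIMM pairing rules above.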

  • It would be okay if they didn't go making life difficult for overclockers as well.
  • by FreeJack1 ( 203705 ) on Thursday July 06, 2000 @01:43AM (#954728)
    If you think about it, all they (Intel, AMD, etc.) have to do is release CPUs at incremental speeds of, say, 100MHz, 1000MHz, 1500MHz, etc., then leave it to us to overclock 'em (or, in the case of the weak of heart, underclock) to whatever speed we like! I run a P90 at 750MHz and have all the heat for my house I need! Shoot, I even installed a potentiometer in the front of my case to dial in whatever speed I feel like running at! (Great for those older DOS-based games!)

    All in favor, say "HI!"

  • by Jon Erikson ( 198204 ) on Thursday July 06, 2000 @01:50AM (#954729)

    The practice of releasing new chips with negligible differences in clock speed every few months has always seemed stupid to me. The sort of people who have to have the latest and fastest processor aren't going to be impressed when their brand-new, expensive X MHz processor is superseded by the even newer X+25 MHz chip two months later, and many of them will probably just wait.

    And it must also be costing Intel money to keep releasing incremental upgrades, simply because they then have to lower the prices of the now slower chips to get them to sell in the face of their latest speed king.

    This pissing contest between AMD and Intel hasn't done Intel any favours at all, and it's probably time for them to take stock of where they went wrong.

    Jon E. Erikson
  • Does anyone else think that Intel might have cancelled this because they're having trouble with yields on full-speed 1MB and 2MB caches? I mean, c'mon, has Intel ever backed down on a new product simply because no one really needed it?
  • Um, you have a good idea; however, Intel's only major problem for quite some time has been keeping up with the demand for processors. (I've been told that this is the only reason AMD has been able to get the foothold in the market that they have.)

    Under this circumstance, it only stands to reason that a request from the manufacturers to Intel to slim down the constant advancements in CPUs would be welcomed by Intel, so that they may get a step or two ahead on their current production lag time.

    Captain, I'm processing as fast as I can!

  • That's a first: the interests of the consumer have been put above the Marketing department!
    I wonder if this trend will catch on, or if this is simply a blip?!

  • Not really. Servers are meant to be stable and fast, with stable being more important than fast. In servers, the bottleneck is generally not the processor but the I/O subsystem. So small incremental increases in processor speed don't add much except for boatloads of testing to ensure the hardware is stable.

    Compare that to the desktop/workstation market, where there still exist a number of processor-bound applications. Any increase in processor speed is greatly appreciated in those applications.

  • by Anonymous Coward
    I agree about there not being much point in just upping the clock speed. Do you remember when the 486DX2 was introduced? Lots of people thought it was pointless: the processor runs at twice the speed of the rest of the system! Daft! You'll just create huge memory bottlenecks!

    And where are we now? 5 times, 6 times?
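    The multiplier arithmetic behind that rhetorical question, with a few sample parts (clock figures quoted from memory, so treat them as illustrative):

```python
# Core clock vs. front-side-bus clock: the 486DX2 doubled the core while
# the bus stayed at 33MHz; later parts stretched the ratio much further.
# Entries are (bus MHz, core MHz) -- figures are illustrative.
examples = {
    "486DX2-66": (33, 66),
    "P2-400": (100, 400),
    "P3-733": (133, 733),
}
for name, (bus, core) in examples.items():
    print(f"{name}: {core / bus:.1f}x multiplier")
```

    So by mid-2000 the core really was running at five to six times the bus, which is exactly the memory bottleneck people predicted back in the DX2 days.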
  • You really don't want Intel to be the one solely responsible for designing the "next-gen" PC.

    If that happens (and it has happened to a great extent, e.g. PCI, AGP), you can be sure that they will do their best to lock out other CPU & chipset makers.

    On the other hand, no one seems to accept innovations by other companies.

    The 320 & 540 SGI x86-based workstations dumped the traditional PC BIOS crap in favour of a workstation-style boot-level PROM architecture. And they got criticized for being non-standard!

    People are sheep! (not anyone reading this of course)
  • On one hand I'm skeptical. They sure haven't been hurting much in the face of their traditional release methods.

    Perhaps this is all in preparation for something huge they have planned, and they want to drop it on the market in front of a considerably awed consumer base.

  • It's about time, think about it... the 700MHz came out, then the 733MHz. 33MHz, what was the point in that?
  • I wish CPU manufacturers would concentrate on making existing models cheaper to produce, and therefore cheaper for us to buy, instead of spending so much time making them faster.
    If they did this and prices fell significantly, it wouldn't be long before 2 or 4 processors in even a desktop machine became commonplace, giving us all faster machines and driving the per-CPU price down still further.

    Honestly, I don't actually know much about why this might be or might not be possible or even desirable. I welcome any comments or reasons why I am wrong. It just seems like a good (albeit uninformed) idea to me.

  • But when will they design a good architecture around the CPU? Practically speaking, everything outside the CPU is CRAP on x86 platforms. Yes, we're talking about a 1GB/s burst rate with the AGP bus, but that's only for the gfx board. Back in... was it 1992? SGI had a 1.6GB/s continuous transfer rate on the Indy. And that was for all components. Just compare x86 to any Sun or SGI workstation. I'd choose an old Sparc anytime over a new PC.
  • Common sense is taking over, whoooooo hooooooooooooo!!!!!!!!!!!! These small speed increases were getting on my nerves. It was just giving people excuses for writing sloppy code. I mean, imagine if we had maxed out at 200MHz for a while and never passed 233. All the software of today would have had to be written for the chip of yesterday. Like console systems, e.g. the Playstation, which was out for years yet the games just kept getting better. The software would have had to become better to keep people coming back for more.