Intel Employees Speak Out On Rambus Debacle 89

Coupland writes: "A fascinating article from Electronic News Online discussing the fall-out within Intel caused by the Rambus nonsense. The troops seem to be breaking rank." This is definitely the most informative article I've seen on the Rambus/Intel relationship, and it includes a timeline that pretty much sums things up. (What it doesn't mention is the trouble caused for PC manufacturers like Dell, Gateway, etc. by the constant cycle of delay and deny.)
This discussion has been archived. No new comments can be posted.

  • Is it just me or does this story get more inexplicable every time it crops up?

    Here's some more Intel/Rambus goodness. [theregister.co.uk] Mmmm... 64 MB of RAM with every processor.

    This is insanity. Go AMD!


    -----------
  • by JurriAlt137n ( 236883 ) on Wednesday October 25, 2000 @12:02AM (#678164)
    As far as I'm concerned, this is one of the best things to have happened recently, at least from the perspective of the end user. Big corps like Intel that go through this kind of trouble often rediscover some of the spirit they had when they were just starting up. Instead of sitting on their asses and enjoying the fact that they are the market leader, they will have to fight back, which can only result in better quality and performance for end users. It will also allow AMD to catch up even further, which might result in a nicely balanced competition between the (currently) major chip-builders.

    People may get fired or quit of their own accord, and that is a bad thing for those people personally, but the fact that new people and new ideas will enter the company might bring some major improvements.
  • by pb ( 1020 ) on Wednesday October 25, 2000 @12:14AM (#678165)
    What's this with having parts of the contract blacked out? I've never heard of this. Is this a common practice?

    I've heard a few too many stories about heavy-handed tactics by Intel when dealing with their employees, or other corporations, so somehow it does not sadden me to see them trapped by RAMBUS. Maybe this will be a welcome breather to get some competition back into the industry.

    In any case, I'm pretty happy with my Athlon. :)
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • at least for a little while. Who knows, with a little luck you can get an employee discount on one of these puppies. Considering the prices mentioned on the page the parent has linked to, a little discount wouldn't hurt :-(
  • I love that illustration of the road sign near the bottom under "The Tumultuous Intel-Rambus Relationship", with a guy hitting his head on a bus mirror in frustration.

    --
  • Well, I am glad Intel is finally changing their tune.
    I pity the Intel engineers. What they have gone through is terrible.
    I have been in situations where I was told how something was to be built, what technologies to use to build it, exactly what it was to look like, and had early designs (built by marketing people) thrown in my face and been told to just modify those. In short, no creative thought.
    The result of this? One of the worst websites I have ever built. I had to spend long hours trying to find ways to get around their restrictions (which conflicted with each other) in order to get the site to be functional.
    If I had been given more flexibility I could easily have gotten up something better looking that was easier to maintain (this was not), in about 1/4 the time. Remember, if you get into a position of power (or are in one), listen to your technical people, and allow them to be creative. They will be happier and your products will be better for it.
  • by Bug2000 ( 235500 ) on Wednesday October 25, 2000 @12:26AM (#678169)
    Mistakes at this level can really be lethal. Intel imposed its top-down approach even though bottom-up surveys showed that it was a poor decision... But the war for market share tends to impose these top-down decisions as a matter of survival.

    In these times of huge mergers between giant companies, it is quite likely that this will happen again, and at much larger scales (ouch!!). AOL-Netscape was a first example, Intel-Rambus another one; who's next?
  • (I wonder what hitting 'return' did just now, oh well)

    Gateway uses Athlons now. I don't know if they still have any Intel-based machines. But they're certainly not as dependent on "Chipzilla" (excuse me) as, say, Dell is. And it shows in their quarterly report, where they're just about the only computer maker that's been able to stay afloat this year.
  • At 2:30am I do not use proper punctuation, proper spelling, or proper grammer. Deal with it.
    :-)
  • I don't know about you, but so far in tech support I've had nothing but trouble with both Athlon processors and Gateway computers. Correct me if I'm wrong, but as far as the average Joe is concerned, the moment he plugs his new USB toy into his computer, Windows should ask for a floppy, he puts in the floppy, and it works. This is what you get when using a PIII or a Celeron on a BX chipset. On other types of computers you might still be able to get it working with a little trick here and there, but this is usually way too complicated for the average luser. Athlons are great processors, but most of the motherboards that support them so far are nothing but trouble as far as I'm concerned.

    Ah well, that's what keeps us tech-support people working, I guess.
  • ...yes, inexplicable. What baffles me is, why do we even care anymore? With AMD picking up SMP, and chip-makers actually making AMD SMP boards available, the only need for Intel and Cyrix is to keep AMD innovating at the rate they've been innovating.

    Just my two cents.
    -C
  • Say what you want about Intel, they did take the chance, got into trouble, and now they're admitting they made a mistake. This is definitely something that can't be said about all companies.
  • by dnnrly ( 120163 ) on Wednesday October 25, 2000 @01:07AM (#678175)
    It is entirely possible that this development could have an effect on the price of memory. There is something to the theory that the perception that people are going to need more non-Rambus memory could lead to a supply shortage and drive up the cost. Any economists have any ideas?

    As for Intel destroying the trust between management and those engineers, I think this is pretty dire! Those engineers are going to start thinking twice before giving their honest opinions on things. It could very well lead to management not getting the information that they need, like honest appraisals of new technology. If the engineers think that management is going to push for something anyway, then people are just going to fold. Bear in mind that I use the word 'management' here loosely!

    dnnrly

  • I know it doesn't matter. And you're confusing serious with polite.
  • Isn't this illegal? In many countries it is illegal to tie one product to the sale of something else.
  • No... Look at any car you buy: Ford does not make the tires on the cars they sell. Yes, bad partners are a problem even for them. Think Firestone!
  • Heck with just a chip... I saw that Intel is giving its employees a complete computer system with printer and free Internet service. It was announced about 6 months after Boeing in Seattle did it. Sweet.
  • Brevity is indeed the soul of wit.

    I actually missed that graphic the first time I read that article, I think I was in banner ignore mode.
  • Deal with it.

    I'm happy to deal with such things from people in a hurry with something to say, people learning the language, typos, whatever.

    Not from people who can't be bothered to make the effort. If you're working late, do you drive home in this state? For all our sakes, learn when to get some sleep.

  • by Anonymous Coward
    Intel didn't "take a chance" on a new and better technology - the financially focused executives saw a was to make scads of cash and didn't think it was necessary to consult the engineers who knew something about it, or even that the engineers had a valid opinion, IOW they (the execs) know best. I have seen this happen an awful lot -- management issuing "top down" designs, failure to listen to technical people who know, refusal to admit that anyone below the executive suite or, by extention, techies who go along with them ("disagree and conform"). What's happened as a result of this is major brain drain and when said all-knowing execs realize they really do need technical people who know they have usually moved on.
  • by Aceticon ( 140883 ) on Wednesday October 25, 2000 @02:21AM (#678183)
    The problem is that the people they lost might very well be some of the best among them - this is not good for any company.

    The article seems to indicate that Rambus adoption was completely a high-level decision, and that the input from the lower levels (the engineering team) was not only disregarded but also, for those who persisted in voicing their disagreement with the technology, punished.
    Although I believe that choosing Rambus was a bad move, I think that:

    1. "outsourcing innovation" (to Rambus Inc)
    2. ignoring or even suppressing internal opinions
    were by far the worst moves that Intel could have made.

    Think about this:

    1. It's more than obvious that Rambus Inc exists not to serve the interests of Intel, but to serve the interests of its own members and/or shareholders
    2. Ignoring the opinions that come from experience, and taking punitive measures against those within Intel who were brave enough to stick to their opinions, will just push the most knowledgeable and daring out of Intel - probably the same people who are most willing to voice/try new and innovative ideas - and leave Intel with less free spirits and more zombies. Zombies do not innovate.
  • ...about my Athlon TB 750 or the MSI 6330 motherboard I'm using. I've certainly never had any problems with USB.
    --
    01 13 19
    TVDJC TDSLR AZNGT NWQSH KPN
  • Intel does take risks. This is part of what R&D is all about. Something in its infancy may look promising. There is no way to prove an untried technology other than to give it a commitment and go with it. Some ideas did very well. Putting a math co-processor on the chip was one, making a Pentium was another, breaking 100 MHz was yet another. Transitioning to copper is the next thing to meet the need for speed. Going after the server market with a 64-bit processor is another. Will any of them be a guaranteed success? They won't know until they try. It's all part of growing and development. So they gambled and lost that one. If they didn't try and stopped growing anywhere along the way, they would have died. It's do or die in this industry. I don't think they will cancel other possible wins because of an occasional loss.
  • hehe, I like the feeling when driving home late at night and you (the driver) suddenly "wake up":

    "already at home? I can't remember driving more than i few 100 meters"

    ;-)

    but we don't have much traffic around here at night...
  • by Anonymous Coward on Wednesday October 25, 2000 @03:05AM (#678187)
    --Now this may go against common opinion, but in a team atmosphere, Intel's so-called "disagree and commit" thing is a common requirement. In general it doesn't mean "shut up and do what management says", it means if the whole team agrees on a particular solution, then you can't have the few who disagree continuing to undermine what you're trying to accomplish.

    For example... pretend I have 10 designers working on an ASIC, and one thinks the protocol we are using sucks. The majority agreed that this thing has a good chance to perform and sell well, but this guy was the odd man out. Now... what do we do? Do we throw away the other 9 opinions and say: "ok, scrap this, we'll do what you want"?

    I've worked with guys like this before. Not only do they refuse to accept the team's decision, but they continue to profess their negative opinions at every chance possible.

    The only reason you guys are eating up the negative view of a single ex-employee is that in this case, Rambus did have problems. Even though he may have been right about Rambus, it's still tough for me to believe that "employees got bad reviews because they spoke out against Rambus". Chances are, this guy got a bad review because he was being counter-productive.

    -This is the opinion of one guy, just like that article.
  • Pardon me for butting in, but I don't recall Dell posting a loss this year. Dell's problem was that they did not meet earning expectations. Gateway, on the other hand, surpassed their earning expectations.

  • Actually, with the IT industry going the way it has been, this could mean that the good people would walk rather than stay out of a "misguided" sense of loyalty. I mean, I have a habit of not hanging around where I'm not wanted. The engineers would probably easily find new jobs.

    Also, this might be considered a badge of courage. At least new employers would know you'd speak your mind.

  • by Anonymous Coward
    Zombies do not innovate.

    Never heard of Microsoft?
  • Fuck you. Feel better now?
  • No, I've read a lot of things relating to how crappy things are for employees at Intel. I don't feel like digging up links, but do a Slashdot search on things that Intel's current and ex-employees say about the place.
  • by cyber-vandal ( 148830 ) on Wednesday October 25, 2000 @03:42AM (#678193) Homepage
    Yes, I was amazed to see that the opinions of the technical experts were ignored by the management. Surely this sort of thing is rare in IT (snigger)
  • > [I'd love to work there,] at least for a little while.

    I wouldn't work at a place like that if they paid me to.

    Er, uhm, never mind.
  • You're entirely right. This was driven more by "how can we corner the market" than by "how can we deliver a better product at a better price".
  • > Gateway uses Athlons now. I don't know if they still have any Intel-based machines.

    I was in one of their outlets last week, and they did still have Intel (as well as AMD) on display.

    They even had the elusive 1 GHz Intel systems, which must finally be shipping in quantity. (Maybe even a 1.1 GHz - I don't remember, because I'm not really interested enough to remember - not interested in paying Intel's price. They carry GHz AMD systems, too.)
  • The troops seem to be breaking rank.

    No, they're not. Intel management has openly said their dealings with Rambus were a gamble that didn't pay off, etc. Seems the troops are toeing the party line to me.

  • > In retrospect, it was a mistake to be dependent on a third party for a technology that gates your performance.

    Is that verb derived from the noun "gate", or from Bill Gates' name?
    Windows really
    gatesed the performance of my desktop supercomputer, but I scratched the disk, so it can't gates me any more.
  • by Anonymous Coward
    Though those types of people exist, I really think that this was probably a case of bringing in external engineering teams (from RAMBUS) that have a vested interest in using their own product, and taking their opinion over that of Intel's own engineering team. Thus you have a conflict of interest.

    And what constitutes being counter-productive? Perhaps in this guy's case, he pointed out that X, Y, & Z modifications would allow the chipset to support SDRAM as well, not just RAMBUS, thus making the product more flexible. Well, then you have the RAMBUS team go crying to management about "counter-productive" activities, when in actuality it just messes up their agenda.

    The fact is that the RAMBUS people are *not* on Intel's team.
  • You're posting to Slashdot. Sorry, you just disqualified yourself from being either a luser or an Average Joe.
  • As more and more of this comes out, someone should take the articles and use them to wallpaper the office of the CEO of AMD. Let them learn from this also; they have been handed a free pass to the top of the chip-maker pile with this snafu by Intel. Let's see if they can avoid the cycle that seems to creep up once people get to the top nad get full of themselves.
    AMD, please take this as a warning of the rocks ahead. If I ran AMD I would have the headline from Intel's press release announcing they are going with Rambus and the headline from their announcement that they are dropping Rambus etched in bronze and placed at the entrance to every common area in the company. Put them there to serve as a reminder of what can happen when you get too big for your britches, plus it would provide amusement for a while to everyone as they watched Intel suffer :-) .
  • Rambus was soundly rejected as a long-term memory platform months ago, by the industry and by the public at large simply refusing to buy it. At this point it's pretty much common knowledge that the vast majority of new computers will not be shipping with RDRAM.

    What might have an effect is a boom in the sale of new computers when all the new tech comes out, but a price increase won't be caused by Rambus. The market already knows that it's a dead end.
  • At 2:30 AM the bulk of driving is done by drunks (well, and cops hunting drunks). "It wasn't his fault, he tested 0.00."
    Believe me, I know; I drive a taxi in the wee hours. The worst drivers come out between two and three (last call) and between four and five. After five, the espresso stands open.
  • Now, if only Boeing was giving away free 747s to its employees...
  • by jht ( 5006 ) on Wednesday October 25, 2000 @04:36AM (#678205) Homepage Journal
    Gateway uses Athlons in some of their consumer PC lines (like the Select series), but their "corporate" systems (the Enterprise, or E-series) all use Intel chips. They have one desktop (the E1400) that's i810-based, one (the E3400) that uses the i815 without the built-in video, and a model (the E4400) that uses the i820 and RDRAM.

    The difference is that the E-series have longer product lifecycles and offer more consistency in the devices that they use (for instance, they offer the same video card and Ethernet card throughout the product's lifespan). The lifecycle also runs longer - usually about 18 months compared to the 6-12 months that a consumer PC might be available.

    Most top-tier PC makers do something similar. The bleeding-edge and "cool" technologies go into consumer PC's (which small businesses also usually buy), and Big Business buys the managed systems (which are relatively boring, but consistent). Dell, as another example, has the Dimension PC's for home/small business, and they offer the Optiplex for their managed line (we used to be a Dell shop and switched to Gateway earlier this year).

    When I last discussed their roadmap plans with Gateway, they were starting to consider the possibility of adding an Athlon-based E-series PC, but it's still a little immature to them.

    - -Josh Turiel
  • I actually thought it implied a little more violence than that...

    The man was leaning out on the road, waiting to cross. Standing a little too close, he's about to get his skull shattered.
  • Dell is the only major brand computer manufacturer that won't touch AMD. HP, Compaq, IBM, Gateway, etc. all do.

  • Give me a candidate who speaks out against the war on drugs.
    (And isn't a total moron about everything else!)
    How about Harry Browne [harrybrowne.org]?
  • I have heard several times that Intel isn't a good company to work for. This just goes to prove it. A good company would openly solicit constructive criticism on issues like this - checks and balances to ensure they don't screw up.

    Punishing employees for disagreeing is highly vindictive and very myopic. This can only hurt Intel.

    Intel is not a company I'd like to work for.

  • once people get to the top nad get full of themselves

    ...I just like the concept of "top nad" :)
  • by casio ( 90859 ) on Wednesday October 25, 2000 @05:42AM (#678211)
    I left Intel after 15 years. (I've been out 10 months.) I think the main reason for disasters like Rambus and many of the other execution problems is that the traditional Intel culture has been allowed to slip away. Believe it or not, the internal culture revolved around responsibility and accountability. Around 6 years ago that started to change. Dissenting opinions were not welcomed (shoot the messenger), and too many decisions are being made too high up the chain (specifically, technical decisions).
  • Gateway's long-term goal is 50/50 Intel/AMD. Right now they are at about 80/20.

    The whole industry is 85% Intel, 9% AMD.
  • > > Give me a candidate who speaks out against the war on drugs.
    > > (And isn't a total moron about everything else!)

    > How about Harry Browne?

    That's what my parenthetical statement was added to fix.
  • ...that Intel screwed itself by committing to a new technology. Large corporations are much more likely to be conservative and screw themselves by refusing to bet on new technology -- take Xerox.

    Well, I guess in both cases, management refused to listen to [at least some of] its engineers. Maybe that's the problem. Maybe if Intel engineers took a few more showers and ate a few more breath mints, this never would have happened.

  • Oh, let's go for that bright and shiny 1,500,000,000 Hz one, with all the blue headed men [theregister.co.uk] surrounding it. Just make sure I don't get any more than 64 Meg of RDRAM! For this premium I want to run constrained! I want to hear that paging drive Rattle and Hum!


    --
  • This was driven more by "how can we corner the market" than by "how can we deliver a better product at a better price".

    Absolutely not true. Your statement demonstrates you haven't been following this issue at all. I know not everyone on /. can know everything about every issue, so read this and learn.

    Intel went with RDRAM for 3 reasons, BANDWIDTH, BANDWIDTH, and BANDWIDTH. Intel was the only company trying to do something about the frontside bus bottleneck. They wanted faster frontside buses AND faster AGP being served by one memory channel. SDRAM honestly doesn't have the bandwidth; that is why there are 16, 32 and 64 MB of graphics memory on all of your shiny Voodoo or GeForce cards: b/c SDRAM can't handle it.

    Intel wanted faster FSBs and graphics performance, so they absolutely had no choice but to go with RDRAM.

    Add to that problems with latency and ridiculous electrical specifications, not to mention logic bugs coming from a first implementation of new IP (and I doubt RAMBUS helped them)... you see the point. Intel failed b/c they tried to put an untested technology into the mainstream from the get-go.

    The memory bandwidth bottleneck still exists; at least Intel tried to do something about it.
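
    A rough back-of-envelope sketch of that bandwidth argument (peak theoretical numbers only; the bus widths and transfer rates here are assumptions, and real-world throughput is lower): a single PC100/PC133 SDRAM channel can't even cover the CPU front-side bus plus AGP 4x on paper, while dual-channel PC800 RDRAM can.

        /* Rough peak-bandwidth arithmetic for the argument above. These are
         * theoretical peaks under assumed bus widths and transfer rates;
         * real-world throughput is lower. */
        #include <stdio.h>

        /* peak MB/s = (bus width in bits / 8) * transfer rate in MT/s */
        static double peak_mb_s(int bits, double mtps) { return bits / 8.0 * mtps; }

        int main(void) {
            double pc100 = peak_mb_s(64, 100);   /* PC100 SDRAM DIMM               */
            double pc133 = peak_mb_s(64, 133);   /* PC133 SDRAM DIMM               */
            double fsb   = peak_mb_s(64, 133);   /* 133 MHz, 64-bit front-side bus */
            double agp4x = peak_mb_s(32, 266);   /* AGP 4x                         */
            double rdram = peak_mb_s(16, 800);   /* one PC800 RDRAM channel        */

            printf("PC100 SDRAM : %5.0f MB/s\n", pc100);
            printf("PC133 SDRAM : %5.0f MB/s\n", pc133);
            printf("FSB + AGP 4x: %5.0f MB/s of peak demand\n", fsb + agp4x);
            printf("PC800 RDRAM : %5.0f MB/s per channel (%5.0f dual-channel)\n",
                   rdram, 2 * rdram);
            return 0;
        }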

    AMD is sitting around waiting to see what floats to the top: DDR or RDRAM, or maybe ADT. Why should AMD try to pioneer something when all they have to do is wait for Intel to demonstrate the best path???

    Sheesh. Do some research.


    ---
    Unto the land of the dead shalt thou be sent at last.
    Surely thou shalt repent of thy cunning.
  • Fortunately for Intel, they didn't have to take any risks, since every single one of the things you mentioned was done by someone else first. Hell, the Alpha alone did all of them before Intel did. Not one of these technologies was "in its infancy" when Intel deployed them.

    The only risk Intel takes in deploying any of these technologies is the risk that Intel customers won't buy them. That's the risk every company takes when introducing a new model. While yes, it means Intel is taking risks, none of the risks Intel takes actually advance the state of the art.

    --
  • by Anonymous Coward
    Yes, I was amazed to see that the opinions of the technical experts were ignored by the management. Surely this sort of thing is rare in IT (snigger)

    Intel is not an IT organization. IT organizations build and operate production systems used to manage companies whose core business may lie elsewhere entirely. IT organizations don't do development of new products or technologies.

    Intel is an engineering organization. In successful engineering organizations (which Intel has been), technical experts do have a very large say, though not necessarily a final one; that technical experts were ignored in this case is actually quite surprising.

  • I personally hope that Intel survives this. There's an accurate post higher up in this discussion about how Intel was trying to get around the memory bandwidth problem with RAMBUS. Unfortunately, this didn't work out (the gamble didn't pay off, as Intel's CEO stated) and now Intel is much like somebody babysitting the neighbor's rabid dog. They can't just kill it because it's not theirs and the neighbor won't be back (for 2 years), and it's going around the neighborhood mauling everybody's children. This is making everybody else in the industry hate the do.. err, RAMBUS.

    The whole "AMD RULeZ0rS INTEL SUx0rS LUNIX 4 EVER!" thing on slashdot is getting rather tiresome. I personally run an Intel pentium III 800 system which I'm extremely happy with (the CUSL2 implmentation of i815e rocks). I initially had an Abit KT7-RAID and an Athlon Thunderbird 800. I had a sound problem so frustrating (and PCI issues) that I had to return the board and ebay the processor.

    What makes AMD the "l33t slashdawt hax0r" favorite? The reason that AMD hasn't been involved in any of the supposedly strongarm tactics Intel has been using is that they have been concentrating on one thing: putting out faster processors that undercut Intel's prices. If they manage to drive Intel out of business while Intel's stuck with this RAMBUS bullshit, then what? You've got AMD ruling the roost, and they can do whatever they want. Next thing you know Slashdot will swing over to "AMD SuX0rs!! CYRIX 4 EVER!!".

    Also, it's not as though AMD has never made a mistake. Witness the whole Slot A / Socket A debacle. "Here's a new packaging system for our future processors! Oops wait, we're gonna go with a socket package instead, sorry there's no upgrade path, why don't you spend some more money?" And as the earlier comment I mentioned stated, all AMD is doing is keeping quiet and waiting for the next big memory technology to sort itself out - it's not taking the enormously risky and expensive chances Intel took with RAMBUS to try to break the bandwidth barrier.

    Basically.. Go Intel :)
  • No, they're not. Intel management has openly said their dealings with Rambus were a gamble that didn't pay off, etc. Seems the troops are toeing the party line to me.

    Not really; Intel management has focused on more political issues. In particular, Rambus' tollkeeper business model. They didn't say anything at all about poor technical decision-making.

    These engineers who are being quoted are describing the Rambus deal as a poor technical decision, not a poor business decision. For a company like Intel, whose marketing depends heavily on a widespread perception that they make good technical decisions (though maybe only on the part of those who are not well-informed, technically), the former is a much bigger deal than the latter.

  • Readers of Face Intel [faceintel.com] might have predicted that something like this was just waiting to happen.

    That site is full of stories of disgruntled employees who are fired, demoted or reprimanded for trying to innovate or not following the company line. Assuming the stories are true, Intel is in a very sad state internally.

    --

  • I think it's great that Intel is working to get Rambus figured out. OK, AMD has the speed (for now) but it's not going to last. EVERYONE knows the bottleneck is memory. DDR is not going to be able to keep up. Rambus should sue to protect its patents. This will become water under the bridge soon and everyone will be using Rambus. I can't wait to see the graphics of the PlayStation on my PC! Nintendo & Sony figured out how to use Rambus and they are able to produce better graphics. Rambus will be used in HDTVs to produce awesome high-quality screens & digital-quality pictures. Go Rambus!!

  • 'It's not the first time we've shipped devices with negative margin.'

    If you do things right the first time... this kind of thing will never happen. Same with the 1.13GHz PIII's. I hope they are learning.

  • Some of the terms of the contract, especially the numbers, are not something you want your competitors to know. That holds true especially if the competitor may deal with the same company. RAMBUS wouldn't want AMD to come to them saying "but Intel gets theirs for $1, so $1.10 is the highest we'll go." Not that AMD would go with RAMBUS. There may be other trade secrets contained in the contract that neither party feels competitors AND stockholders should know about.
  • > Intel went with RDRAM for 3 reasons, BANDWIDTH, BANDWIDTH, and BANDWIDTH.

    You're mistaking the "three reasons" that Intel gave the public for the one reason that really drove the decision.

    It's all about cornering the market. This isn't the first time, and it won't be the last time.

    Frankly, I'm glad it bit them in the ass.
  • > I've worked with guys like this before. Not only do they refuse to accept the team's decision, but they continue to profess their negative opinions at every chance possible.

    ...and don't you just hate it when they're right?

    Sorry, but I lost several years of my life by keeping quiet, and going with the prevailing opinion. It's especially irritating when you (because you found the problems, and were the one who kept banging on about them) have to pick up the pieces as well.
    RDRAM is an innovative technology--it is just too expensive and difficult to design for the mass-PC market. I defy you to tell me what the flaw in Rambus technology is.

    Now Rambus as a company, sure I don't like them anymore than the next guy--but that is just because I disagree with their IP/litigation business model.

  • AMD is sitting around waiting to see what floats to the top: DDR or RDRAM, or maybe ADT. Why should AMD try to pioneer something when all they have to do is wait for Intel to demonstrate the best path???

    Let's not make things up now. AMD said from the start they had no current plans to use RDRAM, and has indicated that they may use DDR in their higher-end (server) chipsets. AMD has planned on only worrying about RAM technologies for consumer products in their 751 Irongate chipset. They have made it clear the rest is up to VIA and other third-party chipset manufacturers to decide what goes on the other side of the bus. VIA has been anti-RDRAM in their chipsets AFAIK.

  • Here's one quote from the article itself:

    "The [original Intel 820 chipset] issues were not defects within the MTH. The issues were with the Rambus channel itself and the use of large packages at channel speeds. Technically, the problem has been with microwave-like resonance effects in the component packages, connectors and in the structures formed by these when placed on printed circuit boards."

    Rambus' strict design rules left engineers with little elbowroom to be creative, another industry insider said. "Engineers as a whole don't like being dictated to," he said. "With Rambus' design there's no flexibility."

    Also, here is the Tom's Hardware Guide article: The Rambus Zombie Versus the Wounded Chipzilla [tomshardware.com]. Also, the benchmark [tomshardware.com], which shows the lower performance figures under Rambus.

  • ...I saw Tom Pabst skipping down the street the other day...
  • FACEIntel

    http://www.faceintel.com

  • and furthermore that you know little about memory hierarchies

    How did you glean that nugget? I'm fairly certain I know 10x more than you on the subject.

    Intel bet that going forward bandwidth would be a bigger issue than latency. Whoops. They also knew of the cost, and expected it to drop as volumes increased. Whoops. Heat was also an issue, but betting on bandwidth, this was ignored. Whoops.

    in very specific situations, of which 3D graphics happens to be one, but

    No shit. Intel was chomping at the bit to let loose with 3D as a way to sell more CPUs. That's why they did that horrible 740. They bet everything on 3D.

    given that RDRAM was known at the time not to address those problems any better than DDR SDRAM,

    current-generation DDR SDRAM is at least the equal of current-generation RDRAM for bandwidth, and offers

    Don't you know that DDR wasn't on the table four years ago? Apparently not.

    Whatever problem Intel was trying to solve, the engineers knew that they were trying to solve it the wrong way by using RDRAM.

    Gee, that's a qualified statement. Oh, wait, you probably heard that from "a friend of mine's friend that works at Intel, but wouldn't use his name."


    ---
    Unto the land of the dead shalt thou be sent at last.
    Surely thou shalt repent of thy cunning.
  • Actually, at no point did I say that Intel was without blame. If you were to actually read the content of my post, you'd see how it points out that Intel tried to take advantage of an experimental technology and had problems. To do this, they signed a contract. Hooray! You missed the point! The post describes why they signed it in the first place. Which really doesn't make them morons. Learn to read.
  • No, the real bottleneck is the latency of memory. People just settle for raising bandwidth because they don't know how to reduce the latency. Few applications natively need large amounts of bandwidth. The rest have to find ways of prefetching data, and most often they can't do that. Processors stall waiting on memory loads and stores. Most of the time they ask for small amounts of data. They don't always know in advance where they'll be loading from, and the data isn't always ready long in advance. So they can't always find things to do while waiting for memory. High bandwidth doesn't help you here. You're limited by the latency of the first access.

    An example: RAMBUS currently has higher bandwidth than SDRAM, but it also has larger latency. It wins for high end 3D rendering software but loses for everything else. Bandwidth at the expense of latency is only beneficial to a small market segment.
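
    A tiny illustrative calculation makes the point (the latency and bandwidth figures here are made up for the example, not datasheet values): once each small dependent fetch has to pay the full first-access latency, the 2x peak-bandwidth advantage disappears.

        /* Illustrative only: made-up latency and bandwidth figures, just to show
         * why small dependent fetches are latency-bound rather than bandwidth-bound. */
        #include <stdio.h>

        /* effective MB/s for back-to-back dependent fetches of `bytes` each,
         * given latency to first data (ns) and peak streaming rate (bytes/ns) */
        static double effective_mb_s(double bytes, double latency_ns, double peak_b_per_ns) {
            double per_access_ns = latency_ns + bytes / peak_b_per_ns;
            return bytes / per_access_ns * 1000.0;   /* bytes/ns -> ~MB/s */
        }

        int main(void) {
            /* hypothetical parts: an "SDRAM-ish" and an "RDRAM-ish" memory */
            double sdram_lat = 50.0,  sdram_bw = 0.8;   /* ~800 MB/s peak                  */
            double rdram_lat = 100.0, rdram_bw = 1.6;   /* ~1600 MB/s peak, higher latency */

            printf("streaming:     SDRAM %4.0f MB/s   RDRAM %4.0f MB/s\n",
                   sdram_bw * 1000, rdram_bw * 1000);
            printf("64B dependent: SDRAM %4.0f MB/s   RDRAM %4.0f MB/s\n",
                   effective_mb_s(64, sdram_lat, sdram_bw),
                   effective_mb_s(64, rdram_lat, rdram_bw));
            return 0;
        }

    With these made-up numbers the higher-bandwidth part actually comes out slightly behind on dependent cache-line-sized reads, which is the whole point.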

    The Playstation and the N64 are NOT PCs. Sony's reasons for using RAMBUS don't apply in the PC world. The PSX2 needs only 2 16MB RAMBUS chips, and you could keep adding chips. And RAMBUS involves fewer chips: in the case of the PSX2, you can just solder 2 chips onto the main board. So Rambus is simpler and more compact in this case. SDRAM DIMMs contain larger numbers of chips and have a larger pin count. But this doesn't matter for PCs; there's enough space involved that the issue is moot. And Nintendo is abandoning RAMBUS for their next system; apparently the price of RAMBUS wasn't enough to justify the simplicity.
  • I have been wondering all day why Intel would announce this. They are not noted for their "whoops, we screwed up" policies. Then it dawned on me: they are setting up a fall guy, sacrificing a pawn as it were. If the Pentium 4 falls flat on its face, it is not Intel management's fault, it is Rambus!

    With this announcement Intel has created a bogeyman that they can admit to. Bad earnings? Blame it on Rambus. Poor rate of shipped product? Blame it on Rambus. This way they have a way to pin any losses or market slips on a bad business deal, keeping their shareholders happy. "We are not losing market share because we let our technology lead slip due to bad management. We got ripped off by our partner, that's why! It's not OUR fault."
  • Claims are cheap, evidence is scarce. The statements made here are all that count, and those statements demonstrate a profound ignorance of the technologies and issues involved. If you know more, use that knowledge in your posts.

    Exactly. That's why I don't understand why you can call me 'ignorant'. What wasn't clear about my statements? I made observations about Intel's decision based on products and timeline. The person challenging me piped up and said I didn't know anything and then proceeded to talk about latency, and then slammed me personally again.

    In my view I made the more intelligent posts. Should I have seeded my post with buzzwords? OK: skew, timing, reflection, leadoff, page miss, row miss, page hit, column address strobe. There, now you know I'm smart.

    I suppose I could spout even more buzzwords about memory hierarchies -- which the previous poster said I know nothing about, and which are fairly irrelevant to the discussion. Sure, most benchmarks run in the cache, blah blah blah, Spec, blah, Ziff-Davis, blah. (Maybe some genius will now explain to me what a cache is.)

    Again, because you completely ignored my original point: Intel was extrapolating into the future in 1996. That's why they wrote the first version of ZD 3DWB'97 and did the 740 dance. Everything pointed toward 3D as the future and only RDRAM would support AGP 2/4X (NGP) and superfast FSB.

    Someone made a claim that Intel was trying to corner the market by using RDRAM. Since it is obvious there is _no_ market for Intel to corner wrt memory, and based on the changes in Intel's technology, I simply pointed out the obvious reason why Intel tried to transition.

    But apparently I forgot to slam Intel in the process. So much for trying to raise awareness.


    ---
    Unto the land of the dead shalt thou be sent at last.
    Surely thou shalt repent of thy cunning.
  • > "The [original Intel 820
    > chipset] issues were not defects within the MTH.
    > The issues were with the Rambus channel itself
    > and the use of large packages at channel speeds.
    > Technically, the problem has been with
    >microwave-like resonance effects in the component
    > packages, connectors
    > and in the structures formed by these when
    > placed on printed circuit boards."

    All that says is that Intel was having a hard time designing for the low-impedance, high-frequency environment necessary to make Rambus work. When DDR SDRAM grows up to higher clock frequencies (such that it can actually have bandwidth per pin comparable to RDRAM) it will have the EXACT same problems. This just means that the cost-sensitive PC world isn't ready for Rambus signalling, not that the technology itself is flawed.

    > Also, here is the Tom's Hardware Guide article:
    > The Rambus Zombie Versus the Wounded Chipzilla.
    > Also, the benchmark ; which shows the lower
    > performance figures under Rambus.

    All that stupid document was looking at was Intel's pathetic RDRAM implementation that only used two RDRAM channels. The front side bus is actually the bandwidth limiter on that implementation, not the memory, so of course you aren't going to see Rambus in all its glory. I would suggest that you wait until Alpha's EV7 comes out before you pass judgement on RDRAM.
  • and leave Intel with less free spirits and more zombies. Zombies do not innovate.

    No... Zombies take up meaningless buzzwords from large corporations like Microsoft and use them as croutons in their word salads.
  • This [former] Intel engineer's comment has "the ring of truth" to it. Years ago I recall the observation/comment of my Engineering Economics professor: "Engineering Economics is the art of doing with one dollar what any fool can do with two!" Part of the problem might be that large company managements, under pressure from shareholders, (especially institutional shareholders), want the engineers to go even further and do for fifty cents what any fool can do with two dollars. I remember another professor from college and his comment: "The higher [up] engineers go in management, the more out-of-touch they get with [technical] reality."
  • But still, SDRAM will be eight times faster before it runs into the same problems that Rambus did.
  • "Team" is a modern euphamism for "committee". Teams are good where there are legitimate reasons to have them. Unfortunately, most engineering teams amply demonstrate why most of the more significant innovations in history are credited to individuals. Teams can be greater than the sum of their parts, but can rarely exceed the capabilities of their weakest members by a significant amount. The best model is a "skunk works" organization where you assemble the minimum necessary number of the most capable people available. Once you exclude the dull normals and traditional team players, it's amazing how quickly a consensus can be reached. I've never been considered a team player, and I'm rather proud of it. OTOH, as a manager, I've assembled some excellent teams of exceptional individuals where a team was legitimately required.
  • Ah, but you've got to take into account the width of the bus. The wider the bus, the harder it is to build all those big thick traces and the harder it is to do all that impedance matching, etc. That's kinda the point with Rambus--it is easier to build a narrow (8-16 bit), serial-style interface that runs at high frequencies than it is to build a wide, 64-256 bit bus that runs at medium-high frequencies.

    My proof is that there are 800 MHz (1066 MHz in the lab) Rambus parts, right now! Available to be bought. DDR SDRAM is at, what, 133 MHz? With 200 MHz someday?

    Yes, it will be a little easier to build a motherboard for 200 MHz DDR SDRAM than for 1000 MHz Rambus signals, but clearly it isn't THAT much easier, because I don't see the motherboards or the memory out there yet. And keep in mind that even at 200 MHz, DDR SDRAM will have much less bandwidth per pin than a 1000 MHz Rambus setup.
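
    For what it's worth, the per-pin arithmetic behind that last claim looks roughly like this (peak numbers, counting data pins only, with the widths and transfer rates assumed above):

        /* Back-of-envelope peak-bandwidth-per-pin comparison, data pins only,
         * using the widths and transfer rates assumed in the comment above. */
        #include <stdio.h>

        int main(void) {
            /* PC800 RDRAM: 16-bit channel at 800 MT/s */
            double rdram_total   = 16 / 8.0 * 800;      /* ~1600 MB/s             */
            double rdram_per_pin = rdram_total / 16;    /* ~100 MB/s per data pin */

            /* DDR-200 SDRAM: 64-bit bus at 200 MT/s (100 MHz, double-pumped) */
            double ddr_total   = 64 / 8.0 * 200;        /* ~1600 MB/s             */
            double ddr_per_pin = ddr_total / 64;        /* ~25 MB/s per data pin  */

            printf("PC800 RDRAM: %4.0f MB/s total, %5.1f MB/s per data pin\n",
                   rdram_total, rdram_per_pin);
            printf("DDR-200    : %4.0f MB/s total, %5.1f MB/s per data pin\n",
                   ddr_total, ddr_per_pin);
            return 0;
        }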
  • An anonymous coward said:

    do you even work?

    It is said if you enjoy what you do for a living, you'll never work a minute of your life.

    I enjoy what I do (s/w dev). Ergo, I haven't worked in years ;-)

  • > What wasn't clear about my statements?

    It's not what wasn't clear; it's what was clear. And what was clear is that you do not understand that the RDRAM decision was made by Intel's executives, and that over the express objections of their own engineers.

    IOW, it was a political decision rather than a technical decision.

    > Someone made a claim that Intel was trying to corner the market by using RDRAM. Since it is obvious there is _no_ market for Intel to corner wrt memory, and based on the changes in intel's technology,

    Ah, you misunderstood the market that I was talking about. Intel is still trying to corner the PC processor/chipset market rather than the memory market per se. That's why they're so fond of proprietary extensions to the PC architecture, such as RAMBUS, MMX, socket-of-the-week, etc.

    And if the Rambus Scam had gone as intended, Intel's contractually privileged position would have left them sitting pretty w.r.t. AMD.

    At the consumer's expense, of course.

    But it flopped, so Intel is backing out. They'll be trying something else soon enough. Meanwhile, the Rambus robber barons are still trying to use dubious patents to set themselves up as leeches on the entire memory market. That's not a technical decision either, nor does it have anything to do with giving the world's PC buyers a better deal.
  • And if the Rambus Scam had gone as intended, Intel's

    It wasn't a scam. The executives that made the dumb decision based on internal suggestions from analysis engineers realized there was no chipset vehicle to deliver higher performance processors based on SDRAM. The executive decision was rooted in fear that their latest CPU wouldn't look as fast b/c of memory bottlenecks.

    Remember, executives were thinking of 2001 in 1996. They were thinking of the future in terms of: bazillions of polygons per sec for 3D T&L, endless streams of motion comp vectors for DVD/MPEG4 playback, and rapid searching of large phoneme databases to compute hidden Markov models* in text-2-speech and speech-2-text --- that's a LOT of memory bandwidth, kids. They weren't thinking SpecFP. They weren't thinking Winbench.

    Now you all considered it a scam b/c Intel made a partnership to supply high volume for Rambus, while Rambus would let them see their IP. Why is that a scam? People can buy a VIA chipset with SDRAM and slap a PIII into it, can't they? It runs better than the 820, doesn't it? How is that bad for consumers? Intel stumbled trying to make a high-performance chipset. They weren't out to bone the masses. How would their partnership with Rambus stifle competition??? It hasn't, and it wouldn't have if RDRAM had panned out on all fronts.

    So does this mean that any decision made by an executive is political? I'm challenging the perception in this thread that Intel is out to screw the masses. What is the problem with my position?

    * buzzword alert

    ---
    Unto the land of the dead shalt thou be sent at last.
    Surely thou shalt repent of thy cunning.

  • Yeah. I'd go a step further tho.

    It reminded me of the time I came within four inches of a speeding delivery van when I was about to step off the curb and cross the street. The van had run a red light.

    So in the illustration I presumed that the van was still a few seconds away, and we're about to see the guy step onto the road and get a full frontal splattering.

  • by ToLu the Happy Furby ( 63586 ) on Wednesday October 25, 2000 @03:19PM (#678247)
    Fortunately for Intel, they didn't have to take any risks, since every single one of the things you mentioned was done by someone else first. Hell, the Alpha alone did all of them before Intel did. Not one of these technologies was "in its infancy" when Intel deployed them.

    The only risk Intel takes in deploying any of these technologies is the risk that Intel customers won't buy them. That's the risk every company takes when introducing a new model. While yes, it means Intel is taking risks, none of the risks Intel takes actually advance the state of the art.


    That's just because he came up with a bad list. Despite the fact that there are very few totally new ideas in the MPU industry (just as there are very few totally new software algorithms), Intel has indeed bet the farm (well, bet the product line) on some very radical design ideas, both in the past and the present.

    Some were successful, some crashed and burned. One design that was extraordinarily innovative and successful was the P6 core, introduced in 1995 with the PPro. In it, Intel managed to do "the impossible"--execute variable-length x86 code out-of-order, something that was supposed to be only possible with a fixed-length ISA and was even relatively state-of-the-art there. The way they did this was by essentially "emulating" x86 code by decoding it into internal "RISC-like" ops, which could be run OOO. While I doubt this was an entirely new idea, I'm not aware of any previous implementations of it, much less one as wildly successful as the P6.

    One design that was a horrid failure was the iAPX432, an MPU spread out over 3 chips which essentially operated in an object-oriented manner, rather than iteratively like, well, every other chip in history. Perhaps a sign of what was to follow was the fact that the 432's "assembly code" was actually built to closely model ADA, the government's ill-fated OO language. The 432 somehow managed to work, but performed a bit slower than mainstream MPUs from 5 years beforehand. Not too many sold. But there is no doubt that here Intel took a huge risk based on a very interesting idea.

    Nowadays Intel is engaged in exactly the same "risky" design behavior in an attempt to further the state of the art. The P4 contains several totally new innovations. Perhaps most prominent is the trace cache, an L1 instruction cache which instead of just dumbly storing instructions, orders them safely and unrolls loops, allowing branch- and dependency-free operation for large swaths of code. In addition, the trace cache stores those internal "RISC-like" ops, not x86 ops like a normal instruction cache; this takes the x86->"RISCop" decoder out of the critical path and should result in higher top-clock-speeds and excellent performance on small looped code which can fit in the L1 trace cache--3D engines, encryption, and FFT (i.e. audio/video encoding/decoding, voice recognition), for example. Trace caches are not a new idea; they've apparently been studied quite a bit in the literature. However, the P4 is the first commercial MPU to include one, and that's a substantial engineering innovation.

    Another innovation which is, from what I've heard, actually a totally new idea is the P4's double-pumped ALU and supporting hardware. While the idea of different pieces of hardware running at multiple speeds is of course not new, this is apparently the first time it's been worthwhile to implement it on-die in a commercial MPU. More impressive is the fact that Intel was actually able to get an ALU--one of the most studied logic circuits in history--to run at up to 4.0 GHz in current .18 um process technology. Apparently the way they did this is by implementing a new, lower-latency adding technique. This is the circuit-design equivalent of finding, for example, a faster sorting algorithm; it represents a very impressive achievement. While the double-pumped ALU will likely not have as large an effect on overall P4 performance as the trace cache, it should help out noticeably and it's definitely a radical design.

    On the other hand, we have Intel's upcoming IA-64 ISA, an attempt to move the VLIW philosophy from specialized DSP work into general-purpose computing. Again, VLIW is not a new idea, and the idea of a VLIW general-purpose MPU is not either. However, the Itanium is one of the first attempts to actually build one (Transmeta's Crusoe is the other).

    Furthermore, it represents quite a risk from a performance standpoint. The basic idea behind VLIW is to in effect take the RISC revolution one step further. While the RISC vs. CISC debate is often treated as a fair fight capable of producing one victor, the reality was quite different. (The following is essentially a synopsis of this excellent article [arstechnica.com] on ArsTechnica.) Instead, each was the best ISA philosophy for the prevailing conditions at the time. CISC was the best design choice for its time--that is, up until the early 80's--and "pure RISC" the best for its time--from the mid 80's until the mid 90's.

    The main issues involved the evolution of storage capabilities and compiler technology. First a broad comparison of what CISC and RISC actually mean: CISC refers to a category of ISAs in which a new instruction is conceived of to take care of every possible situation. A (made-up) example of a CISC-like instruction is the following:

    CRAZY_OP, mem1, r1, mem2

    which does the following: load mem1 from memory, take r1 from a register on the chip, compute (mem1 - r1) / r1^2, and store that in mem2. And there actually were some CISC instructions which were nearly that crazy. The RISC philosophy, on the other hand, would break that one operation down into many--one to load mem1 to a register, one to subtract mem1-r1, one to multiply r1*r1, one to divide the two, and one to store the result, for a total of...lessee...5 instructions (see the sketch just below).
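
    Here is a rough sketch of that decomposition, written as C-as-pseudocode just to make the five steps concrete (the register and memory names are placeholders, not any real ISA):

        /* A sketch of the made-up CRAZY_OP decomposed into the five simple,
         * RISC-style steps described above. Names are placeholders, not a real ISA. */
        #include <stdio.h>

        int main(void) {
            double memory[2] = { 12.0, 0.0 };   /* pretend main memory            */
            int mem1 = 0, mem2 = 1;             /* pretend addresses              */
            double r1 = 3.0;                    /* value already in a register    */

            double r2 = memory[mem1];           /* LOAD   r2, [mem1]              */
            double r3 = r2 - r1;                /* SUB    r3, r2, r1              */
            double r4 = r1 * r1;                /* MUL    r4, r1, r1              */
            double r5 = r3 / r4;                /* DIV    r5, r3, r4              */
            memory[mem2] = r5;                  /* STORE  [mem2], r5              */

            printf("(mem1 - r1) / r1^2 = %g\n", memory[mem2]);   /* (12 - 3) / 9 = 1 */
            return 0;
        }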

    What's the difference? Well, like I said, it came back to storage capabilities and compiler technology. Back in the 70's when CISC was the Right Thing To Do, storage was extremely expensive and thus very scarce. If chips back then had used my RISC design, such an operation would have taken 5 instructions to code; with my CISC design, it takes just one. Yes, the CISC design might need to reserve some extra bits in the opcode field in order to code for so many ridiculous instructions, but overall the compiled RISC code is going to take at least 4 times as much storage space as the CISC code. So even if you didn't expect to run into the above situation very often, it made sense to have an explicit code name for it whenever you did.

    As we hit the 80's, these storage issues rapidly eased, to the point where it wasn't such a hardship taking 4 times as much space to say the same thing every once in a while. Meanwhile, back in the CISC way of doing things, you actually needed to find some way to make your chip capable of performing all the goofy instructions that might be asked of it. In essence, it's almost like your assembly code is "compressed" to save storage space, and thus needs to be "decompressed" by the chip. This means complicated chip implementations, each trying to do more in each clock cycle--which means lower top clock speeds. The RISC chip may need more cycles to perform all 5 instructions, but since it only performs simple instructions, it can have a higher clock speed and thus come out ahead.

    But there's a problem with this too: people generally like to program in high-level languages. RISC is a low-level ISA philosophy. Thus you need to have good compilers, to be able to analyze high-level instructions and decompose them into all their constituent parts for encoding in a RISC assembly language--often a more difficult process than in my example. Again, the compilers of the 70's weren't up to the task; only in the 80's did good enough compilers come along to enable this. In essence, we moved the "decompression" of a high-level instruction to its low-level constituent operations from inside the chip (CISC) to inside the compiler (RISC).

    Thus, we went from CISC being a Good Thing to RISC being a Good Thing. The main issues were 1) code bloat not such a big deal and 2) move more instruction scheduling duties to the compiler.

    Since that time, we've moved from what I called "pure RISC" to what Hannibal in the article I'm summarizing calls "post-RISC". That is, people started realizing that with RISC operations being more-or-less uniform, a good way to make things go faster was to do more than one thing at a time, and that instead of sitting and waiting on a long memory access, etc., you could switch and do other stuff in the meantime. Thus we got superscalar and out-of-order execution, respectively.

    Moreover, we got deeper and deeper pipelines--sort of like assembly lines, in which each instruction goes through several stages, each 1 clock long, in its execution. This means we can clock the chip faster (less to do on each clock cycle), and get overall faster performance (think a fire brigade of 10 people each passing buckets a short distance, vs. one person running 10 times as far between buckets delivered). The problem is that, unlike buckets or trucks, code has dependencies; instruction 2 might take as its input the result of instruction 1, which is still in the pipeline--only halfway down the assembly line, as it were. Thus we need rescheduling logic to keep our pipeline stuffed--our assembly line filled--with instructions which don't depend on each other. Or, instruction 1 might be a branch instruction, which goes one way or another based on its result, so that we don't know "what comes next" until it is completely finished. Thus we use branch prediction, which uses some statistical methods to guess what comes next, and execute it accordingly, while aware that if, when we get to the end of instruction 1, it turns out something else came next, we need to go over and do that instead.
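
    To make the dependency point concrete, here is a small illustrative sketch (the numbers are arbitrary; the shape of the code is the point): the first loop is one long dependency chain, so each add has to wait for the previous one, while the second keeps four independent chains that an out-of-order, superscalar core can overlap.

        /* Illustrative only: both loops compute the same sum, but the first is
         * one long dependency chain (each add needs the previous total), while
         * the second exposes four independent chains for the hardware to overlap. */
        #include <stdio.h>

        #define N 1024

        int main(void) {
            static double a[N];
            for (int i = 0; i < N; i++) a[i] = i * 0.5;

            /* serial: every iteration depends on the one before it */
            double sum = 0.0;
            for (int i = 0; i < N; i++)
                sum += a[i];

            /* unrolled into four independent accumulators: more parallelism to exploit */
            double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
            for (int i = 0; i < N; i += 4) {
                s0 += a[i];
                s1 += a[i + 1];
                s2 += a[i + 2];
                s3 += a[i + 3];
            }

            printf("serial sum = %g, four-chain sum = %g\n", sum, s0 + s1 + s2 + s3);
            return 0;
        }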

    The result of all this out-of-order superscalar pipelined "post-RISC" stuff was much higher IPC (Instructions executed Per Clock), but also lots of complicated logic on MPUs to handle all the scheduling and dependency checking and prediction. Theoretically, just as all the complicated logic made CISC chips complicated and slow, all this complicated logic makes today's post-RISC chips too complicated, too large, too hot, and slower than they might otherwise be. [end summary]

    Thus, the basic idea behind VLIW is an extension of the idea behind the CISC->RISC transition. To wit: why not take all this complexity out of the MPU and put it back into the compiler? That way, we can get rid of all the unpleasantness once, at compile time--on the developer's time, not the user's. The way it does this is by trying to find all the parallelism, work out all the dependencies, and predict all the branches at compile time--in other words, to do all the scheduling at compile time. The way it communicates this to the chip, then, is to compile not to individual instructions for the chip to schedule, but rather into prescheduled "bundles"--or "Very Long Instruction Words"--which are supposed to be guaranteed to work well when run together in parallel.

    Or rather, this is how VLIW works where it is normally used--in DSP-type processors, running programs for which it is very easy to extract this sort of information at compile time. The problem is that it is much more difficult to do with general-purpose programs, which is why it hasn't been done before. As you might guess, there's just too much you don't know at compile time to get unambiguous scheduling information. Transmeta solves this problem by compiling at run time, using their code-morphing software, essentially a JIT compiler. The problems with this are obvious and well known: namely, that the JIT compiler uses resources which would otherwise go to running the program, and that you don't get the VLIW benefit of doing all the optimization once and forgetting about it. (The code-morphing software caches, profiles, and further optimizes the code it has already run, but it is still always running, and it doesn't save this information from session to session.) Indeed, you're essentially moving the scheduling problem from one which is done by specialized on-chip logic in different pipeline stages than the execution logic--and thus not competing for execution resources--to one which is run by the general-purpose execution logic; a shaky trade-off at best. On the other hand, by working in software you theoretically get more flexibility to schedule instructions than when doing the scheduling with a chip's fixed logic.
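    Here's a very loose sketch of the shape of that arrangement--a translation cache keyed by guest program counter. Everything here is made up for illustration; it is not Transmeta's actual software:

        #include <stdint.h>
        #include <stdio.h>

        /* Loose sketch of the idea behind a code-morphing / JIT layer: look a
         * block of guest x86 code up in a translation cache, translate it on
         * first use, and reuse -- and eventually re-optimize -- the cached
         * native version on later executions.  The translation work itself is
         * what competes with the program for execution resources. */

        #define CACHE_SLOTS 1024

        typedef struct {
            uint32_t guest_pc;    /* address of the translated x86 block        */
            int      run_count;   /* hotness; drives more aggressive recompiles */
            int      valid;
        } translation_t;

        static translation_t cache[CACHE_SLOTS];

        static translation_t *lookup_or_translate(uint32_t guest_pc) {
            translation_t *t = &cache[guest_pc % CACHE_SLOTS];
            if (!t->valid || t->guest_pc != guest_pc) {
                /* Miss: this is where the (expensive) translate-and-schedule
                 * step would run before the block can execute at all. */
                t->guest_pc  = guest_pc;
                t->run_count = 0;
                t->valid     = 1;
            }
            t->run_count++;
            return t;
        }

        int main(void) {
            uint32_t trace[] = { 0x1000, 0x1040, 0x1000, 0x1000, 0x1040 };
            for (int i = 0; i < 5; i++) {
                translation_t *t = lookup_or_translate(trace[i]);
                printf("guest pc %#x -> run count %d\n", t->guest_pc, t->run_count);
            }
            return 0;
        }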

    The way IA-64 handles this problem is to have the compiler insert "hints" about which instructions look like they *might* be able to run in parallel, without dependencies; which way a branch is *likely* to go; which scheduling is *likely* to make good use of the chip's execution resources. The problem is that, since the hints are inevitably going to be wrong some of the time, the chip needs its own analogues of much of the scheduling hardware it was trying to get rid of in the first place. In some ways, it's little more than a change in terminology: with OOE designs you have a smallish general register set backed by a large set of "rename registers", so that each instruction running in parallel essentially thinks it has a full copy of the general register set all to itself; with IA-64, you just have a huge general register set so that each parallel instruction has enough registers to work with.
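    To give a feel for what a "likely/unlikely" hint means in practice, here's a software-level analogy using GCC's __builtin_expect. It's only an analogy--IA-64's actual hint mechanism is different--but the principle is the same: the compiler is told the probable outcome, lays the code out for it, and something still has to pick up the pieces when the hint is wrong:

        #include <stdio.h>

        /* Analogy only (GCC/Clang's __builtin_expect), not IA-64's actual hint
         * encoding: mark which way a branch is *likely* to go so the hot path
         * can be laid out straight-line. */
        #define likely(x)   __builtin_expect(!!(x), 1)
        #define unlikely(x) __builtin_expect(!!(x), 0)

        static int parse(int c) {
            if (unlikely(c < 0))    /* error path: hinted as rare            */
                return -1;
            return c * 2;           /* common path: kept hot / fall-through  */
        }

        int main(void) {
            printf("%d %d\n", parse(21), parse(-1));   /* prints: 42 -1 */
            return 0;
        }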

    The problem is, of course, that you haven't done what you set out to do--eliminate complex scheduling logic from the processor. Instead, you've just replaced it with similar but less well understood versions of the same stuff. The end result is that the Itanium core, far from being small, simple, and fast-clocking, is huge, complex, unbalanced, and therefore capable of only pitiful clock speeds. The die is ~300mm^2--roughly 3 times the size of a P3--yet only has room for a total of 16KB of L1 and 96KB of L2 cache, less than even a lowly Celeron. (Server-level chips like Itanium generally need much *larger* caches than PC chips; Itanium is supplemented with a large off-chip L3 cache, but it is too high-latency to be of much use.) Itanium was supposed to launch in early 1998 at 800MHz; it is only now yielding above 733MHz--again, Celeron territory.

    Furthermore, we run into trouble from an unexpected place--code bloat. Of course, it's not the same problem as in the 70's, when CISC ISAs were used to keep code small enough to be stored at all; today's 100GB hard drives testify to that. Rather, the problem is that *bandwidth* to storage is very often the limiting factor with today's technologies, and high-bandwidth storage--i.e., on-chip cache--is just as scarce as overall storage was in the 70's. With all its hints and bundling and recovery code to execute if the hints turn out wrong, IA-64 code is much more bloated than x86 or RISC code, and thus those not-even-Celeron-sized on-die caches are effectively even smaller.
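    A back-of-the-envelope illustration of what that costs in practice--the numbers below are round assumptions for the sake of the example, not measured figures for any shipping chip:

        #include <stdio.h>

        /* Back-of-the-envelope only; all sizes are assumptions for illustration. */
        int main(void) {
            const double cache_bytes = 16.0 * 1024;   /* a 16KB instruction cache   */
            const double x86_avg     = 3.0;           /* variable-length, ~3 bytes  */
            const double risc_size   = 4.0;           /* fixed 32-bit instructions  */
            const double vliw_size   = 16.0 / 3.0;    /* 128-bit bundle, 3 slots    */

            printf("x86-ish : ~%.0f instructions\n", cache_bytes / x86_avg);
            printf("RISC    :  %.0f instructions\n", cache_bytes / risc_size);
            printf("IA-64   : ~%.0f slots (some of them no-ops)\n",
                   cache_bytes / vliw_size);
            return 0;
        }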

    Of course, Itanium has more functional units than the P6 core, and if all these compiling tricks actually keep them full of instructions, it will perform much better per clock. Unfortunately, all indications are that even with the relaxed "hints instead of guarantees" rule, it's still just too difficult for today's compiler technology to keep this monster even remotely well fed. Intel even had the gall to claim at their recent Intel Developer Forum that the SPEC CPU benchmarks were "irrelevant" for Itanium's target market, offering instead a (hand-written-in-assembly) RSA encryption benchmark in which Itanium demolished a Sun USIII. Well, that's fine, except that a very cheap dedicated encryption chip can beat the Itanium at this game several times over for 1% of the cost and power requirements. Of course, the SPEC benchmarks run exactly the sorts of programs used in Itanium's target market, and are the most relevant measure possible. And not coincidentally, they are extremely sensitive to compiler quality.

    So...to get back to our original topic, IA-64 is another huge risk--Intel has repeatedly called it a "bet-the-company thing"--which incorporates some very interesting, non-mainstream ideas in an attempt to radically advance the state of the art. And so far, it appears not to be working.

    Don't worry too much about Intel, though; from all indications, McKinley, the 2nd-generation IA-64 core, should perform just fine, thank you. Interestingly enough, it was designed almost entirely by HP engineers. But it also must be emphasized that they have clearly learned from Intel's myriad mistakes with Itanium. (Everything about Itanium, from the pitiful tacked-on caches to the rather unnatural pipeline design--apparently an extra stage needed to be added late in the design process--indicates that this design was a "learner".) Plus, Itanium has been delayed so long that the almost-on-schedule McKinley is due out relatively soon--roadmaps have it as soon as Q4 2001 (dubious), and Q1 2002 might actually be reasonable. McKinley should clock just fine (although not as high as the CISC-front-end P4), and it has plenty of on-die cache. And in a year and a bit, the compilers might finally be ready too.

    So, Intel might just turn this risky strategy into gold. Maybe the "post-RISC" paradigm *will* run out of gas soon, and VLIW will speed past. The point is, for better or worse, Intel's MPU designers are not conservative in the least.

    AMD, on the other hand, has never introduced any significant new MPU design techniques that I can think of; instead they concentrate on implementing Intel's designs better than Intel. Indeed, their first PC MPUs had the same names as Intel's--AMD made a "386" and a "486", and possibly a "286" too, I don't remember. The much-vaunted K7 is really quite similar to the P6 core, just with more functional units, larger buffers, more decoders--more of everything. It's a better version of the P6 (though less power-efficient), but it's not terribly innovative. Of course, AMD was in a precarious enough position market-wise that they didn't have the luxury of taking engineering risks. Intel, being relatively secure (and perceiving themselves--Andy Grove's catchy business-trade bestsellers notwithstanding--as even more so), can and do experiment with some wacky stuff. Some of it works, some of it doesn't, and for some of it they take their massive market power and force it to work.
  • Nintendo & Sony figured out how to use rambus and they are able to produce better graphics.

    Odd that you should mention Nintendo. The N64 used Rambus, but they decided against using it in the GameCube. Rambus doesn't seem to be getting any repeat customers.

  • I hope that Intel, having gotten a taste of its own medicine in using patents as a tool to legally stifle competition, will help the community by stamping out some stupid patents.

    I'm not a hardware engineer, but comments I've heard about Rambus' patents from people I consider knowledgeable made me wonder how valid they really are.

    It is, after all, high time some big corporation wised up to the cost--to everyone, including themselves--of patenting the obvious.

    Way back when, Digital had a patent on the decoding logic of their 8600 memory boards. The patent was clearly frivolous: anyone with a card edge pin-out and an ASIC programmer could have designed the logic. But the thing wasn't beaten in court. A small company bought up used 8MB memory boards, used a saw to cut the card edge plus decoding ASIC off the 8MB board, glued a 32MB board onto it, re-fastened all circuit traces and sold the resulting board at a profit for prices well below DEC's. It's rather sick that this involved process can be cheaper than paying license fees, and it shows that even if the patent itself were not frivolous, DEC abused their position as patent holder.

    Patent abuse is rampant, but only the big boys have the money, the lawyers and the engineers to make a case. I'm glad to see the tables turned.

  • If Intel drops off as Rambus' sole supporter, expect Rambus to abuse their patent position even more. Hitachi (or was it Toshiba? I forget) has already given in and pays a Rambus tax on their RAM technology. Tom Pabst's hardware site [tomshardware.com] has some scathing analysis on this topic. In particular, read this article [tomshardware.com].

    When Intel really does drop out, it is in everyone's interest that Rambus be put out of its misery quickly.

  • I couldn't afford a tank of fuel and my back yard is too small.

1 + 1 = 3, for large values of 1.
