Intel

Intel Cuts Back on 820 Chipset Manufacturing

BRTB writes "It seems that Intel has actually done something right: realized that its new 820 chipset (with Rambus memory support and speed increases) is so expensive for computer builders and end users - on the order of $500 added to the cost of an 820-equipped machine - that it's decided to cut back on production. Check out the News.com article here."
  • by renoX ( 11677 ) on Tuesday September 21, 1999 @10:43PM (#1668320)
    I don't understand why Intel is so adamant about supporting Rambus when memory makers would really like to forget it.

    Intel bought some shares of Rambus Inc., sure, but frankly Rambus is a really small company compared with Intel, so I don't think that's the main reason.

    Technically, I'm not at all sure that Rambus is superior to cheaper alternatives: SDRAM at 133 MHz is only a first step. DDR SDRAM (which transfers data twice per clock cycle) should have bandwidth comparable to Rambus memory AND it should have LOWER LATENCY (and a lower price too!). So?

    For those who don't know: latency is also very, very important; bandwidth is not the only thing to look at (especially not the maximum theoretical bandwidth!).
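
    To put rough numbers on the bandwidth half of this claim, here is a quick back-of-the-envelope sketch in Python. The bus widths and transfer rates are the commonly quoted figures for these parts; these are theoretical peaks only, and latency is ignored entirely:

        # Peak bandwidth = bytes per transfer x transfer rate.
        # Commonly quoted 1999 figures; theoretical peaks, not benchmarks.
        def peak_mb_per_s(bus_width_bits, transfers_per_s):
            return bus_width_bits / 8 * transfers_per_s / 1e6

        print(peak_mb_per_s(64, 133e6))  # PC133 SDRAM:          ~1064 MB/s
        print(peak_mb_per_s(64, 266e6))  # DDR SDRAM @ 133 MHz:  ~2128 MB/s
        print(peak_mb_per_s(16, 800e6))  # Direct Rambus PC800:  ~1600 MB/s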
  • You know, I was really looking forward to the 820 chipset. It was supposed to be the heart of my next system. However, Intel took a product that could have been a dominating force in the market (like the BX chipset) and killed it with greedy corporate marketing decisions on the final product. Rambus was questionable from day one, and although it looks to be a viable technology in the long run, I think Intel stuck with it a little too stubbornly even as it faltered.

    I mean, it's kind of ridiculous for a consumer to pay a $200-$300 premium for a new technology that may not work well in the short term due to bugs, that is so far incompatible with anything else, or that doesn't have the performance to justify the cost. This wouldn't be true if Rambus didn't have so many problems and wasn't so expensive. I figured that with SDRAM prices as extremely high as they are today, the cost gap between the RAM technologies might shrink enough for Rambus to be considered a justified investment for a new system. I guess not.

    This is Intel holding back the computer industry with a great chipset just so it can push one friggen product that's late out of the gate and not so great. And they failed to push down the price of Rambus too, which is the fatal factor. Alas, no new system for me right now...
  • So Intel is scaling back for now. No biggie. The 820 isn't going to be adopted that quickly, anyway - too much investment is out there in SDRAM for manufacturers and users to just leap to RDRAM. Eventually, that may make more sense but not yet. All this means is less inventory built up in the channel, and hopefully it means the slight bump they've had in BX availability will go away during the ramp-up.

    --Josh Turiel
  • I can't believe I had to read this far down before I found anyone who saw how funny this is. It's especially funny after reading that "Intel finally did something right..."

    Some people are just gluttons for punishment. Unbelievable.

    MJP
  • Moore's law does not apply uniquely to Intel processors. Moore happened to be an Intel co-founder, but his observation was intended to apply equally across all vendors. I refer you to Moore's 1997 comments on Moore's law [cnet.com].

    You are, however, correct in that Moore's law is a predictor of transistor density. Some tech writer who felt the need to oversimplify must have been responsible for the transistor density = power thing.

    As for the transistor density of the chip (and the conductive properties of silicon at small feature sizes), that's secondary compared to the "how do we keep all these transistors from frying each other" problem. Heat gets to be a real issue.
  • The whole Rambus issue really looks like Intel grabbed onto a technology thinking that just because they supported it, it would become the standard. They're much too used to promoting a technology they've developed and having it become the de facto, if not de jure, standard because they own most of the market. We were stuck at a 66 MHz bus for much too long because Intel didn't want to make a move, even though other processor manufacturers were champing at the bit to go higher. Hell, Intel's Socket 7 chipsets didn't even support SDRAM properly until the 430TX, and even then it was crippled by the 64 MB cacheable limit.

    Even though Intel looks like it has made a mistake and is behind in its chipset designs, it doesn't mean anything. They've been willing to fall as much as six months behind and then sweep in with a processor/chipset combination that's just enough faster than the competition that all the vendors rush back to the Intel camp. I sure hope AMD can scale the Athlon fast enough to avoid getting Intelled yet again.
  • by Anonymous Coward
    While we are on the topic of Rambus, here's something I have wondered about. You constantly hear that Rambus has a latency problem compared to current SDRAM. However, if you visit the Rambus web page, you see a link to a paper titled lowest latency with the highest bandwidth [samsungsemi.com]. Would anyone care to comment on these claims that Rambus has the "lowest latency"?
  • by Anonymous Coward
    Let me get this right: Intel is helping the customer *save* money by cutting production, which will further drive up prices on the chipsets when they are released (read: supply and demand).
  • Rambus memory has high initial latency, but then very low latency for accesses within the same line. Also, it's a burst-memory protocol, which means that although the channel is only 16 bits wide, you can't access less than "x" bytes per request (last time I looked it was 8 bits wide and always sent at least 8 words or so). It's not good for random accesses, but it's very good at sequential accesses.

    What this means is that your performance depends on your data locality -- if your accesses are close to each other, you'll get much better performance.

    Also remember that processors don't directly access memory; they request data from the cache, and a miss forces a cache-line fill. If the system is designed well, the cache line size should be the same as the minimum data size for Rambus.
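
    A toy model of that locality point, in Python: a burst-oriented memory fetches a whole burst per access, so useful bandwidth depends on how much of each burst you actually consume. All the constants below are illustrative guesses, not datasheet values:

        # Illustrative model: every access fetches a whole burst.
        BURST_BYTES = 32         # assume cache line == minimum Rambus transfer
        FIRST_WORD_NS = 50.0     # assumed initial-access latency
        REST_OF_BURST_NS = 20.0  # assumed time to stream the remaining words

        def useful_mb_per_s(useful_bytes):
            """MB/s of data actually used, per burst fetched."""
            burst_time_ns = FIRST_WORD_NS + REST_OF_BURST_NS
            return useful_bytes / burst_time_ns * 1000  # bytes/ns -> MB/s

        print(useful_mb_per_s(BURST_BYTES))  # sequential scan:    ~457 MB/s
        print(useful_mb_per_s(4))            # random 4-byte hits:  ~57 MB/s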
  • First of all, let me clarify that when I say MHz I am referring to data cycles, not clock cycles; it's much easier to talk this way when dealing with products that transfer data on both the rising and falling edge. For Rambus to have 2x the bandwidth of 200 MHz DDR RAM, it would have to run at 1.6 GHz. Why such a high number? Very simple: Rambus RAM has a quarter the bus width of DDR RAM, 16 bits vs. 64 bits, so your 800 MHz Rambus RAM only equals 200 MHz DDR RAM. I should also point out that there is 266 MHz DDR RAM as well, letting DDR not only equal but exceed the bandwidth of Rambus RAM at a dramatically lower cost.
  • by chip guy ( 87962 ) on Wednesday September 22, 1999 @12:38AM (#1668333)
    Moore's law isn't the issue. The high cost problems for DRDRAM are fivefold. First, the Rambus access cell (RAC) stuck onto a normal DRAM core, plus the necessary changes to the DRAM core itself, adds about 25% to die area. What's more, the RAC doesn't scale down with the rest of the DRAM when going to a smaller feature-size process. Second, even in 0.22 um DRAM processes the AC functional yield of DRDRAM is about 30%. This means that out of 100 DRDRAM parts made with all bits functional, only 30 of them can run at 800 Mbps. The others have to be binned as 600 or 700 Mbps speed-grade parts, which no system house will touch with a ten foot pole (lower performance than PC100 SDRAM). Third, these parts need *very* expensive production testers, AND these testers can only test 16 parts at a time, compared to 64 for an SDRAM tester. Fourth, DRDRAM-compatible motherboards and memory cards must be made with more expensive impedance-controlled PWB technology. Finally, every DRDRAM and DRDRAM-compatible chipset sold pays a small but significant royalty to Rambus Inc.

    Some of these factors will lessen over time. But the 64 Mbit question is why anyone would pay 10 or 20% more (let alone the 50 to 100% seen now) for memory devices with significantly longer latency, thermal management problems, and PWB design headaches, *YET* which offer little or no system-level performance advantage over PC100 SDRAM (and evidence exists to show that for many apps DRDRAM is actually *slower* than PC100).
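
    To make the first two factors concrete, here is the rough arithmetic implied by the parent's numbers, assuming cost tracks die area and the slower speed bins earn nothing (both simplifications):

        # Illustrative cost arithmetic from the parent's figures.
        DIE_AREA_FACTOR = 1.25  # ~25% extra die area for the Rambus cell
        YIELD_AT_800 = 0.30     # ~30% of functional dice bin at 800 Mbps

        # Relative cost per saleable 800 Mbps part vs. a plain SDRAM die:
        relative_cost = DIE_AREA_FACTOR / YIELD_AT_800
        print(f"{relative_cost:.1f}x")  # ~4.2x, before testers, boards,
                                        # and royalties are even counted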
  • Well, it wasn't so much a LAW as a business plan:

    Step 1: destroy all likely competitors to the chip business.
    Step 2: ramp progress at a rate controlled to maximize profits over the long haul
    (that rate is x2/18mo.)


    "The number of suckers born each minute doubles every 18 months."
  • Creative Labs had a product out a few years ago called the Graphics Blaster 3D (based on a Cirrus Logic chipset). This product had 4 MB of RAMBUS memory.

    http://www.soundblaster.com/graphics/gb%2D3d/features.html
    http://www.soundblaster.com/pressroom/releases/1997/p970303.html

    I would post in HTML, but I'm kinda pressed for time (and I'm typing this on an HPC :)...


  • See my comment above. The Creative Labs Graphics Blaster 3D has 4 MB of RAMBUS memory. Of course it sucked (having a Cirrus Logic chipset and all :), but it DOES exist, and you could probably buy a gross of them for $1000. I'm pretty sure that there were a LOT of boards based on this CL chipset, and all had RAMBUS memory.
  • Most PCs that people build use Celerons, as they'd rather spend the money on better hardware (like a 7200 RPM hard drive :) than on a more expensive CPU that doesn't overclock as well. Although the PIII is cheap at $180 for a 450, the Celeron 366 is $58, and I got mine to 523 on a cheap i810 board for my parents' PC. Xeons are way overpriced for any 'home' user. As for the i820, I think it's a waste of money to buy a new mobo/RAM/133 MHz-bus CPU for what will probably be a small performance boost.

  • by Dan B. ( 20610 ) <slashdot@bryar.c o m . au> on Tuesday September 21, 1999 @06:24PM (#1668338)
    Like all other PC hardware, they'll get cheaper.

    Don't forget Moore's law - 18 months, double the power. The good thing, though, is that prices drop while the power goes up. RAMBUS tech is new and not yet fully explored.

    I will guarantee that next month they'll be rethinking their cutback strategy.

  • Some people would like to pay for speed. I'm not saying that Rambus would deliver it, vis-a-vis DDR or other techniques, but the average end user doesn't buy Pentium IIIs or Xeons either!

  • "RAMBUS tech is new and not yet fully explored. "

    How do you figure? Video cards have used RAMBUS crap for some time.

    When we were switching from EDO to SDRAM it wasn't this much of a difference in price.

    There are some special problems here that warrant a new aproach. Moore's law still applies, and hardware makers will find _some_ way forward. I recind that guarantee though.
  • I mean, Intel wants to go for some expensive thingies alone, and if there is no support, Intel either drops it or ends up holding the thing by itself.

    Is this type of story even worth a mention on /. ?

    Additionally, there _are_ alternatives: not only the VIA chipset, but also SDRAM itself.

    Ultimately the market will be the final judge of everything. You can come up with all the ding-a-lings you want; if the market doesn't buy it, you will end up with a warehouse full of ding-a-lings.
  • by Anonymous Coward
    Well, all the server manufacturers have, for various reasons, stricken DRDRAM from their roadmaps for the near future... so the desktop is really the only place Intel can push those DRDRAM chips.

  • Which video cards are using RAMBUS? I have only seen ones on paper that propose to use it (Glaze3D).

    I know of no working PC video cards that use it.

    It's hard to even get RDRAM.

    Perhaps you confused Rambus with SGRAM, VRAM, et al.?
  • but the average end user doesn't buy Pentium III's or Xeon's either!

    Yes, they do. If you look in a catalogue, you'll find that about 60-70% of all fully configured systems will be PIIIs, and the rest Celerons for the cheapo systems. The odd one will have a Xeon, but these are marketed at the enthusiast/power user.

  • I'm far from a memory expert, but if I recall, their statement is half true. I believe RAMBUS does have low latency after the initial access: once it has started pumping data, it continues to do so quickly. But I believe that initial latency is much higher than that of competing memory technologies.
  • by Anonymous Coward
    Didn't anyone tell you? Moore's Law is OVER, man... it's fucking OVER. It's all downhill from here. From August on, the cost of transistors will be doubling every 18 months.

    Moore's Law is passe. It's OVER.
  • Video cards have used RAMBUS crap for some time.

    What crap! Video cards use some pretty different RAM standards, but none of them are using RAMBUS. And I think you'll find it's waaay different from both EDO and SDRAM.
  • I don't remember how well the article addresses latency, but I know this article gives a good performance comparison of Rambus RAM to SDRAM.

    http://www5.tomshardware.com/releases/99q2/990622/index.html [tomshardware.com]

  • Plenty of video cards use Rambus - but none in the sub-$1000 price range.
  • From the review [tomshardware.com] I've read of the Rambus/i820 chipset on Tom's Hardware Page [tomshardware.com], it has the highest latency of all the currently available memory types for the PC. The review said it has enormous bandwidth capability, but the latency is so high that you really don't get any speed/performance increase. The fact that it's only available in 8-bit and 16-bit widths doesn't really help either. Here's a good guide on Tom's Hardware about latency vs. bandwidth [tomshardware.com]. It's worth a read.
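
    The latency-vs-bandwidth trade-off that guide covers can be modeled in a few lines: total fetch time is a fixed startup latency plus a streaming term. The two memory profiles below are invented for illustration, not taken from any benchmark:

        # Minimal model: time = startup latency + bytes / bandwidth.
        def transfer_ns(n_bytes, latency_ns, gb_per_s):
            return latency_ns + n_bytes / gb_per_s  # bytes/(GB/s) == ns

        # Hypothetical low-latency vs. high-latency/high-bandwidth parts.
        for n in (32, 256, 4096):
            sdram_like = transfer_ns(n, latency_ns=40.0, gb_per_s=1.0)
            rdram_like = transfer_ns(n, latency_ns=90.0, gb_per_s=1.6)
            print(n, round(sdram_like), round(rdram_like))
        # Cache-line-sized fetches are dominated by latency; only longer
        # streams let the higher bandwidth pull ahead.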
  • Ok, now remove the stick from your arsehole and talk nicely.

    Moore's law will max out when silicon hits the 5-atom barrier. That is, when a silicon feature is only 5 atoms thick, it loses some of its properties. That does not mean we can't use something else. And we're not at the 5-atom barrier yet, although we will be soon.

    BTW, seen any rumors of 1-2 GHz processors yet? Gee, let me quickly calculate... yup, they'll be here in under 18 months, and hey, gee, they're twice as fast as the ones we've got now. Gee, how about that.
  • I have seen video cards with RAMBUS RAM. Back in the days when the S3 ViRGE ruled the world, some of the early Rendition cards used RAMBUS RAM. I have the specs in an old computer magazine somewhere, but I need to go to school soon, so I can't really look it up right now :(
    _______________________________________________
    There is no statute of limitations on stupidity.
  • Video cards that cost over $1000 do NOT count as what most people consider 'video cards'... ;-P

    I don't believe there ARE any under a grand.
  • DRDRAM for servers? Even Intel looked into this and rejected it in favour of conventional DRAM and eventually DDR (check out the 460GX chipset for Merced :). Server vendors are most interested in two things: cost per gigabyte and, to a lesser extent, latency. The higher device bandwidth of DRDRAM is a non-issue for the server guys - they will build as wide a memory or interleave as much as necessary to get the bandwidth they want. Servers today often have 512 or 1024 bit wide DRAM arrays. DRDRAM is cursed with finicky PWB layout and parametric requirements, and a single channel can only handle 32 devices. For server-size memories you need either multiple memory-controller ASICs, each handling 2 or 4 DRDRAM channels, or a hierarchical memory design with fan-out repeater chips, which add to the already miserable Rambus latency. Both of these approaches are logistical nightmares, with complex PWB routing issues, cooling problems, increased physical board area, and greater time of flight from long signal traces. Heck, the mainframe guys wish the world had stayed with EDO :)
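
    The 32-devices-per-channel limit is easy to turn into numbers. The device capacity and target memory size below are assumptions picked for illustration:

        # Channels needed for a server-class memory, per the limit above.
        import math

        DEVICES_PER_CHANNEL = 32  # DRDRAM channel limit cited above
        DEVICE_MBIT = 128         # assume 128 Mbit RDRAM devices
        TARGET_GB = 16            # assume a 16 GB server

        gb_per_channel = DEVICES_PER_CHANNEL * DEVICE_MBIT / 8 / 1024
        print(math.ceil(TARGET_GB / gb_per_channel))
        # -> 32 channels, i.e. 8-16 controller ASICs at 2-4 channels each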
  • by timothy ( 36799 ) on Wednesday September 22, 1999 @01:18AM (#1668361) Journal
    I was at a party 2 weeks ago with several Dell employees, one of whom was a systems engineer working on Dell systems (I won't specify which line, lest I get him / her / it -- whom I'll refer to with the male pronoun for convenience -- in hot water).

    He said that the 820 was a particularly buggy chipset, and that it was causing them a lot of frustration, more so than previous chipset releases. I told him that it was being marketed as Intel's most advanced chipset, and his response was (I'm paraphrasing as best I can) "they ought to call it Intel's most advanced piece of crap!" He had other harsh words for it, such as "unreliable" and "inconsistent", as in the subject line.

    So maybe it's being cut back not just for "the sake of consumers" as this /. mention implied, but rather for some tweaking so it works better.

    Also, as others have pointed out, it doesn't make any sense to *cut* production of a chipset that people are willing to pay for in order to gain performance improvements. Someone mentioned the Xeon, and I think it's relevant. No one is forcing you to buy a system with a certain set of components, and the bleeding edge carries a premium. So what? That just means I can't buy it until it's not the bleeding edge. ;)

    Again, this is hearsay, but from a good source ...

    timothy
  • It depends on your application. I'm writing some scientific applications which, like many other scientific applications, are constrained by memory bandwidth. With careful programming, high latency isn't a problem, but high bandwidth is necessary. Rambus RAM will provide twice the bandwidth of DDR SDRAM, so I'm all for Rambus.
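
    A quick sketch of why a streaming kernel like this ends up bandwidth-bound rather than latency-bound; the machine numbers are era-plausible assumptions, not measurements:

        # Example kernel: daxpy, y[i] += a * x[i], which moves 3 doubles
        # (load x, load y, store y) for every 2 floating-point operations.
        FLOPS_PER_ELEM = 2
        BYTES_PER_ELEM = 3 * 8

        CPU_FLOP_PER_S = 500e6   # assume ~500 MFLOP/s sustained
        MEM_BYTES_PER_S = 1.6e9  # assume ~1.6 GB/s peak memory bandwidth

        compute_ns = FLOPS_PER_ELEM / CPU_FLOP_PER_S * 1e9  # 4 ns/element
        memory_ns = BYTES_PER_ELEM / MEM_BYTES_PER_S * 1e9  # 15 ns/element
        print(memory_ns > compute_ns)  # True: memory is the bottleneck,
                                       # so extra bandwidth helps even if
                                       # latency is poor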
