Intel

Major Problems with Rambus

A reader wrote in to alert us to the problems Intel is having with Rambus. The problems arise on motherboards with three memory slots: even if the third slot is empty, data can be lost in transit between memory and the processor. An initial estimate from one analyst put the number of affected machines in the hundreds of thousands.
  • Impedance over the course of the entire channel is very difficult to control. The motherboard, connectors, RIMMs, etc. all need to be impedance matched, or the reflections from the discontinuities in the signal propagation path cause big problems at the receiver.

    The third slot just adds enough noise to use up all the margin, and "bad things" happen. BTW, very carefully designed boards work just fine at 400MHz with 3 slots; it's the fringe cases which are the problem. Unfortunately, the fringe cases are significant enough to make it not worth shipping. It's not unlike overclocking... it will usually work, except when you really, really need it to :).


    Adding a terminator pack as the third RIMM would indeed work well, but there are three problems, all non-technical:
    1) User instruction: a user has to know that the third slot can't be filled with RAM, causing headaches for OEM support teams.

    2) The additional terminator card adds cost

    3) The chipset was spec'd to use three RIMMs, and two don't provide enough memory.

  • This kind of thing is typical when rolling out entirely new technologies like RDRAM, although given the i820's relatively long development time and the delays already experienced, you would have expected Intel to have found and corrected the problem by now.

    No one in their right mind would be deploying RDRAM or i820-based boards in servers or other mission-critical applications at this early stage of development anyway - it simply hasn't been proven to work reliably yet. And given what little benchmarking I've seen comparing performance with AMD's and existing Intel chipsets, for home and gaming use the supposed performance improvements seem more a promise for the future than a current reality, rather like AGP before AGP 2X was implemented. The caveat is that this could, of course, be due to the use of preliminary or experimental BIOS versions in the machines being benchmarked.

    Advice: wait until it has been proven to work, before jumping onto the Rambus.
  • And by the way, the RAMBUS site has some white papers on the general technology and how it works. Fun stuff. :)
  • Intel weren't suckered; it was just another lame attempt in a series of lame attempts to dictate PC platform standards in a way that hurts its competitors and makes them play "catch-up".

    I'm very glad it's blowing up in their monopolistic faces.

    Alternatives are looking more and more attractive every day.

    "The number of suckers born each minute doubles every 18 months."
  • I'll have to plead no contest on this. On my own systems I always use ECC memory to avoid the possibility of a stray cosmic ray or alpha particle scrogging a program (a minimal parity-vs-ECC illustration follows at the end of this comment). This of course raises the question of whether ECC RDRAM is even available or usable by the i820; I haven't investigated past finding the i820 conspicuously absent from Intel's chipset comparison page. At some points in time, particularly during the era of the "logical parity" scam (a cheap parity generator chip on SIMMs that generated signals making it appear there was real parity memory present), it was almost impossible to get ECC-capable EDO SIMMs from any source for reasonable prices.

    But on the other hand, the probability of power fluctuations or ordinary software bugs destroying data is probably a couple of orders of magnitude greater than RAM being at fault. In my experience, particularly if Microsoft software is being used, system failures tend to be so common that real memory problems almost aren't worth investigating, or are impossible to detect because they're concealed down in the "noise". I've been told "Uh, our memory is only guaranteed to work with Microsoft products" when trying to return defective RAM that generated repeatable Signal 11 errors compiling Linux kernels. The "flaky" merchants involved depended on blaming Microsoft software for their own defective products.

    The only thing close to a solution is to use a good UPS, "backup, backup, backup", take computed results with a grain of salt, and retain those backups for a long time. Admittedly this won't fix the "game going zing" problem, but if this is important enough to you I suggest saving the game's state regularly. Provided the game lets you do so.

    All life is a risk, when you come right down to it.
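    For anyone fuzzy on what parity buys you versus ECC, here is a minimal sketch of simple even parity in Python: a single flipped bit is detected, but not located or corrected, which is what a real ECC code (e.g. SECDED) adds. This is purely illustrative, not how any particular memory controller implements it.

    # Even parity over one byte: the stored parity bit makes the count of
    # 1-bits even. One flipped bit changes the parity and is detected.
    def parity(byte):
        return bin(byte).count("1") % 2

    stored = 0b10110010
    stored_parity = parity(stored)

    corrupted = stored ^ 0b00001000            # a stray single-bit flip
    print(parity(corrupted) != stored_parity)  # True: the error is detected
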
  • Same here, Intel is not the end all of chips. Can't wait for those new PPC Linux boxes.
  • It was obvious for some time that the 820 and RAMbus were a bad idea. I was really hoping Intel would come to their senses and move it to PC133 - for instance the 810 performs well... if you use a real video card.

    Alas, now that means the early PC133 systems are going to be paired up with the 810e, which, if they didn't add ECC support and external AGP support, is gonna be lame. (The Taiwanese don't like the 810 much either.)

    I wonder how this is going to affect Micron's 533B/600B announcement - since they decided to opt for the VIA PC133 chipset. :)

    Oh, and you can actually use a Celeron with the VIA Apollo Pro 133. Heh.

  • Darn, those memory dealers are irresponsible. Even Fry's doesn't pull that junk, usually. (But I wouldn't put it past them.)

    Tip on Fry's RAM: The stuff they advertise is junk, and the good stuff is typically priced a bit high.

  • Um, you're comparing two different sets of numbers. C|Net was saying that the new 2-slot Rambus boards only support a maximum of 512 megs, while current 4-slot BX boards can support 1 gig. Then they go on to say that the broken 3-slot Rambus boards could have supported 768 megs. They just didn't phrase it very well, so their math is right.
    (new rambus) 512MB vs (bx) 1024MB = 1/2 the memory
    (old rambus) 768MB vs (bx) 1024MB = 3/4 the memory
    (new rambus) 512MB vs (old rambus) 768MB = 2/3 the memory of old rambus
  • Anyone looking for other technical critiques might try:

    Analysis from InQuest, including Dell Office+Rambus benchmarks [inqst.com]

    A performance comparison of contemporary DRAM architectures. [umd.edu] Vinodh Cuppu, Bruce Jacob, Brian Davis, and Trevor Mudge. Proc. 26th International Symposium on Computer Architecture
    (ISCA-26), pp. 222-233. Atlanta GA, May 1999.

    Or pick up Intel documentation on it here [intel.com] and here.

    --LP

  • by taniwha ( 70410 ) on Friday September 24, 1999 @06:52AM (#1662560) Homepage Journal
    I'm a chip designer - I've designed both Rambus memory systems and more traditional ones

    Here's the skinny - (the latest) Rambus transfers data on a 16-bit bus at up to 800MHz (a 400MHz clock with data moving on both edges) - that's 1.6GB/sec per channel. PC133 moves data on a 64-bit bus at 133MHz - that's 1.064GB/sec. EV6 (K7) moves 64-bit data on a 200MHz bus, but is limited by having a traditional memory backend (SDRAM performs as above; 100MHz DDR would give the full 1.6GB/sec). A quick worked comparison appears at the end of this comment.

    Rambus has a major downside - slightly higher latency to memory (this has been somewhat mitigated in the latest RamBus incarnation).

    It also has a really major upside - memory granularity. As memory densities go up, if you don't need to increase your total memory you can use fewer and fewer RamBus DRAMs and still get the same performance - and you can upgrade at the chip level, rather than SDRAM, which you must upgrade in increments of a whole SIMM. For low-end PCs this will start to become important (unless M$ manages to waste another 64MB in Win2K).

    Another Rambus advantage is that it can handle many more parallel transactions (especially overlapping RAS precharge and sense operations, because it can have a lot more independent banks in memory). Today this isn't so important (except maybe for graphics operations), because to get a big advantage from this you need a lot of concurrent transactions in the memory system, and today's CPUs, sitting on bridges on the other side of buses, don't generate as much concurrency as you'd like. Put the memory controller directly on the CPU and there's much more scope for performance increases - especially with ISAs like IA64, where much more memory-system parallelism is exposed to the programmer.

    On the purely physical side there is one other major RamBus downside, mostly because they're pushing the envelope with respect to clock speeds: building working memory systems is harder. PC board tolerances are much tighter (including on the SIMMs), and the bus interface is really analog rather than the digital one most designers are used to. My experience has been that it takes more chip debug time to get a working, reliable RamBus system - you have to fiddle and tweak a lot until you have a robust, workable system. I have no inside knowledge, but I suspect that Intel is working through exactly these issues.
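    Working through the bandwidth numbers above (a minimal Python sketch; the bus widths and transfer rates are the ones quoted in this comment, and the results are peak figures that ignore latency and protocol overhead):

    # Peak bandwidth = bus width in bytes * transfers per second (in millions).
    def peak_mb_per_sec(bus_bits, mega_transfers):
        return bus_bits / 8 * mega_transfers

    print(peak_mb_per_sec(16, 800))   # Rambus channel: 1600.0 MB/s
    print(peak_mb_per_sec(64, 133))   # PC133 SDRAM:    1064.0 MB/s
    print(peak_mb_per_sec(64, 200))   # 100 MHz DDR:    1600.0 MB/s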

  • I hope that AMD can take advantage of this problem, because you know that Intel would if the chip were on the other foot. Athlon is a strikingly good processor, and there is nothing that Intel has that can match it -- and with the delays in Coppermine and the 820 chipset, this will be true for many more months at least.

    Intel is not going to have copper-interconnect chips 'til mid-2001 (their number, and their numbers have been known to slip), while AMD's Dresden copper plant should be running long before that.

    Intel has, to their credit, invested in revolutionary architectures (Merced, Rambus) while AMD has been pushing 'conventional' architectures harder. At least in the short run of the next year or so, it looks like AMD's approach is better. And, at least in my business, a year is the half-life of a computer.

    Perhaps I'm reading too much into Intel's announcements, but their dribbling out of delay and bug announcements seems calculated to keep people from moving to other platforms; to keep them hanging, waiting for the Intel solution that's just around the corner. I, for one, am buying an AMD machine today :)

    thad

  • by Silverpike ( 31189 ) on Friday September 24, 1999 @07:06AM (#1662562)
    A few points about Intel's situation:

    RAMBUS is still a bleeding-edge technology. Signal integrity is a major headache, and Intel has the unlucky fortune to be the first to try it; there are bound to be problems.

    RAMBUS is not serial at the physical level; it is a parallel channel with a 16-bit width.

    To offset the relatively small bus width, a super fast clock is used. Currently I think the i820 is designed for 600MHz, but I'm not 100% sure on that.

    RAMBUS is a packet-based scheme, where multiple transactions can be pending at any given point in time (similar to TCP/IP). This allows non-blocking memory accesses and better bus utilization; a toy illustration follows at the end of this comment.

    Intel's current RAMBUS implementation might not be cost effective over SDRAM, but given a year or two probably will be (if they can get past the current problems).

    I don't know what you guys are thinking of when you say RAMBUS sucks. RAMBUS isn't another USB (yuk) -- it is an extremely well thought out technology that is (admittedly) somewhat ambitious. RAMBUS scales much better than any existing memory technology. I see people bitching about RAMBUS, but SDRAM will probably max out at 150 MHz next year.

    Please understand that Intel != RAMBUS. Don't hate RAMBUS because you hate Intel; RAMBUS was founded as a separate entity by two very smart guys.

    Intel's problem is probably not with the i820 itself, but the way the RAMBUS signals behave on the bus. They have noise and termination problems, which are very similar to SCSI problems. This is why the last slot on the bus is problematic -- having a RIMM or nothing at all makes a big difference to the signalling scheme.

    This problem will not affect SDRAM, even with an i820 (unless there is a different problem I don't know of).
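    A toy illustration of why multiple outstanding transactions improve bus utilization; the latency and transfer times below are made-up numbers for the example, not RDRAM timings:

    # Each access pays LATENCY_NS before data moves, plus TRANSFER_NS on the bus.
    # Serialized requests pay the latency every time; with several requests
    # outstanding, later latencies hide behind earlier transfers.
    LATENCY_NS, TRANSFER_NS, REQUESTS = 60, 10, 8   # hypothetical values

    serialized = REQUESTS * (LATENCY_NS + TRANSFER_NS)
    pipelined = LATENCY_NS + REQUESTS * TRANSFER_NS

    print(serialized)  # 560 ns if one request at a time
    print(pipelined)   # 140 ns with overlapped (non-blocking) requests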

  • If I am understanding correctly, RAMBUS tech is based off of something similar (in concept) to ethernet - basically a very high speed analog interconnect between the CPU and memory, using some sort of packeting scheme (maybe similar to IP?)...

    If this is correct, then is the third slot problem being caused by what is essentially a non-termination of the bus - similar to what can happen if a SCSI chain or 10bT chain is left unterminated?

    Or do I have this completely wrong?
  • by hawk ( 1151 )
    Some old systems used to really lose bits. There were a couple of models of core memory that quite literally dropped their little rings over time, accumulating a pile at the bottom. Fortunately, core tended to have spare banks . . .
  • by Anonymous Shepherd ( 17338 ) on Friday September 24, 1999 @07:48AM (#1662565) Homepage
    On your third point: initially RAMBUS at 800MHz was specified, but when manufacturing and quality difficulties occurred, it was stepped down to 600MHz and lower. And PC133 was added as well.

    I don't know about SDRAM maxing out at 150MHz. Word is that Apple is working on DDR SDRAM, which uses both clock edges, and running it at 266MHz. Now that may just mean double the rate at 133MHz, or double the rate at 266MHz, I don't know. Link is
    http://www.macosrumors.com/8-99.html

    But otherwise I have to agree with most of your points.

    -AS
  • by Red Herring ( 47817 ) on Friday September 24, 1999 @07:50AM (#1662566)
    Rambus memory is a big deal for a few reasons. First of all, most modern chips are becoming very pin-limited, so the difference between the 80 or so pins needed to implement an SDRAM interface and the 20 or so pins needed for an RDRAM interface is very important. Routing 80 or so signals at 133MHz is very difficult; routing 20 signals at 400MHz is easier. (Rough per-pin numbers appear at the end of this comment.)

    RDRAM is also "multi-symbolic", meaning that there are multiple transactions on the bus at any one time. The clock speed is faster than the propagation of the signal, so it's possible to have multiple pieces of data on the bus at the same time. This allows higher speeds; higher even than the 400MHz clock (800MHz data transfer rate) that RDRAM uses today.

    I heard someone else mention DDR SDRAM... this may work in a server, but it requires a huge amount of power, which in turn means cooling, which in turn means cost. It's not terribly suitable for a general desktop. RDRAM manages power better.

    While RDRAM may or may not be the future, SDRAM and derivatives are definitely not. They simply cost too much to scale to higher throughputs. Intel tried to move to something better, but got burned because it was too technically difficult for a generic OEM to produce. Oops.
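    Rough bandwidth-per-pin arithmetic for the pin-count argument above (the pin counts and rates are the approximate figures quoted in this comment; this is illustrative only):

    # ~80 pins for a 64-bit SDRAM interface at 133 MT/s versus ~20 pins for a
    # 16-bit RDRAM channel at 800 MT/s.
    sdram_mb_per_pin = (64 / 8 * 133) / 80   # ~13 MB/s per pin
    rdram_mb_per_pin = (16 / 8 * 800) / 20   # ~80 MB/s per pin
    print(sdram_mb_per_pin, rdram_mb_per_pin)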

  • TurkishGeek wrote:
    But in overall production costs, ignoring the price premiums tacked on the price by companies, RDRAM is more cost-effective. That is, RDRAM is actually cheaper to produce. This alone makes it attractive in the long run.

    Dunno where you get this. I work with memory manufacturers and the lowest quoted die-size penalty for Rambus is 20% over DDR SDRAM. If you talk to the engineers instead of the spokesuits the number is more like 40%, and the yield is quite a bit lower.

    Rambus advocates will argue over the amounts of the price premium and the performance changes relative to PC100 SDRAM. Based on whether you highball the performance and lowball the price adder or highball the price adder and lowball the performance you can either conclude that RDRAM is the inevitable next mainstream memory or that it's DOA, but nobody in the trade is claiming that RDRAM is cheaper to produce than SDRAM or DDR.
  • by David Ham ( 88421 )
    Does this affect motherboards manufactured by Intel only, or motherboards using the Intel chipset? I'd be interested in knowing - I'm about to sink some serious money into an upgrade to my system, gonna go with the Abit BP6, but if there's a problem with that as well, I just might wait....
  • Losing memory on a motherboard is a little different than losing data on a motherboard. I prefer the article's reference to losing data to the original /. phrasing.
  • While the concept of Rambus looks really nice, I think Intel really screwed up on its implementation. They should have waited a little longer and done some thorough development work on it, because in its current state Rambus is starting to look ridiculous. Apart from that, I hope that they'll manage to get it working properly as soon as possible, especially when I consider the current memory prices. (Which are still rising here in Luxembourg.)
  • Hell, they are so low that this intel thing is bound to do something to em....
  • Maybe they are using those sony matchstick memory bits as in those sony digital music players [slashdot.org] mentioned on slashdot the other day.

    Damnit, I lost 8 megs... oh wait, it's wedged between a PCI slot and an AGP...

  • All intel i820 boards that use rambus memory and three memory slots have potential problems corrupting data. Isn't this going to be fun.. NOT!

  • Admitting I don't completely understand the source of the problem, couldn't Intel just solder some sort of feedback chip into the third RAM slot, so that all data sent is simply sent right back, or on to the processor, or wherever it's supposed to be going but currently isn't? It seems like there should be some way to fix the problem of having one too many RAM slots that's cheaper than remaking the whole board.
  • Thanx for the correction. 2hours of sleep + comprehension don't work.
  • I wonder if this will hit the SDRAM-based boards. They are just going to have a memory-protocol-adapting chip in between the i820 memory controller and the memory slots. If the problem is in being able to control three slots properly, wouldn't this affect *all* i820 boards with three memory slots (all that have been made so far)?
  • ...sell it cheaply and give an explicit warning that the third slot will not work. Or disable it in some sorta physical manner. A machine maxed out at 512megs of memory is not the end of the world.
  • "The existence of the third memory slot can cause data to get lost while being transferred between memory and the main processor"

    Has anyone seen any other reports with specifics on the problem? A problem like this would cause severe system instability, and basically make any machine built on top of it worthless. Since according to the article, some manufacturers have opted to ship computers with the problem, either 1) the author of the original CNET article got it wrong, or 2) computer manufacturers have a severe lack of ethics.
  • by Anonymous Coward
    Read the article a bit more carefully. It states that data can be lost whether the third slot is being used or not. All of the current rambus mobos will have to be scrapped.
  • Since when is 512 half of 768?
  • This product would make computers more expensive, slower, and less reliable. Is it just me or does it sound more like a Microsoft product than an Intel product?
    This is what happens when Intel tries to force the industry in one direction when there's a better short term alternative (DDRSDRAM) because Intel has invested in the technology it wants to make the standard. I'll really be disappointed if this doesn't translate into gains by VIA, AMD, and some of the other smaller players.
  • ...not half of 768megs.
  • Very correct. The RAMBUS channel needs to be terminated, exactly like a 10bT or SCSI channel. The signal reflections from an unterminated stub cause major headaches, to the point where the system won't work. That's why either a memory stick or a continuity stick must be in each socket, so that the signal can propagate all the way to the terminator resistors on the motherboard at the end of the channel.
  • by Anonymous Coward on Friday September 24, 1999 @03:09AM (#1662585)
    Apparently they miscalculated the tolerances. If they make the memory bus long enough to have three slots, it occasionally scrambles data, regardless of which slots are filled.
  • Since when is 512 half of 768?

    512 is 2/3 of 768, which makes sense if you can only use 2 slots instead of 3.
  • With respect, and nothing personal, but I always get my dander up when I hear phrases like "a little flaky - don't use for mission critical server apps". So what does that mean? Scientific and engineering workstations, and, yes, home systems don't count?

    I for one don't want my financial and other records screwed up. I don't want my email and newsgroup archives lunched. I don't even want my games going "Zing" at odd moments.

    This sounds like a laughably crappy job of engineering to me.
  • A Rambus channel is 16 bits wide and transfers data on both edges of a clock, so a 400 MHz Rambus memory system (what Intel calls 800 MHz for marketing reasons) transfers 1600 MB/s, although with some delay for packet processing. The memory manufacturers couldn't get yield at that speed and wanted 600, but the system houses said they couldn't sell 600; Intel compromised on a hair over 700. Call it 1400 MB/s.

    For comparison, a 64-bit PC133 DIMM transfers 1066 MB/s with less latency. A PC200 DDR DIMM transfers 1600 MB/s with latency a hair less than the PC133, and a PC266 DDR DIMM transfers 2133 MB/s at about the same latency as the PC200.

    HTH. HAND.
  • I don't know the answer to your question, but one nit picking correction: 10BaseT is not a chain, it is a star topology physically, and thus doesn't require termination. 10Base2 and 10Base5 are the ones that use a bus topology terminated on both ends.
  • Rambus is a parallel architecture running at serial-transfer data rates. All of the data lines have to arrive together on the same clock edge or they get confused with another word. Intel and the memory shops have specs on how much skew each part is allowed to introduce and how much each is required to tolerate, and what's left is the budget for the motherboard. (A toy version of that budget appears at the end of this comment.)
    The MB manufacturers build their motherboards to spec, but now it turns out that the Camino needs a little more "eye opening" (data-good window) than was in the budget. Since even empty slots cause some reduction in eye opening, the situation isn't improved by plugging a blank into the empties (Rambus is a ring topology, so you can't have totally empty slots). In fact, a full slot might improve matters by resyncing everything.
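    A sketch of how that sort of timing budget gets carved up; every allocation below is hypothetical (the real Rambus/Intel figures aren't in this thread), just to show how small the leftover eye can be at these rates:

    # At 800 MT/s the bit time is 1.25 ns (1250 ps). Each skew/jitter source
    # eats into it; whatever remains is the eye opening at the receiver.
    bit_time_ps = 1250
    skew_budget_ps = {            # hypothetical allocations
        "driver/RAC": 300,
        "RIMM and connector": 250,
        "motherboard traces": 350,
        "receiver sampling": 200,
    }
    eye_ps = bit_time_ps - sum(skew_budget_ps.values())
    print(eye_ps, "ps of eye opening left")   # 150 ps in this made-up budget
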
  • Yes, I did indeed read the article. I was hoping that someone would have more info on whether this will affect Rambus and SDRAM i820 systems or just Rambus configurations. But thanks for the advice and stock updates. Long live AMD and VIA.. :)

  • I have to admit the numbers I have are from "spokesuits" (trade publications). If the lowest quoted die-size penalty is 20% over DDR SDRAM, the yield should not be significantly lower in the long run. DRAM processes are relatively mature, considering that DRAMs have been around for a while.

    I am not a RDRAM advocate per se. But I like RDRAM because it is a new approach - a fresh, welcome improvement for solving the memory bottleneck problem. It looks like ordinary DRAM can only be improved so much, while with RDRAM there seems to be more opportunity for further improvement.

    Thanks for the useful information. If you don't mind, may I ask you what kind of work with memory manufacturers you are involved in? Based on some of your previous posts, I guess you are more involved in process technology rather..
  • No way is DRDRAM cheaper to produce. I know, because I'm currently employed in a DRDRAM design department. There are clear fundamental reasons adding to the expense - silicon area, tighter tolerances, higher-frequency testers, etc. From this point you have to ask, "What is the value of the extras DRDRAM offers?" Then you have to decide if it's worth it. In a heavily loaded, multitasking system, the increased bank count of DRDRAM translates to better availability and therefore performance. At some level of chip integration (we're not there yet), DRDRAM will be a great solution to system-DRAM-on-a-chip.

    But there's another perspective. Here we are, in one of the ultimate bastions of openness, debating the Intel/Rambus attempt to take the ultimate VLSI commodity, memory, and make it proprietary and royalty-bearing. It's given me real ethical headaches over the past few years. But kids gotta eat, and my ability to 'make a difference' toward killing off 'free' (speech; the beer's getting expensive these days) memory is negligible.
  • My thoughts from what I have read would be that the chipset is the faulty part, because you already have to terminate Rambus slots - I think I saw one on one of the hardware sites linked off of Anand's page. It wouldn't be a surprise if Intel has pushed out a bad chipset; it happens.

    I think Intel should have just modified the trustworthy BX chipset to properly support 133 and come out with a chip that works correctly with RAMBUS and the motherboards.
  • The BP6 uses the "Old" BX chipset with lots of weird stuff built onto it by ABIT. It doesn't use the Rambus memory (unless I'm SORELY mistaken), so we don't have a problem.

    I highly recommend this motherboard, it's nice and reliable and I LOVE the dual celerons. YMMV of course, but all the reviews I've read say about the same thing.
  • The BP6 uses the "Old" BX chipset with lots of weird stuff built onto it by ABIT. It doesn't use the Rambus memory (unless I'm SORELY mistaken), so we don't have a problem.

    You're right; the BP6 takes normal or ECC SDRAM. I've got one of these, with 256 MB of ECC and two 366 MHz Celerons, which I've successfully overclocked to around 420 MHz -- total cost on the order of $700, about half of that for the ECC.
  • by Kettlerp ( 18064 ) on Friday September 24, 1999 @04:50AM (#1662598) Homepage
    Remember DIMMs without SPD-EPROMS? Same sort of thing. Although this time, from what I understand of RDRAM, you need a terminator card in each slot that is NOT used, which explains why the three-slot board is a problem.

    RDRAM is a waste of time and money. Why bother converting to an unproven memory technology when PC133 and PC150 SDRAM give much better bang for the buck with today's technology, and are even faster according to most benchmarks?

    I understand why Intel has to go with RDRAM. It's part of their legal obligation with Rambus to push RDRAM into the marketplace until the end of year 2002 - http://www.theregister.co.uk/990906-000003.html - but that doesn't mean we the public have to even acknowledge that the 820 chipset exists. Go VIA or stick with the tried, true, overclockable 440BX and supercool those AGP cards.

    It amazes me that Intel could have been suckered into this RDRAM quandary. They should have known better, or at least have had the backbone to tell the Rambus guys that they're nuts to release a slower alternative to SDRAM, and waited until RDRAM was ready before forcing it on the industry.

    Just my $0.02

  • I paid $140 for 128 megs of ECC when I got my BP6 a couple months ago. The same vendor who sold me that memory wants $285 for it today. The price has more than doubled in around 60 days.
    $%*(@#&%*.
    --Sync
  • by Anonymous Coward
    "It's a feature, really." "It is a minor flaw that will only affect a very small minority of users in scientific and engineering applications."
  • by TurkishGeek ( 61318 ) on Friday September 24, 1999 @05:37AM (#1662602)
    RDRAM has significant advantages over SDRAM. In overall latency and performance, it is true that RDRAM is only about as good as SDRAM. And true, RDRAM is more expensive now because of those outrageous royalties paid to Rambus, and the low volume.

    But in overall production costs, ignoring the price premiums tacked on the price by companies, RDRAM is more cost-effective. That is, RDRAM is actually cheaper to produce. This alone makes it attractive in the long run.

    There are several good papers about comparisons of modern DRAM architectures, which highlight this point. The more technically oriented among you might want to take a look at the following:

    A Performance Comparison of Contemporary DRAM Architectures [umich.edu]

  • I actually meant 10Base2 - not 10BaseT as I wrote - I was thinking co-ax, but typing twisted pair. However, thank you for the correction...
  • Then why are they having problems with "dropouts" when the third slot _is_ filled? Is there something "magical" in the two slot/four slot config?

    Or is it something having to do with "length of the chain" - ie, RAMBUS was only designed for two devices on the chain, but someone thought it would be cool to add a third (Intel?) for extra memory, found that it worked (most of the time), but then found it really wasn't that stable in the end?

    As far as fixing the problem with the current motherboards - I was thinking like a termination pack that would go in the slot. Does this sound right? I have a feeling it wouldn't help the matter any (might even make it worse)...

    This RAMBUS tech is weird - can anyone point me to a more in-depth explanation as to how it works (spec-sheets?)...
  • If I could, I'd moderate you up a few notches for that one!

    (hint, hint, hint ;)
  • This seems a topologically unlikely story.

    For a single ferrite core to be released and fall out, not only would both the X and Y address lines that were pulsed to read/write the core have to be broken, disabling a large number of bits, but also the sense wire, which was threaded in a serpentine manner through all the cores in a memory bank, rendering the entire bank unusable.

    Spare banks may have been put in to account for defective cores or wiring errors, but I've never heard of a core memory system with the ability to switch to new banks in the field without the help of a repair technician.
  • Pretty much - it does need to be terminated carefully (at the end of the chain) - the modules on a RamBus motherboard are really a chain, unlike traditional DRAM where it's more of a bus structure. Signal levels are changed to match loads on the fly - the hardware (or software) does a statistical measurement of error rates at power-on time to find a sweet spot to drive the bus at (basically a measurement of the channel impedance plus the impedance of the drivers on the chip), to figure out how much current to sink in order to swing the signals at the correct levels. (A rough sketch of that kind of sweep appears at the end of this comment.)

    The packet scheme is much simpler than IP (these days it's not even really a packet scheme in the sense that earlier RamBus protocols were) and NOT collision based (although they did flirt with that in very early versions) - the memory controller is the master and controls all the bus usage and bandwidth.
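    A very loose sketch of the kind of power-on sweep described above; set_drive_current() and write_read_compare() are hypothetical stand-ins for whatever the controller or BIOS actually does, so treat this as an illustration of the idea, not a real calibration routine:

    def set_drive_current(step):
        """Hypothetical knob for the output driver's current setting."""
        ...

    def write_read_compare():
        """Hypothetical pattern test; returns the number of bit errors seen."""
        return 0

    def calibrate(steps=range(32)):
        passing = []
        for step in steps:
            set_drive_current(step)
            if write_read_compare() == 0:
                passing.append(step)
        if not passing:
            raise RuntimeError("no error-free drive-current setting found")
        return passing[len(passing) // 2]   # middle of the error-free window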

  • I don't have all the details, but it seems to me that it came up a couple of months ago in alt.folklore.computers, from one of the people who actually encountered them. I want to say (but I'm not certain) that something about the environment was allowing the rings to crack.

    I think it came up in one of the "worst equipment of all time" type discussions, along with the IBM strip reader that shuffled when it put the magnetic strips back . . .
  • Red Herring wrote:
    I heard someone else mention DDR SDRAM... this may work in a server, but it requires a huge amount of power, which in turn means cooling, which in turn means cost. It's not terribly suitable for a general desktop. RDRAM manages power better.

    Interesting observation, but I'm curious where you get the power comparison. RDRAM certainly has more power-management features than DDR but that doesn't mean it uses less power. The proof would be in actual power-usage comparisons.

    FWIW, SSTL-2 drivers run about 8 mA/bit during active signaling, for a total across a 64-bit memory of about 512 mA. RDRAM burns 25 mA per bit per node, times 16 bits, times 2-4 nodes, for a total signaling current of 800-1600 mA (worked out at the end of this comment).

    WRT on-chip power, the memory arrays, sense amps, etc. are all standard in both cases. There is a difference in the DLL (DDR's DLL can be run without static power) and the I/O circuits (Rambus runs current-mode signaling and therefore has some analog circuitry with attendant static currents in the RAC).
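    The signaling-current arithmetic above, spelled out (the per-bit currents are the figures quoted in this comment; this is just the multiplication, not a power measurement):

    sstl2_total_ma = 8 * 64                         # ~512 mA for 64 SSTL-2 data lines
    rdram_total_ma = [25 * 16 * n for n in (2, 4)]  # 800-1600 mA for 2-4 nodes
    print(sstl2_total_ma, rdram_total_ma)
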
  • TurkishGeek asked:
    Thanks for the useful information. If you don't mind, may I ask you what kind of work with memory manufacturers you are involved in? Based on some of your previous posts, I guess you are more involved in process technology rather..

    I'm an I/O designer and signal-integrity engineer with a semiconductor company (thus the handle); I help out with some of the JEDEC memory standards work and the memory manufacturers there.

    BTW, I had lunch today with an engineer from Intel working on this problem. He says it comes from crosstalk among the data lines and (although he was diplomatic and didn't say so directly) appears to be inherent in Rambus rather than being the Camino chip per se.
  • It seems this last setback was too much. Intel has called off the introduction for the second time. Intel had planned for the release of the 820 or Camino chipset this coming Monday, but it has now been delayed 'indefinitely'. Check out this ZDNet article [zdnet.com].

  • by ch-chuck ( 9622 ) on Friday September 24, 1999 @05:55AM (#1662615) Homepage
    usual gritty Intel detail site [x86.org] - now w/ Dr. Dobbs

    Chuck
  • c't (a famous German computer magazine) writes [heise.de] that they could not measure any stability problems on the RamBus boards they tested, not even with fully loaded RAM banks. So this must either be a very rare problem, or it affects only certain board designs.
  • Holy damn! Did you see how much Intel and Rambus stocks dropped? I don't know much about stocks, but yesterday Intel was up 2 points and Rambus was up 8+ points; now Rambus is -11 and Intel's -4. Geez.
  • Why is rambus memory such a big deal? It seems to me (I may be wrong) that the advantage of Rambus is that it's serial and thus transfers really really fast.

    Aren't current DIMMS 64 bits wide and running at 100MHz? Wouldn't that mean that Rambus memory would need to run at 6400 MHz to match the throughput we have right now? Or are they mounting many banks of RDRAM on a single module and running those in parallel at the speed of the CPU? But then we get back to running the memory in parallel, but faster, creating more errors presumably?

    Can someone who knows more please shed some light?

    Thanks
  • Computer companies which hype the claims of companies like Rambus (or in this case, really Rambus / Intel) are partly to blame.

    I know that the computer catalogs my company works on have been hyping the advantages of Rambus / RDRAM for a little while, based on the input given to us by our client. Since they're presumably in the position to know the truth of such claims / the realism of the release schedules (and since they're the client), we don't have a lot of choice about it.

    So the question might have been facetious (about "who else but Intel to blame?"), but ...

    cheers,

    timothy
  • Would it be cheaper for them to replace things or to give those people affected some RAM to stick in the third slot?
  • Yes, in fact if you read the article, 100,000 to 1 million systems have been built; now they have to open them up and take out the mobo. Some companies are just gonna ship them out and repair them when parts are available. Hehe, Intel's stock went down 4 points, Rambus Inc. went down 11!
