Major Problems with Rambus
A reader wrote in to alert us to the problems Intel is having with Rambus.
Problems arise on motherboards with three memory slots: if the third slot is empty, memory can be lost between the motherboard and memory. Initial estimates from one analyst said that hundreds of thousands of machines may be affected.
Re:Ok, but... (Score:2)
The third slot just adds enough noise to use up all the margin, and "bad things" happen. BTW, very carefully designed boards work just fine at 400MHz with 3 slots; it's the fringe cases which are the problem. Unfortunately, the fringe cases are significant enough to make it not worth shipping. It's not unlike overclocking... it will usually work, except when you really, really need it to.
Adding a terminator pack as the third RIMM would indeed work well, but there are three problems, all non-technical:
1) User education: a user has to know that the third slot can't be filled with RAM, causing headaches for OEM support teams.
2) The additional terminator card adds cost.
3) The chipset was spec'd to use three RIMMs, and two don't provide enough memory.
Not surprising, really. (Score:2)
No one in their right mind would be deploying RDRAM or i820-based boards in server or other mission-critical applications at this early stage of development anyway - it simply hasn't been proven to work reliably yet. And given what little benchmarking I've seen comparing performance with AMD's and existing Intel chipsets, for home and gaming use the supposed performance improvements seem more a promise for the future than a current reality, rather like AGP before AGP 2X was implemented. With the caveat that this could, of course, be due to the use of preliminary or experimental BIOS versions in the machines being benchmarked.
Advice: wait until it has been proven to work, before jumping onto the Rambus.
Re:Ok, but... (Score:1)
Re:No one sensible was planning to use RDRAM anywa (Score:2)
I'm very glad it's blowing up in their monopolistic faces.
Alternatives are looking more and more attractive every day.
"The number of suckers born each minute doubles every 18 months."
Re:So little old me doesn't count? (Score:1)
But on the other hand, the probability of power fluctuations or ordinary software bugs destroying data is probably a couple of orders of magnitude greater than that of RAM being at fault. In my experience, particularly if Microsoft software is being used, system failures tend to be so common that real memory problems almost aren't worth investigating, or are impossible to detect because they're concealed down in the "noise". I've been told "Uh, our memory is only guaranteed to work with Microsoft products" when trying to return defective RAM that generated repeatable Signal 11 errors compiling Linux kernels. The "flaky" merchants involved were depending on blaming Microsoft software for their own defective products.
The only thing close to a solution is to use a good UPS, "backup, backup, backup", take computed results with a grain of salt, and retain those backups for a long time. Admittedly this won't fix the "game going zing" problem, but if this is important enough to you I suggest saving the game's state regularly. Provided the game lets you do so.
All life is a risk, when you come right down to it.
Re:Ah, this is so nice.... (Score:1)
Sigh... why didn't Intel 'get it'. (Score:2)
It was obvious for some time that the 820 and RAMbus were a bad idea. I was really hoping Intel would come to its senses and move it to PC133 - for instance, the 810 performs well... if you use a real video card.
Alas, now that means the early PC133 systems are going to be paired up with the 810e, which, if they didn't add ECC support and external AGP support, is gonna be lame. (The Taiwanese don't like the 810 much either.)
I wonder how this is going to affect Micron's 533B/600B announcement - since they decided to opt for the VIA PC133 chipset. :)
Oh, and you can actually use a Celeron with the VIA Apollo Pro 133. Heh.
Re:So little old me doesn't count? (Score:1)
Darn, those memory dealers are irresponsible. Even Fry's doesn't pull that junk, usually. (But I wouldn't put it past them)
Tip on Fry's RAM: The stuff they advertise is junk, and the good stuff is typically priced a bit high.
Re:C|Net's bad math (Score:2)
(bx) 1024MB / (new rambus) 512MB = 1/2 the mem
(bx) 1024MB / (old rambus) 768MB = 3/4 the mem
(old rambus) 768MB / (new rambus) 512 = 2/3 the mem of old rambus.
More info, critique of Rambus (Score:2)
Analysis from InQuest, including Dell Office+Rambus benchmarks [inqst.com]
A performance comparison of contemporary DRAM architectures. [umd.edu] Vinodh Cuppu, Bruce Jacob, Brian Davis, and Trevor Mudge. Proc. 26th International Symposium on Computer Architecture
(ISCA-26), pp. 222-233. Atlanta GA, May 1999.
Or you can pick up Intel documentation on it here [intel.com] and here.
--LP
More Rambus Info .... (Score:5)
Here's the skinny - (the latest) Rambus transfers data on a 16-bit bus at up to 800MHz (a 400MHz clock with data moving on both edges) - that's 1.6GB/sec/channel. PC133 moves data on a 64-bit bus at 133MHz - that's 1.064GB/sec. EV6 (K7) moves 64-bit data on a 200MHz bus - but is limited by having a traditional memory backend (SDRAM performs as above; 100MHz DDR would give the full 1.6GB/sec).
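Those peak numbers can be sanity-checked with a few lines of arithmetic (peak theoretical rates only; 1 GB is taken as 10^9 bytes to match the figures above):

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# These are peak theoretical rates; real throughput is lower.

def bandwidth_gb_s(bus_bits, transfers_per_sec):
    """Peak bandwidth in GB/s (1 GB = 1e9 bytes here)."""
    return bus_bits / 8 * transfers_per_sec / 1e9

rambus = bandwidth_gb_s(16, 800e6)   # 16-bit channel, 800 MT/s (400MHz DDR)
pc133  = bandwidth_gb_s(64, 133e6)   # 64-bit DIMM, 133 MT/s
ddr200 = bandwidth_gb_s(64, 200e6)   # 64-bit DIMM, 100MHz double-pumped

print(f"Rambus channel: {rambus:.3f} GB/s")  # 1.600
print(f"PC133 SDRAM:    {pc133:.3f} GB/s")   # 1.064
print(f"100MHz DDR:     {ddr200:.3f} GB/s")  # 1.600
```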
Rambus has a major downside - slightly higher latency to memory (this has been somewhat mitigated in the latest RamBus incarnation).
It also has a really major upside - memory granularity. As memory densities go up, if you don't need to increase your total memory you can use fewer and fewer Rambus DRAMs and still get the same performance - and you can upgrade at the chip level, rather than with SDRAM, where you must upgrade in increments of a whole SIMM. For low-end PCs this will start to become important (unless M$ manages to waste another 64Mb in Win2K).
Another Rambus advantage is that it can handle many more parallel transactions (especially overlapping RAS precharge and sense operations, because it can have a lot more independent banks in memory). Today this isn't so important (except maybe for graphics operations), because to get a big advantage from this you need a lot of concurrent transactions in the memory system, and today's CPUs, sitting on bridges on the other side of buses, don't see as much as you'd like. Put it directly on the CPU and there's much more scope for performance increases - especially with ISAs like IA64, where much more memory-system parallelism is exposed to the programmer.
On the purely physical side there is one other major Rambus downside, mostly because they're pushing the envelope with respect to clock speeds: building working memory systems is harder. PC board tolerances are much tighter (including on the RIMMs), and the bus interface is really analog rather than the digital one most designers are used to. My experience has been that it takes more chip debug time to get a working, reliable Rambus system; you have to fiddle and tweak a lot until you have a robust, workable system. I have no inside knowledge, but I suspect that Intel is working through exactly these issues.
Opportunity for AMD (Score:1)
Intel is not going to have copper-interconnect chips 'til mid-2001 (their number, and their numbers have been known to slip), while AMD's Dresden copper plant should be running long before that.
Intel has, to their credit, invested in revolutionary architectures (Merced, Rambus) while AMD has been pushing 'conventional' architectures harder. At least in the short run of the next year or so, it looks like AMD's approach is better. And, at least in my business, a year is the half-life of a computer.
Perhaps I'm reading too much into Intel's announcements, but their dribbling out of delay and bug announcements seems calculated to keep people from moving to other platforms; to keep them hanging, waiting for the Intel solution that's just around the corner. I, for one, am buying an AMD machine today :)
thad
Some educated guesses... (Score:5)
RAMBUS is still a bleeding-edge technology. Signal integrity is a major headache, and Intel has the unlucky fortune to be the first to try it; there are bound to be problems.
RAMBUS is not serial at the physical level; it is a parallel channel with a 16-bit width.
To offset the relatively small bus width, a super fast clock is used. Currently I think the i820 is designed for 600MHz, but I'm not 100% sure on that.
RAMBUS is a packet-based scheme, where multiple transactions can be pending at any given point in time (similar to TCP/IP). This allows non-blocking memory accesses and better bus utilization.
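A toy model (nothing like the real Rambus protocol) of why multiple pending transactions help bus utilization. The latency and transfer numbers below are invented for illustration:

```python
# Hypothetical numbers - chosen only to show the shape of the effect.
LATENCY = 50    # ns from request to first data
TRANSFER = 10   # ns to move the data itself

def blocking_time(n_requests):
    # Blocking bus: each request pays full latency + transfer
    # before the next one can even start.
    return n_requests * (LATENCY + TRANSFER)

def pipelined_time(n_requests):
    # Pipelined bus: requests overlap, so after the first latency
    # the data phases stream back to back.
    return LATENCY + n_requests * TRANSFER

print(blocking_time(16))   # 960 ns
print(pipelined_time(16))  # 210 ns
```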
Intel's current RAMBUS implementation might not be cost effective over SDRAM, but given a year or two probably will be (if they can get past the current problems).
I don't know what you guys are thinking of when you say RAMBUS sucks. RAMBUS isn't another USB (yuk) -- it is an extremely well thought out technology that is (admittedly) somewhat ambitious. RAMBUS scales much better than any existing memory technology. I see people bitching about RAMBUS, but SDRAM will probably max out at 150 MHz next year.
Please understand that Intel != RAMBUS. Don't hate RAMBUS because you hate Intel; Rambus was founded as a separate entity by two very smart guys.
Intel's problem is probably not with the i820 itself, but the way the RAMBUS signals behave on the bus. They have noise and termination problems, which are very similar to SCSI problems. This is why the last slot on the bus is problematic -- having a RIMM or nothing at all makes a big difference to the signalling scheme.
This problem will not affect SDRAM, even with an i820 (unless there is a different problem I don't know of).
Explain RAMBUS tech...? (Score:1)
If this is correct, then is the third slot problem being caused by what is essentially a non-termination of the bus - similar to what can happen if a SCSI chain or 10bT chain is left unterminated?
Or do I have this completely wrong?
s/c/s/ (Score:2)
Minor quibbles... (Score:3)
I don't know about SDRAM maxing out at 150MHz. Word is that Apple is working on DDRAM, which uses both clock edges, as well as running it at 266MHz. Now this may just be double the rate at 133MHz, or double the rate at 266MHz, I don't know. Link is
http://www.macosrumors.com/8-99.html
But otherwise I have to agree with most of your points.
-AS
Re:Explain please? (Score:3)
RDRAM is also "multi-symbolic", meaning that there are multiple transactions on the bus at any one time. The clock speed is faster than the propagation of the signal, so it's possible to have multiple pieces of data on the bus at the same time. This allows higher speeds; higher even than the 400MHz clock (800MHz data transfer rate) that RDRAM uses today.
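A rough sanity check of the "multiple pieces of data on the bus" claim. The trace length and propagation velocity below are assumptions for illustration, not Rambus specs:

```python
# How many symbols can be in flight on one wire at once?
C = 3e8                    # m/s, speed of light
v = 0.5 * C                # assumed propagation velocity on FR-4 board
length = 0.30              # m, assumed end-to-end channel length
data_rate = 800e6          # symbols per second on each wire

flight_time = length / v   # time for a bit to traverse the channel
bit_time = 1 / data_rate   # duration of one symbol on the wire

# If flight time exceeds the symbol time, more than one bit is
# physically on the wire at the same moment.
print(f"symbols in flight per wire: {flight_time / bit_time:.2f}")  # 1.60
```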
I heard someone else mention DDR SDRAM... this may work in a server, but it requires a huge amount of power, which in turn means cooling, which in turn means cost. It's not terribly suitable for a general desktop. RDRAM manages power better.
While RDRAM may or may not be the future, SDRAM and derivatives are definitely not. They simply cost too much to scale to higher throughputs. Intel tried to move to something better, but got burned because it was too technically difficult for a generic OEM to produce. Oops.
Re:Advantages of RDRAM (Score:2)
But in overall production costs, ignoring the price premiums tacked on the price by companies, RDRAM is more cost-effective. That is, RDRAM is actually cheaper to produce. This alone makes it attractive in the long run.
Dunno where you get this. I work with memory manufacturers and the lowest quoted die-size penalty for Rambus is 20% over DDR SDRAM. If you talk to the engineers instead of the spokesuits the number is more like 40%, and the yield is quite a bit lower.
Rambus advocates will argue over the amounts of the price premium and the performance changes relative to PC100 SDRAM. Based on whether you highball the performance and lowball the price adder or highball the price adder and lowball the performance you can either conclude that RDRAM is the inevitable next mainstream memory or that it's DOA, but nobody in the trade is claiming that RDRAM is cheaper to produce than SDRAM or DDR.
? (Score:1)
c/memory can be lost/data can be lost/ (Score:2)
Delayed again? This is starting to hurt! (Score:2)
Buy AMD shares... (Score:1)
Re:c/memory can be lost/data can be lost/ (Score:2)
Damnit, I lost 8 megs, oh wait, its wedged between a PCI slot and an AGP...
I820 bugs (Score:1)
feedback chip? (Score:1)
Re:That's not the problem... (Score:1)
What about i820 sdram boards? (Score:1)
When life gives you Rambus.. (Score:1)
More specifics needed (Score:2)
Has anyone seen any other reports with specifics on the problem? A problem like this would cause severe system instability, and basically make any machine built on top of it worthless. Since according to the article, some manufacturers have opted to ship computers with the problem, either 1) the author of the original CNET article got it wrong, or 2) computer manufacturers have a severe lack of ethics.
That's not the problem... (Score:2)
C|Net's bad math (Score:1)
Good. (Score:1)
This is what happens when Intel tries to force the industry in one direction when there's a better short term alternative (DDRSDRAM) because Intel has invested in the technology it wants to make the standard. I'll really be disappointed if this doesn't translate into gains by VIA, AMD, and some of the other smaller players.
Half of current BX/GX limits (Score:1)
Re:Explain RAMBUS tech...? (Score:2)
Re:More specifics needed (Score:3)
Re:C|Net's bad math (Score:2)
512 is 2/3 of 768, which makes sense if you can only use 2 slots instead of 3.
So little old me doesn't count? (Score:1)
I for one don't want my financial and other records screwed up. I don't want my email and newsgroup archives lunched. I don't even want my games going "Zing" at odd moments.
This sounds like a laughably crappy job of engineering to me.
Re:Explain please? (Score:1)
For comparison, a 64-bit PC133 DIMM transfers 1066 MB/s with less latency. A PC200 DDR DIMM transfers 1600 MB/s with latency a hair less than the PC133, and a PC266 DDR DIMM transfers 2133 MB/s at about the same latency as the PC200.
HTH. HAND.
Re:Explain RAMBUS tech...? (Score:2)
Re:More specifics needed (Score:2)
The MB manufacturers build their motherboards to spec but now it turns out that the Camino needs a little more "eye opening" (data good window) than was in the budget. Since even empty slots cause some reduction in eye opening, the situation isn't improved by plugging a blank in the empties (Rambus is a ring topology, so you can't have totally empty slots.) In fact, a full slot might improve matters by resyncing everything.
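A back-of-the-envelope sketch of how a fixed per-slot eye closure can eat the whole timing budget at three slots. Every number below is invented for illustration; the real budget lives in the chipset and Rambus design guides:

```python
# Toy "data good window" budget, all values hypothetical.
bit_time_ps = 1250        # 800 MT/s -> 1250 ps per symbol
jitter_ps = 250           # clock + driver jitter (assumption)
skew_ps = 300             # routing/termination skew (assumption)
per_slot_loss_ps = 150    # eye closure per slot, loaded or empty (assumption)
required_eye_ps = 400     # window the receiver needs (assumption)

def eye_ps(slots):
    """Remaining eye opening after budget deductions for `slots` slots."""
    return bit_time_ps - jitter_ps - skew_ps - slots * per_slot_loss_ps

for slots in (1, 2, 3):
    ok = "OK" if eye_ps(slots) >= required_eye_ps else "FAIL"
    print(f"{slots} slot(s): eye = {eye_ps(slots)} ps -> {ok}")
# 1 slot(s): eye = 550 ps -> OK
# 2 slot(s): eye = 400 ps -> OK
# 3 slot(s): eye = 250 ps -> FAIL
```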
Re:What about i820 sdram boards? (Score:1)
Re:Advantages of RDRAM (Score:2)
I have to admit the numbers I have are from "spokesuits" (trade publications). If the lowest quoted die-size penalty is 20% over DDR SDRAM, the yield should not be significantly lower in the long run. DRAM processes are relatively mature, if we consider that DRAMs have been around for a while.
I am not a RDRAM advocate per se. But I like RDRAM because it is a new approach - a fresh, welcome improvement for solving the memory bottleneck problem. It looks like ordinary DRAM can only be improved so much, while with RDRAM, there seems to be more opportunity for further improvement.
Thanks for the useful information. If you don't mind, may I ask you what kind of work with memory manufacturers you are involved in? Based on some of your previous posts, I guess you are more involved in process technology rather..
WRONG! (from DRDRAM designer) Re:Advantages... (Score:2)
The Chipset (Score:1)
I think Intel should have just modified the trustworthy BX chipset to properly support 133 and come out with a chip that works correctly with RAMBUS and the motherboards.
Abit BP6 (Score:2)
I highly recommend this motherboard, it's nice and reliable and I LOVE the dual celerons. YMMV of course, but all the reviews I've read say about the same thing.
Re:Abit BP6 (Score:1)
You're right; the BP6 takes normal or ECC SDRAM. I've got one of these, with 256 MB of ECC and two 366 MHz Celerons, which I've successfully overclocked to around 420 MHz -- total cost on the order of $700, about half of that for the ECC.
No one sensible was planning to use RDRAM anyway. (Score:4)
RDRAM is a waste of time and money. Why bother converting to an unproven memory technology when PC133 and PC150 SDRAM give much better bang for the buck with today's technology, and are even faster according to most benchmarks.
I understand why Intel has to go with RDRAM. It's part of their legal obligation with Rambus to push RDRAM into the marketplace until the end of year 2002 - http://www.theregister.co.uk/990906-000003.html - but that doesn't mean we the public have to even acknowledge that the 820 chipset exists. Go VIA or stick with the tried, true, overclockable 440BX and supercool those AGP cards.
It amazes me that Intel could have been suckered into this RDRAM quandary. They should have known better, or at least have had the backbone to tell the Rambus guys that they're nuts to release a slower alternative to SDRAM, and waited until RDRAM was ready before forcing it on the industry.
Just my $0.02
Re:Abit BP6 [drifting wildly offtopic] (Score:1)
I paid $140 for 128 megs of ECC when I got my BP6 a couple months ago. The same vendor who sold me that memory wants $285 for it today. The price has more than doubled in around 60 days.
$%*(@#&%*.
--Sync
intel (Score:2)
Advantages of RDRAM (Score:3)
But in overall production costs, ignoring the price premiums tacked on the price by companies, RDRAM is more cost-effective. That is, RDRAM is actually cheaper to produce. This alone makes it attractive in the long run.
There are several good papers about comparisons of modern DRAM architectures, which highlight this point. The more technically oriented among you might want to take a look at the following:
A Performance Comparison of Contemporary DRAM Architectures [umich.edu]
Err - I stand corrected... (Score:1)
Ok, but... (Score:1)
Or is it something having to do with "length of the chain" - ie, RAMBUS was only designed for two devices on the chain, but someone thought it would be cool to add a third (Intel?) for extra memory, found that it worked (most of the time), but then found it really wasn't that stable in the end?
As far as fixing the problem with the current motherboards - I was thinking like a termination pack that would go in the slot. Does this sound right? I have a feeling it wouldn't help the matter any (might even make it worse)...
This RAMBUS tech is weird - can anyone point me to a more in-depth explanation as to how it works (spec-sheets?)...
Re:Explain please? (Score:1)
(hint, hint, hint)
Re:s/c/s/ (Score:2)
For a single ferrite core to fall out, not only would both the X and Y address lines which were pulsed to read/write the core have to be broken, disabling a large number of bits, but also the sense wire, which was threaded in a serpentine manner through all the cores in a memory bank, rendering the entire bank unusable.
Spare banks may have been put in to account for defective cores or wiring errors, but I've never heard of a core memory system with the ability to switch to new banks in the field without the help of a repair technician.
Re:Explain RAMBUS tech...? (Score:1)
The packeting scheme is much simpler than IP (these days it's not even really a packeting scheme in the sense that earlier Rambus protocols were), and NOT collision-based (although they did flirt with that in very early versions) - the memory controller is the master and controls all the bus usage and bandwidth.
Re:s/c/s/ (Score:2)
I think it came up in one of the "worst equipment of all time" type discussions, along with the ibm strip reader that shuffled when it put the magnetic strips back . . .
Re:Explain please? (Score:2)
I heard someone else mention DDR SDRAM... this may work in a server, but it requires a huge amount of power, which in turn means cooling, which in turn means cost. It's not terribly suitable for a general desktop. RDRAM manages power better.
Interesting observation, but I'm curious where you get the power comparison. RDRAM certainly has more power-management features than DDR but that doesn't mean it uses less power. The proof would be in actual power-usage comparisons.
FWIW SSTL-2 drivers run about 8 mA/bit during active signaling, for a total across 64 bit memory of about 512 mA. RDRAM burns 25mA per bit per node times 16 bits times 2-4 nodes for a total signaling current of 800-1600 mA.
WRT on-chip power the memory arrays, sense amps, etc are all standard in both cases. There is a difference in the DLL (DDR's DLL can be run without static power) and I/O circuits (Rambus runs current-mode signaling and therefore has some analog circuitry with attendant static currents in the RAC.)
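Recomputing the signaling currents quoted above. The mA/bit figures are the poster's numbers, not datasheet values:

```python
# SSTL-2 (SDRAM/DDR) signaling current, per the figures above.
sstl2_ma_per_bit = 8           # mA per line during active signaling
sstl2_bits = 64                # interface width
sstl2_total = sstl2_ma_per_bit * sstl2_bits
print(f"SSTL-2 (64-bit): {sstl2_total} mA")   # 512 mA

# RDRAM current-mode signaling, per bit per bus node.
rdram_ma_per_bit_per_node = 25
rdram_bits = 16

def rdram_current_ma(nodes):
    """Total RDRAM signaling current for `nodes` bus nodes."""
    return rdram_ma_per_bit_per_node * rdram_bits * nodes

for nodes in (2, 4):
    print(f"RDRAM ({nodes} nodes): {rdram_current_ma(nodes)} mA")  # 800, 1600
```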
(OT) Re:Advantages of RDRAM (Score:1)
Thanks for the useful information. If you don't mind, may I ask you what kind of work with memory manufacturers you are involved in? Based on some of your previous posts, I guess you are more involved in process technology rather..
I'm an I/O designer and signal-integrity engineer with a semiconductor company (thus the handle); I help out with some of the JEDEC memory standards work and the memory manufacturers there.
BTW, I had lunch today with an engineer from Intel working on this problem. He says it comes from crosstalk among the data lines and (although he was diplomatic and didn't say so directly) appears to be inherent in Rambus rather than being the Camino chip per se.
Camino delayed indefinitely (Score:1)
Don't forget x86.org (Score:3)
Chuck
c't says they could not see any stability problem (Score:2)
HOLY DAMN! (Score:1)
Explain please? (Score:2)
Aren't current DIMMs 64 bits wide and running at 100MHz? Wouldn't that mean that Rambus memory would need to run at 6400MHz to match the throughput we have right now? Or are they mounting many banks of RDRAM on a single module and running those in parallel at the speed of the CPU? But then we get back to running the memory in parallel, but faster, presumably creating more errors?
Can someone who knows more please shed some light?
Thanks
Who else to blame? Computer companies. (Score:2)
I know that the computer catalogs my company works on have been hyping the advantages of Rambus / RDRAM for a little while, based on the input given us by our client. Since they're presumably in a position to know the truth of such claims / realism of the release schedules (and since they're the client), we don't have a lot of choice about it.
So the question might have been facetious (about 'who else but Intel to blame?') but
cheers,
timothy
Which would be cheaper for Intel? (Score:1)
Re:What about i820 sdram boards? (Score:1)