Hardware

What Will Be The Next Generation Of RAM?

Wister285 asks: "I've been hearing a lot about new RAM technologies. Two of the main new forms seem to be RDRAM and DDR SDRAM. Less well known at the moment is MRAM (magnetic RAM that works more like a hard drive than conventional volatile memory, which means that RAM memory is never erased until the computer says so, even through power-offs). MRAM seems to be the best form of RAM, but it might not be out for another year or two. With these three choices, which will be the next generation of RAM?"
  • Does anyone have a clue how flash works? The memory cards for my camera seem to store images indefinitely, even without power.

  • Some links explaining the different technologies might be nice.

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • ...but what about latency? Who cares if you get 1600MB/second if it takes longer to access a given word in memory? What are the latency times like for Sun stuff, which, as far as I know, uses the same basic chips as cheap PC RAM does?

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • I don't think the limitations lie with RAM itself so much as with its implementation. Look at what other companies like SGI and Sun have accomplished with RAM simply by providing a better back-end architecture - both use a crossbar switch internally to provide much greater memory bandwidth.

    Intel is struggling with 800MB/s? Sun's latest is at 1.9GB/s and SGI's is around 1.6GB/s. Maybe we should optimize what we've got more - without obsoleting everything we already have like RAM.

    Sun hardware goes so far as to allow you to use SIMMs from the SparcStation 20 in the Ultra 2 and even the latest Ultra 60 and Ultra 80 systems!

    Obviously, RAM isn't quite the bottleneck that Intel and others would lead you to believe.

  • If you know what you are doing, though, you can get under the hood and get your hands dirty.

    And my uncle who runs a donkey farm says the thing really hauls ass.

    ducks and runs for cover
    --
  • Any OS worth its salt will zero out (or otherwise erase) memory it hands to a user to prevent security problems.
    --
  • It really is a nice option; it's just that you'll want two things:
    • the BIOS must include an option to erase memory, in case the memory contents are screwed up
    • chipsets/CPUs need to have crypto on-board, to prevent nasty people from doing a post-mortem debugging session on your RAM. My suggestion would be to have another field for every (MMU) page containing the page key, or at least a crypto bit to make it optional, so part of the warm boot could work without a key but for the rest you'd have to type a passphrase (a toy sketch of the idea follows below).

    Cheers,

    da Lawn
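
    Purely to illustrate that per-page-key idea, here is a toy sketch in C. Everything in it is invented for illustration (the struct layout, the field names, and especially the XOR "cipher") -- a real implementation would live in the memory controller/MMU and use a real block cipher keyed from the user's passphrase:

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096

    /* Hypothetical per-page bookkeeping: a key field plus the optional
     * "crypto bit" from the suggestion above. */
    struct page_entry {
        uint8_t *frame;       /* the page's contents in non-volatile RAM */
        uint64_t page_key;    /* per-page key (would be derived from a passphrase) */
        int      encrypted;   /* crypto bit: is this page protected at all? */
    };

    /* XOR is NOT a real cipher; it only marks where a cipher would sit.
     * Conveniently it is its own inverse, so one routine does both jobs. */
    static void xor_page(struct page_entry *pe)
    {
        for (size_t i = 0; i < PAGE_SIZE; i++)
            pe->frame[i] ^= (uint8_t)(pe->page_key >> (8 * (i % 8)));
    }

    void scramble_before_poweroff(struct page_entry *pe)
    {
        if (pe->encrypted)
            xor_page(pe);     /* contents persist in MRAM, but scrambled */
    }

    void unscramble_after_passphrase(struct page_entry *pe)
    {
        if (pe->encrypted)
            xor_page(pe);     /* same key, re-derived from the passphrase */
    }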

  • This also makes creating a hibernation function much, much easier - no more need for a large image file on your harddisk, just let the BIOS know it should *not* erase memory contents after next reboot.

    Well, there is actually a function in the BIOS for this, originally intended to let 286s get out of protected mode, because the CPU actually had to be rebooted to do that. I remember playing with it back in my assembly days to see what I could do with it :).
  • I don't have anything to say on the subject, but am interested in the effect of non-moderated comments on my karma ... does my +1 for registered user count as a +1 moderation? Let's find out, shall we?

  • Matt Ownby spewed thusly:
    What would happen if a virus was loaded into your memory and you wanted to shutdown and wipe the virus from memory, but your memory was permanent? I don't see that as a good thing at all.

    ... which shows why people sans clue shouldn't use computers.

    /* needs: #include <stdlib.h>, <fcntl.h>, <unistd.h> */
    int main(void) { char *c = malloc(4096); int fd = open("virus.bin", O_RDONLY); read(fd, c, 4096); return 0; }

    OH NO! MY COMPUTER HAS A VIRUS IN MEMORY! AAAH!

    Here's a free clue for the clueless: memory is useless unless something refers to it. If you "reboot" a computer without powering down, the RAM isn't cleared (until the BIOS walks it). Not that it matters, since until something actually jumps to that memory location it never gets executed. What'll happen to your "virus in static RAM!"? It'll get overwritten by w0rd 2005 when it uses 3-4 gigs of system memory, of course. Duh.

    Do they actually TEACH you anything in school anymore?

    As for the people that think that powering their computer down is safe... Hah! Only if you're sure nobody gets to it for 20 minutes. With something more sensitive than a modern motherboard you can read bits off a chip for quite a while after power-off. Not that it's practical yet (the gear isn't portable, so they'd have to get your SIMMs to a lab within 10-15 minutes), but don't expect that to last forever.

    At least memory isn't as bad as harddrives... when you overwrite memory it basically stays overwritten. Drives have some nasty ghosting of previous data that can be seen at high resolutions.

    Besides, any security-conscious app rewrites "critical" memory anyway. None of the OSs I've used zero memory before allocating it to a new process... it's actually quite entertaining to malloc a few megs and read through it. memset(0) is so simple. Learn it. Love it.

    --Dan
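
    A minimal sketch of the "rewrite critical memory yourself" habit described above (nothing here is specific to any OS; note that some compilers may optimize away a memset right before free, so treat this purely as an illustration):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 4096;
        char *secret = malloc(len);   /* may contain another process's leftovers */
        if (!secret)
            return 1;

        /* ... use the buffer for a passphrase, key, whatever ... */

        memset(secret, 0, len);       /* scrub it before giving it back */
        free(secret);
        return 0;
    }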

  • Who would have thought 30 years ago that we'd all be running a Unix-like operating system on machines with magnetic core memory?

    Mmm, magnetic core. Core wars. Non protected mode. God, those were the days.

    Anyone have a good place to send the kids to show them what CORE really was? Most of them have no idea what drum memory was...

    --Dan

  • Nothing but full speed static RAM. Yum. Using what the rest of us would call L1 cache as main memory. Now THAT had some throughput. I think the research ought to go into making this more economical

    Um...any efforts at making SRAM more economical would have the side effect of making DRAM more economical. Each SRAM bit is implemented with 4-6 transistors of different types, whereas each DRAM bit is implemented with one n-type MOSFET plus a storage capacitor. That's a huge decrease in size, and is why people put up with awkward timing schemes, address strobes, and pesky refreshes to use DRAM.

  • I think it's hard to say what the next RAM will be. With new things coming out every day, we can't really predict it. We may look at the current possibilities and say that one looks like the best technology, but tomorrow any company could put out a new type of RAM that revolutionizes the market.

    It's just a matter of looking at the past. Everyone thought that the old add-in RAM cards that you put in your ISA slots to add another meg (remember those days? *shudder*) would last forever. The cards would get bigger and the on-board chips would get larger, but nobody could really have said that SIMMs would take over until they came out and suddenly appeared everywhere. I think the next generation of RAM will be the one that nobody sees right now, the one that is in development in some company's basement, just waiting to be released. Sorta like DUST PUPPY!
  • Nothing but full speed static RAM. Yum. Using what the rest of us would call L1 cache as main memory. Now THAT had some throughput. I think the research ought to go into making this more economical.
  • The ability to erase all your RAM to me is like "starting fresh", similar to rebooting Windows to regain some temporary stability.

    No Windows, no problem. Same old story...

    It wouldn't be hard to park Linux "nicely" within a few milliseconds, running on power from the capacitors in the power supply just long enough to do this. When the machine is re-powered, Linux can simply reinit devices a la Two Kernel Monte and then pick up where it left off. That and journalling filesystems equals reliability heaven.
  • by Guy Smiley ( 9219 ) on Tuesday August 15, 2000 @09:24PM (#852460)
    One of the problems with flash memory is that it has a limited lifespan, in terms of the number of writes it can do. The lifespan should be in the hundreds of thousands of writes, but if they did something bad like put a regular filesystem on the flash and update the size of a file a few hundred times while it was being uploaded to the Rio, one part of the flash (e.g. block 0) would wear out quickly.

    One of the ways you can avoid having a problem like this is to use a log-structured filesystem, which simply writes the data in one long loop around the device, rather than always starting at the beginning of the device. The exact details escape me, but the general idea is correct.

    One of the new Linux filesystems, JFFS (journalling flash filesystem) does this, I believe. It was accidentally added to the 2.4 development kernel recently when one of the developers working on a flash driver submitted a patch to Linus, and forgot to remove the JFFS code from his patch... (Please, no flamewar about reiserfs here, there was enough on lkml already).
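
    To make the "one long loop around the device" idea a bit more concrete, here is a toy sketch (block size, device size, and function names are all invented; a real log-structured or journalling flash filesystem also has to track which blocks hold live data, garbage-collect stale ones, and keep a logical-to-physical map):

    #include <stdint.h>
    #include <string.h>

    #define BLOCK_SIZE 512
    #define NUM_BLOCKS 1024                      /* pretend flash device */

    static uint8_t  flash[NUM_BLOCKS][BLOCK_SIZE];
    static uint32_t erase_count[NUM_BLOCKS];     /* wear per block */
    static uint32_t head;                        /* next block to write; wraps */

    /* Append-only write: instead of hammering block 0 with every metadata
     * update, each write goes to the next block in the loop, so the erase
     * wear is spread evenly over the whole device. */
    uint32_t log_append(const uint8_t *data, size_t len)
    {
        uint32_t blk = head;

        erase_count[blk]++;                      /* each write costs an erase */
        memset(flash[blk], 0xFF, BLOCK_SIZE);    /* "erase" to all ones */
        memcpy(flash[blk], data, len < BLOCK_SIZE ? len : BLOCK_SIZE);

        head = (head + 1) % NUM_BLOCKS;          /* loop around the device */
        return blk;                              /* caller notes where it went */
    }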

  • If anyone recalls, memory used to be magnetic core -- there was a grid of fine wires, with small ring magnets that could flip. Apparently at one point it got quite dense (for its time).

    I wonder if this is where the name "core" came from in respect to *NIX systems.

    --
  • by Ryn ( 9728 )
    I propose PRAM: Paper RAM.
    You write information on paper, then stick it inside the computer. Later, when you need to retrieve it, you quickly grab the paper and read it out loud. Fast and cheap solution to everyday computing needs.
  • This means that RAM needs consistency bits like the ones used on hard drives to tell fsck that things were shut down cleanly.

    Someone mentioned that you could have the BIOS auto-detect when you purposely shut things down, or hit the reset button. Well, what happens if the BIOS is buggy and that function doesn't work? It's much better to have a bit that positively says memory is in a valid, cleanly-shut-down state than a flag that says you want to erase memory on startup.

    This would cause problems in power failure situations, but that could easily (and cheaply) be solved by having a capacitor bank 'UPS' that could keep the machine running for about 5 seconds or so while the OS went through the motions to put itself in a hibernating state.
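
    A minimal sketch of that clean-shutdown bit, with an invented magic value and an invented "reserved word in NV RAM" (in reality the BIOS and OS would have to agree on where it lives):

    #include <stdint.h>

    #define CLEAN_SHUTDOWN_MAGIC 0x600DB007u   /* invented marker value */

    /* Pretend this word sits at a fixed, agreed-upon spot in the
     * non-volatile RAM. */
    static uint32_t nvram_shutdown_flag;

    /* OS side: the very last thing done after flushing buffers and
     * parking devices (running off the capacitor bank if need be). */
    void mark_clean_shutdown(void)
    {
        nvram_shutdown_flag = CLEAN_SHUTDOWN_MAGIC;
    }

    /* BIOS side: decide at power-on whether memory contents can be trusted. */
    int memory_is_trustworthy(void)
    {
        int clean = (nvram_shutdown_flag == CLEAN_SHUTDOWN_MAGIC);
        nvram_shutdown_flag = 0;    /* re-arm: a crash leaves it cleared */
        return clean;               /* if not clean, wipe RAM / do a full boot */
    }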

  • Umm... I think you should read about how OSes actually work before you post again and embarrass yourself further.

    Programs free up memory all the time and it's cleared by the OS and given to other programs. That's part of the virtual memory subsystem. It's been that way for years and years. The only commonly used OS that didn't do that that I know of in the past 15 years is MS-DOS.

  • Well.... When you boot up, your computer starts up from ROM, reads some stuff from disk into memory, runs that, etc. So it doesn't really matter what was left in the RAM from the previous session; it gets overwritten. And viruses tend to infect stuff on your harddrive anyway. Why bother with the memory?

    Actually, now that I think of it, if you can count on the ram contents being unchanged after a power cycle, you could just more or less continue where you left off when you turned off the computer. Sort of like normal hibernation, except way faster, because you don't have to save to/restore from disk. Boot up in like a second!

  • On a related note: No, non-volatile memory will not help your computer boot faster. Why do you reboot your computer? Because 1) something is messed up and you need to reload the contents of your RAM to fix the problem or 2) because a configuration change was made, and the reload is needed.

    3) You want to get some sleep and the thing is making noise!

    About 90% of the time when I boot up it is because the computer was off, not because I rebooted it. If instead I could just hibernate it, that would save me a lot of time.

    I think quite often people just leave their computer on all the time because it takes so much time for the thing to boot up again... With MRAM and an appropriate BIOS + OS support, by the time the monitor would finally be fully awake, the system would be up and running.

    It will however offer a lower power sleep mode.

    Actually it will power off completely. Imagine the improvement in notebook battery life if you could just turn it off completely whenever you don't need it for a few minutes, without having to wait for ages till it booted up again. (Even save to/restore from disk takes some time.)

  • Scientific American [scientificamerican.com] had an article about MRAM a while ago. You appear to be confusing two things.. RDRAM and DDR DRAM are about the way the ram communicates with the rest of the computer, while magnetic ram is a different storage technology on the memory chips themselves, like SRAM, and thus could be used instead of DRAM on both Rambus and regular DDR memory modules.
  • but think about using an optical interconnect rather than copper wires. No cross-talk, and you could actually send those insanely high-frequency signals the handful of inches between the memory and the CPU with nearly zero latency.

    Drool....

    Some guy who works across the hall tried using optical interconnects, and got the performance of main memory up to nearly L2-cache levels. Xeon, we don't need no stinkin' Xeon :)
  • He wasn't using standard optical stuff. He was using these newfangled solid-state lasers/detectors that aren't available except custom-made for research. He determined the performance experimentally (on this really nifty 8-way SMP rig, btw) and saw that the performance of optical interconnects without an L2 cache was the same as with L2 cache and standard connections. Both setups also used some aggressive prefetch techniques that are again not available commercially.
  • Yeah, just like Golems who have Holy Words in their heads (read Terry Pratchett - Feet of Clay, it's real fun :)
  • by Idaho ( 12907 ) on Tuesday August 15, 2000 @10:34PM (#852471)
    Yes, we do.

    Look at it this way: you could program the BIOS to always erase the memory on POST, *unless* there was a power failure (modern ATX supplies can already detect this, I believe).

    So when you reboot on purpose, everything will be erased, but when power fails, you'll lose nothing!

    This also makes creating a hibernation function much, much easier - no more need for a large image file on your harddisk, just let the BIOS know it should *not* erase memory contents after next reboot.
  • PRAM == Paper RAM? No...
    PRAM is pr0n RAM. It's the next generation because it accesses your pr0n in current memory really, really fast. When you need it. You could probably even throw in some encryption to hide it from family or coworkers.
  • In the short term, DDR SDRAM is going to be the man of the day. DDR is an improvement on an existing technology (and quite an ingenious one, at that!). It's easy to work with and well-known, and since it works in lockstep with the CPU, it's easy to develop for.

    In the long term, however, we will see a transition to bus-based memory, such as RDRAM. (I personally don't think RDRAM will ever fly; some other incarnation of the same idea will likely spring up, a few years down the road.)

    Abstracting your core memory behind a memory bus gives the advantage that your chipset can talk to any kind of memory that supports the bus standard--it could be of any speed, implemented with any technology (for instance, holographic memory). Its disadvantage--and few people seem to realize this!--is that it's quite slow compared to SDRAM, where the chipset (and the CPU) has direct access to the data lines coming from the RAM.

    To compensate for this inadequacy, the makers of Rambus RAM pumped the ram bus's clock rate to some absurd speed--I recall hearing 400MHz mentioned. They should have realized that memory technology isn't sufficiently advanced yet, and left well enough alone.
  • Thank you for a very interesting and informative post!

    I'd say eventually the industry is going to have to give up the idea of expandable RAM, and change the entire architecture of the motherboard so that the CPU and main memory are moved off it, onto a daughter card, like the graphics card is now.

    The above, in particular, is extremely interesting. I can see it happening. Indeed, it would fit current trends. We had 30-pin SIMMs forever, but now you're lucky if you keep your memory across two CPU generations. So move all the fast-changing stuff onto a single expansion card, and keep the more stable PCI bus and basic I/O functions on a backplane/mainboard.

    I don't think traditional expandable RAM has to go away completely, though. I think the solution would be further extending the NUMA (Non-Uniform Memory Access) concept of cache memory. We've already got very-high-speed L1 and L2 cache. Say this CPU+high-speed-memory card you propose has N ultra-high-speed on-die L1 cache, N*16 super-high-speed off-die L2 cache, and N*(2^10) of very-high-speed, CPU-local RAM. Then have an expandable main memory system of merely high-speed RAM, slower, but expandable and much larger, say, N*(2^12). To fill in some example numbers:

    128 KB L1 cache
    2 MB L2 cache
    128 MB local RAM
    512 MB main RAM

    That way, you get the best of both worlds.
  • Nothing but full speed static RAM. Yum.

    The problem is, static RAM (SRAM) fundamentally takes up more space than dynamic RAM (DRAM). SRAM uses six transistors per memory cell (bit) where DRAM uses a single transistor and capacitor per cell. This means a much larger package. (It also means lower yields, since SRAM is harder to make.) After a while, trace lengths and the speed of light are going to get in your way.
  • optical interconnect rather than copper wires. No cross-talk, and you could actually send those insanely high-frequency signals the handful of inches between the memory and the CPU with nearly zero latency.

    Hmmmm. While the zero cross-talk is a big benefit, many optical systems actually have higher latencies than their copper counterparts. More bandwidth, but slower response time. This isn't a factor in networks, so fiber is the medium of choice there, but I imagine it would be a factor inside your computer.
  • So given that DRAM is a pain and requires a separate controller to work it, why do we use it? ... performance - for SRAM to change state, one gate has to change and the other gate follows it, so it takes twice as long for a state change. This is all approximate, of course.

    I'm confused. They use static RAM as cache memory, right? Because it is faster, right? So how can it also be slower?
  • I wonder if this is where the name "core" came from in respect to *NIX systems.

    Dead on. Unix geeks often refer to a program loaded into memory as "in core" for this reason.
  • Interesting. I'd first point out that, to me at least, what you're proposing sounds a bit more like a gigantic (128 MB) L3 DRAM-cache than traditional NUMA. Of course, I don't know too much about NUMA ...

    I don't either, except some basic theory. But basically, good NUMA systems eliminate the duplication of data that traditional PC cache memory systems use. The hardware and OS know that some memory is faster than other memory, and put more frequently used pages in the faster memory.

    NUMA is traditionally used in very large multiprocessor systems (IBM, Sun, SGI) and clusters. SGI does a lot of this in their high-end number crunchers. Memory on the local processor node is significantly faster to access than memory on another node, even with a 2 GByte/sec backplane.

    True, it does require OS-level support for this, and the circuitry is more complex, and blah blah blah, but I think it is worth it for systems where expandability is important (e.g., high-end workstations, servers). I don't see it happening in the home PC market, though. You might as well stick with the "onboard" very-fast-ram then.

    And indeed, that would be the biggest problem with this scheme--the latencies.

    Well, yes, but without this system, the latencies are even worse. If the data isn't in cache or "onboard" RAM, then it must be on disk. And even 70ns FPM SIMMs are faster than paging to disk!

    But still, I think you'd find on such a system that upgrading the main DRAM--like enlarging your virtual memory swap file--wouldn't have the same effect on system performance as upgrading main memory does today.

    I'm sure, but I still think the performance and expandability is worthwhile, especially for something like a web server, where some data (e.g., database engine and indexes) are going to be accessed frequently, while other data (random static web content) is less important (and limited by the speed of the pipe in any event).
  • There are MANY very interesting problems that have two approaches:
    1) nasty heuristics, not guaranteed, but workable,
    and
    2) brute force, perhaps optimized, but still brute.

    and since there are so very many brute force problems, software approaches change in KIND as the hardware scales up.

    When I can take the time/space to do a brute force search on a problem, I can guarantee certain things about my answer, which is very valuable computationally.

    Translation: software is a gas, it expands to use all the space/time given to it, and it will continue to do so.

    If you disagree, well, I guess you won't be using any voice recognition software next year when it hits hard, because that is a clear example of the effect of increased resources.

    -- Crutcher --
    #include <disclaimer.h>
  • Well, my (non-coder) idea would be to have two separate areas of memory - volatile and nonvolatile. The VM would be used to execute programs and to store temporary data, while the NVM would be used more for storage and NOT executable code. While this doesn't help that much (virii can simply load from the NVM), it at least ensures that whatever was running isn't still running when you start back up. Of course, now there is no such thing as instant-on, as there would be if the entire memory map was NVM.
    _______
    Scott Jones
    Newscast Director / ABC19 WKPT
    • Of course this makes the die bigger, but as we get to smaller process technologies this isn't so much of a problem.
    Doesn't matter to me. I'd rather have a large processor casing with 128MB+ on-chip, running at full processor speed, than a smaller one with only 256KB-2MB. I wonder why nobody has tried this yet. Especially if AMD were to do something like this, it could blow Intel out of the water on performance. Caveat - current cache memory IIAC* is extremely expensive. But just imagine, 128MB of L1 cache :)

    * IIAC - If I Am Correct
    _______
    Scott Jones
    Newscast Director / ABC19 WKPT
  • chipsets/CPUs need to have crypto on-board, to prevent nasty people from doing a post-mortem debugging session on your RAM.

    I find your excessive concern for keeping secrets disturbing. You must be doing something illegal. That's why your computer (and all the other computers purchased with the "secure memory" feature) will, in fact, be equipped with a remote monitoring device which periodically broadcasts a memory dump and can be used to give you a paralysing shock through the keyboard.

    We're out to get you, you know. All of us.

    ---
    Despite rumors to the contrary, I am not a turnip.
  • ...are very difficult to make accurately.

    To begin with, may I just point out that most of what has been discussed here about what is, and isn't, possible is actually about what can and can't be manufactured economically.

    For example, Ferroelectric DRAM. Basically, a DRAM cell is a switching capacitor, so stick a ferroelectric in there, and the capacitor can be made smaller for the same charge storage. The best material to use for this is probably BST (Barium Strontium Titanate). This is difficult to deposit in a standard fab.

    It is easy (scientifically) to do. You just etch a flat surface on silicon and grow a layer by MBE (Molecular Beam Epitaxy), or deposit a layer by MOCVD (Metal Organic Chemical Vapour Deposition). Problem is, getting these to work on silicon is expensive. It can still be done.

    I spend my time surrounded by cutting-edge scientific research. Every day I see things that most people would consider impossible, or miraculous. For example, I have seen pure [0] aluminium as strong as steel. That's not specific (per-weight) strength. That's per-volume strength.

    Frequency-tuneable solid state lasers. Sure. Colour tuneable over half the visible spectrum, by rotating a part. Smaller than a drinks can.

    A slight digression there, but the point is that seeing what the future might hold is not too tricky for the next 3 or so years. After that, you need to look at the skunkworks projects, and then into the labs of the academics. Because that's where the future can be glimpsed.

    [0] A4N standard.
  • That's what a degaussing coil is for :) I've still got mine! You can make your shit minty-fresh in about 5 seconds.

    __

  • Generic x86 motherboards are fairly low performance, but are also very low cost (for the most part)... Note also that AGP 2x is essentially DDR (an effective 133MHz), and that there has been 64b/66MHz PCI for quite a long time now, but cheap boards and cheap cards won't take advantage of that (especially since you really need two PCI busses on a system then).

    Comparing MHz of a GPU, CPU and RAM module is generally fairly useless... saying that the GeForce2U only runs at 250MHz doesn't really mean a whole lot. A more effective comparison would be the throughput/bandwidth of the chip/memory modules, since that is more immediately relevant.

    --
  • Your disk storage (regardless of whether it happens to be mechanical or otherwise) will still take quite a bit more than 80ns to actually return data to the uProc, mainly because of the PCI arbitration, command, disconnect, reselection, etc. that all has to occur before the actual data gets transferred...

    There are solid state drives (HD form factor, basically filled with DRAM) that are rather fast, but one GB will set you back a few thousand $$. That also, of course, isn't preserved across power cycles, but for use as a large cache it's rather exciting (see the new pop favorites "Swapping out to disk never felt so good" and "Is My Entire Database in RAM?" and of course the new craze, "Oops, I cached it again").

    I gotta get more sleep...
    --
  • Good for making toast, too! 128MB of that has got to get pretty hot, methinks.

    --
  • as long as you don't have a 30GB - 1TB database, that's probably ok (unless you have a machine that has a few hundred GB of RAM... then it's a lot easier).

    --
  • WRT CMOS leakage:
    The trouble is that the transistors don't turn off completely. There are always some thermal carriers in the channel. If the transistor has a high threshold voltage, the minority carriers are extremely rare and leakage is small. As the threshold voltage goes down, the number of minority carriers increases and the leakage current rises. The ultra-small devices needed for ultrahigh density memory have to have really low threshold voltages for a lot of reasons, so they leak. A lot.
    (Speaking as a transistor-level CMOS designer.)
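
    A rough first-order picture of that trade-off (the subthreshold model and all the numbers below are ballpark illustrations, not figures from any particular process): off-state leakage grows exponentially as the threshold voltage comes down.

    #include <math.h>
    #include <stdio.h>

    /* Very rough subthreshold model: I_off ~ I0 * exp(-Vth / (n * VT)),
     * where VT = kT/q is about 26 mV at room temperature and n is the
     * subthreshold slope factor. I0 and n here are made-up ballpark values. */
    int main(void)
    {
        const double VT = 0.026;    /* thermal voltage, volts */
        const double n  = 1.4;      /* slope factor, typical-ish */
        const double I0 = 1e-6;     /* arbitrary reference current, amps */

        for (double vth = 0.7; vth >= 0.199; vth -= 0.1) {
            double i_off = I0 * exp(-vth / (n * VT));
            printf("Vth = %.1f V  ->  leakage ~ %.3e A\n", vth, i_off);
        }
        /* Roughly every ~85 mV knocked off Vth costs another 10x in leakage. */
        return 0;
    }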
  • As others have noted, the SDRAM/DDR/RDRAM issue is about the communication channel between the memory device and the controller, which is independent of the storage mechanism used.

    DDRII (next-generation DDR) is targeted for cycle times of 2500 ps in large systems and 1250 ps in small ones. In contrast, current DDR runs 2500 ps in small systems (e.g., video controllers). One hopes that main memory running at 3.2 GB/s for 64-bit memory will stave off disaster for a little while. The truly greedy will just have to go to 128-bit memory.

    WRT storage technology, I'm surprised that nobody has mentioned FRAM. Ferroelectric RAM is nonvolatile and much denser than flash; as dimensions sink, it's even denser than regular DRAM. Which is why the big memory houses are furiously searching for a way to reliably manufacture it.

  • I heard VC (Virtual Channel) RAM is supposed to be the shit. My Asus K7V takes it, but you can't find it anywhere...
  • 'Z' stands for zero. Almost as good as a bit, but half the cost.
    Ryan
  • Most formatting assumes block 0 is good. What happens when it goes bad?
  • I expect the next great thing [tm] in memory will be better interfaces to the cheaper and well known memory that we already have.

    There have been people adding things like fast static ram in dram chips for a while but it never took off.

    With the widespread use of flash memory, I would love to see a flash package that is smart enough to remap bad blocks once they are detected. It's a real pain that my Rio now can't write to block 0 most of the time because it's developed a problem.
  • They're called books.
    --
  • WRT storage technology, I'm surprised that nobody has mentioned FRAM. Ferroelectric RAM is nonvolatile and much denser than flash; as dimensions sink, it's even denser than regular DRAM. Which is why the big memory houses are furiously searching for a way to reliably manufacture it.

    I share your frustration. FRAM is actually being researched and produced by big companies such as SAMSUNG [samsungelectronics.com] in densities as high as 4Mb. You are not correct, though, to say that FRAM is denser than flash. Remember that flash can store two bits in a very small memory cell. So far, flash has also proved more scaleable than FRAM, which is why you see flash densities today orders of magnitude better than FRAM even though FRAM is an older technology. A good reference for reading about non-volatile memory technologies can be found at EDN Access [ednmag.com]
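
    For anyone wondering how two bits fit into one flash cell: multi-level cells distinguish four stored charge levels instead of two. A toy sketch (the voltage numbers are invented, and real parts also need error correction because the levels sit much closer together):

    #include <stdio.h>

    /* Four distinguishable threshold levels encode two bits per cell. */
    static const double level_volts[4] = { 0.5, 1.5, 2.5, 3.5 };

    static double program_cell(unsigned two_bits)   /* 0..3 -> stored level */
    {
        return level_volts[two_bits & 0x3];
    }

    static unsigned read_cell(double v)             /* stored level -> 0..3 */
    {
        if (v < 1.0) return 0;
        if (v < 2.0) return 1;
        if (v < 3.0) return 2;
        return 3;
    }

    int main(void)
    {
        for (unsigned bits = 0; bits < 4; bits++)
            printf("bits %u%u -> %.1f V -> read back %u\n",
                   (bits >> 1) & 1, bits & 1,
                   program_cell(bits), read_cell(program_cell(bits)));
        return 0;
    }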

  • by Ungrounded Lightning ( 62228 ) on Wednesday August 16, 2000 @02:27AM (#852498) Journal
    MRAM has the disadvantage that it can be erased by a strong magnetic field.

    Amorphous silicon RAM works by melting the switch element and refreezing it, either so it crystallizes and is very conductive (but still resistive enough that you can remelt it) or becomes glassy and very resistive (but with a "breakover" voltage that lets you drive current into it to remelt it). Selection is by the length of time the write current is on, and thus the amount of heat deposited in the meltable bit.

    Magnetic fields won't touch it. EMP strong enough to affect it will fry the whole box anyhow. Ditto heat.

    Write time is in single-digit nanoseconds. Read is as fast as ROM.

    (But will it ever come to market? Same question for MRAM.)
  • Just to nitpick, the Pentium 4 bus is not "double wide double data rate". It is 128 bits wide, and quad-pumped. It does transmit data at 400 MHz. When Intel claims it is a 400 MHz bus, they're just as correct as AMD claiming a 200 MHz bus, or Rambus claiming an 800 MHz bus.

    I'm pretty sure I'm right here. For one thing, the maximum bandwidth of the P4 FSB is 3.2 GB/s, which is 2 "pumps" x 100 MHz x 128-bits. For independent proof of this, note that Intel's top-of-the-line P4 chipset, Tehama, uses dual PC800 RDRAM channels, yielding...3.2 GB/s. If it were quad-pumped, 100 MHz and 128 bits wide as you claim, the FSB bandwidth would be 6.4 GB/s.

    For another, there's no way I've ever heard of to actually "quad pump" a clock signal. "Double pumping" works because a clock signal is actually made up of two signals--the so-called "rising edge" when the signal turns on, and the so-called "falling edge" when the signal turns off. In contrast, there's just no natural way to divide a signal into four without using a PLL and a separate clock generator. How do we know Intel isn't doing that? Well...the question becomes: "separate" from what? The FSB clock *is* the only clock in a chipset; if they wanted to make it go twice as fast, they would just clock it at 200 MHz.

    And no, Kingston's "quad-pumped" SDRAM isn't really quad-pumped either; it's just DDR which is cleverly interleaved to essentially make it twice as wide.
  • Thank you for a very interesting and informative post!

    Thanks for actually reading the whole damn thing!

    I don't think traditional expandable RAM has to go away completely, though. I think the solution would be further extending the NUMA (Non-Uniform Memory Access) concept of cache memory. We've already got very-high-speed L1 and L2 cache. Say this CPU+high-speed-memory card you propose has N ultra-high-speed on-die L1 cache, N*16 super-high-speed off-die L2 cache, and N*(2^10) of very-high-speed, CPU-local RAM. Then have an expandable main memory system of merely high-speed RAM, slower, but expandable and much larger, say, N*(2^12). To fill in some example numbers:

    128 KB L1 cache
    2 MB L2 cache
    128 MB local RAM
    512 MB main RAM

    That way, you get the best of both worlds.


    Interesting. I'd first point out that, to me at least, what you're proposing sounds a bit more like a gigantic (128 MB) L3 DRAM-cache than traditional NUMA. Of course, I don't know too much about NUMA, except that it's supposed to have a way to actually manage which data goes in which memory efficiently--which would be a very difficult problem in such a system.

    For one thing, it's worth noting that under almost any conceivable implementation, all 128 MB of data in the local DRAM/L3 cache would have to be mirrored in the 512 MB main DRAM--thus essentially "wasting" 1/4 of your main RAM capacity. There are ways around this (e.g. Thunderbird/Duron with their "exclusive" cache hierarchies) but from what I understand they would introduce tremendous latencies into such a system.

    And indeed, that would be the biggest problem with this scheme--the latencies. If a program was looking for a piece of data, it would have to first check the L1 cache; then (if it didn't find it there) the L2 cache; then (if it didn't find it there) the very large local DRAM/L3 cache; and only then would it look for it in main memory (and god forbid not find it there either and have to pull it out of virtual memory!).

    The upshot of this is that you'd get a pretty large miss-penalty every time you had to search all the way down to your main DRAM to find some data. On the other hand, it wouldn't be as large as the penalty currently associated with virtual memory, and we use that all the time. But still, I think you'd find on such a system that upgrading the main DRAM--like enlarging your virtual memory swap file--wouldn't have the same effect on system performance as upgrading main memory does today.

    Could be wrong, though. Certainly an interesting idea.
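
    For what it's worth, here's a back-of-the-envelope look at that miss penalty. All the latencies and hit rates below are invented (only vaguely plausible for current hardware); the point is just how the extra level changes the arithmetic:

    #include <stdio.h>

    /* Average memory access time for the proposed hierarchy:
     * L1 -> L2 -> local "L3" DRAM -> expandable main DRAM. */
    int main(void)
    {
        double l1 = 1.0, l2 = 8.0, local = 40.0, main_ram = 120.0; /* ns, invented */
        double h1 = 0.90, h2 = 0.95, h3 = 0.98;  /* invented hit rates per level */

        double amat = l1
                    + (1 - h1) * (l2
                    + (1 - h2) * (local
                    + (1 - h3) * main_ram));

        printf("average access time: %.2f ns\n", amat);   /* ~2 ns here */
        printf("worst-case miss through all levels: %.0f ns\n",
               l1 + l2 + local + main_ram);                /* the big penalty */
        return 0;
    }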
  • by ToLu the Happy Furby ( 63586 ) on Wednesday August 16, 2000 @07:58AM (#852502)
    Just a clarification: when referring to RDRAM, RAMBUS decided that PC800 means 800MB/sec, not 800MHz, so it isn't really running that fast.

    Just a clarification: you are completely wrong. The various types of RDRAM do in fact refer to their clock speed, not their bandwidth. PC800 does indeed refer to 800MHz; as RDRAM is 16-bits wide per channel, this means PC800 has a theoretical maximum bandwidth of 1.6 GB/s. By way of comparison, PC133 SDRAM is 64-bits wide and 133 MHz, and so it has a max bandwidth of 1.1 GB/s.

    So, to reiterate, you're wrong. Now, however, it begins to get confusing. First off, PC800 RDRAM isn't really running at 800 MHz; it's running double data rate--transmitting twice per clock--at 400 MHz. As far as the PC industry goes, it's an acceptable fudge, and not nearly so bad as Intel saying the double-wide double data rate 100 MHz FSB on the P4 is "400 MHz".

    Then it gets even more confusing. See, it turns out that PC 700 RDRAM actually runs at 2x356=712 MHz most of the time (good!) whereas PC 600 RDRAM actually runs at 2x266=533 MHz most of the time (bad!). This has to do with the vagaries of timing these cobbled together brands of RDRAM (only marketed because the yields on PC 800 were so awful) to run with 133 MHz FSB chipsets. If run on a 100 MHz FSB chipset--which they never are--they will run at their advertised 600 and 700 MHz rates.

    So...in order to get rid of all this confusion but keep the handy-dandy "PC___" designation (and to one-up Rambus in the "my number's higher than yours" game), JEDEC has decided that from now on all its DRAM standards will be numbered based on their maximum bandwidth rather than their clock speed, actual or DDR or otherwise. Thus, the DDR we will see in DDR motherboards in a couple months will either be branded PC1600 (2 x 100 MHz x 64-bits) or PC2100 (2 x 133 MHz x 64-bits).

    All done? Not hardly. It turns out that the first generation of PC2100 will have higher latency timings for the various stages of a random access than will PC1600, thus making it slower in certain situations while faster in others. Of course, within a couple months, lower latency PC2100 will be around, which may or may not be designated PC2100A. See how this all helps the customer and makes things easier???

    Of course, the DDR for graphics cards is categorized neither by its maximum bandwidth nor its clock rate but rather by its clock period: i.e. 2x166 MHz DDR is called "6ns DDR" when it's on a video card (because 1 second / 6 nanoseconds = 166 million); 2x183 is 5.5ns, and the new GeForce2 Ultras are shipping with incredible 2x250 MHz 4ns DDR SDRAM.

    And, of course, any and all of the above DRAM is overclockable to any speeds and latency timings you want; it's just only guaranteed to work at the marketed speed. Oh, and how fast any of this all is depends just as much on your chipset and, in the case of RDRAM, your power consumption settings. (Even if you're plugged into the wall, don't be too profligate with those power settings or the whole thing will overheat!)

    And I forgot to mention VC SDRAM, which is available now, and FCSDRAM, eDRAM, DDR-II and DDR-IIe, any of which might/will make the jump to PC main memory in the coming years (at least before MRAM). Isn't it all so simple now?? Good.
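
    If the alphabet soup gets confusing, the arithmetic behind all of these ratings is at least simple: peak bandwidth = bus width in bytes x transfers per second. A quick sanity check using the figures quoted above:

    #include <stdio.h>

    static double peak_gb_per_s(double bus_bits, double clock_mhz, double pumps)
    {
        return (bus_bits / 8.0) * clock_mhz * pumps / 1000.0;
    }

    int main(void)
    {
        printf("PC133 SDRAM: %.1f GB/s\n", peak_gb_per_s(64, 133, 1)); /* ~1.1 */
        printf("PC800 RDRAM: %.1f GB/s\n", peak_gb_per_s(16, 400, 2)); /* ~1.6 */
        printf("PC1600 DDR : %.1f GB/s\n", peak_gb_per_s(64, 100, 2)); /* ~1.6 */
        printf("PC2100 DDR : %.1f GB/s\n", peak_gb_per_s(64, 133, 2)); /* ~2.1 */
        return 0;
    }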
  • by ToLu the Happy Furby ( 63586 ) on Wednesday August 16, 2000 @10:10AM (#852503)
    Apologies for the uber-post, but man this discussion needed an injection of information.

    Myth #1: It's Rambus vs. DDR vs. MRAM. It's been mentioned before, but bears repeating: MRAM will not be the next generation memory technology. It will at best be the next-next-next generation memory technology, as it's at least 5 years from commercial viability. However, I'd guess that even in MRAM's wildest dreams it will take longer than that before it ever makes it to PC main memory; first, it will be used as a replacement for what it is actually most like--not DRAM, but flash memory. While it has the potential to maybe one day be faster, smaller, and cheaper than DRAM, until then it will only be used in those places where its most important attribute--nonvolatility--is actually necessary.

    Furthermore, there are any number of exotic competing technologies which are a) going to make it to market first and b) actually aimed at the PC main memory market. These include:

    VC SDRAM: like SDRAM with a small SRAM cache--already available, but with disappointing performance, due to a bad implementation of a good idea; don't count it out in a future incarnation

    FCSDRAM: which allows a more efficient ordering of access requests to cut down latency

    DDR-II: the packet-based successor to DDR SDRAM, and the probable next standard

    DDR-IIe: DDR-II with caching technology similar but superior to VC SDRAM's

    and eDRAM: an exotic technique for putting DRAM directly on a microprocessor, which allows for extraordinary bandwidth and tiny latencies but requires an entirely new manufacturing process.

    In any case, the above are not mutually exclusive (indeed, RDRAM is a DDR type of SDRAM), and I wouldn't be at all surprised to see some VC/e FCDDR-II be the PC main memory of choice in a couple years. (It'll have a better name, though :)

    Myth #2: DRAM bandwidth is holding back the performance of today's PC's. Actually, the problem is not in the DRAM chips but rather in the bus that connects them to the CPU--that is, the Front Side Bus (FSB). The FSB on all current Intel chips is only 64-bits wide, single pumped. That means you only have 1.1 GB/s of bandwidth to the CPU with a new 133 MHz FSB P3, 800 MB/s with a 100 MHz FSB P3 or P2, and a measly 533 MB/s with a lowly Celeron. Not so coincidentally, the maximum bandwidths of the various standard types of SDRAM, PC133, PC100 and PC66 are...1.1 GB/s, 800 MB/s, and 533 MB/s respectively.

    Ever wonder why 1.6 GB/s RDRAM wasn't any faster than 1.1 GB/s PC133 on all those P3 benchmarks earlier this year? At the time you probably either heard from someone else (or decided yourself) that it was just because "Rambus sucks," which, while true, isn't the whole story. Instead, the reason that the faster RDRAM didn't perform any faster is because its extra 533 MB/s of bandwidth is all dressed up with no place to go--it certainly can't go to the CPU, because the FSB is in the way, and it only lets through 1.1 GB/s. Now, there are couple edge conditions where that extra bandwidth can be utilized by sending some over the AGP bus and keeping some in buffers on the chipset to send later, but by and large the P3 is completely saturated by plain old PC133. This is the same reason why, when DDR chipsets finally come out for the P3 in a couple months, their performance is going to be a mite disappointing--all this extra bandwidth, no place for it to go. As for why the RDRAM system is actually slower most of the time...well, that's because Rambus sucks. (RDRAM has higher latencies than SDRAM, plus Intel's i820 RDRAM chipset is nowhere near as good as its BX or i815 SDRAM chipsets.)

    Luckily, this is a bottleneck that is finally getting removed. AMD's Athlon and Duron CPU's both have double-pumped FSB's, meaning they'll be quite happy slurping up the extra bandwidth they get from their DDR chipsets, due out hopefully by October. Their FSB's can currently be set at either 2x100 MHz (1.6 GB/s) or 2x133 MHz (2.1 GB/s). And Intel's upcoming P4 goes a step further--it has a double-wide double-pumped FSB, allowing 3.2 GB/s @ 100 MHz core clock, and 4.3 GB/s @ 133 MHz.

    These steps are, to put it mildly, vastly overdue, as the ratio of CPU-clock to FSB-clock has gone from 1:1 in the pre-486 days, to 2:1 with the 486DX2, to, for example, 3.5:1 on the two-year-old P2-350 I'm typing on now, to a ridiculous 8.5:1 on the latest greatest (nonexistent) P3-1133, to a miraculously exorbitant 10.5:1 on a Celeron-700. What this means is that the CPU is spending a whole lot more of its time waiting every time it needs to access memory--10.5 clock cycles for every 1 cycle of memory access, to be exact. While the impact of this can and has been minimized through all sorts of tricks like bigger caches, out-of-order execution, and prefetching compilers, the overall performance impact is "damn."

    So thankfully these ridiculous ratios will finally be brought down as the next generation of CPU's with decent FSB's ships.

    Having read this, you're probably now lulled into believing our third myth. Unfortunately, you're wrong.

    Myth #3: DRAM performance will hold back the performance of tomorrow's PC's. As it turns out, that's not true either. For proof, just take a look at the latest generation of PC graphics cards. The latest and greatest offerings from ATi and nvidia both include 64 MB of double-wide DDR SDRAM at speeds up to 2x183=366 MHz. That's 5.9 GB/s of bandwidth, way more than enough to saturate the FSB of top-of-the-line CPU's for at least the next 18 months. All this is available, plus a very complicated GPU, fast RAMDAC, and some other components, on a card selling for about $400--thus we can guess that that 64 MB of 5.9 GB/s RAM costs around $250--or, humorously enough, about the cost of 64 MB of 1.6 GB/s PC800 RDRAM! Furthermore, nvidia just announced the GeForce2 Ultra, with 64 MB of 2x250=500 MHz DDR. That's 8 GB/s!! The cost? Another $100.

    But all of this is disregarding that little something called supply-and-demand. There are several legitimate reasons why such high-speed DDR costs more to make than normal-speed DDR (which costs a negligible amount more than plain-old SDRAM), but the main reason for its (not even so) high price is its scarcity and the incredible demand for it by graphics card makers. On the other hand, the main reason RDRAM has come down in price so much (6 months ago it cost around 3 times as much) is because there is a glut of it on the market. Everyone in the industry (except Dell) has realized that the i820 chipset is a dud, a bomb, already shuffled off to obsolescence. RDRAM on the PC is a no-go, at least until the P4 comes out. Thus, excess RDRAM is being sold off at fire-sale prices. Once the P4 is out in enough volume to actually impact prices (i.e. January or February if Intel is lucky), expect another surge in RDRAM prices. Back on the other hand, in a year or so that 8 GB/s (!) 500 MHz DDR SDRAM in the new GeForce2 Ultra will be pretty mainstream stuff, going for but a modest premium over even bottom-of-the-line SDR SDRAM (which will still be around for some time).

    "So great!" you might say. "Let's make chipsets with 8 GB/s FSB's, and all our problems will be solved!" Well...there's the rub.

    See, the point of this story is, the problem in getting a high-speed memory subsystem into your PC is not the DRAM--they can get that damn fast already. (8 GB/s!! Ok I'll stop now.) The problem is the stuff in between: the motherboard and the chipset. That is, the bus.

    It turns out that it's easy to get a super-high-speed bus onto a graphics card, but an electrical engineer's nightmare to get one on a PC motherboard. Let's count the reasons why:

    1) The traces (wires) on a motherboard are a whole lot longer than on a graphics card. The higher the capacity of a trace, the higher quality (read: more expensive) it has to be. The longer it is, the higher quality it has to be to have the same capacity. Eventually, it's just beyond the capabilities of our current manufacturing to make traces that are long enough and high enough capacity to work with high-speed DRAM on a big motherboard.

    2) There's lots of other components on a motherboard. This means more interference ("crosstalk"). This means--you guessed it--the traces need to be even higher quality.

    3) A motherboard has to be designed to work with almost any amount of DRAM--one DIMM, two DIMMs, three DIMMs, of varying amounts, made by anyone from Micron to Uncle Noname. Graphics cards are fixed configurations which can be validated once and forgotten about.

    4) The DRAM in a graphics card is soldered to the board. The DRAM in a motherboard has to be removable and communicate through a socket, which adds to the electrical engineering complexity.

    Plus there's probably a couple more I can't think of at the moment. The point is, the weak link in the memory subsystem is not the DRAM. Today it's the chip's FSB, next year it will be the motherboard and the chipset, but it's not the DRAM.

    However, there are ways the DRAM might be changed to get around this limitation. (Disclaimer: I don't know as much about this part of the equation as I do about the rest.) Apparently the packet-based protocol used in RDRAM is one way to do this--for some reason, communicating in packets minimizes the danger of data loss due to crosstalk. Probably for the same reason it works for networks, the Internet, etc.

    Great! The problem is, RDRAM isn't designed to maximize bandwidth, but rather to maximize bandwidth/pin. While this is real neat for itty-bitty embedded devices where you need to keep pin count to a minimum, the problem is that each pin is connected to its own trace...and thus RDRAM ends up requiring the motherboard to carry much more bandwidth/trace than DDR SDRAM. See above (#1) for why this is a bad idea.

    So, the packet-based, but-otherwise-more-or-less-normal-DDR DDR-II, due out in 1.5-2 years, looks like a good candidate to solve this problem, at least for the time being.

    In my opinion, though, even that is only a temporary solution. I'd say eventually the industry is going to have to give up the idea of expandable RAM, and change the entire architecture of the motherboard so that the CPU and main memory are moved off it, onto a daughter card, like the graphics card is now. That would mean you would have to buy your CPU and your RAM together--no more adding more RAM as a quick performance booster, which would be a considerable loss. However, it seems as if it would get rid of the tremendous memory bandwidth problem PC's are facing today in one fell swoop. In comparison to the performance gains realized, it would be an easy tradeoff for the vast majority of consumers, who never upgrade their RAM anyways.

    The other possible solution is similar-but-different: a switch to eDRAM, which I discussed lo these long paragraphs ago (up near the top). This, however, would require an even bigger infrastructure change, although the benefits might be even greater.
  • One thing I've seen briefly alluded to (such as at the EDTN link - "no exotic materials") but not well discussed is that manufacturing MRAM may be less environmentally damaging than making DRAM. Current DRAM-building practices are pretty vile, which is one reason they tend to be done in export processing zones where poisoning the local flora, fauna and human population is considered acceptable to keep the cost down. Anyone know whether MRAM is really any better in terms of manufacturing process and the effluents thereof? I'd be much more likely to buy into it if it's a more sustainable technology; some of the kind of stuff we've got now in computer production has got to go.
  • by B-Rad ( 66696 ) on Tuesday August 15, 2000 @09:06PM (#852505) Homepage
    The Ars Technica RAM Guide [arstechnica.com] is a good place to start for the technologies that are around now (SRAM, SDRAM, DRAM, etc.). Ars [arstechnica.com] also has a story about MRAM, which links to this Wired article [wired.com] describing IBM's work in the field.
  • It [RAM] has things loaded into it every time you start a program, and whenever programs bring in more stuff. Why would you want, after you rebooted you computer, old information from programs sitting around in your RAM?


    Programs free up memory all the time and it's cleared by the OS and given to other programs. That's part of the virtual memory subsystem. It's been that way for years and years. The only commonly used OS that didn't do that that I know of in the past 15 years is MS-DOS.



    Just to elaborate: non-volatile memory would allow incredibly fast boot times, since all of your drivers (and even your kernel) could remain resident across power cycles. Assuming a robust enough OS that can withstand months of virtual uptime (ruling out DOS-derived OSes), the boot-up process shifts from initializing your drivers to checking for HW changes / crashes.

    Just imagine the possibilities for OS robustness. Currently when we lose power (causing an OS crash), rebooting involves checking the file-systems for consistency (a painful process which usually involves loss of some information). Yes, you're supposed to put servers on a UPS, but this is of no consolation to the millions of home-PC users who potentially lose hours of work. If memory were NV, then a copy of the write buffer would still exist, and it would be possible to recover failed disk updates.
  • If you think about it, with a couple GIG of ram on a server why *not* load the entire database to ram? :)

    The simplest approach would be to have a massive ram-disk.
    However, expanding on this possibility, there would be a fundamental change in database structure. Databases are optimized to allow tiny subsets of data to reside in memory. Most queries have to assume that only a couple of pages will be addressable / comparable at a time. One of the biggest setbacks with this is the lack of "data pointers". You typically refer to data by its primary key, so referencing data items requires table lookups. In most programming languages, you make use of references or even direct pointers (though in DB land, I'm sure references/handles would still be preferred). Thus, if table joins were based on references (for primary / foreign keys), a join would be a trivial operation. I know that Oracle supports a sort of reference type that performs just this sort of activity, but it still has to do disk-index lookups, since the data is not in a static location.

    Another big problem with relational databases is that it is very difficult to map them to program code. A big push in DB land is to make Object Oriented DBs. Some systems have had more success than others. The biggest problem (as I see it) is to make these objects available to programming languages in a seamless fashion. In an all-memory system, you might very well be able to have a local array of data objects and use them with all the same performance as local objects. The DB would simply have triggers assigned to data-modification methods which update internal relationships and enforce data-integrity rules. This much can already be done in a raw programming language, but it is impossible to separate the rule-set from the code (unlike in DB design).

    This really does open up a whole new world of computing. Ideally, you have at least three completely independent designs (that can be changed independently of each other): the interface, the data-definition/rule-set, and the glue code that makes it all work. Currently this is possible if the GUI designer (be it web pages or window design) talks with the glue-logic designer, and a relational DB is used. But there currently is not a seamless integration or high-performance connection between the data and the glue.

    -Michael
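
    A toy illustration of the "data pointer" idea (all the names are invented): once everything lives in RAM for good, a foreign key can simply be a pointer, and a "join" collapses into a dereference instead of an index lookup.

    #include <stdio.h>

    struct customer {
        int  id;
        char name[32];
    };

    struct order {
        int              id;
        struct customer *cust;   /* "foreign key" held as a direct reference */
        double           total;
    };

    int main(void)
    {
        struct customer c = { 42, "Wister285" };
        struct order    o = { 1001, &c, 19.95 };

        /* The "join": no B-tree walk, no page fetch -- just follow the pointer. */
        printf("order %d belongs to %s (customer %d), total %.2f\n",
               o.id, o.cust->name, o.cust->id, o.total);
        return 0;
    }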
  • Just how complex is SRAM to build? Please note first that I don't know much about SRAM.

    As far as I know, SDRAM requires 1 capacitor and 1 transistor per cell minimum. SRAM requires about 4. With companies today manufacturing 256M SDRAM, could they easily build 64M SRAM modules for a similar price?

    The latency of this SRAM (not reliant on capacitor discharge) could push memory bus clock rates up to several hundred MHz. Data from an SRAM is available in a nanosecond or less, not 3 to 5 ns.

    The other part of my comment is on a RAM drive. Could a memory manufacturer revive obsolete memory technology (fastpage, 30-pin SIMMs) that is extremely cheap? If so, they could produce an inexpensive 1-2 GB memory module that sits on an IDE interface. You could easily use the full bandwidth of an UATA-66 channel.

    Instead of resorting to (comparatively) super-slow hard drive for virtual memory paging, the OS could just use a slightly slower memory technology. Using RAM negates the seek times that are inherent to hard drives.
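
    Roughly what that buys you for a single 4 KB page-in, using ballpark figures rather than measurements (about 9 ms of seek and rotation plus ~20 MB/s sustained for the hard drive, versus the 66 MB/s UATA-66 burst rate with no seek at all):

    #include <stdio.h>

    int main(void)
    {
        const double page_bytes   = 4096.0;
        const double disk_seek_ms = 9.0;      /* avg seek + rotational latency */
        const double disk_rate    = 20e6;     /* bytes/s, sustained */
        const double ide_ram_rate = 66e6;     /* bytes/s, UATA-66 burst */

        double disk_us = disk_seek_ms * 1000.0 + page_bytes / disk_rate * 1e6;
        double ram_us  = page_bytes / ide_ram_rate * 1e6;   /* no seek at all */

        printf("hard drive page-in : ~%.0f us\n", disk_us);  /* ~9200 us */
        printf("IDE RAM page-in    : ~%.0f us\n", ram_us);   /* ~62 us */
        return 0;
    }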

  • You will need an exceptional kick-arse interconnect structure to get decent performance out of the various CPUs. This would be especially true if code running on one CPU needed data from another processor.

    A couple of extra points:
    - What happens when you want to upgrade your CPU?

    - Memory becomes more expensive as a processor is tightly coupled to it.

    I shudder to think of the kiddies at primary school arguing over the benefits of a hypercube compared to a mesh interconnect structure.
  • MRAM could be a fun thing. Virii get embedded in the memory and stay in there FOREVER, continually wreaking havoc. Also, hackers could embed flags in your memory so that every time you boot up they know it. Geesh, if it stays in there until the computer says, "erase it" it may be wise to erase every time the machine goes down or boots up so that nasty stuff doesn't stick in there. Catch my drift, or am I way off?
  • Not to be a prick, but "RAM memory"? So is that random access memory memory? Kinda like a NIC card, ya know, a network interface card card. I'm a bastard.

    thx
  • by Datafage ( 75835 ) on Tuesday August 15, 2000 @08:52PM (#852512) Homepage
    What matters most in this arena is price and performance, with a slight advantage going to performance. With no data on the speed of these technologies, I can't say for certain, but since the cost of fabricating DDR RAM is only slightly higher than SDRAM, and it's a relatively easy change, it's unlikely that MRAM will be even close to it in price, making speed a non-issue.

    -----------------------

  • by Datafage ( 75835 ) on Tuesday August 15, 2000 @08:57PM (#852513) Homepage
    Since the poster did not deign to supply any links, here are a few:

    EDTN [edtn.com]

    Stanford [stanford.edu]

    ABC News [go.com]

    Hope this helps.

    -----------------------

  • by smasch ( 77993 ) on Wednesday August 16, 2000 @12:55AM (#852516)
    First of all, RDRAM, SDRAM, and DDR SDRAM are all forms of DRAM; the only difference is in the interface between the memory array itself and the outside world. SDRAM and DDR SDRAM both accept an address from the address bus and output the data on (or write the data from) the data lines. RDRAM uses a packetized interface, which can be more efficient for linear accesses but is extremely slow for random accesses. Underneath, all of these are DRAM arrays with single transistor/capacitor cells, each storing one bit. One interesting thing to note about DRAM is that it may not be able to scale down much more: as processes get smaller, cell capacitances get smaller and transistors no longer turn off completely (meaning charge can leak away). This means the cells need to be refreshed (recharged) more frequently, limiting the usefulness of the device.
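
    One way to put numbers on that last point (the row count and cycle time below are assumptions, not the spec of any particular part): the fraction of time a DRAM spends refreshing instead of serving accesses grows directly as retention shrinks.

        # Rough refresh-overhead model (assumed figures): every row must be
        # refreshed once per retention window, and each refresh cycle keeps
        # the bank busy for a fixed time.

        ROWS = 8192          # rows that must each be refreshed
        ROW_CYCLE = 60e-9    # seconds a single refresh cycle occupies the bank

        def refresh_overhead(retention_s):
            busy = ROWS * ROW_CYCLE       # refresh time needed per retention window
            return busy / retention_s     # fraction of time lost to refresh

        for retention in (64e-3, 8e-3, 1e-3):   # retention shrinking as cells leak more
            print(f"retention {retention*1e3:5.1f} ms -> "
                  f"{refresh_overhead(retention)*100:5.2f}% of the time spent refreshing")

    At a typical 64 ms retention the overhead is well under 1%, but if leakier cells forced retention down toward 1 ms, nearly half the device's time would go to refresh.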

    MRAM is a new technology that stores data magnetically. I don't know too much about this, but I would guess it will be quite a while until we see it in every computer. It will probably be available in portable devices in 2 to 5 years; however, low production quantities (and the high prices that go along with them) will almost certainly keep this memory technology out of the desktop market for ten years or so. Then again, I could be wrong.

    I have seen flash memory mentioned as a possibility. Flash works by storing (or not storing) a charge on a floating polysilicon gate. The charge is stored or removed by using a high voltage to tunnel through the silicon dioxide insulator. While flash can be read about as fast as any other memory technology, writing flash typically takes a long time (hundreds of microseconds to milliseconds). Also, the tunneling action erodes the silicon dioxide and can wear out flash cells after 1,000 to 1,000,000 rewrites (depending on the process).

    So what is the next big memory technology? For now, I would say it is DDR SDRAM. However, DRAM technology will eventually fizzle out and I am sure that either SRAM (Static RAM), MRAM (if it is available), or some other new memory technology will take its place.
  • RDRAM is FAR too expensive to be the next standard. 256MB of the stuff is something like $800 or $900.
  • Hmm, I guess Fry's Electronics is just ripping off their customers (as usual) then, again. Anyone who's been to Fry's knows what I'm talking about. :]
  • by mrogers ( 85392 ) on Wednesday August 16, 2000 @02:13AM (#852522)
    Who would have thought 30 years ago that we'd all be running a Unix-like operating system on machines with magnetic core memory?

    Just goes to show how much things have changed...

    It's sad to say, but the next RAM technology will likely be the one chosen by industry, not necessarily for the best performance (or price), but as the technology maximizing profits (while minimizing risks). Taking a look at what's going on with Rambus, you can see that marketing is more important than technology.

    I don't think the RAM companies are likely to switch from a technology they fully control to another they're less sure about. The only way I see them switching from silicon to something else is if they really have no other choice, e.g. if some technology comes out and increases the performance/price ratio by at least a factor of 10. Even in that case, I suspect they'd simply try to buy out the company that produces it.

    The only way I see them abandoning silicon is when it is no longer feasible to cram more transistors into a fixed area (10 nm? 1 nm? 1 Å?).
  • do you really think it will be as easy to make RDRAM at 1.6GHz as it will be to make DDR SDRAM at 266MHz DDR

    I'd like to get a confirmation on this, but I think the "next generation of RAMBUS" will be wider, not faster clocked. As for the rest, I agree that DDR-SDRAM is likely to be the next generation RAM... unless RAMBUS is in bed with more people than we think.
  • I think MRAM will get some major industry backing when Microsoft realizes that it will stop all those complaints about having to reboot Windows continuously :)

    Just why do you think this will help Microsoft at all? The whole reason you had to reboot in the first place was because Windows fscked something up in memory and didn't quite know what. With persistent memory, rebooting the computer would mean this: after sitting through your BIOS test sequence, you are presented with the same blue screen.

  • The P6 bus does only about 1 GB/s, but the Willamette bus does 3.2 GB/s. SDRAM can only do about 1 GB/s, while RDRAM can do 2.1 GB/s (I think that's right...), so RDRAM will be much faster on Willamette than on P6 (which is limited by the bus, not the RAM). The limitation today is the bus, but for the Pentium 4 the limitation will be the RAM. RAM speed is overrated; an L2 miss only comes along once every several million cycles!
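
    Those round numbers are easy to sanity-check with peak-bandwidth arithmetic (bus width x clock x transfers per clock). These are theoretical peaks only, and the dual-channel RDRAM line assumes the two-channel arrangement planned for the Willamette chipset:

        # Peak bandwidth = width (bytes) x clock (MHz) x transfers per clock.
        # Theoretical peaks; sustained throughput is lower for all of these.

        def peak_mb_s(width_bits, clock_mhz, transfers_per_clock=1):
            return width_bits / 8 * clock_mhz * transfers_per_clock   # MB/s

        buses = {
            "P6 FSB (64-bit, 133 MHz)":              peak_mb_s(64, 133),
            "Willamette FSB (64-bit, 100 MHz x 4)":  peak_mb_s(64, 100, 4),
            "PC133 SDRAM (64-bit, 133 MHz)":         peak_mb_s(64, 133),
            "PC800 RDRAM, one 16-bit channel":       peak_mb_s(16, 400, 2),
            "PC800 RDRAM, two channels":             2 * peak_mb_s(16, 400, 2),
        }
        for name, mb in buses.items():
            print(f"{name:38s} {mb:7.0f} MB/s")

    A single PC800 channel pencils out to 1.6 GB/s and the dual-channel arrangement to 3.2 GB/s, which is what the Willamette bus is sized to match.
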
  • by Crazy Diamond ( 102014 ) on Tuesday August 15, 2000 @09:50PM (#852533)
    The difference between SRAM and DRAM is that SRAM stores bits in a circuit of two inverters in a loop (four transistors, not including the access transistors or sense amp). DRAM bits are actually stored on capacitors, which through leakage will eventually discharge. Because of the leakage, DRAM requires a refresh, which means the value stored on the capacitor is rewritten every so often (measured in tens of milliseconds). This constant rewriting of every single bit is why DRAM ends up being quite power hungry compared with SRAM.
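
    A toy model of that refresh requirement (the capacitance, leakage and threshold below are made-up but plausible values, not real cell parameters):

        # Why a DRAM cell needs refresh: the storage capacitor leaks, and once
        # its voltage falls below what the sense amp can still call a "1", the
        # bit is gone. Treat the leakage as a roughly constant current.

        C_CELL = 30e-15     # ~30 fF storage capacitor (assumed)
        V_FULL = 1.8        # volts on a freshly written 1 (assumed)
        V_MIN = 0.9         # weakest level the sense amp can still read as 1 (assumed)
        I_LEAK = 0.5e-12    # ~0.5 pA of worst-case cell leakage (assumed)

        t_retain = C_CELL * (V_FULL - V_MIN) / I_LEAK    # from dV/dt = I/C
        print(f"worst-case cell must be refreshed within ~{t_retain*1e3:.0f} ms")

    With those numbers the answer comes out around 50 ms, which is why refresh intervals are quoted in tens of milliseconds.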

    Flash memory is really not like SRAM or DRAM. It actually reminds me more of ROM, because bits are defined by a single transistor being on or off. The way that flash memory makes a transistor stay on or off is the cool part. Each transistor (one per bit) has two gates. One gate is used when you write to the bit; the other gate is not actually connected to anything. The second gate (called the floating gate) is given a charge when you write the bit, through some interesting electrical effects (remember, it is not connected to anything; you have to get the charge on there somehow). After the bit is written, the charge stays on the floating gate because it is insulated from everything, and that charge determines whether the transistor is on or off. Flash memory does have a limited number of write/erase cycles, but it is usually measured in hundreds of thousands, so I'm not worrying about my Rio failing anytime soon.
  • by psm ( 105737 ) on Tuesday August 15, 2000 @08:58PM (#852534)
    Slashdot questions RAM
    But no links in this story
    Sites stay up today
  • Yes, there's been talk of making RDRAM data paths wider, but this isn't the point. That discussion is mainly about increasing bandwidth, which helps performance when you're pushing a lot of data through, but otherwise raw speed is what matters. You can make SDRAM paths wider, too, and add additional channels. There's a lot that can be done to improve either RAM technology, much of which has to do with improving chipset and motherboard design--which IMHO are the big bottlenecks today; the fundamental architecture of x86 mainboards hasn't changed much in far too many years. Personally, I'd like to see a pooled architecture like the SGI Visual Workstations, where the rest of the system is basically built around the memory implementation. But in the world of standard vanilla x86 boards, too much performance and stability is sacrificed to old and slow bus architectures and data paths which aren't wide enough.

    But my point about RDRAM is that it has to be clocked at 800MHz to equal the performance of PC133 SDRAM, and that's horrid. SDRAM performance can also be increased by means other than clock speed, but SDRAM has so much headroom in the clock-speed department that there's no need to worry about that for some time yet, whereas RDRAM is already clocked horribly fast with little room for further MHz jumps. I mean, most people have CPUs that don't run at 800MHz, ferchrissakes. Even a top-of-the-line graphics processor like the GeForce 2 Ultra runs at a mere 250MHz, with on-card memory running at less than 500MHz DDR. AGP runs at a mere 66MHz, and PCI is still the slowpoke at 33MHz and really needs to be improved because it is a bottleneck. The point is, RDRAM is already running so fast that it has little room left to increase its clock speed, whereas SDRAM has plenty.

  • by Sir_Winston ( 107378 ) on Tuesday August 15, 2000 @09:09PM (#852537)
    MRAM is probably five or possibly more years away, so it's not going to be anywhere near the "next generation" of RAM tech. Check out the front page of ArsTechnica for some linkage.

    The next generation of RAM is clearly going to be DDR-SDRAM, and will be for some time. Cheap modules will be PC-200, but PC-266 DDR will be out at the same time, with very little use of the "mere" 200MHz (effective) variety. The tech is there right now; it's just that there's no demand yet since there aren't any chipsets out (VIA to the rescue, in a few months). So regular SDRAM is tying up production right now, but the switch to DDR will probably be fairly smooth.

    Face it, RAMBUS RDRAM is a terrible idea in the first place. When you have to make a new technology like RDRAM run at 800MHz to get similar performance to existing PC-133 SDRAM, that should be a sign that the new technology is worthless--do you really think it will be as easy to make RDRAM at 1.6GHz as it will be to make DDR SDRAM at 266MHz DDR? Hell no. I predict a quick demise for RDRAM within a few months of the release of VIA's first DDR-SDRAM chipset.

  • by localman ( 111171 ) on Tuesday August 15, 2000 @09:28PM (#852541) Homepage
    I think MRAM will get some major industry backing when Microsoft realizes that it will stop all those complaints about having to reboot Windows continuously :)

    Oh no! Then what will the Linux advantage be? ;)

  • by fluxrad ( 125130 ) on Tuesday August 15, 2000 @09:27PM (#852548)
    the smart money is on the new Dodge RAM.

    with a supercab and a more powerful engine, you just can't beat the deals that most places are offering on it.


    FluX
    After 16 years, MTV has finally completed its deevolution into the shiny things network
  • <cough>CMOS settings</cough>
    <O
    ( \
    XGNOME vs. KDE: the game! [8m.com]
  • I think the next cool idea in RAM is a merging of RAM with microprocessors to create what was once referred to as IRAM. That is, IRAM would have one CPU per RAM chip, in a sort of system-on-chip configuration. With 512MB of RAM in your machine, there are perhaps four to 32 individual silicon RAM chips, depending on their density.

    Hence, if that was IRAM, you would also have four to 32 individual processors.

    The idea, of course, is to distribute processing and increase performance by having the RAM and CPU on the same silicon, thus reducing the path length and eliminating the need to go through a motherboard bus, connectors, and all that. More power efficient, lower EM interference, etc.

    The question would remain whether to have a central CPU coordinating all of the individual CPUs, or whether the system would be entirely distributed. I think that with a central CPU it would be easier to make the system look like a SIMD machine to software, which would make it easy to program for. That may be possible without the central CPU, but the alternative is MIMD.

    Who knows: with a kernel made for distributed processing like Mach, which may see growing attention because GNU Hurd and MacOS X both use it, a large part of the computer market may benefit from IRAM.
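
    A toy illustration of the "central coordinator plus per-chip processors" SIMD model, in plain Python (the IramChip and Coordinator classes are invented for illustration; no real IRAM hardware or API is involved). Data stays partitioned across the hypothetical RAM/CPU chips, and the coordinator broadcasts one operation that every chip applies to its own slice:

        # Toy SIMD-style IRAM model: the coordinator broadcasts an operation,
        # and each (hypothetical) chip applies it to the slice of data it holds.

        class IramChip:
            def __init__(self, data):
                self.local = list(data)              # this chip's slice of the array

            def apply(self, op):                     # SIMD step: same op, local data
                self.local = [op(x) for x in self.local]

            def reduce(self, op, init):
                acc = init
                for x in self.local:
                    acc = op(acc, x)
                return acc

        class Coordinator:
            def __init__(self, data, n_chips):
                data = list(data)
                step = (len(data) + n_chips - 1) // n_chips
                self.chips = [IramChip(data[i:i + step]) for i in range(0, len(data), step)]

            def broadcast(self, op):                 # one instruction, all chips
                for chip in self.chips:
                    chip.apply(op)

            def total(self):
                return sum(chip.reduce(lambda a, b: a + b, 0) for chip in self.chips)

        coord = Coordinator(range(1, 101), n_chips=4)
        coord.broadcast(lambda x: x * 2)             # every chip doubles its slice in place
        print(coord.total())                         # 2 * (1 + ... + 100) = 10100

    The MIMD alternative would let each chip run its own program instead of the broadcast step, which is harder to program for but more flexible.
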
  • This is a standard confusion about RAM.

    Static RAM requires 2 gates to construct each bit of memory. As long as power is supplied to the gates, the value is held (but when power is taken away, the value is lost).

    Dynamic RAM requires 1 gate to construct each bit of memory. With DRAM, the value stored 'erodes' over time, so a 1 would become a 0 after a certain time period. This isn't what we want, so we have a separate controller chip which keeps rewriting the DRAM cells continuously to keep them in the same state.

    So given that DRAM is a pain and requires a separate controller to work it, why do we use it? Firstly, there's die size - it takes half as many gates to make DRAM, so you can get more on a wafer, which makes them cheaper. Secondly, there's performance - for SRAM to change state, one gate has to change and the other gate follows it, so it takes twice as long for a state change. This is all approximate, of course.

    Neither of these RAM technologies preserves memory after power-off. For that, you need either battery-backed RAM, Flash or EPROM (erasable programmable ROM), or the new MRAM, which all hold their contents through a power-off.

    Battery-backed RAM is fine, except eventually the battery runs down and then you lose your data.

    EPROM is crap - it has to be erased by UV light and it's slow to reprogram. EEPROM (electrically-erasable PROM) is better - it can be erased with a voltage, but it's still slow to reprogram, and it has a limited number of rewrite cycles. Both hold their contents permanently though.

    Flash is similar to EEPROM but has more rewrite cycles and is easier to rewrite. Flash is usually organised into "pages" or "blocks" though, so you can't erase an individual bit/byte, only a whole block of data. The rewrite cycles are still limited on Flash though, so you couldn't use a Flash cell to store a variable - 100,000 rewrite cycles would be up in a few seconds! Plus it does take time to program it - it's still nowhere near as fast as writing to RAM.

    MRAM is a kind of "holy grail" of memory - one that can be changed on-the-fly like RAM, but which holds its value like EPROM/Flash.

    Grab.
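
    That "up in a few seconds" claim is easy to check with arithmetic (the write rates below are assumptions about how often a busy program might update a variable):

        # How long a 100,000-cycle flash cell survives if a program keeps a
        # frequently-updated variable in it, at various assumed write rates.

        ENDURANCE = 100_000    # rewrite cycles before the cell wears out

        for writes_per_second in (1_000_000, 100_000, 10_000, 100):
            lifetime = ENDURANCE / writes_per_second
            print(f"{writes_per_second:>9,} writes/s -> worn out after {lifetime:,.1f} s")

    A counter updated even a modest ten thousand times a second burns through the cell in ten seconds; only at a hundred writes a second does it last about a quarter of an hour.
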
  • by legLess ( 127550 ) on Tuesday August 15, 2000 @09:06PM (#852552) Journal

  • I don't actually know (much of anything), but I do know that CMOS gates are pretty awesome at not leaking power. Look up any reference on JFETs or CMOS as a starting place. In my wonderful ASCII art, this is my interpretation of a JFET (ignore the periods, they are placeholders; you are going to have to cut and paste into a fixed font, I can't make it look right):

    .......|......................
    .......|......................
    .....---------................
    .............|................
    .............|................
    ......|......|................
    -----+|......|................
    ......|......|................
    .............|................
    .............|................
    .....---------................
    .......|......................
    .......|......................

    Basically, one of the pins on the transistor acts as a capacitor plate hiding behind a layer of insulator. This sets up an electric field but allows no current to flow through the insulator. A current passing through the other two pins, however, will respond to the change in potential because of the field. The charge can stay on the plate "forever" (we measured leakage in the picoamps in the lab, so as long as the number of electrons is initially large, it won't matter).
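
    Working that observation the other way around, with assumed round numbers: how small does the leakage have to be for a charged, insulated plate to hold its value for years, flash-style?

        # Assumed figures: a 10 pF plate written to 5 V, and a 10-year target.
        C_PLATE = 10e-12
        V_WRITE = 5.0
        E_CHARGE = 1.602e-19

        q = C_PLATE * V_WRITE                 # ~50 pC stored
        ten_years = 10 * 365 * 24 * 3600
        max_leak = q / ten_years              # leakage that would just drain it in 10 years

        print(f"stored charge: {q / E_CHARGE:.2e} electrons")
        print(f"max leakage  : {max_leak:.2e} A (~{max_leak / E_CHARGE:.1f} electrons per second)")

    Ten years of retention works out to a leakage budget of roughly one electron per second, many orders of magnitude below the picoamps measurable in a lab, which is why oxide quality matters so much.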

    And now, to come back on topic. I think that there really aren't too many limits for SDRAM, as long as you can make a semiconductor that can switch fast with low power consumption. Maybe Si-Ge manufacturing is going to pick up in the future. I know that there are more exotic semiconductors with better properties than Si, but nobody has dumped the money into figuring out a manufacturing process that could bring them within an order of magnitude of the cost of Si manufacturing.

    Hmm, I've seen some sketches for all-optical RAM, but I don't know anything about the research that has gone on in that area. Anyone an expert on optical computers?

  • by Sir Tristam ( 139543 ) on Wednesday August 16, 2000 @09:07AM (#852557)
    the smart money is on the new Dodge RAM.
    with a supercab and a more powerful engine, you just can't beat the deals that most places are offering on it.
    The problem, though, is that it comes with windows pre-installed...
  • Regarding RDRAM, it's actually a really good product ... for what it's designed for.

    RDRAM allows you to design a board with an absolute minimum of components and only a handful of interconnects. It does its memory interface in around 16 wires, rather than the 80+ which parallel RAM interfaces require. It's a really, really good choice for embedded systems.

    HOWEVER, the idea that RDRAM should be used in PCs is garbage and needs debunking. RDRAM is slower than other PC technologies, and on a PC motherboard an 80-wire memory interface is no problem.

  • A more interesting question is, does this technology mean that on-line storage (currently hard disks) might become faster? If you could have several gigs of magnetic storage in your machine accessible at RAM speeds (let's be sluggish and call it 80ns), then that would mean some very, very, very high-performance systems become viable.

    I'm assuming that MRAM is going to hit the market at around the same price as normal memory, so it's going to be a lot more $/MB than hard disks, but it still presents some interesting opportunities.
  • If you check out the Matrox Users Resource Centre's [matroxusers.com] news story for August 7th, you'll see some info from within the latest driver release showing the G800 will be using something called FCRAM. Apparently that's "Fast Cycle" RAM, which is more or less a faster SDRAM. There's a short article here [edtn.com] about what FCRAM is. It's built by Fujitsu and is supposedly better for multimedia applications where there is a significant amount of random access. There must be something to the tech; otherwise Matrox would be going with the more standard DDR SDRAM, which must be cheaper to produce because everyone is using it...
  • I've always been of the opinion that RAM that is erased when the computer is turned off is a good thing. The ability to erase all your RAM to me is like "starting fresh", similar to rebooting Windows to regain some temporary stability.
    What would happen if a virus were loaded into your memory and you wanted to shut down and wipe the virus from memory, but your memory was permanent? I don't see that as a good thing at all.
    There are probably many arguments for why persistent memory is a good thing, but right now I am definitely leaning toward memory that can be erased by powering down.
  • Erm, flash != static RAM. SRAM consumes far less power than DRAM, so it is more practical to make a 'disk' out of SRAM and a tiny battery, though the battery won't preserve the data indefinitely. AFAIK SRAM is also faster than DRAM but more expensive, so it's not used extensively.

    Also, flash memories have a limited number of write/erase cycles, which makes them even more impractical for a RAM.

    --

  • by electricmonk ( 169355 ) on Tuesday August 15, 2000 @11:28PM (#852578) Homepage

    I'm not sure if this has been mentioned yet or not, but Kentron Technologies [kentrontech.com] is developing a technology known as QBM [kentrontech.com], which, in a nutshell, is Quad Bandwidth Memory: it transmits twice each cycle, with overlapping cycles, effectively doubling the DDR effect. Their page on it says that memory running on a 100MHz clock could reach a memory bandwidth of approximately 3.2 gigabytes/second.
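
    A quick check of that 3.2 GB/s figure, assuming a standard 64-bit module and counting transfers per clock (1 for plain SDR, 2 for DDR, 4 for the QBM scheme):

        # Peak bandwidth of a 64-bit module at a 100 MHz clock, by transfers/clock.
        CLOCK_MHZ = 100
        WIDTH_BYTES = 8          # 64-bit module

        for name, transfers in (("SDR", 1), ("DDR", 2), ("QBM", 4)):
            gb_s = WIDTH_BYTES * CLOCK_MHZ * transfers / 1000
            print(f"{name}: {gb_s:.1f} GB/s")

    The quad-pumped case lands exactly on 3.2 GB/s, matching Kentron's number.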

    Heh, that's the stuff I want when I build my Ultimate Gaming Machine (TM).

  • With the PRAM concept, my life is changed forever. I envision vast networks of computers operated by specially designed punch-cards -- made of paper. Each computer will have its own customized operating system, taking several years to develop, allowing it to excel at one specific task, letting go of this fanciful idea of general programmability and thereby enabling what the world really needs -- a class of devices, each with its unique architecture, each excelling at its own task. By taking advantage of the expedient international postal lanes, researchers worldwide could transmit several tens of kilobytes of data on these amazing punch-cards of paper. In time, the use of this paper could even grow to replace traditional forms of language like clay tablets or grunting. Ryn is not just brilliant, [s]he is a visionary. And you heard it here first.
  • by fudboy ( 199618 ) on Tuesday August 15, 2000 @10:25PM (#852593) Homepage Journal
    " ...Maybe we should optimize what we've got more...."

    You know, this principle holds for software development too... The potential for a LOT of what we do with computers today was present in the humble old 486. Maybe this mad dash for better, faster hardware spells our own doom. Already people are buckling under the complexities of things like the PSX2, x86 extensions, massive RAM on video cards, etc. The stuff is going to waste just as fast as it can be invented.

    It's simply too much to work with or take advantage of with the tools we have nowadays (in the time allotted us). I wish software could advance at the same rate as hardware, but it takes years of tinkering and developing new techniques to get anywhere near taking advantage of ALL of a given piece of hardware's potential.

    Just look at an example like 3DStudio: version 3.0 is dramatically more sophisticated and powerful than version 1.0, and v3 runs better (is capable of more, easier to use, perhaps faster for certain tasks like modelling low-poly stuff) on a P200 than 1.0 would on a PIII. All the hardware upgrades in the world don't help a bad app very much.

    As hardware continues to advance by leaps and bounds, will the gap between it and software keep growing? What are the repercussions of this? Lazy and incomplete coding do seem to be becoming the standard rather than the exception...

    Maybe there'll be an 'Einstein' who springs up to turn the software engineering world on its ear. Until then, the overall essence of computer use will grow at a fraction of what state-of-the-art hardware is capable of.

    :)Fudboy
  • Just fixing your diagram; always remember to use <TT></TT> in such cases.
    .......|......................
    .......|......................
    .....---------................
    .............|................
    .............|................
    ......|......|................
    -----+|......|................
    ......|......|................
    .............|................
    .............|................
    .....---------................
    .......|......................
    .......|......................
    Hope this works; had to add an [slashdot.org]invisible link just to avoid setting off the lameness filter;
    apparently, HTML tags (auto-converted to uppercase) count as lameness these days. :)

    -- Sig (120 chars) --
    Your friendly neighborhood mIRC scripter.
  • ... a new technology like RDRAM run at 800MHz ...
    Just a clarification: when referring to RDRAM, RAMBUS decided that PC800 refers to the 800MHz effective data rate (a 400MHz clock transferring on both edges), so the chips aren't really clocked that fast.
    Comparatively, if you look at www.crucial.com [crucial.com], DDR SDRAM naming works from the bandwidth instead - 200MHz (effective) is PC1600 (1.6GB/sec), and 266MHz is PC2100 (2.1GB/sec).

    -- Sig (120 chars) --
    Your friendly neighborhood mIRC scripter.
  • Dean Kent in his August Industry Update at Real World Technologies http://www.realworldtech.com/page.cfm?ArticleID=RWT081000000000 indicates that DDR SDRAM only costs about 2% more to manufacture than the current SDR DRAM, whereas DRDRAM costs 35% more to make. That's a big hill for Rambus to climb, and a bit more than "slightly higher".
