IBM Promises More Memory In The Same Space

dcallaghan was among the many readers to write with news of IBM's announcement of new memory technology. The upshot seems to be on-the-fly compression in hardware, taking the tack of RamDoubler and other software compression utilities, but moving the actual data squashing into dedicated (fast) chips. I hope this leaks out of "server only" land soon; I'd love to have 256MB for the price of 128 -- this would be especially nice with pricey notebook memory.
This discussion has been archived. No new comments can be posted.

  • Ask yourself: how many people do you know that double their disk drives? Probably not very many, if any at all.

    A few years ago that was all the rage. Now drives are cheap and large. I expect the same to happen with memory- with all the new technologies coming down the pipeline, would you really want to hassle with a "ram doubler"?

    Even if it's in memory, you KNOW it's going to cause a bug in some program somewhere :-)
  • Sorry, my first attempt, and apparently a success.

    Feel free to suck away my karma now, I don't think I'll be attempting it again now that i've made first post... heh

    This has been a lame post for a really stupid reason. please moderate down.

    --
    Gonzo Granzeau

  • So the compression is done in hardware -- cool.

    But since once-compressed data can't be compressed any further, does this mean that any software compression to increase memory space is no longer usable?

    I like the extra MB, but let's hope that they actually increase the physical storage space instead of just increasing "logical" storage.

  • Since this is hardware located on the motherboard, don't look for it in the desktop too soon, unless Rambus licensing fees drive it.

    Since it is on the motherboard it would probably require support at the bios level. Also because it has its own caching system, it may only work with certain CPU chips, maybe, maybe not.

    The key to its adoption is price: if IBM sets it low enough, it could take the wind out of Rambus's sails.

    The speed could be slower than current RAM, but if the compression chip is fast enough, then because the data is read and written into the chip compressed, it could effectively double RAM transfer rates. The bottleneck would be the compression chip.

  • by Sludge ( 1234 ) <slashdot.tossed@org> on Monday June 26, 2000 @12:36PM (#974685) Homepage
    This would rock on the PC, where our bus speeds are slowly reaching 133MHz. If you could send the data compressed across the bus, that is.

    Avoiding low PC bus speeds is what 3D cards do best. You only upload the textures, and then you just send the vertices of the polygons across every time. Hell, some newer cards are even doing the calculations on some of the vertices once they've made the jump across the bus.

    This also holds true for DVD decoders.

    I wonder how viable hardware decompression is. Would it be a catch-all solution for (low-end) replacements for all these avert-the-PC-bus hardware cards? Admittedly, I'm not in touch with any benchmarks relevant to this sort of stuff these days.
  • This is kind of a tangent, but would there be any issues behind this if they were using, say, LZW compression? Yes, I know they're not. But if they did, would it be kosher? Would we be allowed to use it while still feeling ethically clean?

    This is similar to an issue that popped up when RMS spoke in Cincinnati. Someone asked if it was ok to use Transmeta's chips if the conversion layer was proprietary. The answer was that he had not heard anything about it (it was a few days after they made their big announcement) but he guessed (I think) that it might be ok, since hardware costs money to distribute (while software does not). My memory may be slightly flawed on this; don't quote me on it.

    As free software gains ground, we will start encountering these questions more often, with the original, simple principle of sharing software moving into a more general ethical realm dealing with intellectual property in general.

  • by Signal 11 ( 7608 ) on Monday June 26, 2000 @12:36PM (#974687)
    Whoah, wait, back up, slow down.. halt.

    Compression CANNOT guarantee anything better than 1:1 ratio - it is ENTIRELY dependent on the data.

    For data compression in memory to succeed, you MUST have an option to cache the "extra" memory to a swapfile in case the prediction logic fails and you run out of physical RAM. If you do not, you will tank your system, bigtime.

    Sorry, but I'm very leery of any "memory compression" - it requires OS support to function. Period. You aren't going to just plug in a miracle DIMM and make it work. I hope IBM is opening the spec (it looks like they are) and that OS development people quickly embrace this, or their hardware will take a nosedive in the market.

  • by exploder ( 196936 ) on Monday June 26, 2000 @12:36PM (#974688) Homepage

    IANAHardware Engineer, but it seems to me that RAM is already designed to do one simple thing (okay, two things: peek and poke) and to do it absolutely as fast as possible. This technology will inevitably degrade RAM performance by a finite amount. Is their chip fast enough that this degradation will be negligible? If so, then this will be Extremely Cool. If not, then no thanks, I'll just shell out for the extra RAM. Of course, the economics on a huge server with 100GB of RAM are most likely completely different.
  • With increasing RAM prices due to the Rambus patents, this is a very well-timed announcement.
  • by zCyl ( 14362 )
    What's the world coming to? I heard this on the radio this morning before I got around to checking Slashdot. Amazing that this is radio-worthy...
  • "as memory comprises 40 to 70 percent of the cost of most NT-based server configurations"

    That's because NT is bloatware. Now if everybody would run Linux, there would be no need for this technology, now would there..

    I'm sorry, but I just had to post this.

  • Seeing as how more consumer video cards are coming with 32 and 64 megs of RAM on board, is there any way to use this as auxiliary RAM? I know the interface to it would be much slower than main memory, but it would still be magnitudes faster than a hard disk. If you could make it a RAM drive, it could be a virtual swap drive that would run a heck of a lot faster than your hard disk. Seems like it would be a neat project.
  • and now for a third time: it would be much faster than your hard drive!

    doh!
  • There is only so much that you can work with, before you need to start throwing out the old paradigm. Silicon is dying so countless engineers develop life support for it, while claiming that things are fine.

    Parallel computing, genetic algorithms, and this will not solve the problem. they will only prolong the world's suffering to a time when people will have become used to Big Silicon. Then all of the Wunderkinds of the Valley will be too old to execute for gross negligence.

    While this appears to be a good idea, we must understand that we can not grow too secure in silicon. When you get below .1 micron, you can't outrun the quantum boogeyman. Minimize, dis-consolidize, but realize that our future is not that of Silicon.
  • Any real-time compression technology tends to make me want to run screaming down the hallway

    This product does not promise to double your RAM, but "up to" doubling. Note that it says storage was doubled for "most applications".

    On MicroSloth machines where 3/4 of the memory seems to store arrays of zeros, this could be useful. But the more memory-conscious the programmer, the poorer the performance of the technology. In other words, I'm not impressed.

  • With the current problems of the SDRAM/RDRAM saga looming, I'm not sure another compatibility issue is needed. This sounds like it sits on the motherboard, although the article doesn't quite specify this, introducing yet another variable into the notoriously unreliable motherboard market.

    At least for home PCs, I'm not seeing this as a real bonus, or a tech that we'll see for quite a while. The market should concentrate on getting a reliable, supported (and preferably non-licensed) RAM tech out and in full production. The more variants of ram being produced, the lower the supply of each one, and therefore higher costs for all.

    tsf.
  • With the disk compression utilities, one of the biggest troubles is that the amount that can fit on the disk suddenly depends on the type of data. You get amazing compression rates if you are storing text files, horrible ones if you are storing GIFs. (The most common compressed file when disk compression was all the rage.)

    This will be similar. Suddenly the amount of memory an application uses is going to be less predictable. Perhaps this isn't much of an issue: with virtual memory, people are increasingly disconnected from application memory usage.

    Because of virtual memory, this is likely to greatly improve the apparent speed of the system, at least in cases where memory is moderately tight. (<128 Meg on a windows box, for example). Disk access is something like three orders of magnitude slower than memory access. If compression avoids even a few page faults, the lower page-file requirements will more than make up for the extra time to compress.
  • Compression CANNOT guarantee anything better than 1:1 ratio - it is ENTIRELY dependent on the data.

    Precisely what I was going to say. But then I started thinking about it some more. You can't guarantee even a 1:1 ratio; purely random data will compress to slightly larger than original size, as you have to tag it.

    Stepping back for a second, though, is this REALLY a problem for most systems? I'd be willing to bet that 99.999% of the time, multi-user systems run memory images that are at least moderately compressible. The curve of pessimistic memory availability vs. actual usage would yield something akin to an MTBF, which would be a good enough guarantee for those server spaces that are looking to save a few bucks.

    Of course, memory usage limits would need to be in place to prevent some really obvious machine-crash attacks from users, but isn't that true already?

  • The problem with junking Si is the base of manufacturing lines and manhours in tacit knowledge. I think Si still has a way to go (with some of the new technologies), but this old capacitor-RAM has GOT to go.
  • ...is that it varies depending on the data. I like to know exactly how much memory I have. Try gzipping a megabyte of text files, and then a megabyte of executables. Sure, the executables will probably compress somewhat, because there is repetitive data there. Text generally achieves a much better compression ratio than machine code. But what about data from /dev/urandom, which is HIGHLY non-repetitive?

    I don't claim to know a damn thing about this memory technology, but what do they intend to do about the unpredictability of compression ratios? You'd end up with a different amount of RAM depending on the application... No thanks.

    -John
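The unpredictability described above is easy to demonstrate with any general-purpose compressor; here is a quick Python sketch using zlib as a stand-in (the article does not say what algorithm IBM actually uses):

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

# Highly repetitive text compresses to a tiny fraction of its size...
text = b"the quick brown fox jumps over the lazy dog " * 1000
# ...while random bytes of the same length actually *expand* slightly,
# because the compressor must add framing overhead.
random_bytes = os.urandom(len(text))

print(f"text:   {ratio(text):.3f}")          # well under 1.0
print(f"random: {ratio(random_bytes):.3f}")  # just over 1.0
```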

  • For Big Blue, that shouldn't be too much of a problem.

    Of all corporations, IBM would have a sufficient patent pool of its own to resolve such an issue through cross-licensing.

    No?
  • It would be good for bandwidth, but not for latency I would imagine. As to what the net result would be I have no idea.
  • by QBasic_Dude ( 196998 ) on Monday June 26, 2000 @12:48PM (#974703) Homepage
    Compression CANNOT guarantee anything better than 1:1 ratio - it is ENTIRELY dependent on the data.

    This is true with random data, but most data is not random. A quote from the comp.compression FAQ [faqs.org]:

    - The US patent office no longer grants patents on perpetual motion machines,
    but has recently granted a patent on a mathematically impossible process
    (compression of truly random data): 5,533,051 "Method for Data Compression".
    See item 9.5 of this FAQ for details.

    As can be seen from the above list, some of the most popular compression
    programs (compress, pkzip, zoo, lha, arj) are now covered by patents.
    (This says nothing about the validity of these patents.)

    Here are some references on data compression patents. Some of them are
    taken from the list ftp://prep.ai.mit.edu/pub/lpf/patent-list.

    ....
    9.2 The counting argument

    [This section should probably be called "The counting theorem" because some
    people think that "argument" implies that it is only an hypothesis, not a
    proven mathematical fact. The "counting argument" is actually the proof of the
    theorem.]

    The WEB compressor (see details in section 9.3 below) was claimed to compress
    without loss *all* files of greater than 64KB in size to about 1/16th their
    original length. A very simple counting argument shows that this is impossible,
    regardless of the compression method. It is even impossible to guarantee
    lossless compression of all files by at least one bit. (Many other proofs have
    been posted on comp.compression, please do not post yet another one.)

    Theorem:
    No program can compress without loss *all* files of size >= N bits, for
    any given integer N >= 0.

    Proof:
    Assume that the program can compress without loss all files of size >= N
    bits. Compress with this program all the 2^N files which have exactly N
    bits. All compressed files have at most N-1 bits, so there are at most
    (2^N)-1 different compressed files [2^(N-1) files of size N-1, 2^(N-2) of
    size N-2, and so on, down to 1 file of size 0]. So at least two different
    input files must compress to the same output file. Hence the compression
    program cannot be lossless.



    For data compression in memory to succeed, you MUST have an option to cache the "extra" memory to a swapfile incase the prediction logic fails and you run out of physical ram. If you do not, you will tank your system, bigtime.

    This is not true. Auxiliary memory will most likely be stored on the chip itself. Data compression does not rely on prediction logic. A stream is compressed by examining its redundancy and storing pointers back to the original match (as LZSS [google.com] does), or by encoding each symbol in fewer bits (as Huffman [bham.ac.uk] does).


    Sorry, but I'm very leery of any "memory compression" - it requires OS support to function. Period. You aren't going to just plug in a miracle DIMM and make it work. I hope IBM is opening the spec (it looks like they are) and that OS development people quickly embrace this, or their hardware will take a nosedive in the market.

    This is not true. There are a number of hardware data compressors. MPEG is decoded by the N64's hardware, for instance. "Miracle DIMMs" are known as hardware compression units.
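The counting argument quoted above can be checked mechanically for small sizes; a minimal Python sketch of the pigeonhole step:

```python
# There are 2**n bitstrings of length n, but only 2**n - 1 bitstrings
# of all shorter lengths combined (2**0 + 2**1 + ... + 2**(n-1)).
# So no lossless compressor can map every n-bit input to a strictly
# shorter output: two inputs would have to share an output.
def strings_shorter_than(n: int) -> int:
    """Count all bitstrings of length 0 through n-1."""
    return sum(2 ** k for k in range(n))

for n in range(1, 16):
    # Always exactly one fewer possible output than there are inputs.
    assert strings_shorter_than(n) == 2 ** n - 1

print(strings_shorter_than(8), "possible outputs for", 2 ** 8, "inputs")
```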

  • To double the ram in his head was RamDoubler?
  • by Signal 11 ( 7608 ) on Monday June 26, 2000 @12:49PM (#974705)

    Strike 1: "IBM Memory eXpansion Technology" BiCapitalization is the first sign of bad tech - it means the marketing people got to it before engineering could get it out the door. It also boils down to yet another meaningless TLA to impress PHBs: MXT.

    Strike 2: fake numbers. "as memory comprises 40 to 70 percent of the cost of most NT-based server configurations" Er, gee, not only is that an absurdly large error margin, but most servers cost, oh, we'll say $2000 and up. 40% of that is $800. $800 of PC133 right now is about 640MB of RAM. Most systems in that price range have 256-384. Oops.

    Strike 3: Stating the obvious "and millions of tiny transistors" Oh, and how else would you do it? An analog circuit, perhaps?!

    Strike 4: Not promising: "The new technology is seamless to the end-user because the compressed data can be uncompressed in nanoseconds when needed." Call me a pessimist, but memory right now is around 6ns for PC133. Now, assuming a very conservative 2ns to decode the data, that's 8ns, which is a 25% performance hit. How many admins do you know that would take a 25% hit on performance on their servers to save a couple hundred bucks?

    In short, this new tech is gonna tank.

  • Excuse me, but you're a total bullshit artist and nothing you said has any basis in reality

    umm....what he said is true.....try compressing a .wav file or a .mp3 file versus compressing a .txt or a .doc file.............one is going to compress by about 50% while the other....by maybe 5% if u are lucky......i would guess same thing with RAM......the "compressor" will not be able to compress all of the data.


    Why win9x really sucks [cjb.net]
  • Software compression in RAM, or on disk? Software compression on disk is, of course, unchanged. Compressed data structures in RAM will now bloat...so you will be penalized for using them. That is why I hate forced compression.
  • This kind of technology has been around for ages, and it's been sold commercially. Products like RamDoubler and the like were very popular during the 90's for exactly this reason.

    I don't understand what's supposed to be unique about this technology.
  • Silicon has 20 years, tops. In 20 years, the industrial revolution that is sweeping the world will hopefully be over. Then even the average citizen of the second and third world nations may have computers. But if the entire system is based on Silicon, then the cost of everyone replacing their systems would be astronomical. In addition, if the new method were incompatible with Silicon processing, then these people would be isolated by their socioeconomic status once again. And if there is some way discovered to interface the two, the speed of the better technology will surely be compromised by that of the relatively anachronistic Silicon.

    worst case scenario: Mad Max meets Bill Gates
    best case scenario: Mad Max kills Bill Gates
  • Remember how for about a year or so manufacturers advertised that they were selling computers with 200Mb hard drives but when you looked in the small print it said "With XXX installed" where XXX was some disk 'doubler'? I guess we're going to get a year of companies misleadingly trying to sell 1/2 Gigabyte PCs claiming they're Gigabyte PCs. You have been warned!
    --
  • by whistler-z ( 183654 ) on Monday June 26, 2000 @12:52PM (#974711)
    Now drives are cheap and large. I expect the same to happen with memory . . .

    Actually, based on the recent Rambus dealings, it's very likely that RAM prices will go up, not down. Assuming this technology can do what they claim (conceivable), and not have an impact on performance (highly doubtful), this could seriously lessen the impact of Rambus's patent squabbling on the end user's wallet.
  • Stepping back for a second, though, is this REALLY a problem for most systems? I'd be willing to bet that 99.999% of the time, multi-user systems run memory images that are at least moderately compressable.

    The real problem is every time a character is written to memory, the entire memory block has to be recompressed.

  • The difference is that while doubling your hard drive gives you more room for stuff, doubling your memory space dramatically improves system speed as you use less disk space for paging.

    (Yes, I know that the "double" part is best case, but given the relative speeds of memory and disk, even a small amount of compression will likely improve speed.)

  • Well, be prepared for the devil of details! Memory expanders in hardware are cool, but there must still be hidden gotchas, like how zipping a zip makes the file larger, not smaller. Not to mention overhead: with additional operations needing to be executed, this RAM surely can't be faster. Though, I bet it's faster than swap. ;)

    kicking some CAD is a good thing [cadfu.com]
  • But those other schemes were in software, and slow... and therefore useless for high-performance machines. This will be in hardware, therefore a lot faster... but still useless for high-performance machines.
  • Now, here is a curious question. At what point does this actually save you money? For all we know this thing could cost more than what the average user would need to spend in order to double their ram. Heck, if this thing cost even $175 it wouldn't be worth it for me to buy it, yet.

    This is great for server news, but from what I have read, most of the people here seem to think this will be great for their PC. Maybe in a couple of years, when we need more RAM to do whatever we are doing, or after the price goes down.
  • What's so new is that it's implemented in hardware. The article claims approximately 4 orders of magnitude better performance than a software solution. Of course, whether that's enough to make it worthwhile is not addressed.
  • This is so cool!

    I think that Nintendo64 might do this too, as funny as that sounds. The data on the ROM cartridges is compressed, I know that much for sure. I'm pretty sure it's decompressed at real-time when it's accessed, but I don't remember if it's done by hardware or by software routines.

    Can anyone verify this- whether the N64 does it hardware or software?

  • I agree that a RAM-doubling chip would be a great idea for notebooks: apart from anything else, wouldn't a chip that doubles your RAM take up less space than that much more RAM, especially on a 'book with 128MB?

    However, what about heat/power problems? Does this chip use more power and/or generate more heat than the same amount of RAM? You'd have to be careful you don't give yourself twice as much memory and half as much battery time to use it. On a desktop/server, of course, this wouldn't be a consideration.

    Finally, I notice the article says the hardware-based memory compression is "10000 times faster than software-based solutions"... but no mention is made of how it compares in speed to actual RAM. Anybody got any details?

  • Holy shit! Does that mean we have to worry about leakage?!

    kicking some CAD is a good thing [cadfu.com]
  • We need any company we can get to start exploring alternatives in memory technology. Whatever happens in the near future, if Rambus has its way, it really won't matter what platform you are using. RAM is gonna ream you either way.

    So... IBM taking steps to look into and promote alternative memory technology will probably result in other companies doing the same, so that no one will have to pay implant prices for their silicon.

    And... Welcome to the Rambus competition IBM! Really! I appreciate it!!

  • by Guppy ( 12314 ) on Monday June 26, 2000 @12:56PM (#974722)
    "This would rock on the PC, where our bus speeds are slowly reaching 133mhz. If you could send the data compressed across the bus, that is."

    Maybe, but I'll bet there will be a fairly large impact on latency, with the overhead needed for compression/decompression. Bandwidth is more important for some kinds of memory-intensive applications, like database and photo/video editing stuff. But for everyday applications or games like Q3, the latency is going to knock your performance way down. It's one of the problems with Rambus, where the bandwidth improves but latency actually gets worse.
  • Looks like they are just adding an L3 cache, and then putting RamDoubler into hardware.

    Of course, compressing/decompressing data in hardware very quickly (nanoseconds) is a lot better than being forced to go to swap (microseconds to milliseconds). Still, modern processors tend to take very big performance hits whenever a little bit of lag is introduced. The people at IBM aren't idiots though, and I suppose that keeping the lag down is what the L3 cache is for.
  • Hey, memory isn't cheap, but it's not that expensive either. Who are they aiming for? People who don't use their computers much don't need this extra memory. People who need all the memory they can get are either working with large amounts of non-compressible data (huge files like graphics, archives, etc.) or would be worried if compression failed.

    And I still don't know how this works. If I ask for n bytes, and the hardware allocates me n/2 bytes, what happens if I load something non-compressible? Something has to give.

    Wouldn't touch this with the 10-foot stick I keep around for these purposes.

    Avi

  • Ha ha. Then what do you propose to replace silicon with?

    Look at it this way: if a technology existed that was good enough, people would use it. That's the way the free market works.

    You could argue that the big manufacturers don't want to retool or are scared of the change, in the same way that the RIAA is scared to death of electronic music. In that case, upstarts with cost of retooling = $0 will take over with the new technology in the same way that Napster, Gnutella, and Freenet are taking over music distribution from the RIAA.

  • Thanks for quoting the FAQ, but you missed my entire point - memory has to be optimized for the worst case. Worst case means that if it tells my board it is a 256MB DIMM, it must have the resources to deal with all 256MB with no compression. If you don't do your numbers by the worst-case scenario, then something like encrypting a large file (which makes use of massive amounts of entropy.. at least we hope) or loading a pre-compressed file into memory.. like, say, an MP3 server with a RealAudio connection to the net might do.. will tank the system, as it has nowhere to put the extra data.

    You can't push/pull data out of some mystery void.. it has to go somewhere.. and in a worst-case scenario, one bit takes up one bit of space. Sorry.

  • The article glosses over the performance penalty of using this hardware memory compression. I would venture to guess that the latency would increase on reads from decompression delays. The question is how much would this be? This is an important issue considering the targeted server market.

    It seems that any performance penalty would be moot once the amount of stored data exceeded the true capacity of your memory. At this point the compression would be offering a real benefit by avoiding disk paging.

    --Wiredlogic, Remembering Mac RAM compression (in SW)
  • yes, it does mean software compression is moot. duh. also, it won't "double" your memory - god, how i hate that claim. such bullshit.
  • Call me a pessimist, but memory right now is around 6ns for PC133. Now, assuming a very conservative 2ns to decode the data, that's 8ns, which is a 25% performance hit.

    Well, let's say that the stated 2:1 compression ratio is achieved. Now we're moving twice as much data in 133% of the time, which is a 33% performance gain. (2x the data in 8ns, equivalent to the same data in 4ns, as compared to the original 6ns.) The break-even point for 2:1 compression is a 2:1 slowdown in performance. If the compression averages 1.5:1, then the performance must be no worse than 1.5x as slow in order to avoid degrading access time. Can the average performance cost ratio go below the average compression gain ratio, and actually increase performance? If not, then how close is tolerable? I'd say there's room for this technology to be of value, especially, as stated in the article, in servers with enormous amounts of RAM.
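This arithmetic can be written out as a tiny model; the 6ns/2ns figures are the poster's assumptions, not IBM's published numbers:

```python
BASE_ACCESS_NS = 6.0  # assumed uncompressed PC133-style access time
DECODE_NS = 2.0       # assumed per-access decompression overhead

def effective_ns(compression_ratio: float) -> float:
    """Effective time per unit of logical data: each physical access
    costs base + decode time but carries `ratio` times the data."""
    return (BASE_ACCESS_NS + DECODE_NS) / compression_ratio

# Break-even ratio under these assumptions: (6 + 2) / 6 ~ 1.33:1.
print(effective_ns(2.0))  # 4.0 ns -- a net win over the 6 ns baseline
print(effective_ns(1.0))  # 8.0 ns -- pure overhead on incompressible data
```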
  • Heh! Not with the proverbial 10-foot pole. It's not that I object to intelligent use of compression, but if the disk "doublers" were anything to go by, this is just going to be a performance hit and will break stuff. Yeah, yeah, I know they are busily telling us it will be completely transparent, but do you really honestly believe that of a hardware solution any more than you believed the promises regarding disk compression? (OK, so maybe you DID believe the promises for disk compression, but I bet that didn't last long after it got installed!)

    We're all familiar with the scenario where a prog checks that enough disk space is available for something critical and then crashes in a nasty heap when it finds out the hard way that the data didn't compress as well as the OS had assumed it would when assuring us there was enough space there... Compression in hardware could well be worse. Wonder what happens when you malloc() enough storage for your struct and then discover that although every check you can run says there is sizeof(struct_t) space there, you can only fit half your elements in before stomping on somebody else's pointers? Let's not go there.. I don't even want to think about coding on such a platform, much less trying to write a daemon that can't be buffer-overflowed... Hmmm, a whole new range of exploits.. pad the start of some input data with truly random garbage that won't compress well and bingo... what a nasty thought :)
    # human firmware exploit
    # Word will insert into your optic buffer
    # without bounds checking

  • by ucblockhead ( 63650 ) on Monday June 26, 2000 @01:02PM (#974731) Homepage Journal
    $800 of PC133 right now is about 640MB of RAM.

    Not all memory is in the DIMMS. There are caches everywhere.

    Now, assuming a very conservative 2ns to decode the data, that's 8ns, which is a 25% performance hit.

    You are forgetting what happens when you run out of memory. You page fault, and have to access the block off of disk, which takes something like 9 milliseconds. Getting rid of one page fault for every 1000 memory accesses is going to completely wipe out that 25% performance hit.

    In other words, if you are using any sort of paging file, this will almost certainly improve performance, not hurt it.
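A back-of-envelope sketch of this point, with illustrative numbers (the ~9 ms disk figure is the poster's; the access times are assumptions):

```python
FAULT_NS = 9_000_000.0  # ~9 ms disk service per page fault

def avg_access_ns(access_ns: float, faults_per_million: float) -> float:
    """Average memory-access cost once page-fault time is amortized in."""
    return access_ns + faults_per_million * FAULT_NS / 1_000_000

# Uncompressed RAM at 6 ns that faults once per million accesses loses
# to compressed RAM at 8 ns (6 ns + 2 ns decode) that avoids the fault:
print(avg_access_ns(6.0, 1.0))  # 15.0 ns average
print(avg_access_ns(8.0, 0.0))  #  8.0 ns average
```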
  • I don't quite understand your point here... except that you are arguing for the limitations of Si. We have reached and surpassed the "theoretical maximum" of Si several times before. And there are people working on how to do it now. Si is one of the most abundant materials on the planet. GaAs or GaN are neat, but very, very expensive right now both to produce and process, plus they have a high failure (infant mortality) rate. A better way to use Si seems the cheapest route to me.
  • Of course IBM would be able to develop this without being sued by someone else. That was not the issue in question for me.

    The real problem is how the free software movement should react to things like this in general. Do the same ethics apply in hardware as well as in software? Obviously, we can't abstain from using any software that's patented, but we can still fight against ridiculous patents such as the one-click.
  • Not to mention that it can be used in places software can't, like disk or CPU caches.
  • I didn't say what I meant clearly. I do not mean that we would be using another processing medium if it weren't for corporations like Intel. I merely say that it is important to start looking for a solution that will replace silicon when it is no longer as able as we need it to be. For the most part, this is because we are stuck in silicon. It's cheap, relatively, to produce a silicon chip. Why blow a good thing, eh? Invest billions in researching a medium that could kill them, or spend that money to push the bottom line?
  • How much will it cost to implement this technology? WILL it be worth it?

    Will rambus find a way to sue IBM and force them to pay royalties thus driving the price of this new technology past a cost effective point?
    (i know it's not likely, but shit rambus is trying to fuck everyone over right now)

    If my previous statement is false (and I hope it is), this COULD be good if Rambus RAM stays at a high price level. It could leverage existing technologies against Rambus and give the consumer a tool to fight corporate greed. (But Intel seems to be forcing this down our throats.)

    It will be interesting to see what develops. Until then, I'm buying more RAM for my machine(s) now, before SDRAM becomes 'rambus-forced-expensive'.

  • "NT-based server configurations"
    "but most servers cost, oh, we'll say $2000 and up"

    What the HELL kind of servers would those be? Quake servers? $2000 might make a decent Linux server, but $2000 is barely a high end Intel based desktop, much less a server. Good NT servers (at least NT servers that handle a large volume of, well, just about anything) cost two to three times that and have at least a gigabyte of RAM. Now that many Intel server buyers are stuck buying RDRAM to get around the horrible problems of SDRAM on an RDRAM mobo, RAM certainly does eat up that much of the cost.
  • by Anonymous Coward
    Everyone knows that pr0n jpegs are already compressed and cannot be compressed any further. Therefore the technology is useless.

  • I use RAM because it's fast. If we didn't care about speed, we would all be writing to our hard drives, with about 2 megs of RAM, and just use swap partitions for all of our memory needs. This won't last because, no matter how fast it is, it will always be faster to write straight to the RAM.
  • I can easily see more latency coming out of RAM compression/decompression: you have to actually do all the computation. However, if the data coming across the memory bus is compressed 50%, then you can get 2x throughput, so you can double your bandwidth at the cost of some extra latency.

    The big question is: how bad is the latency? If it is too bad, then performance will suffer. If it is fast enough, then performance may actually *increase*.

    PeterM
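    PeterM's latency-vs-bandwidth tradeoff can be put into rough numbers. A back-of-the-envelope sketch in Python (every figure here is a made-up illustration, not anything from IBM's specs):

```python
# Rough breakeven arithmetic for compressed memory transfers.
# All numbers are illustrative guesses, not IBM's figures.

def transfer_ns(size_bytes, bw_bytes_per_ns, ratio=1.0, latency_ns=0.0):
    # Fixed (de)compression latency plus the time to move the
    # (possibly compressed) payload over the bus.
    return latency_ns + (size_bytes * ratio) / bw_bytes_per_ns

BW = 1.6      # bytes per ns, i.e. 1.6 GB/s -- a made-up bus speed
BLOCK = 1024  # bytes moved per transfer

plain = transfer_ns(BLOCK, BW)                              # ~640 ns
packed = transfer_ns(BLOCK, BW, ratio=0.5, latency_ns=100)  # ~420 ns

# Breakeven: compression wins as long as its added latency is under
# the bus time saved by moving half as many bytes.
breakeven_ns = (BLOCK * (1 - 0.5)) / BW                     # ~320 ns
print(plain, packed, breakeven_ns)
```

    With these toy numbers a 100ns compression latency still comes out ahead; push the latency past ~320ns and the uncompressed bus wins.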
  • I believe that most ASICs (other than monster graphics processors) consume far less power and produce far less heat than the main CPU (or a hard drive, for that matter). Any increase in power use and/or heat production of this technology as compared to the RAM it's replacing will be trivial relative to that of the system as a whole.
  • Ok. Even if it's done in hardware this kind of thing has been around for quite some time. The Hobbit processors do this, along with a slew of other embedded processors.

    One thing to think about is how much faster a hardware implementation really is. Time and time again, general-purpose CPUs seem to kick the butts of dedicated hardware in all but the most esoteric cases (like encryption). If it's done by the CPU, then the data you need is already in the L1 cache and possibly in registers, all while avoiding pain for the outer caches. Then add in architectures like EPIC, which have nothing better to do with some of their units anyway...

    I don't know, I don't know if IBM's claims will pan out.
  • Imagine the extra latency this creates added to the insane latency of RDRAM. This might have some nice applications, but not in the coming server market of Intel chipset based P-III Xeon mobos that need Rambus modules to avoid the problems that come from using SDRAM in RDRAM mobos. Talk about poor performance.

    The upside is that, when combined with those nice upcoming DDR/Athlon servers, where an extra few nanoseconds won't be nearly as bad, this could be a decent option, if the data one is using lends itself well to compression.

    The downside is that we sure as hell can't expect to see this used to cut costs on high end video cards. Damn.
  • IANAHardware Engineer, but it seems to me that RAM is already designed to do one simple thing (okay, two things: peek and poke) and to do it absolutely as fast as possible. This technology inevitably will degrade RAM performance by a finite amount. Is their chip fast enough that this degradation will be negligible?

    The saving grace for RAM compression is that DRAM is very slow by logic standards. As long as they're doing something simple like short-run run length compression, the compression/decompression could easily be fast enough to not be noticed compared to the DRAM latency.

    YMMV, though.
  • The upshot seems to be on-the-fly compression in hardware

    ...which immediately brought to mind the "death by frogs" scene from the movie "The Abominable Dr. Phibes" starring Vincent Price.

    In this scene, the victim is persuaded to wear a large frog's head to a costume ball.

    Of course, he doesn't know that the clasp is designed to slowly tighten over time, and cannot be opened...

    ...but I guess that's more like "on-the-frog compression of memory hardware"!

    Also, "chip-based compression" is probably already patented by the makers of Pringle's Potato Chips.

  • I don't think this only goes for NT... seriously. You ever tried to run a whole bunch of X apps on a Pentium 90 with 16MB of RAM? How about 32MB? Dude, it's not just NT that needs RAM. Swapping on Linux is not something to look forward to..

    These "theoretical maximums" you speak of have only been postponed. We can only refine the lithographic process so much before it becomes too expensive or too complicated. I do not think we can do much with exotic chips either, though. If it weren't for the complexity of setting up a molecular computation, and its lack of diversity, or the difficulty in producing the utter vacuum necessary for quantum computing, these methods would be the most appealing.

    As to the validity of my point, it doesn't exist. In fact this post is mostly an off topic rant about my anger that we don't live in a world filled with bacterial computers as promised by AmSci when I was growing up.
  • Why not just tie it into the paging system? Then you have no trouble dealing with going below the actual amount of memory. You just page fault.

    That'd really be the way to do it. A "256 MB" compressed chip acts like it has 256 MB even though it only has 128 MB actual. Once it fills, it page faults.

    In practice, I doubt you'll ever actually hit that sort of case you're talking about. You'd have to assume that everything in memory was already compressed/random. That's never going to be the case given the large numbers of different programs running around, most written without concern about memory. Lots of arrays of zeros.

    Unless you are running some kind of single process OS like DOS, I don't see how you could ever get your memory into a state where you couldn't even get 1:1 compression. There are just too many other processes running around in modern OSes.
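    The "advertise double, page-fault on overflow" idea is easy to model. A toy sketch (zlib standing in for whatever algorithm the hardware actually uses, and a dict standing in for the chip; none of this reflects the real MXT design):

```python
import os
import zlib

PHYSICAL = 64 * 1024   # pretend the chip really holds 64 KB
PAGE = 4096            # the OS is told it has 2x PHYSICAL

class CompressedRAM:
    """Toy model: pages are stored compressed; when the compressed
    pool overflows the real chip, overflow pages spill to 'swap'."""
    def __init__(self):
        self.used = 0
        self.pages = {}        # page id -> compressed bytes
        self.swapped = set()   # pages that had to go to disk

    def write_page(self, pid, data):
        packed = zlib.compress(data)
        if self.used + len(packed) <= PHYSICAL:
            self.pages[pid] = packed
            self.used += len(packed)
        else:
            self.swapped.add(pid)  # page fault: spill to swap

ram = CompressedRAM()
for i in range(32):                 # zero-filled pages: the common, easy case
    ram.write_page(i, bytes(PAGE))
for i in range(32, 64):             # incompressible pages: the worst case
    ram.write_page(i, os.urandom(PAGE))

# 128 KB of zeros squeezes into almost nothing; the random pages
# eventually overflow the 64 KB chip and get paged out instead.
print(ram.used, len(ram.swapped))
```

    The point of the sketch: the zero-filled pages (the "arrays of zeros" case) never fault, while truly random data degrades gracefully into ordinary paging rather than breaking anything.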
  • Faster to write straight to RAM, assuming that you haven't run out of RAM and started actually writing to the swap partition...

    So unless you've turned off virtual memory, you might want to rethink your position.
  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Monday June 26, 2000 @01:23PM (#974750) Homepage Journal
    I took a snippet from /proc/kcore and compressed it to see how well it would work.

    dd if=/proc/kcore of=/spare/mem bs=1 count=1000000

    that resulted in a chunk of kcore 1 million bytes long written to the file /spare/mem

    bzip2 -9 /spare/mem

    that resulted in a file named mem.bz2 being 349791 bytes long.

    This was on your typical RedHat system running the usual stuff.

  • by Anonymous Coward
    Well, let's say that the stated 2:1 compression ratio is achieved. Now we're moving twice as much data in 133% of the time, which is a 33% performance gain

    No, it is not: you are not moving twice as much data. The data rate is still the same; what has increased is the total amount of memory. The interface from memory to CPU is still the same (or has IBM doubled the Pentium's bus, too?).

    Now the question is which matters more to you: total RAM size, or the increased latency of memory.

    (Latency, not bandwidth, which the first poster also got wrong.)
  • What I was saying is that I want my memory fast. If it could take all day to write, I would write to swap; but since it can't, I write to RAM. I use RAM because it is fast; I don't use virtual memory (when avoidable) because it is slow. I wouldn't compress RAM, because that takes time and makes the system slower. I don't need to rethink my position. Why would I want slow memory? Get it?
  • I recall the days when you could get software to solve any number of hardware problems. Not enough RAM? Get RamDoubler. Not enough disk space? Get Stacker. Computer not fast enough? Get 386to486.exe!

    A rule of thumb I recall from that era ran something like: 'software solutions to hardware problems are impractical'.

    The fact that they do the compression in hardware may have some merit. So I did a bit of testing: I checked the size of /proc/kcore, and the size after piping /proc/kcore through gzip and into a file.

    On my 32MB box (4944k used, not counting cache):

    compressed   uncompr.    ratio  uncompressed_name
    18796861     33558528    43.9%  kcore

    On my 192MB box (144872k used, not counting cache):

    compressed   uncompr.    ratio  uncompressed_name
    99302828     201265152   50.6%  kcore

    The figures are probably quite skewed, since the core image was not a snapshot. But it looks like the actual used memory compresses better than the bit-soup that is in the DIMMs when the system powers up.

    Who knows... maybe IBM has a few tricks up their sleeves. It will be interesting to see some Linux source to deal with these beasts; I'm assuming it's OS-dependent, and since IBM has been great about Linux lately, I'd think they would release whatever kernel patches are necessary to use these things.

    --

  • A standard hack that on-the-fly compressors use is to keep a flag bit indicating whether a given chunk is compressed or uncompressed. That way you can avoid the problem that recompressing already-compressed data often grows, at the cost of a flag. On the other hand, if most of your data is compressed anyway, like web servers caching lots of JPEGs, you still don't win, and you're hauling it uncompressed on most of the busses anyway.

    I am interested in how big a block of memory this stuff operates on. Compressing a few K at a time, such as a disk block, may be big enough to win, but compressing a 128-bit cache line almost certainly loses. Where's the breakeven point? The hope of this technology is that if you stick it way out in L3 cache, you're usually hauling big enough chunks at a time that the decompression latency for getting the first byte is made up for by the bandwidth of getting the rest of the bytes from a smaller amount of memory.
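    The flag-bit trick above is a few lines in any language. A hedged sketch in Python (zlib standing in for whatever algorithm the real hardware uses):

```python
import os
import zlib

def pack_block(block):
    # Flag-bit hack: keep the compressed form only if it actually
    # shrank; otherwise store the block raw and note that in the
    # flag, so a block can never grow on recompression.
    candidate = zlib.compress(block)
    if len(candidate) < len(block):
        return True, candidate
    return False, block

def unpack_block(is_compressed, stored):
    return zlib.decompress(stored) if is_compressed else stored

redundant = b"the same old bytes " * 200  # typical memory: compresses
random_ish = os.urandom(4096)             # e.g. a cached JPEG: won't shrink

f1, s1 = pack_block(redundant)
f2, s2 = pack_block(random_ish)
print(f1, len(s1), f2, len(s2))
```

    The already-compressed block comes back flagged raw at its original size, so the cost of the worst case is exactly one flag bit per block.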


    Help, help, I'm being compressed!

  • Strike 2: fake numbers. "as memory comprises 40 to 70 percent of the cost of most NT-based server configurations" Er, gee, not only is that an absurdly large error margin, but most servers cost, oh, we'll say $2000 and up. 40% of that is $800. $800 of PC133 right now is about 640MB of RAM. Most systems in that price range have 256-384MB. Oops.

    It isn't an NT server, but a 128-processor Beowulf was recently built at the University of British Columbia, and memory comprised 44% of the total cost.

    I don't think that 40-70% is unreasonable at all.
  • <SARCASM>Suuuuuure! Just like your web browser is in hardware, since it's running on your CPU, which is a piece of silicon. </SARCASM>

    Annnnnnnyway, for those of you that have some sort of interest and/or clue, I'm still trying to dig up some specs on the N64 to see if it decompresses in hardware or not... I just think it's neat when companies make some multi-million dollar announcement about some technology that was pioneered in video games. :-)

  • by ucblockhead ( 63650 ) on Monday June 26, 2000 @01:29PM (#974757) Homepage Journal
    Is a caching system. From the article:

    MXT is a hardware implementation that automatically stores frequently accessed data and instructions close to a computer's microprocessors so they can be accessed immediately -- significantly improving performance. Less frequently accessed data and instructions are compressed and stored in memory instead of on a disk -- increasing memory capacity by a factor of two or more.

    Note two things: They are not compressing everything. They are not replacing the actual memory.

    Most of the criticisms here are based on misunderstandings of those two things.

    (Note that I'm not guiltless: I posted a number of times before getting around to reading it.)
  • Hey, thanks for not being able to understand what the FAQ said. This brings up a valid point, which is that so many FAQs are just unreadable to the general viewing, hairy-palmed, public.

    So, this leads us to an intuitively obvious suggestion regarding the moderation system: it can be totally done away with, if only a similarly obscure and inscrutable document must be read and irritatingly followed to allow access to Slashdot. The instructions should clearly include several mathematical formulae, at least ten words with four or more syllables, and a few misdirections for good measure. That should keep us good and clean.

  • Let's say you have 128MB physical memory installed, here's an easy way to make sure that nothing bad happens:
    Tell OS you have 256MB available. When that 256th MB is malloc()'d and compression isn't quite doubling the size, page to HD. No performance lost, it would have paged if you didn't have compression anyway. Pointers and everything should work just fine, the same way they do when you page normal RAM to HD.

    It's not like doubling the HD, where when you run out of space, you *really* run out of space.

    -Casey
  • Compression CANNOT guarantee anything better than a 1:1 ratio - it is ENTIRELY dependent on the data.


    Nothing in life is guaranteed, you're right. But most of the plain old ASCII text files on my hard drive compress 75-90%, depending on length.

    For data compression in memory to succeed, you MUST have an option to cache the "extra" memory to a swapfile in case the prediction logic fails and you run out of physical RAM. If you do not, you will tank your system, bigtime.

    Well, it's not like IBM's mandating the removal of virtual memory or swap files in order for this to work. That's what it's there for: to catch whatever won't fit in RAM.

    Sorry, but I'm very leery of any "memory compression" - it requires OS support to function. Period. You aren't going to just plug in a miracle DIMM and make it work. I hope IBM is opening the spec (it looks like they are) and that OS development people quickly embrace this, or their hardware will take a nosedive in the market.

    Not so. Just look at Connectix's RAM Doubler on the Mac. It was a 300 to 400k extension that DID effectively double a system's memory, with a 3% to 5% slowdown, and NO extra help from Apple. If the compression's now done in hardware, I'd expect 0% slowdown, and still no requirement that each OS explicitly support it. So long as it's built in at the chipset level, and the OS can communicate with the memory controller, everything should work fine, right?
  • The vast majority of systems out there today use paging files.

    This would cause a dramatic reduction in page faults on that vast majority of systems.

    Get it?

  • No, a file server for your department doesn't need moby RAM. But a large web server or an enterprise-sized database server both win by using lots of RAM to cache data, because RAM really has a few orders of magnitude faster throughput than disk drives, and the latency is much lower, which makes a lot of difference for databases. Even a few years ago, it made sense to toss a GB of RAM into a database server, and now that that's only $1000 or so, it usually makes sense to buy more.
  • As a matter of fact, I am running e2compr, the transparent compression module for ext2 (the Linux filesystem). I have been for over a year. It cheerfully handles compression with far more fine-grained control than most Windows programs (e.g., you can choose individually whether or not to compress a file) and performs, so far as I can tell, flawlessly. That is, it has never corrupted data or caused bugs in other programs, and has never crashed the system even under heavy load. There was one crashing problem (that I never triggered) which was recently discovered and fixed; I don't think there are any other known outstanding issues.

    So yes, transparent compression can work just fine. The case of running out of space is a problem, but no program should blindly assume that the amount of disk space remaining is reliable unless it's running on a single-process system, and even then it's shaky. What if, while the program that needs X disk space is running, I download and unpack a file? Worse, what if another user does it on a multi-user system? Programs that break in this way are broken already, period.

    Daniel
  • RAM compression isn't going to deliver compression ratios as good as stream oriented algorithms like deflate and bzip2, because it has to be random-access and the compressor doesn't have as much context to look at.

    But as you pointed out, memory tends to be pretty compressible because of the significant redundancy.
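    That context effect is easy to demonstrate. A quick sketch (the chunk size and data are made up, and zlib stands in for whatever a real RAM compressor would use):

```python
import zlib

# A chunk of "memory" with lots of redundancy spread across it.
data = b"GET /index.html HTTP/1.0\r\nHost: slashdot.org\r\n\r\n" * 400

# One long stream: the compressor sees every earlier repetition.
whole_stream = len(zlib.compress(data))

# Independent 128-byte chunks, the way a random-access RAM
# compressor might work: each chunk starts with zero context,
# so redundancy *between* chunks is wasted.
CHUNK = 128
per_chunk = sum(len(zlib.compress(data[i:i + CHUNK]))
                for i in range(0, len(data), CHUNK))

print(len(data), whole_stream, per_chunk)
```

    Both beat the raw size, but the per-chunk total is far worse than the single stream: exactly the gap between a stream compressor like bzip2 and a random-access in-memory scheme.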
  • by spectecjr ( 31235 ) on Monday June 26, 2000 @02:34PM (#974778) Homepage
    "as memory comprises 40 to 70 percent of the cost of most NT-based server configurations"

    That's because NT is bloatware. Now if everybody would run Linux, there would be no need for this technology, now would there..

    I'm sorry, but I just had to post this.


    Actually, NT's not the problem. The problem is administrators who don't know how to take Exchange and SQL Server out of "Standalone" mode; as standard they pre-allocate a massive chunk of memory (as much as they can get; usually between 60-80%) so that they can run as fast as possible when they're on a dedicated box - which is the recommended way of setting them up on a large network.

    You can, however, turn this off. The registry key settings to do it are documented.

    Si
  • Since this is hardware located on the motherboard, don't look for it in the desktop too soon, unless Rambus licensing fees drive it.

    Since it is on the motherboard, it would probably require support at the BIOS level. Also, because it has its own caching system, it may only work with certain CPU chips. Maybe, maybe not.

    The real question is, where does this technology live? Is it in the North Bridge, or in some other bizarre location? Or is it on the DIMM?

    See, for it to be truly transparent (Which it more or less has to be) it's going to have to be in the chipset, or on the DIMM. Putting it on the DIMM means there has to be one of these suckers for each DIMM, but then that might be true already. It seems to make the most sense to embed it into the chipset, or to sandwich it between the chipset and the memory.

    If IBM does it right, then that means that you will not need any OS support for this "technology". From the article given, it doesn't really sound like there's anything amazingly new here, but that remains to be seen. I'd want to see the patent involved before I made any calls on this. In any case, hardware data compression has been around for a long time. Heck, take as a [lousy] example the FWB Jackhammer NuBus SCSI cards for the Mac. Those did compression in hardware, though they did have a software component (the driver written to the hard disk.) I'm envisioning this as a more transparent way of handling it.

    It's not unreasonable to say that IBM could get 2:1 compression ratios reliably with a good algorithm and some high-speed logic. It's even likely that it will be cheaper to buy 512MB using this technology than 1GB without it; it's even more likely that it'll be cheaper to buy 2GB with it than 4GB without it, because the larger the DIMM, the higher the cost per megabyte, once you get past a certain point (about 32MB).

    Anyway, show me the patent. I'd like to read it.

  • The reason I stopped using Drivespace(3) was that MS never made a FAT32 compatible version, not because I decided that it sucked.

    Bias Disclaimer: I think that predominantly transparent compression of a large storage area rocks. I'm also a huge fan of dedicated hardware acceleration. Thus I think this is the sort of cool idea I'd want regardless of whether or not it actually works.

    That said, this should provide some performance increase to any system with a swap file, like my little portable (not that it can be retro-fitted) - but I would like to know how the OS will deal with a variable memory size...

  • by orpheus ( 14534 ) on Monday June 26, 2000 @03:15PM (#974792)
    Five major points about system memory compression:

    1. Why do you want to do it?
    Is it because RAM is expensive? Okay, but RAM prices would have to climb much higher to make it worth the new boards, new architectures, and intrinsic problems.

    Is it because your system will 'run faster with more RAM'? Don't count on it. Trading RAM latency for apparent RAM size means that a given apparent RAM size will run slower than the same size uncompressed (i.e. 64MB compressed to look like 128MB is slower than a real 128MB), the performance gain will be variable for a given *true* RAM size (= larger apparent size), and it may disappear in certain settings (is 64MB compressed faster than a plain 64MB? Depends.)

    Remember, CPUs are data hungry critters, and feeding them at one end (and emptying them at the other) is already one of the biggest challenges of modern system design.

    2. Transparency is not enough. We need ultra-transparency
    Remember: any general-purpose compression yields variable results with different data (and changing a bit will change the 'actual size' of a block, and hence the physical location of the bytes within the block). Compression confounds the 1:1 correspondence between physical and logical memory addresses, and the relationships between different memory addresses, so we'll need to de/compress entire blocks and cache them (more on this later).

    Without ultra-transparency, optimizing low-level code becomes fraught with emergent effects. The most important thing about RAM is the capability for RANDOM access; a lot of people have forgotten there was any other kind -- serial memory, bubble memory, etc. -- bucket-brigaded bits demanded very different algorithms for efficiency!

    Think of the pitfalls of straight CPU/mobo caching designs. 'Cache thrashing' can bring some fast algorithms to a molasses crawl, precisely because caching disrupts the relationship between contiguous bytes: a slower algorithm that reuses bytes in cache is preferred over a 'mathematically faster' one that relies on massive sequential reads. Compression thrashing can do the same, and will have (multiple) cache problems on top of this.

    3. Where did I put that block?
    There is no assurance that you will be able to return a block to the same DRAM location you got it from -- change a few bytes, and it may be larger (in physical length) than it was, despite having the same virtual length. This implies RAM fragmentation -- and all the associated housekeeping. And where are you going to store the record keeping? In *another* local cache? In RAM?

    You'll need lots of hardware housekeeping here. It's do-able, but without a level of sophistication that approaches predictive branching and pipelining, you can count on extensive unexpected 'emergent effects' -- code that's slow for non-obvious reasons, or bugs.

    4. Strangely, on-chip cache may be the best place to use hardware RAM compression!
    a) Hardware compression would be a small addition to the CPU circuitry, and can be run at the full multiplier speed.
    b) No need for separate chips or mobo redesign
    c) on-chip cache costs hundreds of dollars a meg (price a 512K PIII vs a 2MB PIII Xeon of the same clock speed) so extraordinary measures can be taken. It will probably improve chip yield, too.
    d) integrating compression with the prediction/pipelining/cache management/etc. of the CPU can make it more transparent
    e) L1/L2 is where you will get *huge* payoff, by 'keeping baby fed'.
    f) there's much less latency (chipset, PC trace, L1, L2, L3 cache) on-chip vs. off-chip, so adding an off-chip layer adds more latency than adding an on-chip layer

    5. The fundamental performance rule is: trade excess performance in one place to improve inferior performance elsewhere
    RAM is no longer a source of 'excess performance'; it's a pressure point. Every extra 10MBps in RAM throughput shows up in the benchmarks (unlike, say, HDD busses like ATA33/66/100, where doubling or tripling speed does little for system performance).

    Where's the excess speed in modern systems? It's inside the CPU -- which runs at a multiplied clock speed, has vast optimization, and is always starved for data. It's also the place where adding RAM does the most good.

    However (!) using compressed on-chip cache will require an intensive study and redesign of cache theory, unless this possibility has already been explored in conjunction with the development of current CPU features like prediction/pipelining/VLIW. 2-, 4-, 8-way associative simply won't hack it when the cache doesn't have 1:1 virtual/physical data correspondence!
  • Time and time again general purpose CPU's seem to kick the butts of dedicated hardware
    Only if that's the only thing they're doing. Most of my portable's CPU time is taken up playing MP3s. If I could farm that task off to dedicated hardware then I'd have more CPU time for whatever, disk compression maybe (or D.net or SETI@home ;).

    Also, I'd tend to think that the dedicated hardware acceleration in today's video cards is not especially esoteric...

  • by Azog ( 20907 ) on Monday June 26, 2000 @03:34PM (#974797) Homepage
    Memory compression (and disk compression too) are useless to me, with one exception: texture compression on video cards.

    Think about this. I have 256 MB of ram and 50 GB of disk space on my main machine. Why not, it's cheap! What I want is lower latency storage. Right now, that is expensive or unavailable, but would make a difference to the performance-sensitive apps I run.

    Furthermore, the large files that I would like to compress are already compressed: video (MPEG), sound (MP3), and image files (JPG). On my 40GB server, I have 17 GB of MP3s, 1 GB of other files, and the rest is empty space! Why bother compressing that 1 GB just to get half a gig back?

    From what I hear, even people who do non-linear video editing are editing compressed MPEG directly these days, rather than using uncompressed AVI or other formats.

    So... What applications could actually benefit from this?

    Maybe web servers with thousands of text files in RAM? Maybe people working with extremely high resolution bitmaps or uncompressed video?

    Torrey Hoffman (Azog)
  • Do the same ethics apply in the hardware areas as well as in the software?

    Yes and no. Yes, the same ethics apply, but no, the ethical objection against the LZW patent does not apply. Consider the enormous capital that a company would have to expend in order to manufacture these chips (the fact that it's IBM should give you a hint as to the scale), as well as the months (years) of planning and design that go into them. Does a patent search (and negotiations) seem like an overwhelming part of the development budget? (Compare this to a programmer who can inadvertently infringe upon a dozen software patents per hour of his work.)

    Furthermore, this is allegedly transparent, where systems that it interfaces with are not required to also license the patent. Compare this to a protocol or interchange file format (e.g. GIF). If you use GIF, you have to license the LZW patent. If you use these chips, you won't.

    I don't see the patent doing any damage here.

    Same ethics always apply, but different conclusions depending on circumstances.


    ---
  • It's already standard practice with tape drives, no need to remember way-back-when. Every single tape drive on the market is advertised as having twice as much capacity as it really has. (One of my pet peeves.) Yes, it would suck if that happened to the RAM market too.


    ---
  • A powerful general-purpose CPU is all well and good, but they're expensive (and power hungry). If, for some specific tasks, a much cheaper bit of dedicated hardware can perform just as well, then it makes sense to farm the task off to that cheaper hardware.

    A good way to get a quick feel for this is to look at portable MP3 players vs. playing MP3s on a WinCE/PocketPC unit. A dedicated MP3 player gives you 64-96MB of RAM for around US$300. A WinCE/PPC unit that can play MP3s will typically cost US$600-$1000 and only include 16MB-32MB. And the battery life is 12 hours vs. 3 or less.

  • This really has nothing to do with disk caching, it compresses memory, not disk space. And it WILL increase latency because a section of memory must be decompressed before it can be sent. Even with dedicated hardware it will still take a few clocks.
  • It's not just the new drivers from nVidia. What nVidia does is DXTC (DirectX texture compression), which is a form of S3TC (S3 texture compression). 3dfx is doing this too, with FXT1 (I think that's right).
  • Oops. Disregard my previous comment. I was listening to the topic instead of reading the article. You're right about everything. I'm sorry. Forgive me. ;)
  • Tape drives are advertised with two numbers - uncompressed/compressed, i.e. 4/8 for DDS2 or 12/24 for DDS3. This tells you that there is a hardware (or simply transparent) data compression system somewhere in the pipeline. I typically expect a 1.5x increase on data of Office97 or later vintage (Word and PowerPoint 97 save images compressed; pre-97 versions save images uncompressed). Thus I would consider a 4/8 to give me about 6Gig, a 12/24 to give about 18Gig.

    If there was a similarly transparent (including no noticeable performance hit) RAM compression technology built into a system or a SIMM, then I wouldn't object to a 64/128 or similar description.

    However, advertising just the compressed size is immoral and deceptive...

  • mmap() a large encrypted file. Start decrypting.

    When you do that in the background the OS will have to start paging in the encrypted file. It will stay in memory until that memory is needed for something else. But the encrypted file is a worst case scenario for the compression algorithm.

    If someone has an encrypted filesystem this will actually be an extremely common case!

    Really, your "Oh, we will never hit that in the real world" is exactly how programmers f*ck up time and time again. You may not see how it will happen, but it will happen eventually, and people will get hosed.

    Deal with your corner cases before turning your code loose on the world, please.

    Regards,
    Ben
  • One of the arguments for using software disk compression back in the day was that it would increase transfer rates. It was faster to move less data across the disk bus and then process it with a fast processor than it was to move the uncompressed data directly to memory.

    1) How true was this claim, and if it was true, why isn't it still true?

    2) Will the same effect be apparent in hardware memory compression? Furthermore, would more performance be seen if the compressor were moved onto the processor, so that not as much data had to be moved across the memory bus?
  • yeah, what he said!

    I haven't had a look at the press release, only the AP article. The one place this would be useful is in the 2nd or 3rd level cache. If you can compress fast enough then the extra size of the cache would come in handy.

    The reason why this technique would work on a cache, but I can't see how it would work on main memory, is that you don't ever see the cache, so you never make assumptions about how large it will be. So the encrypted data sitting in the cache fills it up after 1 Meg, while the empty matrix full of zeros can have almost 4 Megs cached.

    Cache is transparent, so you can never be bitten by it changing logical size on you. Unlike main memory.

    For main memory, like sig11 points out, the OS will make assumptions that it can store exactly that much info. And since the hardware has no provisions for asking the OS to page out memory (and has no business asking, either), eventually havoc will be wrought.

    So I'm strongly suspecting that this will turn out to be used for 3rd or 4th level cache.
