Intel Tests Show PC133 SDRAM Bests RDRAM
SteveM wrote citing a Semiconductor Business News article which begins: "SANTA CLARA, Calif. -- Here's a surprise. Benchmark test results from Intel Corp. show its new 815E chip set with PC133 SDRAMs beating the performance of its 820 chip set with Direct Rambus memories. Moreover, Intel has posted those unexpected test results on its Web site, not intending to show PC133 SDRAMs beating the Direct Rambus memory format, which is favored by the Santa Clara chip giant." The results actually show some fairly unspectacular differences, but those differences lean overwhelmingly in favor of the SDRAM. Surely someone will come up with a benchmark that always makes RDRAM look better.
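If you'd rather sanity-check headline memory numbers yourself than trust anyone's posted results, a crude throughput probe is easy to write. This is only an illustrative Python sketch (buffer size and run count are arbitrary choices, and it measures the whole memory hierarchy, caches included), not a substitute for a real benchmark suite:

```python
import time

def copy_bandwidth_mb_s(buf_mb=64, runs=5):
    """Rough memory throughput: time full copies of a large buffer
    and report MB/s for the best run. Ballpark only -- caches,
    allocator behavior, and the OS all color the result."""
    n = buf_mb * 1024 * 1024
    src = bytearray(n)
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        dst = bytes(src)  # one full read pass plus one write pass
        best = min(best, time.perf_counter() - start)
    return (2 * n) / best / (1024 * 1024)
```

Run it with the biggest buffer you can afford so cache effects wash out; comparing two machines with identical settings means more than any absolute number it prints.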
Oops, posted too early. (Score:1)
--Shoeboy
Re:Not performance... (Score:2)
No, I fully understand your point and agree with what you are saying. All I'm trying to point out is that benchmarks that deal exclusively with performance and do not mention cost are necessary. In this case there is a clear winner and a clear loser, but that isn't always the situation, so as a general approach it doesn't work....
Re:Title (Score:1)
Re:Title (bests) (Score:1)
Re:Intel's conspiracy? (Score:2)
The fact that intel decided to use SDRAM and/or DDR SDRAM in its next generation of chipsets instead of RDRAM outright shows that intel knows a bit better than to push technology that is at best marginally better at 5x the cost. The conspiracy really isn't much, I quote from Tom's hardware:
When Intel 'decided' to go for Rambus technology some three years ago, it wasn't out of pure belief in the technology, and certainly not just 'for the good of its customers', but simply because they got an offer they couldn't refuse. Back then Rambus authorized a contingency warrant for 1 million shares of its stock to Intel, exercisable at only $10 a share, in case Chipzilla ships at least 20% of its chipsets with RDRAM support in back-to-back quarters. As of today, Intel could make some nifty 158 million dollars once it fulfills the goal.
20% of the market is quite a bit, but Intel doesn't have to be a Rambus zealot to pull this off. If Rambus really does work better for, say, the server market, this is achievable without the incredible loads of propaganda we've seen from them last year and much of this year.
The fact that Intel itself would come out and say DDR SDRAM is better than RDRAM pretty much ends the conspiracy theory. But that doesn't mean they're not still biased towards it.
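The warrant math in the Tom's Hardware quote is easy to check. A quick sketch; note the ~$168 market price is inferred from the quoted figures, not stated anywhere:

```python
def warrant_profit(shares, strike, market_price):
    """Value of exercising a stock warrant: buy `shares` at the
    strike price, mark them at the market price."""
    return shares * (market_price - strike)

# 1 million shares at a $10 strike; a market price around $168
# reproduces the quoted ~$158M figure.
print(warrant_profit(1_000_000, 10, 168))  # 158000000
```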
Re:Title (Score:1)
Re:This is sensitive to many things. (Score:3)
Adding a second RIMM channel also reduces the likelihood you'll take a "bank hit" in the RDRAM, and it allows the chipset to prefetch on the second channel if it thinks there's going to be a subsequent access over there when it sees an access on the first channel.
Of course, CPU and chipset designers have never been all that good at ESP. And, as on-chip caches grow larger, the traffic at the CPU boundary looks increasingly random because all of the redundant and predictable traffic has been absorbed/filtered by the cache, making ESP all the more important. (And yes, I mean Extra Sensory Perception, as in the chipset needs to psychically know where the CPU's going next.)
The other comments about making the channel wider rather than deeper to reduce latency also apply.
--Joe--
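The dual-channel trick is, at bottom, just an address-to-channel mapping. A toy sketch (the 32-byte granule is an assumption for illustration, not any chipset's actual stride):

```python
GRANULE = 32  # bytes per interleave granule -- illustrative value

def channel_of(addr):
    """Map a physical address to one of two RIMM channels by
    alternating fixed-size granules. A linear stream then touches
    both channels, letting the chipset overlap their accesses."""
    return (addr // GRANULE) % 2

# A sequential walk ping-pongs between the channels:
print([channel_of(a) for a in range(0, 128, GRANULE)])  # [0, 1, 0, 1]
```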
Re:Not performance... (Score:1)
However, in this case it doesn't matter until RDRAM gets cheaper, or it gets a killer app that works massively better with it. I think it could make a pretty good long-term, frequently accessed data cache; maybe something like a BIOS shadow?
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
[OT] Logic Error, GN (Score:1)
Keeping
How does pointing out an error after the fact keep
Re:Shame on this Anonymous Coward, (Score:1)
My goal in life is not building up Karma. Better crack is available in my area, and it's also a lot cheaper than the $3 stuff I keep hearing about.
My goal in life is also not cleansing Slashdot of bad grammar, bad spelling, or pretty much anything else. I just can't help noticing that a guy like myself, who isn't a native English speaker, doesn't live in an English-speaking country, and has had poor formal training, can still spell better than some of the participants in this forum. Some, but not all! Indeed, almost everything I know about the English language I've learned here.
I'm not supposed to be your enemy, and my reasons for picking this nick have nothing to do with my spelling-correcting posts. Spelling is the area where I, well, duh, rule; grammar (English grammar, mind you) is not. I truly appreciate that (apart from "poeple", which is supposed to be a typo, right?), but then you've picked the name "* nazi", which, you should realize, may cause you problems with some folks of European, and especially Jewish, origin. I don't have this problem, because I realize the name is not supposed to represent those real nazis. Be warned that not everybody is like me. Try to avoid assuming such names in the future, and you might save yourself a lot of trouble. Thank you for your time. I hope you didn't find it terribly difficult to finish reading this long, dull submission (which I'm even bothering to spellcheck right now... ok, no tyops).
Actually, I never said... (Score:5)
> AMD from your ire. While they are surely less evil than Intel, they are
> still evil for contributing to the continued existence of x86.
Actually, I never said that I personally think x86 is bad, evil, or otherwise undesirable. I used the phrase "since everyone here hates the x86 architecture so much"--and generally they do, but I'm an exception. In a recent post [ http://slashdot.org/comments.pl?sid=00/06/29/2227
So, I never said Intel was evil for pushing x86 for so long; I said it's dumb for people to hate x86 but not fault Intel for failing to create a better ISA long ago. That leaves AMD in the clear as far as I'm concerned, since I'm glad they're going to extend x86 to 64 bits, maintain backwards compatibility, and maintain an open, freely usable ISA. Putting the next big ISA under Intel's licensing control is a very, very, very dangerous idea; I'll keep incurring that 1% penalty in exchange for keeping an open chip platform, thank you. The reasons Intel is evil include its sloth, especially in keeping the P6 core for so long, and its predatory, M$-like nature. I congratulate AMD for starting out with really crappy, inferior processors but making honest, huge leaps with almost every generation, almost every year, while Intel sat on its hands with the P6 core *for 5+ years*. AMD processors are now at least equal to their Intel brethren, most benchmarks give them a slight edge now that the cache is all on-die, and in price/performance they whomp Intel completely and mercilessly.
> Quality, high-performance workstations from Sun, SGI, and Decompaq can
> be had for less than USD 5000
Yes, I agree that the PC architecture is woefully lacking, but the openness of that platform is what allowed the Internet boom and Information Age to happen. Cheap commodity hardware that even people who live in trailer parks can afford, but which scales up to performance powerhouses that equal the horsepower (for most applications, though obviously not all) of a RISC Unix workstation for a fraction of the price. The sheer brute force and clockspeed of a commodity x86 processor, even on the hobbled buses of the PC platform, make Alphas and UltraSparcs unnecessary for all but the highest-end uses. It may take an 800MHz Athlon to get the FP performance of a 400MHz Alpha, but when the Athlon and its mobo are so inexpensive, there's no contest as to which is more useful. Why in God's name would I pay $5000 for a DEC or Sun box which won't run most things any faster than a $2500 x86 box I could build myself? For the elegance? Fuck elegance, give me just as fast for half the price and I'll take x86 ugliness any day. Depending on which processor the DEC or Sparc has, either an Athlon Tbird or SMP P!!!s could get equal performance for between $1600 and $2500 total, nowhere near the $5000 for a non-x86 workstation or server. If you need those big caches, the 500MHz Xeon with 2MB cache goes for between $700 and $900, though for most applications regular P!!!s at higher clockspeed with a smaller cache would be better, or a regular 1GHz Athlon Tbird. Jeezus, one could build a quad Xeon for less than the price of a typical DEC workstation: mobo $2500, P!!! Xeon 733MHz processors $500 each, add a hard disk and video card to taste. Unfortunately, AMD is still behind with its multiprocessor solutions...
Most PC platform problems could be cured by moving to faster and wider buses and a Unified Memory Architecture like the one SGI used on its short-lived line of Wintel workstations. And most existing operating systems, and the software that runs on them, would work fine with just a minor OS patch, like the one SGI used to get NT 4.0 running on its UMA Visual Workstations.
Deja Vu (Score:1)
I remember hearing about the same problem at the beginning of the AGP era.
This is not only a matter of technology, but also of drivers, ROM/BIOS routines, etc.
The problem might disappear as soon as some "tuning" is done.
BTW, benchmarks usually involve very specific tests where low-level aspects count for more than things related to ergonomics: user comfort, etc.
It would be good to know what they actually found.
--
Re:RDRAM vs. SDRAM (Score:1)
Re:RDRAM vs. SDRAM (Score:1)
Re:And the topic of the day is: MDMA (Score:1)
Re:[OT] Logic Error, GN (Score:1)
Re:Actually, I never said... (Score:2)
Anyone remember Intel's lawsuits against AMD for implementing this open ISA? Huh. How quickly we forget. The only reason Intel gave up is that they have something supposedly better now. On the other hand you can buy a license to manufacture as many SPARC chips as you like for $99. Total, not each. SPARC is an open ISA. x86 is only open because Intel no longer cares to defend it.
which according to Ars Technica only adds about 1% penalty to the processor's speed.
While I doubt this number, I have no other, so I will not contest it. Regardless of the performance penalty, there are certainly much larger penalties: a) power, since consumption is proportional to die size; b) heat output, ditto; and c) elegance. To me, the elegance penalty is the killer. It's cruft. It's a nasty hack to try to get performance from something that was never designed for it. It's a marketing decision laid down in silicon. Even if you don't care about elegance, consider this: how much faster would the CPU be if the extra silicon were a) cache, or b) logic directly related to processing, not translation? It's inexcusable.
Fuck elegance, give me just as fast for half the price and I'll take x86 ugliness any day.
I'm sorry you feel this way. I don't think I could live without an appreciation for beauty.
If you need those big caches, the 500MHz Xeon with 2MB cache goes for between $700 and $900
Uh... the street price on a 2MB Xeon 500 is $3000-4000. That is, higher than a 400MHz 4MB UltraSparc II, and significantly lower in performance as well.
Unfortunately, AMD is still behind with its multiprocessor solutions...
As is Intel. The practical effects of inelegance.
Most PC platform problems could be cured by moving to faster and wider buses, and a Unified Memory Architecture like SGI used on its short-lived line of Wintel workstaions.
Sure. But that's where all the cost is, not the CPU. And if you're going to spend the money on a nice architecture, why not put in the extra $100 for a better CPU as well? Then you can kick Intel's kiester for their anticompetitive behaviour as well.
And, most existing operating systems and the software which run on them would work fine with just a minor OS patch, like the one SGI used to get NT 4.0 to run on its UMA Visual Workstations.
The operating systems that are used to actually get things done already run on the CPUs that don't suck. Linux runs on virtually everything. As does NetBSD (no SMP though). You can get realtime OSs for nearly every CPU, and there are vendor Unix OSs that work fine for most platforms. Who cares about enntee? Nobody who values his job uses it anyway. And all the useful OSs already have code to handle the I/O architectures. Why patch when useful OSs are already available?
Re:Intel's conspiracy? (Score:3)
I really think Intel wasn't backing Rambus out of any sinister conspiracy scheme - I think they really thought PC100/PC133 wasn't going to hold up long-term in their roadmap and they needed something better. They had the Rambus investment, and didn't foresee DDR SDRAM. That's why they got caught flatfooted with the i810 as their only non-Rambus chipset, which is what opened the door to both Via and AMD.
I bet if they could do it all over again they would have started with the i815 as the low-end chipset, which would have both closed the window of opportunity that Via and AMD used to get business, and it would have eliminated the demand for SDRAM support on the i820 (and we all know how that worked out...), since there would have been an equivalent performing SDRAM chipset.
Even Intel screws up sometimes, though.
- -Josh Turiel
Re:Intel's conspiracy? (Score:2)
Ahemmmm, can you enlighten us on that one a bit? I'm sure it'd be an interesting topic.
And something offtopic, but it crossed my mind when I read this comment:
OTOH, I think AMD is shooting itself in the foot by making claims that it'll stick to the x86 architecture for many years to come.
Re:Intel's conspiracy? (Score:1)
If you don't believe it - go look at RMBS's performance over the last 6 months. I sure wish I was morally soiled enough to have bought 1000 shares of that in March.
if it ain't broke, then fix it 'till it is!
Re:Intel's conspiracy? (Score:1)
No, Rambus is a publicly held company with part ownership by intel.
The same profits AMD is receiving with THEIR license from Rambus. Chipsets define the RAM, not chips. No, Rambus and Intel have not been 100% aboveboard in the way they've approached the market, but Rambus has designed a product with great potential for technical superiority. Anand Tech [anandtech.com] has two interesting articles discussing the ramifications of RDRAM, DDR SDRAM and SDRAM.
Re:You've managed to overlook one major problem... (Score:2)
Look, I really hate to use buzzwords (is it ok if they have fallen out of use?) but you need to think about total cost of ownership. If you have to pay someone $50 an hour plus benefits and taxes to fix things when they break that $3500 peecee suddenly looks pretty expensive. Real workstations are much more reliable, and when they do break it's just a matter of pulling out the broken piece and popping in the new one. If you've ever worked in real hardware you know what I mean. Any repair job is 2 minutes, and there are no bloody hands and extra screws to deal with. If we're talking about individual use systems, then the TCO depends on how much you value your time. I consider playing around in cramped, cable-rat's-nest-ified, sharp-edged, poorly labeled peecee cases to be a complete waste of my time. It's well worth the extra money to have a machine that always works; and even if it doesn't, it's trivial to fix it. If you've never owned a real workstation, you can't really argue with me. Try it; you'll never go back.
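The TCO argument boils down to simple arithmetic. Every number in this sketch is invented purely to show the shape of the comparison, not a measured figure:

```python
def tco(purchase, repair_hours_per_year, hourly_rate, years):
    """Total cost of ownership: sticker price plus the labor spent
    keeping the machine running over its service life."""
    return purchase + repair_hours_per_year * hourly_rate * years

# Hypothetical: a $3500 peecee eating 40 hours/year of fiddling
# versus a $7000 workstation needing 2, over 3 years at $50/hr.
print(tco(3500, 40, 50, 3))  # 9500
print(tco(7000, 2, 50, 3))   # 7300
```

Flip the assumed repair hours and the conclusion flips with them, which is exactly why the sticker price alone settles nothing.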
Pipeline length (Score:2)
Intel's P4 (aka Willamette) has a 20-stage pipeline, and it remains to be seen whether the high clock rates this enables make up for the hits it'll take from latency and branch-mispredict penalties.
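To put rough numbers on that tradeoff, here's a back-of-envelope model. The branch frequency and predictor accuracy below are made-up illustrative values, not Willamette specs:

```python
def effective_cpi(base_cpi, branch_freq, mispredict_rate, penalty):
    """Average cycles per instruction once mispredicted branches
    are charged: each mispredict wastes `penalty` pipeline stages."""
    return base_cpi + branch_freq * mispredict_rate * penalty

# Assume 20% branches and a 95%-accurate predictor:
short_pipe = effective_cpi(1.0, 0.20, 0.05, 10)  # P6-ish flush depth
long_pipe  = effective_cpi(1.0, 0.20, 0.05, 20)  # Willamette-ish
print(round(short_pipe, 2), round(long_pipe, 2))
```

The longer pipe only pays off if the clock gain outruns the extra CPI, i.e. if the clock ratio exceeds long_cpi / short_cpi.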
The interesting part... (Score:1)
I guess Ballmer's not as intimidating as Gates - or maybe Intel's just counting on a kinder, gentler Microsoft in the wake of the Justice Department's bitchslap.
Re:[OT] Logic Error, GN (Score:1)
WWJD -- What Would Jimi Do?
Mindcraft Strikes Again (Score:2)
As if we couldn't just take Intel's word for it!
Re:You've managed to overlook one major problem... (Score:1)
Oh yeah. I fixed an Indy once for a friend (small problem, PSU fan died, very easy to fix), and I couldn't get over how wonderfully easy it was to pull apart the system (once I'd figured out how it was held together) and get at everything.
It was like the difference between an AT-layout x86 and an ATX-layout x86. Only better. Lots better. No more digging through little scraps of ribbon cable connecting on-board serial ports to the connectors on the back of the card cage.
Actually, it was almost as good as working on a Mac G3/G4. (And, even then, Macs hold stature only because of familiarity.)
Re:RDRAM: The Big Lie! (Score:1)
Curious,
(jfb)
Perhaps they're trying to scare Rambus (Score:2)
Perhaps Intel is just doing this to keep Rambus on their toes, make sure that they are always using notch 11 on the 10 notch amp, for that little bit of extra energy.
Re:Conspiracy Theory... (Score:2)
Come on people, need more conspiracy material!
OK, just for the hell of it, I'll bite. According to Tom's Hardware, Intel stands to make about $158 million off of Rambus. That's pocket change for Intel -- they probably spend more money than that on office cleaning supplies. But by buying into RDRAM, Intel gets to confuse AMD and force it to spend money licensing technology, money that could be better spent on research. In the meantime, motherboard manufacturers scramble to license RDRAM and incorporate it into their products. Only a small number of mavericks try to stick with SDRAM after mighty Intel has spoken.
Then suddenly Intel does some benchmarks and plays innocent -- "those Rambus bastards lied to us!" So Intel does an about-face and brings back SDRAM. Maybe it even buys out a couple of those mavericks (who are probably hurting for cash) and sticks Intel labels on their mobos to get them out the door quickly.
Where does this leave AMD and competing mobo makers? Up a creek, that's where. The big PC makers want to follow Intel's lead and go with SDRAM; mobo manufacturers can't afford to switch back to SDRAM quickly enough -- Intel wipes out a bunch of competitors and solidifies its grip on the mobo market in one fell swoop. AMD is pushed away from the PC mainstream and relegated to the extreme low-end and hobbyist markets -- again. And Intel thaws out Elvis in time for the launch of Itanium.
Conspiracy Theory... (Score:1)
Maybe Intel secretly favors the Direct Rambus memory format. Maybe they're creating competition for themselves to build momentum? Maybe it's part of a top-secret strategy to kick AMD's butt?
Come on people, need more conspiracy material!
Re:I'm not surprised. (Score:2)
From what I recall, SynchLink was 800Mb/s per pin (that is, a small 'b', as in megabits per second). So you'd need a 16-pin interface to reach the same bandwidth as RAMBUS. (Hey wow, that's the same number of pins as RAMBUS uses. Think that's a random coincidence? Think again.) I remember hearing about SyncLink before they'd added the 'h' to become SynchLink, and when their bandwidth per pin was still 400Mbit/s. From what I recall, they upped it to be competitive with RDRAM.
--Joe--
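For the record, the per-pin arithmetic checks out:

```python
def link_bandwidth_mb_s(mbit_per_pin, pins):
    """Aggregate bandwidth of a link: per-pin megabits/s times the
    pin count, divided by 8 to convert to megabytes/s."""
    return mbit_per_pin * pins / 8

# 800 Mbit/s/pin across a 16-pin interface -> 1600 MB/s,
# the headline Direct Rambus channel figure.
print(link_bandwidth_mb_s(800, 16))  # 1600.0
# The older 400 Mbit/s/pin SyncLink spec would have needed 32 pins:
print(link_bandwidth_mb_s(400, 32))  # 1600.0
```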
Did you even read Tom's review? (Score:4)
Shame on this Anonymous Coward, (Score:1)
By the way, the correct term in this case is effect.
Re:This changes nothing (Score:1)
I'm probably being picky, but is it 'the benchmarks make you step back and think', or 'the benchmarking results make you step back and think'?
You're doing a good job CMiYC, but please be a little more careful with grammar.
Re:[OT] Logic Error, GN (Score:1)
I told our friend, CMiYC, to be more careful with his grammar. Surely he will take this advice and use it before all of his future posts. I can only cleanse Slashdot one user at a time and, even then, I can only do it in a preventative manner. I wish that there was a better way. Do you have any suggestions?
Re:RDRAM vs. SDRAM (Score:1)
It has no real position in today's market, since it is too expensive to be used for personal workstations, and is too slow with multiple chips, which rules out the lucrative server market (notice how Intel's new Xeon-style solutions recommend SDRAM).
Plus, RDRAM and PC133 SDRAM are in two totally separate leagues. It would have been better to compare RDRAM to DDR SDRAM (PC200?), which is proven to smoke RDRAM in both latency AND bandwidth.
Re:Not performance... (Score:1)
I made some other comments on this same topic as well. However, I believe my point was "RDRAM is too expensive *and* it doesn't offer a real performance boost, for general-purpose memory". Do you see why price/performance would be an important metric here? (or even some consideration or mention of price?)
Also, benchmarks are fundamentally flawed in the first place. Depending on how they are conducted, and on the *exact* components, software and hardware of the entire system, plus configuration tweaks, the results can vary by a huge amount! So I wouldn't argue that performance doesn't change. The system I buy won't be anything like the one they benchmarked; I might not be using the same chipset, operating system, or bus, let alone tweaked settings in my nonexistent "Windows Registry". So performance can be just as artificial as price.
Your other point about letting the reader compare for themselves is valid, but I wasn't intending to advocate eliminating performance metrics entirely; I just wanted to see someone mention how *#@$ expensive RDRAM is now, and how useless it is to buy it for performance as system RAM. Also, any decent benchmark should have full disclosure as to how the performance numbers were achieved, and all the information possible about the testbed, so that people can recreate the results, or change another parameter and compare to those results. In a perfect world, that is...
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
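On the reproducibility point: short of full disclosure, even reporting a spread instead of a single number helps. A minimal sketch of a repeated-run harness (the timed workload here is just a stand-in):

```python
import statistics
import time

def bench(workload, runs=5):
    """Time `workload` several times and report (mean, stdev) in
    seconds, so a one-run fluke can't become a headline number."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)

mean_s, stdev_s = bench(lambda: sum(range(100_000)))
```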
Re:Yes, Intel thinks users will remain dumb foreve (Score:5)
There are plenty of options out there; many are surprisingly inexpensive. Quality, high-performance workstations from Sun, SGI, and Decompaq can be had for less than USD 5000, often less than half of that, which do not use x86 nor the peecee architecture. You'd better hurry, though, before everyone drops their quality architectures for IA64 and gives Intel the market chokehold it has been lusting after for years.
Fight the power; insist on quality; boycott the peecee!
Re: (Score:1)
Benchmark similar cost boxes to be accurate... (Score:2)
Comparing similar boxes based on the i815 and i820: I can get an i815-based box with 256 megs of RAM for less than a 128-meg i820 box, and if I wanted to go to 256 on an i820, it'd cost me an extra $500 or so. And you have to be really careful: Dell has apparently been shipping PC600 or PC700 with many of their units, to keep costs down. And PC600 RDRAM should *really* be called PC534, but it's been rounded up. If it doesn't say PC800 in the "configurator," be suspicious.
Bottom line: screw minor benchmark differences. When it comes down to it, RDRAM's cost is prohibitive, and if you compare boxes of the same cost, with the SDRAM-based box loaded up with extra RAM, you'll be better off with SDRAM.
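To make the equal-budget point concrete, a tiny sketch; the per-stick prices are rough illustrative mid-2000 street figures, not quotes:

```python
def ram_for_budget(budget_usd, price_per_128mb):
    """Megabytes of RAM a fixed budget buys, in whole 128MB sticks."""
    return (budget_usd // price_per_128mb) * 128

# Assumed prices: ~$100 per 128MB PC133 DIMM versus
# ~$500 per 128MB PC800 RIMM.
print(ram_for_budget(1000, 100))  # 1280 (MB of SDRAM)
print(ram_for_budget(1000, 500))  # 256 (MB of RDRAM)
```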
Intel to switch (Score:1)
It gets worse (for Ramtel) (Score:2)
As others have pointed out, it's a big difference in price for almost the same performance.
Worse (or perhaps "better"), we're expecting DDR SDRAM to hit the market in the fall, adding a big performance boost on the SDRAM side of the equation, with very little increase in price.
RDRAM is dead. Or at least it would be if Rambus and Intel weren't doing everything in their power to cripple the competition.
--
Re:Intel's conspiracy? (Score:2)
Intel -- Just short of intelligent.
Intel's conspiracy? (Score:2)
Re:Yes, Intel thinks users will remain dumb foreve (Score:1)
Re:Benchmark similar cost boxes to be accurate... (Score:1)
This is sensitive to many things. (Score:4)
Which is better? It depends both on the motherboard configuration and on what you're doing.
Intel's high-end RDRAM motherboard beat the hell out of SDRAM systems. It had two interleaved RIMM slots, doubling effective bandwidth.
Intel's more recent SDRAM offerings have generally been pretty bad. Via chipsets put out a good effort, but were still beaten out by the high-end RDRAM systems and the BX board.
The best SDRAM offering was a 440 BX board overclocked to 133 FSB. Tom swears it's stable. YMMV.
As far as load is concerned, RDRAM is optimized for throughput; SDRAM is optimized for latency. Something that hits many memory rows in more or less random order, taking only a little data from each, will work well with SDRAM. Something that processes large amounts of data in more or less linear order will work well with RDRAM. It depends on what you're doing.
My personal opinion? RDRAM is a bad implementation of a good idea. In five years we might see something better. For now, buy DDR SDRAM. YMMV.
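That throughput-versus-latency split can be modeled crudely as a row-hit-rate question: streaming keeps DRAM rows open, while random access keeps paying to open new ones. All timing numbers here are invented for illustration, not datasheet values:

```python
def effective_latency_ns(row_hit_rate, hit_ns, miss_ns):
    """Average access latency given the fraction of accesses that
    land in an already-open DRAM row (cheap) versus forcing a new
    row open first (expensive)."""
    return row_hit_rate * hit_ns + (1 - row_hit_rate) * miss_ns

# Hypothetical timings: 20ns on a row hit, 60ns on a row miss.
streaming = effective_latency_ns(0.95, 20, 60)  # linear scan
random_io = effective_latency_ns(0.10, 20, 60)  # pointer chasing
print(streaming < random_io)  # True
```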
Re:Shame on this Anonymous Coward, (Score:1)
Re:Intel's conspiracy? (Score:4)
Other than the fact that they own Rambus? How about profits from licensing Rambus technology? How about using patents to put the squeeze on SDRAM manufacturers? How about designing future CPU's and chipsets so that rambus is the ONLY memory that is supported?
We love to bash M$ because we are visibly affected by their evilness on a daily basis, but I think most people would be surprised by the kind of nasty stuff that Intel gets away with (just ask Intergraph!)
Heh. Oops. (Score:2)
---
I don't believe this one... (Score:1)
Any company that walks away from the x86 processor business is a dumb company. That'd be like leaving money on the table. Buckets full of money. And x86 is really the only market to be in right now... hmmm? Should I go after 10% of 100 million units a year, or 50% of 5 million units? Which way does the math work best?
Re:Intel's conspiracy? (Score:2)
Could Intel bring out a PC133-based mobo and show us that it's better? Why is the PC133 model numbered i815 and the Rambus model i820, when the i815 is NEWER than the i820 (AFAIK, that is)... oh well.
Conspiracy theorists of the world, unite!
Re:16 bits? 286? (Score:2)
The 8086 was short lived for cost reasons, so most people (i.e. you) associate the 16 bit bus with the 286.
yeah (Score:1)
Re:16 bits? 286? (Score:2)
That's almost true
>The 8 bit bus was actually from the 8080 (or the >competing Z80), which was an 8 bit processor.
I remember those quite well. I even have an 8080
>The 8086 was short lived for cost reasons,
short-lived? It was in wide use by almost everyone except IBM until the 286 became commonplace. At that point, it fell largely out of use, and the 8088 was used in budget machines.
>so most people (i.e. you) associate the 16 bit >bus with the 286.
Uhh, no. Aside from that I remember all of this from when it was happening, I most certainly do not make any such association.
However, the *particular* 16-bit bus that was being discussed is the IBM PC/AT bus, which was introduced with the attached 286 and extended the 8-bit bus of the IBM PC, which used an 8088. There were several other 16-bit buses at the time, including Olivetti's and Vector's, which extended the 8-bit PC bus, and an extension to the S-100 favored by companies such as Compupro.
hawk
Everyone has missed the damn point on this one (Score:1)
apps and stuff.. buuuuuuuut...
THAT'S NOT WHAT INTEL IS DEVELOPING IT FOR!
Seems their whole desire is to push it into desktop machines as soon as possible... RDRAM may be good for some things, but personally I don't see any reason to spend that much extra cash on a mobo that supports it, let alone the cost of the RAM itself, just for my desktop machine. I mean, let's get real, folks: SDRAM still has a ways to go; figure 200MHz will be enough to hold us over for a few years at least. In the meantime, the server boys can suck up the development costs of RDRAM, push the prices down to a reasonable level for us desktop users, and hopefully get the bugs worked out along the way.
Make sense?? (thought so)
Re:yeah (Score:2)
This changes nothing (Score:1)
Call Mindcraft (Score:1)
Re:[OT] Logic Error, GN (Score:1)
Re:This changes nothing (Score:1)
The simple truth is that in most real-world applications, Rambus handily outperforms PC133 DIMMs, and is worth the extra expense.
Yeah, that's a well-supported assertion.
--Shoeboy
It's more than just the numbers... (Score:1)
Now let's come back to reality. According to the benchmarks I've seen, SDRAM comes out close to RDRAM and usually beats it, at a much lower cost. I wouldn't be surprised if some tech-head came and showed me RDRAM spanking the benchmarks for big server apps, but why do I care?
I would like to see a benchmark comparing Linux with SDRAM against Windows with RDRAM. Of course, we'll have to get Mindcraft to do the tests... as they have experience in this area..
In the end, it depends on your perspective... Do you spend the extra cash on the processor or the memory? I'd love to see someone put 600MHz RDRAM on a 500MHz processor...
OTOH, SDRAM has been around for a bit longer than RDRAM (it's also an... "open standard"...). Do you want to pile your cash into a STILL-unproven technology that could easily be squashed in a few months?
I've been worried recently that SDRAM prices will skyrocket and RDRAM prices will plummet for the ONLY reason of big business pushing around the little consumer.
Fuck it.
Re:Pipeline length (Score:2)
11 or 15. Six for decode (FETCH, SCAN, ALIGN1/MECTL, ALIGN2/MEROM, EDEC, IDEC/Rename), or maybe five depending on how you feel about IDEC/Rename. For integer instructions (at least direct path ones) you then have SCHED (which can take multiple cycles, depending on how long it takes for all inputs and an appropriate functional unit to become available), EXEC, ADDGEN, DC-ACC, RESP (DC-ACC and RESP are cache accesses, I'm not sure where the write back is -- they may have left the retirement out of the document I'm looking at). The FP pipeline (FP instructions, MMX and 3D Now! instructions as well) is longer, 15 stages (including the first 7 above), more for FMUL.
That's all taken from Appendix B (page 191) of the AMD Athlon Processor x86 Code Optimization Guide [amd.com] (it's a PDF).
I thought the Willamette's was more like 25 pipe stages for the integer unit, and an undisclosed (I assume higher) number for FP operations. The PPro's is pretty long already, 18 or so (that may be the FP number). I assume about the same length in the P-II and P-III, since they share the same microarchitecture.
no conspiracy? lets get one started! (Score:1)
I'd like to read some nice conspiracies of how Rambus is controlling Intel and how Aliens from [insert name of distant planet] are controlling the whole thing from above.
Yeah, it makes sense that the i815 is newer and based on the i810... still, they could have called it the i-gothitovertheheadwithacluebat chipset and they might have gotten a bit closer to the truth.
Just like they changed the processor serial number (PSN) to be the WPSNYTB? (What PSN are You Talkin' aBout?)
;)
Re:You've managed to overlook one major problem... (Score:2)
And you're welcome to spend $1000+ for a Creator 3D card that's probably no faster than a $200 PC video card.
I paid $80 for mine. FFB2+. Very nice.
And then you can get screwed when you need a patch for Solaris that's only available to contract customers.
So don't use Solaris. It isn't very good anyway. Linux runs exceptionally well on Sun hardware, much faster and more reliably than on peecee hardware.
You think everyone would rather spend 3 times as much because you're too lazy to work inside a computer for a few minutes longer?
Laziness has nothing to do with it. I was discussing cost. If something takes longer, it costs more. In an environment where you're paid to do so, the costs are immediate and direct. In other environments you must evaluate the worth of your time. Personally, I'd rather just use my computers to do the work I want to do and not spend lots of time screwing around trying to get broken, misdesigned hardware to function. YMMV of course.
Re:Not performance... (Score:2)
It's not easy to do that. Everyone puts different weights on those categories. A Honda gives fantastic price/performance, but if you want to win the Indy 500, it is definitely not the right choice. Some people will pay a lot more for a small gain in performance because they need all the performance they can get. Others will take a significantly inferior product for even a small price drop because they just don't have the extra $100, period.
Re:RDRAM: The Big Lie! (Score:1)
spot on. i just bought a Tyan Thunder 2500 (based on the ServerWorks IIIHE chipset), mainly because it has proper support for 133MHz SDRAM, without any hacks or kludges. that, and the fact that it has 64 bit PCI slots, 8 SDRAM sockets, and of course dual CPU support.
if you can actually find one of these boards, it's a pretty mean piece of bad azz mofo hardware.
Re:Shame on this Anonymous Coward, (Score:1)
Are you supposed to be my enemy or something? Why did you pick the name 'The Grammar Jew'? As a grammar nazi, I'm trying to cleanse Slashdot of bad grammar. I have no problem with people of differing races/religions. If you wish to be my enemy then please start using lousy grammar. If you wish to coexist and team up against the lousy English on Slashdot, then welcome to the club!
Re:Not performance... (Score:2)
While many end users are actually more interested in price/performance than they are in performance per se, the idea of listing price per performance is still a bad one. There are two main reasons for this:
Both of these factors suggest that rating by price/performance is a bad idea, and that rating just by performance is much better.
Re:It's more than just the numbers... (Score:2)
In some servers you'll be dealing with large chunks of data, not Quake 3. Moving one bit from a to b really fast is great, but if it's half a gig of data you're fucked with current SDRAM setups.
RDRAM has its place. Mostly the problem is that most RDRAM implementations are fucked up.
What has been measured? (Score:1)
If RDRAMs are indeed faster than SDRAMs, you'll see that in situations where high memory bandwidth is essential. For example, a comparison on a large SMP database server could be really interesting. Usually, desktop applications do not have these memory bandwidth requirements. Even for so-called multimedia applications, the PCI bus bandwidth is the limiting factor most of the time.
Yes, Intel thinks users will remain dumb forever. (Score:5)
First off, Intel has been developing standards for the PC architecture for some time, as well it should. However, they've been doing it the same way Microsoft has been "contributing" to Internet standards. For example, they developed AGP up to 4x, which has proven very useful; however, rumours are churning out from reputable sources about an Intel project to create a successor to AGP 4x, and this successor is to be limited to Intel chipsets and chipsets made by select Intel partners--i.e., anyone who annoys Intel will get left behind. Intel developed the PC-100 memory standard--a great service, but...then it refused to develop a PC-133 standard or DDR-SDRAM specifications, because of its own interest in RDRAM as a wholesale replacement for all SDRAM.
Many have questioned whether Intel has much to gain from Rambus becoming the new standard instead of DDR-SDRAM; after all, contrary to popular belief Intel doesn't completely own Rambus, and their deal with Rambus would only give them compensation in the tens of millions, which isn't much for a company whose revenues are in the billions each year. But what Intel has to gain isn't direct monetary compensation from Rambus, it's *control* over the standards for memory and memory controllers--and the rights to manufacture and license those memory controller technologies. This is exactly what MS did with IE--it didn't directly make a profit by developing a new web browser and bundling it with Windows; it gained market control and the ability to manipulate the Internet protocols so that all its products, from IIS to FrontPage to NT Server and the rest, had an advantage of guaranteed interoperability and increased functionality over competing products.
Intel wants to do the same with RDRAM and its new IA64 architecture, and its new forays into the emerging appliance market. Intel will make royalties on all chipsets which support RDRAM. Intel will make direct profits on its IA64 processors and has probably been hoping to license the ISA to competitors once x86 plateaus. Intel has purchased StrongARM and other embedded/appliance hardware lines, hoping to leverage its market dominance to push into every area. And, let's not forget that they tried and tried and tried to force their way into the graphics market, but failed there due to too-short product cycles and competitors with much more graphics experience.
It's clear that Intel wants to be the Microsoft of the hardware world. If they leverage enough tech patents on all fronts, they can force use of their products in the same unfair ways Microsoft leveraged itself into every crevice: big OEMs unable to get the best prices on Intel desktop processors unless they agree to use StrongARM in their embedded/appliance products instead of Transmeta or MIPS, or unable to get hold of short-supplied IA64 for workstations/servers unless they use P4 in their desktops, VIA unable to make the most advanced RDRAM chipsets unless they cut back on DDR or agree not to pursue QDR, etc. Don't think it won't happen, even with M$ as an example: there are many sneaky, below-the-board ways to hint at such matters without bluntly making demands.
And, since everyone here hates the x86 architecture so much, why the Hell are so many
Tile (sic) (Score:1)
Re:Shame on this Anonymous Coward, (Score:1)
It's only a matter of time before the soup-nazi turns up too I guess.
Henrik Teigen
Re:This changes nothing (Score:2)
Ouch.
In realworld applications, Rambus does perform better.
Where is your fucking PROOF asshole? Is Bryce 4 not a 'real world application'? How about CorelDraw 9? Naturally Speaking? Quake III? Netscape Communicator? Paradox 9.0? Photoshop 5.5? Powerpoint 2000? Windows Media encoder 4.0? Word 2000?
I think all of these are 'real world applications' and guess what, the 440BX at 133 smacks the i820 all over the fucking place. The only real-world app where I saw Rambus with an advantage was Excel 2000.
--Shoeboy
Re:Yes, Intel thinks users will remain dumb foreve (Score:1)
Re:yeah (Score:1)
Re:Actually, I never said... (Score:5)
I don't see how they came up with the 1% number. Here are a few counter arguments...
The x86 has reached some pretty impressive speeds. 1GHz is shockingly fast. Even 800MHz is quite speedy. Intel has done this by using extremely long pipelines, some 15-22 pipestages depending on the operation. AMD has done the same. A longer pipeline increases the latency of many operations, and makes sequential dependencies in code cost more and more. Same for branch penalties and load cache misses. IBM has the PowerAS running at 600MHz with a 5-pipestage machine (that is fewer pipe stages than AMD uses just to decode instructions!). It smashes the PPro through P-III and the AMD parts in anything that has lots of poorly predicted branches, like DB code. It also does better on code that does lots of pointer chasing (like linked-list walks).
(The PowerAS has a zero- to one-cycle penalty for mispredicted branches (its prediction method is "always taken" or "never taken", I forget which); the Intel parts have a penalty of more like 11 to 20 cycles, with a maximum of 44 or so cycles of work discarded from the ROB. Intel has a very good branch prediction scheme for predictable branching patterns, but when it gets code too bad to predict it sucks big time.)
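The trade-off being described works out to simple expected-value arithmetic. This sketch uses illustrative flush costs and misprediction rates in the spirit of the numbers above; none of them are measured figures:

```python
# Average cycles lost per branch = flush cost x misprediction rate.
# All figures here are illustrative assumptions, not measurements.
def cycles_lost_per_branch(flush_cost, mispredict_rate):
    return flush_cost * mispredict_rate

# Short pipe, crude static prediction: cheap flush, frequently wrong.
short_pipe_db = cycles_lost_per_branch(1, 0.40)       # 0.4 cycles/branch

# Deep pipe, good dynamic predictor: wins easily on regular code...
deep_pipe_regular = cycles_lost_per_branch(15, 0.05)  # 0.75 cycles/branch

# ...but on branchy DB / pointer-chasing code the predictor degrades
# and the large flush cost dominates.
deep_pipe_db = cycles_lost_per_branch(15, 0.30)       # 4.5 cycles/branch
```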
The P-III and AMD manage to decode 3 instructions per cycle, quite an accomplishment with an irregularly sized instruction set. They have finally gotten to this point. The SuperSPARC in 1992 or 1993ish decoded four instructions per cycle. That means the best the x86 can do over the long term is to execute three instructions per cycle (because even if they have spare functional units, they will run out of instructions in the reorder buffer if they manage to execute more than 3 instructions per cycle for long). RISCs have grown a few more decoders in the intervening 8 years. Some of them, at least.
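The reorder-buffer argument can be seen with a toy simulation (purely illustrative, not a model of any real core): no matter how many functional units you add, a 3-wide decoder eventually starves them.

```python
# Toy model of decode width capping sustained IPC. Not any real core:
# each cycle we decode `decode_width` instructions into a buffer and
# execute up to `exec_width` of whatever is buffered.
def sustained_ipc(decode_width, exec_width, buffered, cycles):
    executed = 0
    for _ in range(cycles):
        issued = min(exec_width, buffered)
        buffered -= issued
        executed += issued
        buffered += decode_width
    return executed / cycles

# Even with 5 execution slots and a full buffer to start, 3-wide decode
# pins long-run throughput at about 3 instructions per cycle.
print(sustained_ipc(decode_width=3, exec_width=5, buffered=40, cycles=10_000))
```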
If the x86 is only one percent slower than RISCs, why is the ancient (2-year-old?) Alpha 21264 at a mere 667MHz still turning in better SPEC2000 FP numbers than the "shipping only to select OEMs, and not many units either" 1GHz Intel part?
Try to get a STREAM benchmark number in the same ballpark as a real Alpha (not one based on PC chipsets) with a Xeon. Intel hasn't made a memory system that can compete. And the memory system is half the price of the damn Alphas.
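For readers who haven't seen it, STREAM measures sustained memory bandwidth with simple array kernels. A minimal sketch of its "triad" kernel (the real benchmark is John McCalpin's C program; this Python version only illustrates what gets measured, and its numbers are dominated by interpreter overhead):

```python
import time

# STREAM-style "triad": a[i] = b[i] + s * c[i]. Illustrative only; the
# real STREAM is a C program and reports far more meaningful numbers.
N = 1_000_000
a = [0.0] * N
b = [1.0] * N
c = [2.0] * N
s = 3.0

start = time.perf_counter()
for i in range(N):
    a[i] = b[i] + s * c[i]
elapsed = time.perf_counter() - start

# Three arrays touched, 8 bytes per element (the rough figure for a C double).
mb_moved = 3 * N * 8 / 1e6
print(f"~{mb_moved / elapsed:.0f} MB/s (interpreter overhead dominates)")
```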
RISC may have lost the commercial war to CISC, but there is no need to stomp on its accomplishments. There are really impressive RISC CPUs made for a fraction of the research dollars Intel (and AMD!) spend.
Oh, but they have. The i960 is a different ISA; I never used it, but I'm sure it is quite different from the x86. The i860 was also very different. It had a pretty nice ISA as long as you didn't put it into streaming mode. The VLIW mode was a bit odd to me, but it wasn't a huge deal.
People even used them. Just apparently not enough people used the i860. I dunno what the deal was with the i960. It was extremely popular 5 years ago, but doesn't seem to be now.
Re:yeah (Score:3)
I think the big deal is the fact that RDRAM is supposed to be so much better in terms of performance than SDRAM. The very fact that SDRAM matches it, beats it, or loses by so little makes one wonder why spend the extra $$$ for RDRAM. So, no... in terms of performance only, a few percentage points don't matter. But if you look at the overall picture: price, availability, compatibility, APPLICATION... which technology do you really need?
---
Re:This changes nothing (Score:2)
RAMBus is over eight years old. This is something like the fourth major revision (in '92 it was a 400MHz 8-bit interface). This is not a repeat of the Celeron story.
Re:It's more than just the numbers... (Score:2)
If you fully interleave the SDRAM it has pretty impressive bandwidth numbers too. Of course that takes (about) four times as many pins. In fact 8-way interleaved PC100 SDRAM exceeds the bandwidth Intel and AMD can get off their CPUs, so the only thing that will matter is latency, which'll make the SDRAM a better choice...
If you can come up with all those pins. If.
Low pin count is one of RDRAM's few remaining advantages (RDRAM systems with no CPU L2 cache run about as well as systems with a small L2 cache and a normal memory system -- but that's not a good deal with L2 caches so large these days... I can list other obsolete advantages if you like). You can four-way interleave RDRAM with (about) the same number of pins you need to interface to straight (non-interleaved) SDRAM. So if IBM's high-density packaging catches on, RDRAM loses that advantage (as more pins will be cheap). If DDR SDRAM really uses a 16-bit interface, RDRAM loses its advantage too.
Of course I don't see many chipsets using this advantage. Where are the motherboard chipsets with four RDRAM controllers?
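The interleaving arithmetic above is just multiplication; a back-of-envelope sketch (peak figures only, and the channel widths are the rough numbers from the post, not datasheet values):

```python
# Peak bandwidth in MB/s = (bus bits / 8) * MHz * channels.
# Rough figures from the post; theoretical peaks, not sustained numbers.
def peak_mb_s(bus_bits, mhz, channels=1):
    return bus_bits / 8 * mhz * channels

pc100_1way = peak_mb_s(64, 100)        # 800 MB/s
pc100_8way = peak_mb_s(64, 100, 8)     # 6400 MB/s -- the 8-way interleave case
rdram_1ch  = peak_mb_s(16, 800)        # 1600 MB/s per narrow 16-bit channel
```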
Not performance... (Score:2)
Now, for some special-purpose applications, RDRAM might be an excellent choice, just like in some circumstances, a P-III might work out better than an Athlon, or an 8086 might be the better choice than a G4, or a hammer might work better than a screwdriver. But for general purpose, plain old RAM, RDRAM is underwhelming.
...now watch the price of RAMBUS drop. I can hear the screams from here.
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
There goes 'X' amount of dollars down the drain (Score:1)
Dual channel Rambus test (Score:1)
RAMBUS RAM is completely out of control when it comes to price. It's nearly impossible to track down (if you're building your own system), and I can't see it competing well against the dual-channel SDRAM solutions that are due out soon. If a single-channel configuration with 800MHz RDRAM can't beat out a PC133 solution, there's no way a two-channel RDRAM solution will beat DDR SDRAM.
The big problem we run into is system stability. Call me crazy, but I simply can't compromise by going to a VIA chipset anytime in the future to avoid RDRAM. I've seen too many problems with them in the past, and I simply can't take the chance on it. I need stable machines at my shop, otherwise my life is miserable (plenty of Lusers here!)
Intel's primary reason for sticking with Rambus is the amount of money they've made off of Rambus stock. I don't expect them to abandon it anytime soon. The big question is whether or not we'll see a dual channel SDRAM solution coming out of Intel.
What's really pissing me off is that Rambus is going to make money regardless of the memory technology. They are winning lawsuits against DDR SDRAM manufacturers, and it looks like there won't be a stick of ram produced that won't have a royalty fee going to the big R. Can you smell antitrust?
Title (bests) (Score:1)
Re:It's more than just the numbers... (Score:2)
Really? It would shock the hell out of me. RDRAM latency degenerates rapidly as you add chips. Get a quad-proc system with 4GB of RDRAM and you'll see some truly abysmal benchmarks. That's why Intel was trying to position RDRAM as a desktop/workstation tech for Willamette (the P4) while pushing SDRAM for Foster (the P4 Xeon).
--Shoeboy
Re:This changes nothing (Score:3)
I think that the benchmarks make you step back and think. Do you really need to spend the money on Rambus? Think of it this way: if you were about to invest in a Rambus system just because you thought it was faster than PC133, you might be surprised to find out that, whatever your application is, SDRAM performs just as well.
So, think of it in that respect: it all depends on the application and whether the application warrants the cost. If your specific application won't gain anything from it, why spend the money? On the other hand, you might be able to rest assured that the money is well spent... (which I know most people here won't think that way, they'll just look at the numbers, but hey, that's life).
---
Re:Intel's conspiracy? (Score:1)
I think that it's obvious (as kirkb mentioned above) that Intel has a lot of financial incentives to back Rambus. It's a good strategy, really: buy 10% of the company for dirt cheap, then force the technology down the throats of users. Now sales are up, the stock price skyrockets ("well, Intel says it's the next big thing and who would know better than Intel?") and their investment increases tenfold.
It's a classic scam... just not usually pulled by a company the size of Intel.
-rt-
Re:Not performance... (Score:1)
Benchmark:
RDRAM vs. SDRAM
General Purpose RAM as a memory system for a PC
Rated on Price/Performance and Performance.
Since RDRAM, if anything, tends to be slower, *and* it is massively more expensive, it loses.
Any other uses for it are just that--other uses. i.e. not what I would be benchmarking, and not what I was talking about.
Also, I'm going to buy a new computer, and I'm going to get an Athlon with PC133 SDRAM, both for cost and for performance. If you could find me an equivalently priced and performing Pentium III with RDRAM, I'd buy it. Do you see the relevance of this metric now? If their performance was *significantly* better for the general tasks I'd perform, then we could change the weights on Price/Performance. Until then, it's a sucker bet.
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Re:LDRAM vs the rest (Score:1)
Perhaps a vector processor on every chip would have value, allowing SIMD operations to be performed with much more efficiency than if a central vector unit had to do it, like AltiVec or whatever name Intel has come up with for their technology this week.
That strikes me as being a pretty smart technology... cheap and effective, hopefully. Add RAM, add a small amount of processing power. We will see that eventually, but it would be good for some company to do that now and get an early lead in the "iRAM" field.
Re:Pipeline length (Score:2)
I'm not sure. That one PDF from their web page says 11 for (most) int instructions and 15 for (most) FP/MMX/3DNow! instructions. It also leaves out any pipestages that look like retirement/register writeback, so unless they are folded in with other stages, which seems unlikely, I think they left something out.
AMD has pretty decent tech docs on their page; go look. My memory said more than 11 minimum, so I was just as surprised to see the 11 as you are by the 15 (which is also listed).
Re:Yes, Intel thinks users will remain dumb foreve (Score:2)
Open one up sometime. Look at the chips. You will see that they use standard peecee components like ATI graphics, IDE, and Goldstar (yes, Goldstar) CD drives. In my book, that makes it a peecee.
I wonder if you use Matlab, which doesn't seem you do.
It's not my primary application, no. I use my systems for development. Since I know for a fact that matlab does not run on sparc-sun-linux (I do admin matlab, I just don't normally use it), I would strongly suggest that your disappointment with your Suns is the fault of your choice of operating systems, not hardware. Solaris has a reputation, backed up by benchmarks for whatever they're worth, for offering poor performance, especially on fewer than 16 processors.
I once worked on a project to translate a matlab program into C. I do not know whether the original program played to matlab's strengths or weaknesses, but I can say that my portable ISO C program averaged 23 times the performance of the matlab version. The point? I don't think matlab is a very good benchmark. Obviously, it's your application so it's the only benchmark you care about, but I suspect that in the grand scheme of things matlab doesn't necessarily mean much. It also has no way whatever to test things like disk I/O and internal bandwidth which are nearly irrelevant to matlab but of critical importance for virtually every other application, areas in which peecees lose to any real workstation, often by a factor of 5 or more.
My point is, if an Alpha or Sun with a 500MHz processor costs 5000 and I can buy a 900MHz Athlon for 2500, I prefer the Athlon, and I KNOW it has a very good chance of outperforming the others.
Good for you. I'm glad you've found systems that work well for your application. I'm sure you'll enjoy repairing them numerous times in the six months before they stop working completely. *shrug* It's your maintenance nightmare, not mine.
You've managed to overlook one major problem... (Score:3)
The other part of your argument ... AMD is evil because they make wintel-class chips? I think not. AMD would be out of business if they made some little off-brand CPU architecture. With more than 90% of the installed base of desktops and workstations running under the PC architecture, you'd be a fool not to consider making hardware for it! Even SGI has been moving their software to the PC platform because there's just more of it out there and they know they can't keep up when it comes to price vs. performance.
I don't know what planet you're from if you consider US$5k for a workstation (even a high-end workstation) "surprisingly inexpensive" either. I can build a pretty damned sweet workstation by any standard for US$3.5k, and that's including a monitor better than my current 21" and some very nice (if expensive) input devices. You said it yourself, the PC architecture wasn't planned beyond build something that "works"(?) as cheaply as possible. Until other architectures can deliver as much or more performance at a comparable or lower cost down in the mid- to low-end workstation range as well as the high-end, and our respective mothers can still play Solitaire and Minesweeper... The resurgence of unix and unix-like platforms, especially those which are developed portably and openly with such a focus on ease-of-use, may as they mature make it easier to throw away the tired PC architecture. That time just ain't here yet. Until then, AMD looks like a mighty promising choice the next time I build a box.
I'm not surprised. (Score:2)
PC133 SDRAM: 1GB/s
DDR SDRAM 100MHz: 1.6GB/s
DDR SDRAM 133MHz: 2GB/s
RDRAM 800MHz: 1.6GB/s
SyncLink DRAM: 800MB/s PER PIN***
*** Reminder, this is per pin, so in theory with two pins there would be a 1.6GB/s transfer rate. This RAM is expensive today and is meant for workstations/servers.
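The figures in the list fall out of one formula: peak = bus width in bytes x clock x transfers per clock. A quick sanity check (all theoretical peaks; the RDRAM entry treats PC800 as a 16-bit bus at 400MHz, double-pumped, which is how the 800MHz figure is usually quoted):

```python
# Peak bandwidth in GB/s = (bus bits / 8) * MHz * transfers per clock / 1000.
# Theoretical peaks only; these just reproduce the list above.
def peak_gb_s(bus_bits, mhz, transfers_per_clock=1):
    return bus_bits / 8 * mhz * transfers_per_clock / 1000

pc133  = peak_gb_s(64, 133)      # ~1.06 GB/s (the "1GB/s" entry)
ddr100 = peak_gb_s(64, 100, 2)   # 1.6 GB/s
ddr133 = peak_gb_s(64, 133, 2)   # ~2.1 GB/s (the "2GB/s" entry)
rdram  = peak_gb_s(16, 400, 2)   # 1.6 GB/s
```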
Re:Perhaps they're trying to scare Rambus (Score:2)
16 bits? 286? (Score:2)
hawk
Re:This is sensitive to many things. (Score:5)
Wrong.
The PIII has a 64-bit memory bus operating at 133MHz. That's 1.06GB/s. Adding a second channel of PC-800 RDRAM -- theoretical max bandwidth of 1.6GB/s -- does not give you 3.2GB/s of effective bandwidth; you're still limited by the CPU. A PIII can't handle any more bandwidth than PC-133 delivers. The reason the i840 outpowers the i820 is that it reduces latency. RDRAM latency gets worse the more sticks you add, so a system with 2 RIMMs on two channels will have lower latency than a system with 2 RIMMs on 1 channel.
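The cap being described is just a min(): whatever the memory channels can deliver, the front-side bus is the ceiling. A sketch with the peak figures from this comment (theoretical peaks only):

```python
# Effective peak = the slower of the CPU front-side bus and the sum of the
# memory channels behind it. Figures are the theoretical peaks from the post.
def effective_peak_gb_s(fsb_gb_s, *channel_gb_s):
    return min(fsb_gb_s, sum(channel_gb_s))

piii_fsb = 64 / 8 * 133 / 1000                          # ~1.06 GB/s
one_channel  = effective_peak_gb_s(piii_fsb, 1.6)       # still ~1.06
two_channels = effective_peak_gb_s(piii_fsb, 1.6, 1.6)  # still ~1.06, not 3.2
```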
--Shoeboy
Re:Yes, Intel thinks users will remain dumb foreve (Score:2)
Re:This changes nothing (Score:2)
They present it as if it is a straight upgrade path, the same as upgrading from a 486 to a Pentium. They "forget" to mention that the technology is completely different, and will perform differently (sometimes radically) under different circumstances.
That being said, I also think that RDRAM may not be dead. Look at the celeron. The first celerons were crap. Now it is just about the most common low end processor out there. There may be a little more lag time since RDRAM isn't being developed directly by Intel, but I think that Rambus will do whatever Intel tells it to. (At least they better!)
As far as the SDRAM patent issue goes, I don't think Rambus has a chance in hell, and they are wasting their time and resources trying. The way US patent law is written, something that has been in common use for over a year cannot have a patent put on it retroactively. I don't know all the details of the case, but the question Rambus is going to have to answer is "Why didn't you deal with this earlier?"
-----------------------------