Mamba: Athlon And DRAM Get Together

scottnews writes: "Tom's Hardware has posted this story about a new chipset for the AMD Athlon processor with 8MB of embedded DRAM in the chipset for 9.6 GB/s of sustainable bandwidth." Thatsa spicy meatball.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    This is about the 4th article I've read about AMD and DRAM on /. Honestly, the higher-ups are rejecting stories on something else to give you the same shit. It was Napster, then CueCat, now AMD and DRAM. How about some variety??
  • by Anonymous Coward
    Seems like this is a good alternative to putting the memory controller on the CPU. Offers a way to increase performance without going the RDRAM/Integrated controller route.

    Anybody know why there was so much available space on the chip? Is it b/c the die size is set by the number of pins needed to connect?
  • by Anonymous Coward
    If you knew anything at all, you'd know that the problems with the Athlon/GeForce stemmed from motherboard incompatibilities, and the older Athlons work JUST FINE with newer motherboards. The reason they didn't work well at first was because the GeForce draws more power than the AGP spec allows, but the P-III doesn't have a problem with that, so the cards worked fine in most motherboards.

    So in fact, it was actually nVidia's "fault" for putting out cards that were not up to the official AGP standard spec. But then again, if you weren't so full of M$-FUD you'd know that.

    As for the Knowledge Base article, it simply states:

    Cause: Memory that is allocated by the video driver is being corrupted.

    So, isn't that the Operating System's fault, since it is allowing memory that the driver has allocated to be corrupted? I think perhaps it is...

  • Micron actually does an excellent job of building chipsets. They bought a company that did motherboard work for exactly this reason (I can't remember who, but it was someone who specialized in high end stuff). The Samurai chipset was the result of this, and it was one of the early adopters of 64-bit PCI. Indeed, this would have been THE high-performance chipset if Intel hadn't come out with AGP at the same time.

    Trust me, these guys are very good at what they do.
  • What?!? In a few years? How about today?

    Because I said "on my desk", not "on the desk of somebody who can afford to pay $2500 for a machine".

    These days I buy slightly behind the bleeding edge, and replace my machines more often. It works out cheaper and keeps my performance levels up better than buying bleeding edge every 5 years.
  • I've bought two machines in the last three years, and both were pretty darn good machines, not bleeding edge but close. Both of them cost about $1200. The first one was bought as a gaming machine, and had a Celeron 300A overclocked to 450MHz, 128Mb of RAM, 10Gb HD, Diamond Viper V550 and SBLive. At the time, one of the gamer web sites(Tom's, I think) called that a "high end gamer machine" (a year later it would have needed a Viper V770 to equal a "low end gamer machine" - that's life). The second one was a replacement for my poor old Alpha UDB web/mail/news server, and it was a 450MHz K6-3, 256Mb RAM, 2x10Gb HD, crap video card, no sound, and a SCSI card so I could plug in tape backup. At the time, a 600-650MHz P-III would have been bleeding edge, but cost twice as much for bugger all increase in performance. Do I really care if kernel compiles take 15 minutes instead of 12? No. Not while I've got minivan payments and my daughter's college education to save for.
  • My Handspring Visor has 8Mb of RAM and cost about $250. My 1983 computer had a 10Mb hard drive and cost $4000. My current home computer has 256Mb of RAM and cost about $1300. My 1992 computer had a 220Mb hard drive and cost about $3000.

    I like this trend. I expect to see a computer with gigabytes of ram on my desk in a few years.
  • by Mr. Neutron ( 3115 ) on Thursday October 12, 2000 @07:20AM (#712412) Homepage Journal
    Has Micron made a PC chipset before? While they may be great at designing DRAM, and even logic, it is very difficult to produce a stable chipset. Look at the history of VIA and SiS. Lack of chipset stability is one of the main drawbacks to the Athlon. If Micron goes forth with this venture, expect it to be a while before they can get the chipset to an acceptable level of reliability.

    I hope they do well, and I hope this comes to fruition. I also hope that VIA and AMD can produce better chipsets (like the AMD760), so that there are no more drawbacks to using a great CPU.

    "How many six year olds does it take to design software?"

  • by stripes ( 3681 ) on Thursday October 12, 2000 @07:22AM (#712413) Homepage Journal
    What happens if there's a fault in that "new" 8 meg that's being used? Do they just do a regular chipset version, like Intel's 486SX with the disabled co-proc?

    It is pretty common for large regular structures like caches to have a little extra capacity that can be used if some of the memory is bad. I don't know if that is the case here, but it may be. It is also possible that they can just map out part of it (so there might be a 7M version).

    If not, then faults in this 8 meg space are going to cost them more money because of the added complexity.

    Even if so, it will cost extra. There is extra time on the tester, and tester time isn't cheap. Also, if it is better (or thought to be better) there will be more buyers, which can result in a higher price... or, in research-and-development-intensive products, lower prices, so who knows :-)

    I'm sure it took longer to design as well. So the 8M of RAM isn't free, but it should be a lot cheaper than it normally would be, or at least more profitable to them.
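    The spare-row idea above can be sketched in a toy model (hypothetical; real DRAM repair is done with laser or electrical fuses at test time, not a software lookup table):

```python
# Toy model of DRAM row redundancy: an array with a few spare rows
# remaps any row that tests bad. All names here are invented.
class RedundantArray:
    def __init__(self, rows: int, spares: int):
        self.rows = rows
        self.spares = list(range(rows, rows + spares))  # spare row indices
        self.remap = {}                                 # bad row -> spare row

    def repair(self, bad_row: int) -> bool:
        """Map a bad row to a spare; False means no spares left."""
        if not self.spares:
            return False  # part gets binned down (e.g. a "7M" version) or scrapped
        self.remap[bad_row] = self.spares.pop(0)
        return True

    def physical_row(self, logical_row: int) -> int:
        """Resolve a logical row to its physical location after repair."""
        return self.remap.get(logical_row, logical_row)

arr = RedundantArray(rows=8192, spares=4)
arr.repair(123)                    # one defective row: steered to a spare
print(arr.physical_row(123))       # remapped to the first spare row, 8192
print(arr.physical_row(124))       # healthy rows are untouched, 124
```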

  • Think AGP, now add this to the FSB, we are getting there on the bandwidth side. Also, this is probably a 266 MHz bus.
  • I like how Taco decides he needs to bypass any load balancing software Tom's Hardware might have and link DIRECTLY to a server in the farm. :)
  • *grin* My vidcard's got 32mb. My first computer was a 386 (yeah, I'm a young'un), with the HD factory-upgraded from 20mb to 40. And the card cost less than the drive upgrade. Maybe in 10 more years, I'll have that much memory on a scsi controller (having a hard time justifying getting one at all right now, much less a really cool one with a big cache).
  • > A P4 with a whopping one pound heatsink and that requires a new power supply?

    Oh. I first read that as "and a heatsink that requires a power supply". That would be one heck of a heatsink, eh?

    Give me a candidate who speaks out against the war on drugs.
  • > I think Intel is doing just fine. Why? Name recognition. It's as simple as that.

    However, the longer the current situation persists, the more erosion into name recognition Intel will suffer. Arguably, that is costing them more than the lost sales.

    Give me a candidate who speaks out against the war on drugs.
  • > Intel suits must be laughing their arses off as they hand AMD the "enthusiast" crowd

    But as we said, that status quo is eroding every day so long as Intel sits on their fat arses and lets AMD get further ahead in performance (and further behind in recall rate). Businesses are dumb, but lots of them catch on eventually. Lots of those enthusiasts are also sysadmins.

    If Intel lets it go too long, there is likely to be a catastrophic "landslide" shift in business's buying habits.

    Give me a candidate who speaks out against the war on drugs.
  • Sounds like people with their PS2 systems as well. Nothing new; people think it is allegiance or a game. Hell, I'm surprised Slashdot doesn't forward your browser elsewhere if you don't run Linux :) I like AMD, I like Sega, I also like Burger King more, and I don't run Linux. So there, I seem to have broken every taboo of Slashdot! hehe. Is there another news for nerds that is for nerds and techies, and not Linux users who want a PS2 and drool over Beowulf clusters?
  • While I agree that the Athlon is by far the superior chip, I think Intel is doing just fine. Why? Name recognition. It's as simple as that. It's always been Intel's strength; even back in the days of the Intel/AMD/Cyrix 686 'wars'. As long as AMD refuses to run TV commercials, Intel won't even sweat.

    JoeSchmoe: Hey, I just got a new computer.
    Geek: Oh yeah? What'd you get?
    JoeSchmoe: It screams! It's an 800MHz Pentium with 256 Megs of RAM, 20 Gig hdd, ...
    Geek: Nice, but I would have gone with an Athlon.
    JoeSchmoe: A what?
    Geek: An Athlon. You know, AMD?
    JoeSchmoe: AMwho?

    Sad but true.
  • Yes, Micron has made chipsets before; I remember seeing Samurai chipsets for Pentium (Pro?) workstations/servers that had 64-bit PCI slots.

    Seeking; proceeding by inquiry.

    A specious but fallacious argument; a sophism.
  • by svirre ( 39068 ) on Thursday October 12, 2000 @06:52AM (#712423)
    With the high pad count on what's essentially a data switch, your core will easily be pad-limited. (Note, however, that modern bonding techniques can get considerably more pads onto a die than before.)

    That said, it isn't free to utilize extra 'filler' silicon on the die, as this will lower yield: defects that previously didn't trigger a fault, since they fell on whitespace, now cause a defective unit.
  • Micron also asserted that the added cost for the eCache is minimal - "virtually free" are the words Micron's Dean Klein used.

    Ordinary Athlon - $600
    Athlon with Mamba(TM) - $850

    Cutting-edge technology, no matter how "free", tends to overly inflate prices. 'Twould be nice if it didn't, though.

    you may quote me
  • Seriously, how did this happen?

    Are there too many external connections so that the peripheral space for pin connect is at a premium? IIRC the EV6 protocol needs 64 data pins, 48 address pins plus who knows how many control signals. Say 140 per device, of which there are at least 3-4 (CPU1, CPU2, RAM & AGP/PCI). 560 plus powers/grounds.

    Yup, they could be short of pin pad peripheral space, and have free silicon. Note that they cannot simply manufacture on a bigger process, because it won't be fast enough.
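    The pin budget above works out as a few lines of arithmetic (a sketch using the comment's own figures; the ~28 control lines per device is an assumption to round out its "say 140 per device" estimate):

```python
# Back-of-envelope EV6 pin budget, from the figures in the comment above.
signals_per_device = 64 + 48 + 28   # data + address + assumed ~28 control lines
devices = 4                         # CPU1, CPU2, RAM, AGP/PCI
signal_pins = signals_per_device * devices
print(signal_pins)                  # 560 signal pins, before power/ground
# Power and ground pins typically add a large fraction on top of this,
# which is why the die ends up pad-limited.
```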

  • I know you are a troll, but can you "invent" an incompatibility you have ever had with the Athlon? And not with any of the VIA support chips; I'm talking about the processor itself.
  • Dude, the MP forum is this week. All sorts of computer related stuff going on. After it's over there won't be a ton of MP related crap on the front page. Deal with it.
  • Really neat idea.

    L3 cache integrated into the chipset. Except it doesn't really COST anything, because it's only taking up what used to be wasted space!

    And in the process they cut L3 latencies (off the norm anyway) by 50% and are sending back to the CPU hella bandwidth.

    I like.

    I want.

    Give! Sorry, tech withdrawal... I'll get over it in an hour or two ;)
  • You've got bandwidth going to/from the processor. You've also got the extra bandwidth going to/from main memory from the L3 cache. Then you've got the bandwidth going to/from the southbridge and AGP bus, which I'd be willing to bet they'll have pass through the L3 cache...

    It's still more bandwidth than required, but more of it can be used than you think.
  • Performance estimates currently peg it at about a 15% improvement, with the increase in chipset cost being minimal.
  • The fact that it is an L3 isn't the cool part.

    As you said, the fact that it's on the chipset is cool as hell. However, there are additional advantages to putting it on the chipset over traditional L3 caches. Lower latencies and higher bandwidth are the big ones.
  • by Keeper ( 56691 ) on Thursday October 12, 2000 @05:43AM (#712432)
    Why do we have so much bandwidth in the L1 and L2 caches? Because if the data you're hitting is in them, things are much, MUCH faster than hitting main memory.

    Just because the memory talks to the chipset at 100 or 133MHz doesn't mean that the chipset talks to the CPU at that speed. In fact, that's one of the things that makes the EV6 bus design so flippin' cool. The chipset on the K7 talks to the CPU at an effective 200 or 266MHz. Lots of bandwidth.
  • by Keeper ( 56691 ) on Thursday October 12, 2000 @05:46AM (#712433)
    Erm, this isn't an AMD design.

    This is a chipset created by Micron. You know, the guys who are really good at making embedded SDRAM. There was a big noise about it a year or two ago before it fell off the charts. Looks like they did something useful with it this time ;).

    So when you get right down to it, AMD can only gain from this. They can't lose, because they're not the ones making it.
  • by Keeper ( 56691 ) on Thursday October 12, 2000 @05:51AM (#712434)
    Oh I guarantee you that they notice it's not being used. The question they probably ask themselves is

    "can we make this any smaller or rearrange it to use space better?"
    "Nope, sucks don't it."

    The Micron guys didn't try to do that. They thought "what else can we fit in there" *evil grin*
  • I would assume that this is ultra-high speed memory that would be used to synch the two processors... -Ben
  • It's already happening. Almost all, if not all, of the enthusiast market has acknowledged the power and performance of the Athlon. Intel has lost major face in this community. And what the techies and geeks know slowly trickles down. Many of the people buying high performance may not know the name, but they know price. When they see a $400-$500 disparity, they will almost always inquire as to the difference. And who do they ask? A techie or geek.
  • This isn't part of the processor. The processor will *NOT* change.

    The L3 cache is built into the motherboard. Specifically the Northbridge which interfaces the CPU to the memory, among other things. Previously the Micron chipset, designed to use DDR SDRAM, did just that. And apparently they found a lot of empty space on the Northbridge design and decided to put some extra cache on it to reduce the memory latency.

    A good thing. And likely at little to no cost: the silicon was going to be used anyway; now it just has DRAM cache etched onto it. Read: no extra materials.
  • The special thing about this that many people don't seem to grasp.

    The L3 is *not* buffering the CPU to core logic. It's buffering core logic to main memory.

    Very cool. Makes bandwidth go boom and latencies go flop.
  • What we really need is a useful OS, not faster hardware.
  • I had somehow messed it up on Slashdot's user config pages, I think. Try again. ;^)
  • The OP (evanbd [] ) said: "The FSB can't sustain it anyway.", so you're kind of missing the mark there, IMO. Do the calculation: at a 200 MHz effective clock rate, a 128-bit wide bus can transmit 2.98 GB/s. And IIRC, the EV6 bus isn't 128 bits wide, but only 64. That's not even a third of the bandwidth handled by this memory controller, so it does seem a bit too hefty. For one processor, that is... I never understood if the Mamba chipset is for SMP or not. If it is, then things look different.
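    The calculation above can be checked in a couple of lines (a sketch; "effective clock" means the double-pumped transfer rate, and GB here is read as GiB, which is how 2.98 falls out of a 3.2 × 10⁹ B/s raw rate):

```python
# Peak bus bandwidth = width in bytes x effective transfer rate.
def peak_bw_gib(bus_bits: int, eff_clock_mhz: float) -> float:
    """Peak bandwidth in GiB/s for a bus of the given width."""
    return (bus_bits / 8) * eff_clock_mhz * 1e6 / 2**30

print(round(peak_bw_gib(128, 200), 2))  # 2.98 -- the figure quoted above
print(round(peak_bw_gib(64, 200), 2))   # 1.49 -- a 64-bit EV6 link
```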
  • It's kinda funny to see AMD back to the old L3 cache game that they left behind after the K6-series. This one, of course, is much faster, and I would imagine it runs SETI like you can't imagine, as the whole program could reside in that 8MB cache.


  • Got it, thank you.


  • OK, would you mind telling me where you're doing this? I'd love to get in on that kind of deal.


  • Disclaimer: this is a "I heard from a 'reliable source'"-type rumour.


    That said, a good friend of mine worked at Intel c.1998 as an intern. He states that Intel at the time lost money on every Celery they sold. How does the processor division stay profitable? The Xeon processors. If this was indeed the case (he didn't work for the finance area, he worked as a geek there), and if it still is the case, then Intel suits must be laughing their arses off as they hand AMD the "enthusiast" crowd, and they take the server arena. Most businesses I know still won't trust AMD with their servers, so even the current crop of 2-way-only Xeons (which are probably just regular PIII's mated to the slot-2 package) can recoup the losses in the consumer arena.


    That said, the Athlons seem to be great processors (though I've never used one--the newest box here is a Mobile PII-366 =P). Maybe that's why AMD is so keen on getting in on the server market?

    Just looking at the books of both AMD and Intel can be rather revealing; in The Register's words, AMD is still a chimp compared to Intel's 800lb gorilla.

  • Still good for AMD, as this is a chipset designed to support the AMD Athlons. Yeah, it's a Micron chipset, but AMD has said repeatedly in the past that they're not in the chipset business. Of course, after the 760MP chipset announcement, I tend to take that with a bit of salt.

    Then again, this may be a killer uniprocessor chipset, but I would've loved to see a 4+way MP chipset with specs otherwise similar to this, taking full-advantage of the EV6-protocol that the Athlon uses.

  • You like Burger King more? Get out!

    Everything else was forgivable, but that is sacrilege! Out with you! Shoo!

  • What they don't mention is the increased price of chipsets this is going to cause. Silicon wafers don't fall out of the sky. They are fabricated in factories under stringent controls. They are speed-rated, and the dies that can't hit the required frequencies without errors are trashed. It costs money every time any core is fabricated, working or not, so every core that is wasted increases the cost of production of the working cores, which in turn raises the cost of the chipset.

    On-die caches are more dense than logic, and cause sharp decreases in yields for silicon wafers. The larger the proportion of the cache on die, the easier it is to cause the chip to fail. By just throwing huge amounts of cache onto a chip, you do four things:

    1. Decrease yields.

    2. Increase power consumption, which of course increases waste heat.

    3. Increase die size.

    4. Decrease clock speed ramping possibilities.

    This huge amount of cache will not only cause larger power consumption, higher chipset temperatures, and higher chipset prices it will also reduce overclocking possibilities.
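    The yield argument above can be illustrated with the textbook Poisson yield model, Y = exp(-D0 · A). The defect density and die area below are invented for illustration, not Micron's process data:

```python
import math

def die_yield(sensitive_area_cm2: float, defects_per_cm2: float = 0.5) -> float:
    """Poisson yield model: fraction of dies with no killer defect."""
    return math.exp(-defects_per_cm2 * sensitive_area_cm2)

# Same hypothetical 1 cm^2 die: filling the 40% whitespace with DRAM
# enlarges the area in which a defect actually kills the part.
before = die_yield(0.6)  # only 60% of the die was defect-sensitive logic
after = die_yield(1.0)   # whitespace now holds cache, so defects there count
print(f"{before:.2f} -> {after:.2f}")  # 0.74 -> 0.61
```

    (Row redundancy in the DRAM array, if Micron uses it, claws some of this back, which is one reason embedded memory is less yield-hostile than the raw area suggests.)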

    If it was just that easy to throw 8mb of cache onto a silicon wafer, Intel would have done this to their processors ages ago. The problem is that yields are poor (part of the reason Xeon processors are so expensive) and they don't ramp up very well (the reason why Xeons with larger caches are harder to get in higher speeds, and basically don't overclock worth anything).

    What I want to know is what in GOD's name was the transistor budget for the chipset? 8MB of cache is going to cost easily over 100 million transistors! That means their transistor budget was at least 250 million transistors... which is about 8x as big as current CPUs... something doesn't make sense...
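    A rough sanity check of that estimate (textbook cell sizes, ignoring sense amps, decoders, and tags): embedded DRAM needs roughly one transistor per bit, versus six for the SRAM cells Intel uses on-die, which is the whole trick that makes 8MB plausible here.

```python
# Approximate transistor counts for an 8 MB cache.
bits = 8 * 1024 * 1024 * 8     # 8 MB = 64 Mbit
sram_transistors = bits * 6    # classic 6-transistor SRAM cell
dram_transistors = bits * 1    # 1T1C embedded DRAM cell
print(f"SRAM: {sram_transistors / 1e6:.0f}M transistors")  # SRAM: 403M transistors
print(f"DRAM: {dram_transistors / 1e6:.0f}M transistors")  # DRAM: 67M transistors
```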

  • Gosh, is that Bob from Accountemps? That guy's amazing! :^)
  • Will features like this get put into the 760 reference chipset that AMD is going to publish, or do we have to wait for another chipset from AMD to see this one?
  • I'm all for using the space available for more L3 cache, especially if it boosts performance that much...
    but the one question on my mind: HOW THE HELL DID THEY NOT NOTICE 40% of the die NOT BEING USED!!!
    But better yet, are we to expect a dual (or more) CPU version of this chip?
  • Your .sig doesn't work in gcc. What's it meant to do?

  • Time has passed, the prior dream (dual 1ghz athlon, ddr sdram) was weak. New dream:

    Dual-1.5ghz Mustangs
    Mamba board
    Nvidia's next card

    That ought to be enough workstation for a while.

  • Good for AMD; they are starting to build higher end machines, and will give Intel a run for the money in the lucrative "server" market. This bodes well for all "server" users, as more competition will eventually lead to tighter margins and lower prices for the consumer.

    The specs look pretty good too. The AMD 760 MP chipset is a DDR SDRAM solution that can support two 266MHz FSB Athlons. The chipset has advanced buffering to enable maximum transaction concurrency.

  • I was going to email you directly but your email address is not readily available. Hope you read this followup.

    The machines are custom-rolled by a vendor local to the Triangle region of North Carolina. We've been very impressed with their performance.
  • $2500 is a very normal price for a desktop machine. I paid more than that for a Pentium 60MHz machine back in 1993. What's the big problem?
  • I expect to see a computer with gigabytes of ram on my desk in a few years.

    What?!? In a few years? How about today?

    We are buying 1GHz Thunderbird machines with 1GB of SDRAM for $2500 ea. And that's in a rackmount chassis. Put it in a regular desktop case and use the savings to get an AGP video card and LVD SCSI hard disk.
  • I wasn't aware that there were any taboos on /. (M$ aside), but on that subject: I like AMD and BK and am a Sega zealot, and I love using Linux. But if it's taboos you want, I have a friend who runs Corel on his 750 K7 (2.2GB RAM); we are trying to educate him....
  • I was thinking more along the lines of the chipset having to be scrapped because of bad sections of the cache, which, without the cache, it wouldn't have had to be.
  • What happens if there's a fault in that "new" 8 meg that's being used? Do they just do a regular chipset version, like intel's 486sx with the disabled co-proc? If not, then faults in this 8 meg space, is going to cost them more money because of the added complexity.
  • Good for them. With the recent troubles that Intel has been having, I wonder how long AMD is going to be able to keep up the pace without a stumble. More power is always good, but is quality suffering?

  • by plastickiwi ( 170800 ) on Thursday October 12, 2000 @05:25AM (#712463)
    The article reads in part:
    In designing the Samurai, Micron noticed that 40% of its die was unused white space.

    Heh. They "noticed" that 40% of the die space was unused?

    ENGINEER BOB: Hey Steve, I just noticed that we're only using 60% of the die space on the Samurai.
    ENGINEER STEVE: Hmm.... Damned if you're not right! How did that slip past us in the months we spent designing it? Good thing you're on the job, Bob!

  • Let's wait first 'till this hits the market huh... Besides, technologies like this are not targeted at the consumer market, and AMD still has a lot of catching up to do in the server market...
  • And can you name a single incompatibility due to the CPU?

    And you can bet that the FPU in the Athlon will smoke an Intel chip computing the lightmap. In fact, send it to me; I'll run it.

  • No way. Integrated graphics always suck compared to state-of-the-art AGP 3D cards. Why would you want crappy integrated graphics on a high-end DDR SDRAM mobo?
  • umm, DEC Alpha mobos have had 8MB of L3 for quite a while, and K6-3's had integrated L2, which made the on-mobo cache L3... Of course, no one's put it in the chipset directly before, so that part's cool as hell. But L3 cache itself is old news.
  • Mmm, what wonderful days these are when I can pay less for RAM with twice as many megabytes as I could for my first hard drive... when it's cheaper to buy an entire motherboard with eight megs of *cache* than it was to buy eight megs of RAM.

    Without any additional RAM, you could play a MEAN game of DOOM with that thing. You could even run Windows 3.11 with Word and Excel open AT THE SAME TIME...
  • Good for AMD?

    wtf ARE you on?

    Go read the article instead of rushing to post early, and you'll notice that this design is not from AMD at all, but from *MICRON*.
    You know, the (second?) largest manufacturer of DRAM in the world, and the first company to stand up to RAMBUS's bullying tactics - and now this!

    Go Micron!

  • "I would've loved to see a 4+way MP chipset with specs otherwise similar ..."

    Be careful what you wish for...! :)

  • by Limecron ( 206141 ) on Thursday October 12, 2000 @05:34AM (#712471)
    All I can think right now is that I'm glad that I don't own stock in Intel.

    What is Intel going to come out with to top an Athlon at 1.5ghz (and that's with AMD's current core)...

    A P4 with a whopping one pound heatsink and that requires a new power supply?

    You know, I won't buy one, if I can buy an Athlon that I may even be able to keep my A7V for. (And for a few hundred dollars less, as well.)
  • The FSB can't sustain it anyway... so why 9.6GB/s? I can understand the lower latencies, those will be useful, but that bandwidth??? Even the FSB and I/O together can't use it.
  • Some clarification of the original post (also a general response): The L1 and L2 caches are so much higher bandwidth because they run at core speed. Main memory runs at FSB speed (133MHz doubled -- it's a DDR board). The board is NOT SMP, so I still don't see the point. I think syncing the cache with the FSB in terms of clock and width would provide the lowest latency, and extra bandwidth doesn't matter anyway. Granted, DMA devices can use SOME bandwidth, but not much. I think even 3GB/s or so would be PLENTY.
  • by evanbd ( 210358 ) on Thursday October 12, 2000 @05:40AM (#712474)
    Ummm.... read the article...

    This isn't about that at all. This is Micron's new chipset, Mamba. It is the successor to Samurai, their DDR reference chipset. When they built Samurai, they found they had not used 40% of the die space, so they added an 8MB DRAM cache to it. The cache is 50% lower latency, with 9.6GB/s of bandwidth; it is completely different from the 760MP buffer, which is strictly a BUFFER, not a cache, and only allows some reordering to improve performance.

  • L3 cache in the core logic is really overkill, but if you have free space you might as well use it.

    But, what would be real cool would be adding a decent graphics core with enough embedded memory to still run fast.

    Ah, now that would be nice....
  • I don't have to "invent" any incompatibility; there are already two glaring examples out there: the original GeForce running on the original Athlon (so incompatible that Maximum PC did an "Athlon and GeForce" article stating that it was a lost cause). This was due to the GeForce needing an "Intel Pentium III or 100% compatible" processor. The original GeForce had major issues running on any Athlon system, so it can be argued that the Athlon is NOT 100% compatible with the Pentium III.

    Also, this Microsoft Knowledge Base article [] chronicles the AGP memory addressing problem with the Athlon chipsets. This has been logged as an official "errata" by AMD. This includes the VIA and AMD chipsets.

  • If you took a good look at the architecture of John Carmack's engines, you would know that they were at the forefront of technology. The Quake 1 engine was the first fully 3D game-rendering system. Unlike other games released then, sprites were used only for explosions, air bubbles, and an ornamental sphere; everything else was modeled in 3D. Quake 2 carried on what the Q1 engine started by adding extended OpenGL support, colored lighting, and 16-bit rendering. The Q2 engine was so efficient that the core technology was used in other successful games, one of the most prominent being Half-Life. The Quake 3 engine added native 32-bit texturing, model animation interpolation, cube environment mapping, and an incredibly high quality texturing engine. It only seems fair to honor such gems of development by using them to push computer systems to their limits.
  • To go a little further on this, the videocard in my current computer has more memory than the total amount of Ram in my last machine. On the other hand, I really did stick with that Pentium 90 for a really long time. I guess that's what they mean by "loyalty goes two ways".
  • I don't know if anyone's actually mentioned it (I haven't had time to read any replies), but...

    Could you imagine a Beowulf cluster of these babies!!!

    Gimme Karma!!!!!!
  • I had a Micron chipset based board for quite some time that I bought off eBay. It was a Micron "Grizzly" board; the model was MTSAM64GZ. It was a very cool board, having 8 interleaved DIMM slots, two 64-bit 66MHz PCI and three 64-bit 33MHz PCI slots, and dual Slot 1, with new voltage regulators supporting Coppermines. It also had integrated U2W SCSI and a 10/100 NIC. I sold it for $300, because I couldn't live without an AGP slot and the ability to overclock.
