
AMD Packs Six-Core Opteron Inside 40 Watts 181

adeelarshad82 writes "Advanced Micro Devices has launched a low-power version of its six-core Opteron processor in time for VMworld, a key virtualization show that opens on Monday. The six-core AMD Opteron EE consumes 40 watts, and is designed for 2P servers, among the most popular in the virtualized server space."
This discussion has been archived. No new comments can be posted.

  • Not a good idea... (Score:5, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Monday August 31, 2009 @12:10PM (#29261987) Journal
    But with a 40 watt chip you could get that into a laptop, if you felt like it. Not the thinnest, lightest, or quietest laptop around; but plenty of 14-15 inch units under two inches thick (though often not far under) were running P4s at least that power-hungry back before P-Ms became cheap enough for common use.

    If you were willing to deal with the size and weight of those high-end gamer laptops, the ones with quad core i7s and SLI, you could probably build a 17-inch dual socket system....
  • by Anonymous Coward on Monday August 31, 2009 @12:15PM (#29262065)

    Anyone here know enough about CPU design who can guesstimate what the lower bound on CPU energy consumption is? I think I understand that you can lower the operating voltage of the chip, but this leads to more computation errors due to thermal noise. Or lower the clock speed of course... but flops/W would stay the same. Or use a lithography process that produces smaller size features. Then if you get too small though things don't quite work the same due to quantum effects etc. Does using more cores help? How is AMD going about this problem?
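
    The question above comes down to the classic CMOS dynamic-power relation, P ≈ C·V²·f. A back-of-the-envelope sketch (the numbers are purely illustrative, not AMD's actual design parameters) shows why two slower, lower-voltage cores can beat one fast core on flops per watt:

```python
# Rough sketch of the classic CMOS dynamic-power model: P ~ C * V^2 * f.
# All numbers are illustrative, not real chip parameters.

def dynamic_power(c_eff, volts, freq_ghz):
    """Dynamic switching power, in arbitrary watt-like units."""
    return c_eff * volts**2 * freq_ghz

# One fast core vs. two slower cores delivering the same aggregate GHz.
# Lower frequency permits lower voltage, and power falls with V squared.
one_core = dynamic_power(c_eff=10, volts=1.2, freq_ghz=3.0)
two_cores = 2 * dynamic_power(c_eff=10, volts=0.9, freq_ghz=1.5)

print(one_core)    # single fast core
print(two_cores)   # two slow cores, same total GHz, much less power
```

    That V² term is one reason more cores at lower clocks help, and it is bounded below by leakage current and, eventually, thermal-noise error rates as the voltage drops.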

  • by morgan_greywolf ( 835522 ) on Monday August 31, 2009 @12:24PM (#29262243) Homepage Journal

    Compared to the 2377 EE, a 40-watt quad-core @ 2.3 GHz, the new six-core chip delivers approximately 1/3 more performance.

    Depends on what kind of server. If you're talking about a Web server, IIS 5.1 and later or Apache 2.x and better with multithreading on, yes. If you're talking about Apache 1.x or 2.x without multithreading, or some older versions of IIS, no.

  • by JWSmythe ( 446288 ) on Monday August 31, 2009 @12:40PM (#29262509) Homepage Journal

        I had this argument with someone once. They didn't quite get it. The machine they were using was a 4-CPU 700MHz server. In their logic, 700MHz * 4 = 2.8GHz. I wanted to move them to a 2-CPU 1.4GHz machine, which I promised would be blazing fast. In their mind 1.4GHz * 2 = 2.8GHz, so there was no difference.

        There were a bunch of reasons for the move. The hardware was old. The form factor was huge (like 5U tall) and power hungry. The OS badly needed to be updated, and we couldn't take it offline for a day to do that. One day there was a fault of some kind (it's been a while, I don't remember specifically), so we moved it over to the new machine that I had wanted to move them to. They were amazed. Their $40,000 server had been replaced by a $2,000 server (original costs for both), and it was running faster and better than before. After the move, I repaired their old server, upgraded the OS, and made it ready. I offered to move them back, and they refused. :)

        About a year later, we had a 2-CPU 2.4GHz machine ready for them, and I offered again, "May I move you?" This time there wasn't a complaint. We just scheduled a window and did it. I set a 3-hour window, and we had it completed in about 15 minutes.

        I agree, I'd rather have CPU speed AND cores. I'd sacrifice extra cores for more speed. CPU speed has stagnated while they're growing cores. I remember this happening in the past too, around the time CPUs were 200MHz. You could get motherboards that supported one CPU, then 2 CPUs, then 4 CPUs, but the speeds weren't going up. You could give me 100 CPUs at 200MHz, but I'd rather have one at 10GHz.

        I'm sure people will throw out a bunch of excuses why. I remember back when the 50MHz CPU was the fastest available, there were all kinds of reasons thrown around why CPUs would "never be faster". People were very insistent that they were right. There were RF interference issues. If CPUs got to RF speeds, radio and TV would cease to work. If we got up near 2.4GHz, people would be cooked because it's the same frequency as microwave ovens. There was no way to deal with the thermal issues, and computers would be ovens requiring liquid cooling (like liquid nitrogen or helium, not water cooling). Blah, blah, blah, blah. As we've seen, we got well beyond 50MHz. It's just a matter of time. I'm just disappointed that we end up stagnating. It's probably financial: the market will support a slower multicore CPU, but people won't spend the money on faster CPUs right now.

        I always love the "latest greatest" craze. It's entertaining. People will spend mad money on the latest and greatest, and I'll wait 6 months or a year to buy the same thing at a fraction of the cost. Maybe I'm part of the problem there. I won't drop $500 on a CPU, but I'll drop $100 on last year's model that's only slightly slower.

        At least right now it's nice, since I can buy older and older hardware, and really not be far behind the curve. :)
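
    The cores-versus-clock tradeoff the parent describes is essentially Amdahl's law: extra cores only speed up the parallel part of a workload, while a faster clock speeds up everything. A quick sketch (the 90%-parallel figure is just an illustrative assumption):

```python
# Amdahl's law: speedup from N cores is capped by the serial fraction.
# The parallel_fraction below is an assumed example, not a measurement.

def speedup(parallel_fraction, n_cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A workload that is 90% parallelizable:
print(speedup(0.9, 100))   # 100 slow cores: the serial 10% dominates
print(speedup(0.9, 4))     # 4 cores

# By contrast, one core at 50x the clock (the hypothetical 10GHz vs
# 200MHz case) speeds up the serial part too: a flat 50x, no cap.
```

    With 100 cores the speedup tops out near 9x, which is why "100 CPUs at 200MHz" loses badly to one hypothetical 10GHz part on anything with meaningful serial work.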

  • by idiotnot ( 302133 ) on Monday August 31, 2009 @12:45PM (#29262573) Homepage Journal

    The important information FTFA is here:

    "AMD also estimated that the power consumption for a fully populated 42U rack would be 9.2 KW using the six-core Opteron 2425 HE, a 55-W part. Replacing those chips with the 2419 EE would require 7.5 KW, about an 18 percent power savings."

    And that's just the in-rack consumption. I would imagine these probably run cooler, too, which will help with HVAC costs.

    AMD seems to be doing a better job shrinking down dated designs at this point. While Intel is selling the Atom, which is undoubtedly cooler and less power-hungry, it's still based on a very old CPU design that isn't up to heavy computing tasks. AMD, OTOH, has now established a pretty good record of taking mainline processors and developing lower-power versions. They scaled down what used to be a pretty hot Athlon core (Thunderbird) to the Geode (as used in the OLPC). They followed that with a 45W Athlon 64 X2. Now the Opteron. Intel does have a 35W Conroe, but it's in Celeron cripple-mode badging, a shadow, performance-wise, of the original C2Ds that initially came out on that core.

    I hope that AMD does release a desktop version of this, but I don't know if they could keep it profitable ($900+ eek.)

  • by morgan_greywolf ( 835522 ) on Monday August 31, 2009 @12:46PM (#29262581) Homepage Journal

    So your comment ignores the fact that this CPU will probably be running 6 (or more) VMs, which could just as well run single-threaded code....

    Clearly that's the market this chip is targeting. I'm simply pointing out that even if you're not in that space, this chip compares favorably to a 2.3 GHz quad-core or even a 3.2 GHz dual-core, so long as you're running multithreaded apps on an OS with a sane threading model. Just understand that single-threaded apps won't compare unless you're running them virtualized.

    OTOH, separate blades are going to give you better performance, no matter how you look at it. On the gripping hand, if you have 10 blades, each running this 6-core CPU (conceivable in 40 watts!) and a VM hypervisor... sounds delicious.

  • Re:Hardware (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Monday August 31, 2009 @02:41PM (#29264347) Journal
    The K6 series were very hot compared to Intel equivalents. The reason the reputation persisted is probably that they didn't add thermal throttling on-chip until a lot later than Intel. The P4 was much hotter than a t-bird, but if it overheated it would throttle, while the Athlon would just catch fire. I had quite a few t-birds burn out due to the stock fan not being adequate. We had the opposite problem with our cluster: the P4s were throttling due to uneven cooling, so nodes were all running at different speeds and job scheduling got messed up.
  • Re:Hardware (Score:3, Interesting)

    by sjames ( 1099 ) on Monday August 31, 2009 @02:48PM (#29264461) Homepage Journal

    Indeed. It's been a very long time since AMD has been the hotter running CPU. It was Intel that introduced us to heatsinks that could hurt you if you dropped them on your toes.

    It's hard to believe that at one time CPUs didn't have heat sinks at all.

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday August 31, 2009 @02:48PM (#29264465) Journal
    If you really want to go overboard on "dubiously suitable for laptop use", you can get one of these.

    Dual UltraSPARCs, up to 16 gigs of RAM, a full-sized 64-bit PCI slot, 3 GbE ports. Of course, it's 22 pounds, and they don't even say what it costs.
  • Re:Hardware (Score:3, Interesting)

    by hairyfeet ( 841228 ) <bassbeast1968 AT gmail DOT com> on Monday August 31, 2009 @03:17PM (#29264901) Journal

    They also couldn't take the abuse that the Intel chips could. At the last shop I worked, the boss had what he called "dead buckets" where we would chunk dead parts: one for CPUs, one for RAM, etc. Don't ask me why, as Doug never did give me a straight answer on that one. Anyway, the dead CPU bucket was almost entirely AMD chips. It held a total of two Intels, one of which got surged during a lightning storm, the other surged on a blown PSU. We found that if an AMD's fan clogged or failed, the chip would often blow.

    So while I haven't tried to abuse a newer AMD, I still build Intel boxes for the SMBs that are doing construction or other jobs where I know the box is gonna get seriously funky. I have switched to AMD for my home builds, as the bang for the buck just can't be beat (just built a REALLY nice quad for a customer that only set him back $700), but on those jobs where I know the box will be subjected to construction grime, low ventilation, etc., I stick with Intel. After all, it is MY ass that has to replace it if it blows during the warranty period. But I can understand why someone who might have been "burnt" by AMD in the past might be a little gun-shy of them now.

  • Re:Hardware (Score:3, Interesting)

    by PitaBred ( 632671 ) on Monday August 31, 2009 @03:42PM (#29265293) Homepage
    Why should your job scheduling be at all dependent on how fast things finished? That seems like a horrible assumption to make when you're talking about a network of computers. All kinds of things could cause slowdown, and even failure.
  • Re:Hardware (Score:3, Interesting)

    by evilviper ( 135110 ) on Monday August 31, 2009 @03:59PM (#29265491) Journal

    What reputation? Since the days of the original Thunderbird core (which still ran cooler than comparable P4s, though admittedly didn't have meltdown prevention circuitry), AMD has consistently given Intel a run for their money in that regard.

    AMD was extremely sloppy on power management before the K8/Opteron days.

    See my old /. Journal on the subject.

    In short, while the maximum power of AMD CPUs was about the same as their P-III equivalents, AMD chips (Thunderbird and Athlon XP) would run at their maximum power ALL THE TIME, even when there was NOTHING to do.

    This didn't get resolved until the very end of the Socket-A days, when AMD finally REQUIRED all motherboards to be tested to work with the S2K bus disconnect feature of AMD CPUs. Before then, AMD CPUs were undeniably hot. However, Intel screwed up so badly with the Pentium 4 that they leapfrogged AMD's power management problem with a CPU so hot no amount of power management could save it...

  • by Anonymous Coward on Monday August 31, 2009 @04:47PM (#29266213)

    Bullshit. I've personally written games (in python, multiprocess, though not multithreaded, because hey, it's python) that scale to more cores than I can afford right now (although if I had enough money, I could conceivably get enough; and yes, in a few years such hardware will be affordable). And while I don't know how many cores each of my desktop applications can scale to, I know my desktop overall scales to more cores than are currently for sale at any price (that fit into a single ATX form-factor box; I'm not counting networked supercomputers).

    Your "software doesn't take advantage of current hardware" argument is so 2004. Time to upgrade your software.
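
    For what it's worth, the multiprocess pattern the parent describes is easy to sketch with Python's stdlib (a toy example of the approach, not the poster's actual game code):

```python
# Toy multiprocess scaling pattern: CPU-bound work split across processes.
# Illustrative only -- not the parent poster's code.
from multiprocessing import Pool

def burn(n):
    """A CPU-bound kernel; each worker process has its own GIL."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    work = [50_000] * 4              # four independent chunks
    with Pool(processes=2) as pool:  # scale processes to available cores
        results = pool.map(burn, work)
    print(sum(results))
```

    Because the workers are separate processes rather than threads, Python's GIL doesn't serialize them, which is exactly why multiprocess games and servers can use every core you can buy.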

  • by tietokone-olmi ( 26595 ) on Monday August 31, 2009 @07:23PM (#29267953)

    Except that with multiprocess concurrency (i.e. non-multithreaded Apache on Unix), you actually gain in a NUMA setup like the Opterons have been from day 1. See, in the optimum case in a NUMA environment, the server process that handles a request gets an entire memory bus for itself. That's far more scalability than with multithreading in the absence of memory duplication, which AFAIR Linux doesn't implement on a per-thread basis in the same address space.

    This is why Opterons practically own the 4-socket x86 space: unlike with Intel's older "hub-style" busses, on a NUMA system aggregate memory bandwidth goes up as sockets are added because the number of memory busses increases also.

  • by mikehoskins ( 177074 ) on Monday August 31, 2009 @10:32PM (#29269325)

    > That's just in the rack consumption. I would imagine these probably run cooler, too, which will help with HVAC costs.

    I understand that for every "power watt," a server room needs an additional 1-2 "cooling watts."

    So, if a rack takes 10KW, expect an additional 10-20KW of electricity to cool the server room.

    I'd then estimate 30KW total for a 10KW rack, just to be safe.

    So an 18% savings on a 10KW rack (1.8KW saved) really saves you on the order of 3.6KW to 5.4KW when you include cooling!

    At $0.10/KWH, you save a bunch of money in electricity.... Just an 18% savings on 10KW (20KW to 30KW total with cooling), means a total savings of $259.20 to $388.80 a month!
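
    The arithmetic above checks out. As a sanity check, assuming a 720-hour (30-day) month and the 2x-3x total-power multiplier the parent uses:

```python
# Sanity check on the parent's numbers: 18% of a 10 kW rack, with 1-2
# extra "cooling watts" per "power watt," at $0.10/kWh over 720 hours.
RATE = 0.10   # dollars per kWh (parent's assumed rate)
HOURS = 720   # hours in a 30-day month

rack_savings_kw = 10.0 * 0.18                # 1.8 kW saved at the rack
for total_multiplier in (2, 3):              # rack power + 1x or 2x cooling
    total_kw = rack_savings_kw * total_multiplier
    monthly = total_kw * HOURS * RATE
    print(f"{total_kw:.1f} kW saved -> ${monthly:.2f}/month")
```

    That reproduces the $259.20-$388.80/month range quoted above.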
