AMD Finally Unveils Barcelona Chip

Justin Oblehelm writes "AMD has finally unveiled its first set of quad-core processors, three months after its original launch date due to its 'complicated' design. Barcelona comes in three categories: high-performance, standard-performance and energy-efficient server models, but only the standard (up to 2.0 GHz) and energy-efficient (up to 1.9 GHz) categories will be available at launch. The high-performance Opterons, together with higher frequencies of the standard and energy-efficient chips, are expected out in the fourth quarter of this year. But it's far from clear that this is the product that will help right AMD's ship."
  • Benchmarks (Score:5, Informative)

    by eebra82 ( 907996 ) on Monday September 10, 2007 @09:21AM (#20537431) Homepage
    Here's some benchmarking done by Anandtech [anandtech.com].

    And a performance preview for Barcelona desktop as well [anandtech.com].
  • by ZachPruckowski ( 918562 ) <zachary.pruckowski@gmail.com> on Monday September 10, 2007 @09:23AM (#20537447)
    Barcelona is a different architecture from K8 (the architecture of the current X2s). Its overclocking performance is currently unknown. Just as Intel's overclocking potential improved as it went from Pentium -> Core 2 Duo, Barcelona may increase or decrease AMD's overclocking potential.
  • by Stentapp ( 19941 ) on Monday September 10, 2007 @09:28AM (#20537493) Journal
    "The delay puts the chip maker a full generation behind its archrival in terms of chip manufacturing processes. Intel's quad-core processor, which was launched in November last year, melds two of its duo-core processors into a single package."

    Heh, shouldn't that be "full generation ahead" since AMD manages to put four cores on a single die?
  • Techreport (Score:5, Informative)

    by Eukariote ( 881204 ) on Monday September 10, 2007 @09:29AM (#20537499)
    The Techreport also has a review up: http://techreport.com/articles.x/13176/1 [techreport.com]. Barcelona is similar to Core2, clock for clock. It has better energy efficiency and SMP scaling. But the clock frequencies will need to come up in order to beat Intel's highest clocking chips in absolute performance.
  • Re:I'm curious (Score:3, Informative)

    by llirik ( 1074623 ) on Monday September 10, 2007 @09:35AM (#20537573)
    * 2347 - 1.9 GHz, $316
    * 2350 - 2.0 GHz, $389
    * 8347 - 1.9 GHz, $786
    * 8350 - 2.0 GHz, $1019
    * 2344 HE - 1.7 GHz, $209
    * 2346 HE - 1.8 GHz, $255
    * 2347 HE - 1.9 GHz, $377
    * 8346 HE - 1.8 GHz, $698
    * 8347 HE - 1.9 GHz, $873

  • by Anonymous Coward on Monday September 10, 2007 @09:40AM (#20537621)
    "The delay puts the chip maker a full generation behind its archrival in terms of chip manufacturing processes.

    Emphasis mine. Reading comprehension 101: Read the whole sentence. AMD is at 65nm, Intel is at 45nm, just as when AMD was at 90nm, Intel hit 65nm. This qualifies them as being "a generation behind" in chip making processes.

    Whether or not their architecture or their core design is better is completely irrelevant to that sentence (but relevant to the next, which is why it's so odd they'd put those two sentences together in the first place; even AMD admits if they could go back in time they'd do an MCM).
  • by Anonymous Coward on Monday September 10, 2007 @10:19AM (#20538169)
    vs. the old days. Until not too long ago, they charged based on "power units". What's a power unit, you ask? 1 MHz on x86 was 1 PU, 1 MHz on SPARC was 1.5 PU, etc. (So for example your departmental E450 with four 400 MHz CPUs would be 4x400x1.5 = 2400 PUs.) How much did a PU cost in licensing? Well, you see, said the Oracle salesman with a gleam in his eye, that all depends... That they've shifted to a flat rate per core is actually a big win over the old model for their customers.
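For concreteness, the old per-MHz "power unit" math described above can be sketched like this (the rate table is illustrative, not Oracle's actual price list):

```python
# Sketch of the old "power unit" licensing arithmetic described above.
# The PU-per-MHz rates are the ones quoted in the comment; treat them
# as illustrative, not an official rate card.
PU_PER_MHZ = {"x86": 1.0, "sparc": 1.5}

def power_units(arch, num_cpus, mhz_per_cpu):
    """Total power units for a machine under the old per-PU model."""
    return num_cpus * mhz_per_cpu * PU_PER_MHZ[arch]

# The departmental E450 example: four 400 MHz SPARC CPUs.
print(power_units("sparc", 4, 400))  # -> 2400.0
```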
  • by TheThiefMaster ( 992038 ) on Monday September 10, 2007 @10:24AM (#20538275)
    It depends on three things:
    1: Whether the software CAN use multiple cores.
    2: How efficiently it uses the extra cores.
    3: Whether the program is currently limited by cpu power or by something else.

    For "1:", if the program can't use the extra cores, then you'll only see a speed improvement from the fact that the cores are 15% more efficient. i.e. A 2GHz one of these quads performs the same as a 2.3GHz (+15%) dual core from the previous generation for applications in this category.

    For "2:", if the program can use the extra cores, but not as efficiently as the first, then you'll see a speed increase equivalent to this. e.g., if the program does two tasks at once, one that takes 70 seconds and one that takes 30, then on one core it'll take 100 seconds. On two cores it would do the 70 second task on one core and the 30 second task on the other, reducing the total time to 70 seconds, a ~40% speed improvement.

    For "3:", if the application is limited by something other than the cpu, e.g. "how quickly it can pull data from the hard-disk", you will likely see no improvement whatsoever.

    In conclusion, depending on what applications you use, you will see anywhere from no improvement up to 2.3x the previous speed (x2 for double the cores and +15% from the improved efficiency).

    Note: As these cpus also have an extra instruction set extension, applications that make use of this could exceed the speed improvements I noted above.
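The three cases above can be turned into a toy speedup estimate. This is a sketch under the parent's assumptions: the 15% per-core gain is the comment's figure, the scheduler ignores synchronization overhead, and case 3 (non-CPU bottlenecks) is not modeled:

```python
# Toy model of the cases above: estimated speedup on one of these
# quad-cores vs. a previous-generation part, for a set of independent
# tasks. PER_CORE_GAIN is the 15% per-core improvement cited above.
PER_CORE_GAIN = 1.15

def speedup(task_seconds, cores):
    """Greedy longest-first schedule of independent tasks across cores,
    ignoring synchronization overhead (case 2 in the comment above)."""
    loads = [0.0] * cores
    for t in sorted(task_seconds, reverse=True):
        loads[loads.index(min(loads))] += t  # put task on least-loaded core
    serial = sum(task_seconds)
    return serial / max(loads) * PER_CORE_GAIN

# Case 1: a single serial task sees only the per-core gain.
print(speedup([100], 4))                # -> 1.15
# Case 2: the 70s + 30s example: 100/70 ~ 1.43x, times the 15% gain.
print(round(speedup([70, 30], 4), 2))   # -> 1.64
```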
  • More Barcelona (Score:2, Informative)

    by bigwophh ( 1100019 ) on Monday September 10, 2007 @10:56AM (#20538813)
    Specs of the entire new Barcelona line-up, more details, and pricing are available here as well:

    http://www.hothardware.com/Articles/AMD_Barcelona_Architecture_Launch_Native_QuadCore [hothardware.com]
  • by struberg ( 757804 ) <strubergNO@SPAMyahoo.de> on Monday September 10, 2007 @11:20AM (#20539199)
    Intel and AMD are using different production technologies for their dies. As far as I know, AMD is using IBM's SOI (Silicon On Insulator) process, which has much lower leakage current and is therefore much better at the same feature size. But it also seems more complicated to shrink this technology to 45nm.
  • by Chris Burke ( 6130 ) on Monday September 10, 2007 @11:37AM (#20539491) Homepage
    Ah, my bad, thanks for clearing this up...so that explains Intels ability to suddenly have lower power chips...so it is they that are playing with the numbers this time, interesting :)

    To some extent. The Pentium 4 is where this started. The Netburst architecture was very power hungry normally, but it's maximum power was insane. The graph of power consumption vs benchmark had a long "tail", which Intel sought to chop off. See, TDP is a real-life number, since it's used by OEMs and others to design thermal solutions for the parts. If the thermal solution is insufficient, then the parts fail. So it's not actually possible to fudge TDP numbers.

    What Intel decided to do was implement an on-chip thermal diode and some logic that halved the effective clock cycle* if the temperature went above a certain threshold. What this meant is that based on how they programmed this logic, they could guarantee that the chip's power consumption would never go above a certain level no matter what code you were running. They had effectively lopped off the long tail. The downside is that if your application does draw more power than the limit, then you'll see vastly reduced performance because of the clock throttling. Most of the time this is transient so it's not that noticeable, but there were benchmarks out there that showed this effect very clearly. For example, a certain game benchmark would get lower scores at 640x480 than at 1600x1200, because at the lower resolution the game was CPU bound and was crossing the thermal threshold.

    So theoretically with this feature Intel could fudge the numbers however they wanted and claim whatever TDP they desired. In practice they don't have that much flexibility because if they set the bar too low then their effective performance would suck, and their TDP numbers are set at average power + several standard deviations.

    The main reason why Intel was able to suddenly have low power chips is because they ditched the Netburst architecture and went back to a design that was more balanced between high clock speeds and high IPC.

    They kept the clock throttling logic, though, since it does still give them some benefit in reporting lower TDP numbers. AMD doesn't have this feature, so their TDP is truly the maximum power (as determined by running a "power virus") that you would ever see, even though it's unlikely. Since power has become ever more important as a marketing feature even outside of mobile, I'm not surprised that AMD would decide to start touting expected numbers vs maximum.

    * Actually a 50% duty cycle of full speed for some number of microseconds followed by completely off.
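The throttling scheme described above can be sketched as a toy simulation: when the on-die sensor crosses a threshold, the clock drops to a 50% duty cycle until the part cools. All constants here are made up for illustration, not Intel's actual parameters:

```python
# Toy simulation of thermal clock throttling as described above:
# above a trip temperature, the effective clock runs at a 50% duty
# cycle. Every constant is hypothetical, chosen only to show the effect.
THRESHOLD = 85.0   # degrees C, hypothetical trip point
AMBIENT = 40.0

def simulate(power_per_tick, heat_per_watt=0.2, cooling=0.3):
    """Return (useful work done, final temperature) for a power trace."""
    temp, work = AMBIENT, 0.0
    for p in power_per_tick:
        duty = 0.5 if temp > THRESHOLD else 1.0  # halved effective clock
        work += duty                             # work scales with duty cycle
        temp += p * duty * heat_per_watt         # heating from power drawn
        temp -= (temp - AMBIENT) * cooling       # passive cooling toward ambient
    return work, temp

# A sustained "power virus" load trips the throttle; a light load never does,
# so the heavy load loses effective cycles.
hot_work, _ = simulate([120.0] * 50)
cool_work, _ = simulate([30.0] * 50)
print(hot_work < cool_work)  # -> True
```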
  • Re:Cool (Score:3, Informative)

    by afidel ( 530433 ) on Monday September 10, 2007 @12:24PM (#20540293)
    You are charged per core and can only go below the number of physical cores in the machine if the architecture has hard partitioning of resources, for instance a zone with hard resource limits is acceptable but a container with soft limits is not (well, it is but you need licenses for the max possible resources the container has access to).
  • by Wavicle ( 181176 ) on Monday September 10, 2007 @03:13PM (#20543091)
    When only measuring single core performance, clock for clock, Barcelona is on par with Cloverton.

    Unfortunately processors are not generally sold "clock for clock." If you're on par clock for clock, but the other guy is clocked more than 50% faster than you... that could be trouble.

    What good is an Intel chip that has fast floating point but the bus cannot feed it data fast enough?

    Plenty good if the data can fit in cache, in which case the unit can be fed fast enough. For instance, say you're running LinPack [wikipedia.org]. But then, who uses LinPack as a benchmark [top500.org]?
  • by Wavicle ( 181176 ) on Monday September 10, 2007 @06:41PM (#20545907)
    I simply want to use the chip that gives me the greatest floating point throughput I can get.

    Define throughput. At some point you need to decide if you are solving equations like LinPack or equations like spec_fp. One causes lots of cache misses and benefits from memory bandwidth, the other does not.

    Right now that chip appears to be Barcelona.

    Well that's a hypothetical statement based on perception of your needs and their marketing.

    I'm not interested with hypothetical arguments

    That explains why you're making them (???)

    I am looking forward to using Barcelona processors because they will get my mathematical computations done faster.

    Hypothetically. Are you going to hypothetically switch when Intel's Penryn with SSE4 comes out? What about Intel's Nehalem?

    By the way, check out number 2 and 3 on your top 500 supercomputer list - they're Opterons.

    And?? They were designed and built before Core 2 was released. Do you think I'm going to argue they should have used Pentium 4's? Those systems also make solid use of NUMA through a custom Cray crossbar (Seastar), and Intel doesn't have that. If they made them today I see no reason for them not to use Opterons. Do you have a computer with lots of Opterons and a Cray Seastar router on order?

    The performance of those systems is measured using LinPack. As I mentioned at the beginning, declaring a 2.0 GHz Barcelona as having faster fp throughput than a 3.2 GHz Core 2 depends wholly on which types of calculations you are doing. spec_fp does calculations that are memory bound, LinPack does not (at least not as much). Barcelona's faster fp throughput is not due to a markedly superior fp unit (though it may be marginally better) but to its onboard memory controller. If you need that sort of thing, great, go with Barcelona. If you need raw speed on smaller working sets (under a couple of megabytes), chances are good that the higher clocked Core 2 with its huge cache will win.
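The tradeoff described above can be put in rough roofline terms: a chip is limited by its peak fp rate when the working set fits in cache, and by memory bandwidth when it does not. The numbers below are hypothetical stand-ins, not measured figures for either part:

```python
# Back-of-the-envelope roofline-style model of the argument above:
# cache-resident work is compute-bound, streaming work is memory-bound.
# All throughput and bandwidth numbers are illustrative, not measured.
def effective_gflops(peak_gflops, mem_gbps, bytes_per_flop, fits_in_cache):
    if fits_in_cache:
        return peak_gflops                           # compute-bound
    return min(peak_gflops, mem_gbps / bytes_per_flop)  # memory-bound

core2     = dict(peak_gflops=12.8, mem_gbps=8.5)   # hypothetical high-clock part
barcelona = dict(peak_gflops=8.0,  mem_gbps=12.8)  # hypothetical part with
                                                   # an on-die memory controller

for fits in (True, False):
    c2 = effective_gflops(core2["peak_gflops"], core2["mem_gbps"], 2.0, fits)
    bc = effective_gflops(barcelona["peak_gflops"], barcelona["mem_gbps"], 2.0, fits)
    label = "cache-resident" if fits else "streaming"
    print(label, c2, bc)  # high-clock chip wins the first case, not the second
```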
