
Intel Pushes Back with Xeon 5100 140

Posted by Zonk
from the giving-amd-a-gentle-shove dept.
conq writes "BusinessWeek has a piece on Intel's newest chip, the Xeon 5100, which many believe might be the chip that allows Intel to stop losing ground to AMD. From the article: 'During the presentation, Intel ran the now-standard comparison test against AMD's highest performing chip, handily beating the system in a speed test. And in a jab at AMD execs, who handed Kill A Watt meters to analysts at the outfit's recent technology day, Intel execs used the same device to measure the new Xeon 5100 system's performance — gauged to be 7 watts better than that of the AMD-based system.'"
  • Keyword... (Score:2, Insightful)

    by parasonic (699907) on Thursday June 29, 2006 @02:59PM (#15630029)
    ...the keyword is might :)
  • by Tenareth (17013) on Thursday June 29, 2006 @03:10PM (#15630141) Homepage
    One of the largest costs in IT is Electricity...

    The cost of procurement of a server is a tiny percentage of its TCO.
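    The claim above is easy to sanity-check with back-of-the-envelope numbers. A minimal sketch, where the wall draw, PUE-style cooling multiplier, and electricity rate are all illustrative assumptions rather than figures from the article:

    ```python
    # Rough lifetime-electricity estimate for one server. All inputs are
    # hypothetical round numbers chosen for illustration only.
    watts_at_wall  = 400     # assumed server draw, including PSU losses
    cooling_factor = 2.0     # assume each watt of IT load needs ~1 watt of cooling
    rate_per_kwh   = 0.10    # assumed USD per kWh
    hours_per_year = 24 * 365

    annual_cost = watts_at_wall * cooling_factor / 1000 * rate_per_kwh * hours_per_year
    print(round(annual_cost, 2))  # ~700.8 USD per year
    ```

    Over a typical three-to-five-year service life, even these modest assumptions put electricity in the same ballpark as the purchase price, which is the poster's point.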

  • Fantastic (Score:5, Insightful)

    by Ajehals (947354) <a.halsall@pirateparty.org.uk> on Thursday June 29, 2006 @03:10PM (#15630150) Homepage Journal
    Now we wait for AMD's next move..

    Now, I have no preference in the whole AMD vs. Intel debate; I just use whatever seems to give me the most value for money at the performance I need. I am currently using AMD chips in kit four years old or younger and Intel chips in some of my older hardware (and haven't yet even looked at AMD64 or IA64 chips). But it is really good to see some serious competition between two industry giants. Long live the competition; it's better for the consumer.
  • Shilling (Score:1, Insightful)

    by Anonymous Coward on Thursday June 29, 2006 @03:19PM (#15630243)
    This story does not stand up to scrutiny. The power was not measured at the wall, where it matters. Also, no one outside of a few select reviewers running Intel-selected benchmarks has seen this chip. The Intel chip was supposed to ship on Monday, but it was only a "paper launch": Intel is only taking orders at this point. I'll wait for objective analysis once the chip actually ships before jumping to conclusions about how its performance compares to AMD's offerings.
  • Competition (Score:2, Insightful)

    by spykemail (983593) on Thursday June 29, 2006 @03:20PM (#15630250) Homepage
    This is how capitalism is supposed to work, people: multiple businesses compete in the same market, and when one lags behind it begins to lose market share (and therefore money), so it comes up with its own new product or service to compete.

    That's how you get good products at low prices: competition, plain and simple. The unfortunate thing about markets like PC and server processors (or even operating systems) is that there are only two major market share holders, and one is much larger than the other, making it tough for the smaller one to compete on volume.

    But as Apple and AMD have proven, you don't have to have the largest market share to innovate, and you can make a serious dent in the Microsofts and Intels of the world. Even if all it accomplishes is forcing them to put more effort into their products, both companies' customers win.
  • Ah.... (Score:5, Insightful)

    by theheff (894014) on Thursday June 29, 2006 @03:28PM (#15630304)
    Nothing like a little competition! Whatever brings me faster chips...
  • by Nom du Keyboard (633989) on Thursday June 29, 2006 @03:28PM (#15630306)
    ...gauged to be 7 watts better than that of the AMD-based system.

    Does this include the required Intel Northbridge chip (22W), or are we only looking at the CPU itself? And does the NB need a fan?

    Or is this for the entire system motherboard, in which case this is hardly an apples-to-apples comparison?

  • Re:Numbers skewed? (Score:3, Insightful)

    by Wesley Felter (138342) <wesley@felter.org> on Thursday June 29, 2006 @03:39PM (#15630451) Homepage
    "Fair" comparisons (like 65nm vs. 65nm) are interesting to academics, but what matters to customers is what you can buy from Intel now vs. what you can buy from AMD now.
  • Re:Duh (Score:2, Insightful)

    by Brian Stretch (5304) * on Thursday June 29, 2006 @03:40PM (#15630476) Homepage
    Sorry, I don't follow... are you saying Intel (or AMD) shouldn't compare their newest chips with anything until the other releases a chip after that? Or are you saying it's unfair to compare 90nm and 65nm chips against each other?

    I think what he means is that we should compare Intel's not-yet-buyable new chips with AMD's not-yet-buyable new chips. When end users start taking delivery of Woodcrest servers in, what, August maybe? Then Intel can boast for perhaps even several weeks, until AMD's new server chips are out.

    Attempting to tank the entire market until Intel's next-gen chips are out just because everyone knows Intel's current "Netburst" chips are overheating crap is lame.

    BTW, which Opteron CPU was Intel using in their comparison? Power consumption varies quite a bit even before you consider there are regular Opterons, Opteron HE, and Opteron EE series. A mere 7 watt advantage at the wall despite having started their 65nm transition earlier (AMD waits until they've figured out how to get mature yields before making a rapid switch to the next process node, very unlike Intel) tells me that Intel is going to get leapfrogged big-time in short order.
  • Re:No. (Score:4, Insightful)

    by happyemoticon (543015) on Thursday June 29, 2006 @04:06PM (#15630799) Homepage

    What a thoughtful and insightful post. Clearly IBM does not have production and yield problems, because they are courting three major game console manufacturers with their wonderful, efficient chips.

    Oh wait. Of these three, only two of them are actually available. Hrm.

    Oh yeah, and I seem to recall something about a shortage of XBox360s. Something about a chip company not making as many chips as they promised. Must've been the wifi card or something.

    WAIT, I DO recall a time when a company - think it was IBM - didn't produce enough G5 chips and people were backordering their Power Macs for months! Perhaps there is something to this after all.

    What's that? Your XBox360 consumes so much power that the PSU caught fire and burned a hole in your carpet? Guess there is a performance-per-watt issue after all. You know, that really does matter to a lot of people. There are data centers, especially in downtown locations, that can't grow their business any more because the power company won't sell them any more wattage. And if you remove the excess thermal paste, MBPs aren't all that hot.

    So yeah. Troll somewhere else.

  • True (Score:2, Insightful)

    by SlowEmotionReplay (822314) on Thursday June 29, 2006 @04:08PM (#15630820)
    IBM was indeed bad for Apple's bottom line, but Motorola was disastrous.
  • Details? (Score:2, Insightful)

    by NihilEst (976138) on Thursday June 29, 2006 @04:16PM (#15630932)
    I read the article at Tom's Hardware. Very interesting.

    But the peripheral requirements -- particularly FB-DIMM -- are interesting, too. And maybe a little scary. Anybody got a clue how these FB-DIMM units are gonna be priced per GB? We haven't seen any details on mobo pricing, either.

    I like the idea of lower power consumption and greater throughput. But if I can't afford to build the system, it doesn't do me much good.

    This announcement does sorta smell like marketing hype; I guess the implementations will tell the tale. Intel finally recognizes in public that they're getting their asses kicked by AMD, though, which is a good thing, IMO. Now if they'd just focus on price/performance competitiveness, they might even get me back as a customer.

  • by cyngus (753668) on Thursday June 29, 2006 @04:40PM (#15631222)
    There are reasons for this growing similarity: density, and cost (which is somewhat related to density). Laptops have always had to pack more into a smaller space, so heat was always a big concern. That concern has come to the server world because of racks and blades. Previously, servers were towers; you stacked a bunch in a room, not very densely, and that was fine. Now you pack a rack full of "pizza boxes" and end up with an oven pretty quickly. Cost, I would say, is a secondary factor. Previously you needed computing power, damn the cost; you had to have it. Now you can have almost more than you'll ever need, so people want it without running their electric bill through the roof. Cost is also related to heat, because the cooling system, and the electricity to run it, can be just as expensive as the hardware or the electricity needed to run the computers themselves! In some sense, servers have become more like laptops in their requirements: you'd like them to be small (so you can pack them together, not for transport), and you'd like them to be stingy with electricity (for cost, not battery life).
  • by Anonymous Coward on Thursday June 29, 2006 @08:48PM (#15633129)

    Actually, single-core Opteron processors running at 3 GHz do exist (the Opteron 256 and 856). I think what you meant is that there is no dual-core Opteron running at 3 GHz, and that is true: the fastest dual-core Opteron processors stop at 2.6 GHz (the Opteron 185, 285 and 885). This is why I said that only the high-end Xeon processors make sense (the Xeon 5140, 5150 and 5160). Those processors are at least as fast as the fastest dual-core Opteron in any non-memory-intensive workload. However, as soon as memory latency becomes a critical factor, a 2.6 GHz dual-core Opteron becomes at least as fast as a 3.0 GHz dual-core Xeon, as shown in this Apache benchmark [gamepc.com] where the Opteron outperforms the Xeon by 15-25% at the same clock frequency (and 3.0 GHz is only 15% faster than 2.6 GHz).

    So in the end, Woodcrest or Opteron: which one is better? It all depends on whether the workload is memory intensive or not. No matter how fast Woodcrest runs, its memory latency will always be around 100 ns, while it is about 50 ns for the Opteron. Of course Intel tries to hide this latency with huge 4 MB caches, but as seen in the Apache benchmark, sometimes even 4 MB is not enough. It also means that the 4 MB Xeons are more expensive to produce than the 1 MB Opterons, because the L2 cache takes up more than half of the die space. This is why Intel is forced to use a 65nm manufacturing process: to try to win back this "lost" silicon wafer space.
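    The cache-vs-latency tradeoff above can be sketched with a simple two-level average-access-time model. The DRAM latencies (~100 ns vs. ~50 ns) are the figures quoted in this comment; the L2 latencies and hit rates below are illustrative assumptions, not measurements:

    ```python
    # Two-level average memory access time: hits served by L2, misses by DRAM.
    def avg_access_ns(l2_hit_rate, l2_latency_ns, mem_latency_ns):
        return l2_hit_rate * l2_latency_ns + (1 - l2_hit_rate) * mem_latency_ns

    # A big 4 MB cache only pays off while its hit rate stays high:
    cache_friendly = avg_access_ns(0.98, 5, 100)  # working set fits: ~6.9 ns
    cache_hostile  = avg_access_ns(0.80, 5, 100)  # memory-intensive: ~24.0 ns
    low_latency    = avg_access_ns(0.80, 4, 50)   # misses hurt half as much: ~13.2 ns

    print(cache_friendly, cache_hostile, low_latency)
    ```

    Under these assumed numbers, the lower-latency memory system wins whenever the working set spills out of cache, which matches the Apache result the comment cites.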

    Regarding your second question, a dual dual-core Opteron 285 system (4 cores) with 8 GB of RAM and a high-end mobo would cost about $3200 (1062*2 + 8*70 + 500). Add a $400 chassis and two $150 hard disks and you end up with a total of about $3900 (compared to your $5K). So yes, the Opteron is definitely cheaper. What are the exact tech specs of your system?
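    The arithmetic in the parent's parenthetical checks out; spelled out with the prices as quoted (USD):

    ```python
    # Build-cost breakdown using the prices quoted in the comment above.
    cpus    = 2 * 1062   # two dual-core Opteron 285s
    ram     = 8 * 70     # 8 GB of RAM
    mobo    = 500        # high-end motherboard
    chassis = 400
    disks   = 2 * 150    # two hard disks

    subtotal = cpus + ram + mobo            # the quoted "~$3200"
    total    = subtotal + chassis + disks   # the quoted "~$3900"
    print(subtotal, total)  # 3184 3884
    ```

    So the rounded $3200 and $3900 figures are consistent with the itemized prices.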

    - great dude
