Intel's Core 2 Desktop Processors Tested 335

Steve Kerrison writes "It's early morning here in the UK, but that doesn't stop us from being around to see the launch of Conroe and friends, Intel's newest desktop chips. Even a $180 Intel CPU can beat an Athlon FX-62 in a number of tests. Now that's bound to get the fanboy blood pumping, right? We've also taken a look at a pre-built system that's powered by the Extreme X6800 CPU, along with an nForce 4 SLI chipset. As you'd expect, it's quick."
  • Loss Leader? (Score:5, Insightful)

    by Breakfast Pants ( 323698 ) on Friday July 14, 2006 @02:04AM (#15717099) Journal
    Gotta wonder if Intel can legitimately deliver at this price or if they're going with loss-leader tactics to try and regain market share.
  • Re:Loss Leader? (Score:4, Insightful)

    by Anonymous Coward on Friday July 14, 2006 @02:17AM (#15717129)
    Loss or not, they at least gave stockholders a little more confidence than the slaughtering over the last two years. This is good news even if they take a loss.

  • by doormat ( 63648 ) on Friday July 14, 2006 @02:20AM (#15717138) Homepage Journal
    HardOCP's review shows one extremely interesting thing...

    If you have a single high-end card (7900), there isn't a whole lot of difference between the FX-62 and the X6800, or even the E6700. Most games are GPU-limited now, and will be until the next generation of cards is released in 3+ months (FEAR is really the only exception to this).

    They didn't run any benchmarks at 800x600 or whatever, because those results are more or less useless. Who spends $500+ on a processor and $500 on a video card and then plays games at that low a resolution?

    What matters if you're going to buy a new rig now is the price/performance ratio. If you're a midrange gamer, your best bet is probably an E6600 and a $250 video card. Or an AM2 setup; it all depends on what prices AMD cuts its X2 line to. We'll find out closer to the end of this month what the deal is. Come August 1st we'll have a very good idea of which platform is on top (a rough price/performance sketch follows below).
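    A back-of-the-envelope way to frame that price/performance comparison, in Python. Every part name, price, and frame rate below is a hypothetical placeholder, not a figure from the reviews; the point is only how the ratio is computed.

        # Rough price/performance: average FPS per dollar spent on CPU + GPU.
        # All prices and frame rates are assumed for illustration, not measured data.
        builds = {
            "E6600 + $250 GPU":           {"cpu": 320, "gpu": 250, "avg_fps": 78},
            "X6800 + $500 GPU":           {"cpu": 999, "gpu": 500, "avg_fps": 85},
            "X2 (after cuts) + $250 GPU": {"cpu": 240, "gpu": 250, "avg_fps": 74},
        }

        for name, b in builds.items():
            cost = b["cpu"] + b["gpu"]
            ratio = b["avg_fps"] / cost * 100
            print(f"{name:30s} ${cost:4d}  {b['avg_fps']} fps  {ratio:.1f} fps per $100")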
  • by Anonymous Coward on Friday July 14, 2006 @02:28AM (#15717152)
    Playing games at high resolution is limited by the graphics card; the CPU plays a minor role. Film at 11.

    However, if you _do_ have tasks that are heavy on the CPU and not the GPU, Core 2 owns AMD.

    So what exactly is hype about a CPU that's 1) cheaper, 2) plays games just as well, and 3) can handle the occasional DivX rip or MP3 conversion much faster?
  • by CCFreak2K ( 930973 ) on Friday July 14, 2006 @02:44AM (#15717191) Homepage Journal
    Even if it's just a shot at getting market share back, the fact that great parts like this are being sold at lower prices only means good things for the consumer. This, for example, is GREAT for me as a system builder, because everything besides the Pentium D 805 was expensive. Now, with something like this, I can offer a (possibly) better CPU for not much more.

    More good stuff is coming from both camps, I predict.
  • by Heir Of The Mess ( 939658 ) on Friday July 14, 2006 @02:47AM (#15717197)
    I've been holding my breath waiting for AMD to respond. Any time now would be a good time for them to announce how they are going to counter Core 2. But the reality might be that they need to recoup the cost of developing the AM2 platform before they can make any changes.

    I think the competition has been good, but if Intel returns wearing the performance crown, then I think there is a real potential that the CPU market will be dominated by Intel more than it ever has been before, with consoles being the main holdout. If these benchmarks are true, the introduction of Core 2 will be a real turning point, I think. Keep in mind that these speeds are introductory, and that in the past Intel hasn't had much trouble squeezing higher performance out of the same architecture.

  • by extra the woos ( 601736 ) on Friday July 14, 2006 @02:58AM (#15717221)
    What the benchmarks mean is that if you do a lot of media encoding, compiling, etc., you would probably benefit from Conroe. HOWEVER, if you play games, regardless of whether you are currently on an AMD or Intel system, and if your system is pretty new, do not upgrade at this time: you are GPU-limited, not CPU-limited. Basically, Conroe gives large performance gains in CPU-bound applications and little gain in GPU-bound applications, obviously.

    This is good for Intel. My systems for the past 7 years or so have been AMD. My next one very well may not be.

    The good news for everyone is that AMD is now the underdog again. Remember what happened last time they were the underdog? We got the Athlon, and the CPU speed wars went into a frenzy. For the last several years (5 or so) Intel has been sucking; their chips have not been performance-competitive. Clock speeds in both camps have stagnated, and AMD chip prices have gone way up compared to how they used to be. This is good news: AMD will go into overdrive developing their next-gen chips, AMD chips will become dirt cheap again, and we'll see a new performance war. This is something I've been waiting for anxiously for a few years. I am very excited.

    Another thing is that the new Intel chips draw much less power than the old ones (thank god).
  • by YesIAmAScript ( 886271 ) on Friday July 14, 2006 @03:02AM (#15717228)
    (Cribbed from my post in another place.) HardOCP are complete AMD whores here.

    They do the power tests with power saving settings turned off. This gives AMD the edge at idle, mostly due to a lower transistor count. As other sites have shown, turning the power saving settings on (as one would expect) puts Intel far out front at idle.

    How do they end that article?

    " I would highly suggest keeping your eyes on AMD low wattage / energy efficient processors for those projects that require a noiseless solution."

    So they make Intel look worse than it is, and yet Intel still wins under load. What's the takeaway? Buy AMD.

    In the gaming tests, after the Intel chips get done smoking the FX-62, what do they say?

    "It is very interesting that in all of our testing, both "what is playable" testing and "apples-to-apples" testing, the Intel Core 2 Extreme X6800 and Intel Core 2 Duo E6700 are very close in performance. In fact, in some games they are dead even. The price difference between the two is very extreme with the Core 2 Extreme X6800 costing $999 and the Core 2 Duo E6700 at $530. Does it look like the price is justified between the two for gaming? We can safely say "no" as far as gaming goes with this gameplay testing we have performed."

    Then, when speaking of AMD, do they mention that even the E6700 ($530) beat the FX-62, and that the FX-62 costs over $800? Nope.

    "As for the AMD Athlon 64 FX-62, all of our testing shows that it does trail the two new Intel CPUs in gameplay performance. So, if you wanted to point one out as being a "winner" then for sure it is the new Intel Core 2 X6800 and E6700. But, if you look at the amount of difference between the AMD and Intel CPUs, you will see that it isn't enough to amount to anything. The only game that we saw any real-world difference in was Oblivion, and even that was tiny. A little overclocking would clear that difference up."

    Any mention of overclocking headroom and how well the Core 2 Duo overclocks (usually much better than an FX-62)? Nope.

    What's their takeaway from the gaming section where a $530 Intel beats out AMD's fastest chip (at $800)?

    "We have proven here that the flurry of canned benchmarks based on timedemos showing huge gains with Core 2 processors are virtually worthless in rating the true gaming performance of these processors today. The fact of the matter is that real-world gaming performance today greatly lies at the feet of your video card. Almost none of today's games are performance limited by your CPU. Maybe that will change, but given the trends, it is not likely."

    and then

    "Lastly, I would advise everyone that is thinking of rushing out and purchasing their latest upgrade that we are sure to see HUGE pricing slashes out of AMD before the end of the month."

    Way to go HardOCP. Rig your tests, ignore Intel victories and make your summary "buy AMD".

    You have zero credibility, HardOCP.

    Also, you used "bullshot" wrong. A bullshot is a faked promotional screenshot for a game (the kind EA puts out). It doesn't fit here.
  • Re:Loss Leader? (Score:4, Insightful)

    by Anonymous Coward on Friday July 14, 2006 @03:09AM (#15717247)
    Bear in mind that, for the first time ever, the silicon for their laptop, desktop and server cores can come off the same wafer, and I'd say they're onto a cost-saving exercise here. Heck, even the Core Solo is a Duo with one core disabled.

    Thing is, these are miles ahead of AMD's current crop; Intel could double the prices on them and they'd still be good value for money. If it's a good product, market share will come without trying.
  • Look at his username. I don't think English is his primary language.

    I think what he was trying to say is that Core 2 isn't a magic processor that just makes everything faster, but can also be leveraged by programmers for even greater gains with some optimization. Of course, this isn't different from any other processor, and I could be completely wrong about what he was saying.
  • Energy efficiency (Score:5, Insightful)

    by kjart ( 941720 ) on Friday July 14, 2006 @03:40AM (#15717313)
    Everything else aside, that was the one thing that interested me most about the review: the new Conroes will allegedly consume about half as much power as current desktop chips. Why is this important? Well, if such gains can be made on the desktop, I'm _really_ looking forward to the laptop chips. Maybe the 7-hour battery life claimed by laptop manufacturers will actually be accurate in the near future (a toy estimate follows below).
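    As a rough illustration of how power draw translates into that battery-life claim, here is a toy Python calculation. The battery capacity and average platform power figures are assumptions picked purely for illustration, not measurements from the review.

        # Toy battery-life estimate: hours ~= battery watt-hours / average platform draw.
        # Capacity and power numbers below are assumed, not measured.
        battery_wh = 60.0  # assumed laptop battery capacity in watt-hours

        for label, avg_watts in [("today's platform", 14.0), ("CPU power halved", 8.5)]:
            hours = battery_wh / avg_watts
            print(f"{label:18s} ~{hours:.1f} h")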
  • by treak007 ( 985345 ) on Friday July 14, 2006 @03:42AM (#15717319)
    Even though the benchmarks show the Intel Conroe beating the AMD FX, the real question remains value. Would you honestly notice a difference when both processors deliver frame rates this close? Maybe the Conroe gets 20 more fps, but is that worth the extra money? AMD is notorious for being less expensive than Intel. Either way you could run the top-of-the-line games; it's just a question of which gives you more bang for your buck. If AMD sets a significantly competitive price, then it really doesn't matter how well the processors perform: people will choose whichever one provides the most performance per dollar. While the Conroe beats the FX in the performance battle, it has not yet won the war. Let the price battle begin.
  • Sadly.... (Score:1, Insightful)

    by Khyber ( 864651 ) <techkitsune@gmail.com> on Friday July 14, 2006 @04:15AM (#15717388) Homepage Journal
    There are Athlon FX 64-bit processors that still beat this benchmark by 2-3 thousand points on CPU-Z. I'm neither an Intel nor an AMD fanboy; I'm a Cyrix fan from beginning to end, and if you can't understand why, then compare Unreal Tournament on a PII MMX 233 against my Cyrix MII-233MX with the same RAM (type and amount), video card and sound card. Hint: the Cyrix beat the ever-loving shit out of the PII by a blazing 25 FPS. I don't use large abstract numbers, I use real-life performance tests and observable results, like the rest of you overclocking geeks should.

    So my question, comparing my personal real-life observations against Intel's and AMD's claims, is: who is bothering to rate their processors by what REALLY counts? By that I mean MIPS (millions of instructions per second) rather than GHz. Actually, I'd like to see MIPC, millions of instructions per cycle. I don't care how many times a chip can do the same thing over again; I care how much it gets done per cycle, and whether people use more efficient algorithms (like multiplying by .1 instead of dividing by 10), so we can have a far more accurate measure of performance (a rough sketch of that arithmetic follows below).

    Benchmarks, AFAIC (as far as I'm concerned), are just an e-penis-waggling contest. Give me something I consider real results if you want to market your processor to someone with even a modicum of a clue. I admittedly have a very low clue about processors, but I'm still not a n00b. I've played with processors since I owned an Intel 8088 Packard Bell that I ran Jill of the Jungle and a few BBS door games on; funnily enough, I was only 7-8 at the time and had full command of DOS and most of the standard BBS door-game system. Yep, that's sad, and my father was around computers FAR more than I ever was at that age.
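    To put numbers on the MIPS/IPC idea, here is a minimal Python sketch. The instruction and cycle counts are invented placeholders, and the last two lines only demonstrate a caveat to the "multiply by 0.1 instead of dividing by 10" trick: the two operations are not bit-identical in floating point.

        # IPC and MIPS from hypothetical counter readings (numbers are invented).
        instructions_retired = 6.0e9   # assumed instructions over the window
        cycles_elapsed       = 2.4e9   # assumed clock cycles over the same window
        seconds              = 1.0     # assumed wall-clock length of the window

        ipc  = instructions_retired / cycles_elapsed     # instructions per cycle
        mips = instructions_retired / seconds / 1e6      # millions of instructions/sec
        print(f"IPC  = {ipc:.2f}")
        print(f"MIPS = {mips:.0f}")

        # Replacing a divide with a multiply by the reciprocal is not exact in floats:
        x = 3.0
        print(x / 10, x * 0.1, x / 10 == x * 0.1)   # 0.3 0.30000000000000004 False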
  • by rikkus-x ( 526844 ) <rik@rikkus.info> on Friday July 14, 2006 @06:22AM (#15717607) Homepage
    Personally, I don't care about processors costing USD 400 or gaming performance, where CPU doesn't matter too much anyway. Are there any comparisons of the cheapest Core 2 processors with similarly priced AMDs?
  • Not yet available (Score:3, Insightful)

    by Hackeron ( 704093 ) on Friday July 14, 2006 @07:15AM (#15717688) Journal
    Has anyone noticed the processor is not yet available for sale and won't be available for a while? - I was very impressed by the benchmarks until I tried to find it for sale and saw that expected street prices will be far higher than those listed in the review sites and in fact will rival AMD prices.
  • by SirWinston ( 54399 ) on Friday July 14, 2006 @07:51AM (#15717779)
    I stopped reading [H]ardOCP soon after they switched from "real" benchmarks (equal settings for both machines) to their oh-so-flawed allegedly-but-not-"real-world" tests (different settings for each platform, to get a similar framerate). It's bullshit and tells me *nothing* except what I'd get if I copied their settings directly--if I prefer to play at different resolutions, and/or different levels of AA/AF/etc., their reviews become worthless compared to traditional ones which give head-to-head benchmarks with more datapoints for me to extrapolate from.

    [H]ardOCP, in my mind, is for stupid or lazy gamers: they package their benchmarks and corresponding reviews for audiences who can't or won't draw good conclusions from traditional, datapoint-intensive head-to-head benchmarks and reviews. Not to mention the great potential for abuse to twist the results, which is what happened here: when the playing field is unlevel at [H]ardOCP's discretion, who wins is up to them rather than the relative merits of the products.

    The fact is, Intel has a lineup of real winners here but [H]ardOCP made the playing field unlevel to avoid acknowledging it. Sad fanboyism.
  • by moosesocks ( 264553 ) on Friday July 14, 2006 @09:01AM (#15718082) Homepage
    Given that I stopped reading this sort of nonsense long ago, I was surprised to see that people are *still* using low-resolution games as a benchmark.

    Games have been predominantly GPU-limited for the past 6 years (or, in layman's terms, as long as GPUs have existed in the form they do today, the nVidia GeForce being the first such chip). It made no sense in 1999 to use Quake 3 running at 640x480 as a benchmark, because the game looked a *lot* better at higher resolutions and the hardware was able to cope with it.

    This isn't even taking into account the fact that virtually nobody has a monitor capable of displaying more than 100 FPS, nor could one perceptually distinguish any frame rates above 50fps -- most people still run their monitors at 60Hz.

    Don't try to fake a real-world benchmark. If the only way to see any difference in game performance when comparing across CPUs is to lower the resolution to the rock-bottom value, then the conclusion should be that the CPU is not a contributing factor to game performance. ATI got into a lot of trouble a few years back by optimizing their drivers to 'fake' Quake 3 framerates over 120 fps (or some really high number like that). Nobody noticed for *months* that the frame rates were artificially inflated.

    Gamers are quickly approaching audiophiles in my book in terms of their level of insanity.
  • Re:Loss Leader? (Score:3, Insightful)

    by kill-1 ( 36256 ) on Friday July 14, 2006 @11:53AM (#15719374)
    Don't forget that Intel uses a 65nm process and can fit almost twice as many dies on a wafer as AMD. AMD's SOI process is more expensive, too.

    I think the new Intel CPUs are priced very aggressively, but Intel is still making money on them. And they put a lot of pressure on AMD (rough dies-per-wafer arithmetic below).
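    A back-of-the-envelope sketch of where that "almost 2x" could come from, using a common dies-per-wafer approximation in Python. The die areas are assumed for illustration; the only real inputs are a 300 mm wafer and the rough area scaling from 90nm to 65nm.

        import math

        def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
            """Common approximation: usable dies on a round wafer, ignoring defects."""
            d, s = wafer_diameter_mm, die_area_mm2
            return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

        # Assumed die area for a 90nm part, and the same design shrunk to 65nm
        # (area scales roughly with (65/90)^2, i.e. a bit over half the size).
        area_90nm = 180.0
        area_65nm = area_90nm * (65 / 90) ** 2

        print("90nm:", dies_per_wafer(300, area_90nm), "dies per 300 mm wafer")
        print("65nm:", dies_per_wafer(300, area_65nm), "dies per 300 mm wafer")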
  • MythTV (Score:3, Insightful)

    by leoxx ( 992 ) on Friday July 14, 2006 @12:26PM (#15719667) Homepage Journal
    My MythTV box spends MOST of its time doing exactly that. If these chips are as fast, cheap and cool (and therefore QUIET) as they appear to be, my MythTV box will be running Conroe by the end of the year.
  • Unless you are doing simulations or rendering or something I just can't see the minor increase as worth it.

    I agree with your post, but this part stuck out. Why is it that, on a Slashdot geek site, nobody ever relates CPU performance to programming and desktop/workstation use? Photoshop, servers and games seem to be the main reasons people justify the highest-performance machines. But ever since I was in high school (in the 80s), I've always overtaxed my machine... I've never had a machine and said "for what I do, this is sufficient", and it never mattered what I was doing. Be it running Windows (in the 90s) and experiencing the dreaded context-switch pause, be it Linux and running Evolution + spam filtering + "grep -r" on something, I've always gone "man, it sucks working on this machine at work versus the machine I have at home collecting dust." And my work machine has never been a slouch (currently an AMD 2800+ Barton with 1.5 GB of RAM).

    Now, if a Linux programmer uses vi all day and ssh's to a beefy build machine, then I can understand them not needing a beefier desktop/workstation. But I used to use xemacs, and THAT was slow on many machines in the late 90s. Now I use a Java editor named IDEA (a commercial counterpart to Eclipse), and I long for every ounce of horsepower I can muster. With intelligent real-time code analysis, there is no longer such a thing as idle time on the workstation. Then add the fact that you can put your application server on your desktop, along with a full project-management suite (Evolution, time-tracking tools, dozens of terminal windows, dozens of browser windows...). Waiting 20 seconds for the mouse to regain focus because I'm deploying a new application is just painful. Now do this 5 times in quick succession because you're iteratively debugging something. Multi-CPU is a godsend for this type of environment.

    My point is that I almost never hear people referencing this kind of multi-application work environment as a justification for cheap-but-beefy machines (except for the ubiquitous reference to Photoshop, which can kiss my *@@).
