Intel Core 2 Extreme QX6700 Quad-Core Benchmarks
Slimpickin writes "Intel gave select members of the press access to quad-core Kentsfield-based systems at IDF, and the embargo has now been lifted on a preview of performance numbers for the new 2.66GHz Core 2 Extreme QX6700 processor. HotHardware showcases Intel quad-core performance from a few different angles, from digital video processing and encoding to 3D modeling and rendering, along with a few of the more standard benchmarks. Depending on the application, the new Intel quad-core puts up numbers approaching double the performance of a system based on a 2.93GHz Core 2 Duo. Core 2 Quad will also drop right into existing motherboards that are compatible with the Core 2 processor line."
Already tested: Two Quad-Cores in a Mac Pro, makin (Score:5, Informative)
A few weeks ago Anandtech already tried plugging two 2.4GHz quad-core Cloverton (Xeon) samples into the new Mac Pro [anandtech.com], which features two LGA-771 sockets. Worked like a charm: a nice eight-core machine. And since dual-socket motherboards are quite expensive, the Mac Pro might even be a cheap way to get one.
Re:Already tested: Two Quad-Cores in a Mac Pro, ma (Score:2)
Me, I might try it anyway, but if I do, I definitely won't shell out $249 for an AppleCare warranty I'll be voiding soon after purchase.
Re: (Score:2)
The only way you void your warranty is if, in upgrading your computer, you damage it or the parts you add damage it. Any damage you do is not covered, but everything else should still be covered.
Re: (Score:2)
But what I'm really saying is that you don't automatically void your warranty if you upgrade your computer.
Re: (Score:2)
And so the warrantor will declare the damage was caused by the non-factory parts you added and not investigate further.
What I want to know is if the two firmware updates Apple pushed to fix Boot Camp problems also blocked this type of upgrade, like they did to the Blue & White G3s to prevent their easy up
Re: (Score:3, Insightful)
My interpretation of Magnuson-Moss is that it prohibits bundling, like, say, Apple requiring you to purchase Apple-branded CPUs to upgrade. Pulling out your own CPU is probably still a warranty killer. They just can't automatically call it void if the problem is obviously unrelated and a defect in the merchandise, like oh, the
Re: (Score:2)
Re: (Score:2)
Nah, you just buy a 1U case, dual woodcrest mobo, and build yourself a *buhlazing* little server. Rent a partial rack somewhere, plug in a 10 Mbps line, and serve up a million pageviews a day without breaking a sweat.
- Greg
Re:Already tested: Two Quad-Cores in a Mac Pro, ma (Score:2)
Re: (Score:2)
Re: (Score:2)
Although the mobile Core 2 Duo and the desktop Core 2 Duo are identical in most ways, they do not use the same socket.
Re: (Score:2)
Well, All I have to say is... (Score:4, Funny)
Re: (Score:2)
(Q3's "quad damage" really only tripled the damage, if I recall.)
Summary for the lazy: (Score:2)
Intel FSB vs. AMD Hypertransport? (Score:3, Interesting)
I know on the face of it this chip is a kludge (two dual-cores connected to one FSB in a single-socket package, as opposed to AMD's forthcoming 'true' quad-core CPU), but if it performs well, so what?
Re: (Score:2)
I just wonder whether, in place of the fat L2/L3 cache, multimedia extensions, x64 and legacy components, we'd be better off just getting MORE cores. The UltraSPARC T1 has good performance figures with Oracle, and the Cell sounds like enough of a workhorse for IBM to releas
Re: (Score:2)
For servers? Sure. For desktops? Probably not. Server tasks are typically (though not always) more parallelizable. That doesn't mean desktop apps can't be made more parallel, but it's harder and it will take longer. Then again, maybe all these multicores coming out will provide the motivation to develop new tools that make threading easier.
Re:Intel FSB vs. AMD Hypertransport? (Score:5, Interesting)
It can equally well be argued that AMD's solution is a "kludge". Intel has four processors arranged in two pairs: within each pair the processors are connected by shared L2 cache, but the pairs are connected by the FSB. AMD, on the other hand, has all four processors communicating over HyperTransport links. Shared L2 is clearly better than HyperTransport links, and HyperTransport links are better than Intel's current FSB.
The physical packaging simply doesn't tell you much about the quality of the interconnect. Sure, it is harder to make a truly great interconnect with separate packages, but looking directly at the interconnect tells a much more accurate story.
Either way, it is not all that great a surprise that the dual-FSB design of modern Intel platforms manages four cores decently, but yes, AMD probably still has a clear edge on 8-core systems.
Re: (Score:2, Informative)
Re: (Score:3, Informative)
Calling either solution a "kludge" is of course wrong. However, just running everything across HyperTransport is an obviously worse approach for core-to-core communication than shared L2. The trick with shared cache, though, is that it stops making sense to talk about the cores "sharing access to main memory", since any memory fetches go into the shared cache. Plus, Intel isn't stupid: their current platform has two separate front-side buses, so there is quite a bit of bandwidth to work with.
On the ot
Re: (Score:3, Insightful)
The real question is how important is core-to-core communication versus core-to-memory for "regular" workloads?
My gut says that for consumer-level workloads, memory is more important than inter-core communication, because most consumer-level parallel processing is of the "embarrassingly parallel" type - specifically codec processing: video, audio and "Photoshop plugin" types.
My
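For illustration, a minimal C/pthreads sketch of that "embarrassingly parallel" pattern (the frame array, process_frame() and the thread count are all made up for this example): each thread works on its own slice of frames and never needs to talk to the others, so memory bandwidth, not inter-core communication, is what gets exercised.

#include <pthread.h>
#include <stdio.h>

#define NFRAMES  1024
#define NTHREADS 4          /* e.g. one per core on a quad-core */

static int frames[NFRAMES]; /* stand-in for real frame buffers */

/* Hypothetical per-frame work: no shared state, so no locking needed. */
static void process_frame(int i) { frames[i] *= 2; }

struct slice { int begin, end; };

static void *worker(void *arg)
{
    struct slice *s = arg;
    for (int i = s->begin; i < s->end; i++)
        process_frame(i);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice slices[NTHREADS];
    int per = NFRAMES / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        slices[t].begin = t * per;
        slices[t].end   = (t == NTHREADS - 1) ? NFRAMES : (t + 1) * per;
        pthread_create(&tid[t], NULL, worker, &slices[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    printf("processed %d frames on %d threads\n", NFRAMES, NTHREADS);
    return 0;
}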
Re: (Score:2)
That does not change the fact that shared cache is a strictly good thing, though. There are many other advantages. Sharing cache means more cache overall (since no data needs to be duplicated when both processors need it, a huge saving for common workloads), and more cache means far fewer memory accesses. Shared cache also means that such common data only needs to be read from memory once, whereas the reads would have to be duplicated when the cache is not shared.
On the other hand, the only thing I replied to w
Re: (Score:2)
AMD has a better solution for that at the moment, but it is not due to some kind of trade-off; they would be better off with shared cache and HyperTransport.
Oh, and one more thing: As has already been pointed out, this is indeed what will happen with the K8L.
Re: (Score:2)
I think you've just ignored everything the poster said - that the common case for desktop use doesn't need to share data, and that shared cache has costs that are effectively the result of each core stepping on the other core's memory accesses: false cache-line sharing, bus contention, and things he didn't mention, like the fact that the bigger a single cache is, the slower it is to access.
I didn't read the poster that way at all, and if that is what the poster meant to say I simply disagree. I read his post
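Since false cache-line sharing came up: a small C sketch of what it looks like in practice (the 64-byte line size and the counter layout are assumptions). Two threads that only ever touch "their own" counter still slow each other down when the counters sit in the same cache line; padding each counter to its own line is the usual fix.

#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000UL

/* Two counters packed into one cache line: writes from different cores
   keep invalidating each other's copy of the line ("false sharing"). */
struct both_on_one_line { volatile unsigned long a, b; } packed_ctrs;

/* Same counters, each padded out to its own (typically 64-byte) line.
   Swap this in for packed_ctrs when timing to see the difference. */
struct one_per_line {
    volatile unsigned long a; char pad_a[64 - sizeof(unsigned long)];
    volatile unsigned long b; char pad_b[64 - sizeof(unsigned long)];
} padded_ctrs;

static void *bump_a(void *x) { (void)x; for (unsigned long i = 0; i < ITERS; i++) packed_ctrs.a++; return NULL; }
static void *bump_b(void *x) { (void)x; for (unsigned long i = 0; i < ITERS; i++) packed_ctrs.b++; return NULL; }

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump_a, NULL);
    pthread_create(&t2, NULL, bump_b, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a=%lu b=%lu  (time this, then retry using padded_ctrs)\n",
           packed_ctrs.a, packed_ctrs.b);
    return 0;
}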
Re: (Score:2)
Re: (Score:2)
Are you ASKING the chips to fight? Sheesh.
Re: (Score:2)
Re: (Score:2)
I care. Computers can be used for different things. I don't want to have a dedicated computer for gaming, a dedicated computer for coding, and a dedicated computer for graphics/video. If one computer can adequately perform to my satisfaction in all these facets, what's the problem? As long as unacceptable sacrifices are not being made to accommodate multiple uses, I'm fine with it.
At a previous job, I had a nice dual Xeon machine. It took whatever I could throw at it. It had a "professional" graphics
Re: (Score:2)
Re: (Score:2)
I think there are folks in the video game and computer animation industries (TV and film) who would disagree with you.
Re: (Score:2)
Re: (Score:2)
The chip is not a kludge. It may seem that the right way would have been to build the four cores into one die instead of two, but according to some information Intel accidentally let slip during IDF (in German) [heise.de], given the yield they get for Core 2 chips, a monolithic 4-core die would cost $36.13 compared to $29.37 for two 2-core dies. So this might simply be driven by economics until the process matures and yields improve.
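The mechanism behind those numbers is easy to sketch. Here's a back-of-the-envelope C example using a simple Poisson yield model (all of the defect-density, die-size and cost figures below are invented for illustration and are not Intel's): the bigger die loses more to defects, so its cost per good part grows faster than its area.

#include <math.h>   /* link with -lm */
#include <stdio.h>

int main(void)
{
    /* Invented numbers for illustration only. Poisson yield: Y = exp(-D * A). */
    double defects_per_cm2 = 0.5;
    double small_die_cm2   = 1.4;   /* one dual-core die (made up) */
    double big_die_cm2     = 2.8;   /* one quad-core die (made up) */
    double cost_per_cm2    = 8.0;   /* processed silicon (made up) */

    double y_small = exp(-defects_per_cm2 * small_die_cm2);
    double y_big   = exp(-defects_per_cm2 * big_die_cm2);

    /* Cost of the silicon you pay for, divided by the fraction that works. */
    double two_small = 2.0 * small_die_cm2 * cost_per_cm2 / y_small;
    double one_big   = big_die_cm2 * cost_per_cm2 / y_big;

    printf("two dual-core dies: $%.2f   one quad-core die: $%.2f\n",
           two_small, one_big);
    return 0;
}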
embargo (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It is of course much funnier with the distinctive expressive talents of Wallace Shawn and Mandy Patinkin. When said aloud, you really have to adopt a muddled Spanish accent to get it right.
That movie is right up there with Monty Python and the Holy Grail in terms of belonging to the collective unconscious of geekdom.
Re: (Score:2)
We Americans are not so unworldly as to not know the meaning of "embargo". Hmph.
It's buttered snails, often served in French restaurants.
Re: (Score:2)
Meaning 1 (i.e. "an official ban"), not the "especially" sense, seems to fit.
Time to refine operating systems... (Score:5, Interesting)
With 4 cores out this year, and 80 cores out in 5 years, it's time to rethink multiprocessor operating systems. There needs to be a significant change in the locking and threading metaphors, because 4- and 8-way will be obsolete by this time next year.
Re: (Score:2)
It's the thread libraries and the programming style of larger apps that matter. Say you have a basic Linux system with 4 cores. It will reasonably distribute processes across all 4. The problem is when a single process with multiple short-lived threads takes all the CPU power. Like games. That's where funky compiler directives will be witnessed.
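"Funky compiler directives" presumably means something along the lines of OpenMP; a minimal sketch (the loop body is just placeholder work):

#include <stdio.h>

/* Compile with an OpenMP-aware compiler, e.g. gcc -fopenmp.
   Without the flag the pragma is ignored and the loop runs serially. */
int main(void)
{
    const int n = 1000000;
    static double data[1000000];
    double sum = 0.0;

    /* The directive asks the compiler to split the loop iterations
       across however many cores the runtime finds. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        data[i] = i * 0.5;      /* placeholder per-element work */
        sum += data[i];
    }

    printf("sum = %f\n", sum);
    return 0;
}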
Re: (Score:2)
Re:Time to refine operating systems... (Score:5, Informative)
Yes and no. Programs can split the hard work across several threads and all of those threads will be managed by the task scheduler regardless of how many cores there are. The hard part is making an algorithm that can split the heavy processing work to multiple threads, that threading has to be programmed. If the program has all the hard work in one thread, then it's not going to use more than 100% of one CPU, 50% each of two CPUs, etc.
Re: (Score:3, Informative)
Usually, if an application can split its work up into 2 threads, it can split its work up onto n threads (if it's well designed). This isn't
always the case, but it tends to be. The hard part is breaking an algorithm up into pieces, usually not the number of pieces in particular.
So, for example, it could spread its work over 100 threads on a 1-CPU machine. This would be an inefficient use of threads, if they're all doing work. Usually 1 or 2 threads is ideal for a 1-core machine.
Similarly, i
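A common way to avoid both extremes (hard-coding 2 threads, or spawning 100) is to ask the system how many CPUs are online and size the worker pool from that. A sketch using sysconf() with _SC_NPROCESSORS_ONLN, which is a widely supported extension rather than strict POSIX; the worker body is a placeholder:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    /* placeholder: each worker would pull chunks of work from a shared queue */
    return arg;
}

int main(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu < 1)
        ncpu = 1;                 /* fall back to a single worker */

    pthread_t tid[64];
    long nthreads = ncpu > 64 ? 64 : ncpu;

    for (long t = 0; t < nthreads; t++)
        pthread_create(&tid[t], NULL, worker, NULL);
    for (long t = 0; t < nthreads; t++)
        pthread_join(tid[t], NULL);

    printf("ran %ld workers on %ld online CPUs\n", nthreads, ncpu);
    return 0;
}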
Re: (Score:2)
The OS does not know what kind of interdependencies there are between threads, or even between different processes. One of the most common performance issues is where thread A needs to communicate with thread B, but B is sleeping, so A sits and waits for B to wake up (be scheduled) and respond. By that time, A may have been put to sleep (by the scheduler), so that when B finally responds, A isn't there
Re: (Score:2)
You seem to be implying that threads can't be woken up except at scheduler quantum boundaries. That would be horribly inefficient, so it's a good thing it isn't true... any decent/modern OS can wake up a thread immediately at any time. So even if thread A was put to
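For the curious, this is exactly what condition variables are for; a minimal POSIX sketch in C, where "thread B" signals and "thread A" becomes runnable immediately rather than waiting out a scheduler quantum:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data_ready = 0;

/* "Thread A": sleeps in pthread_cond_wait until B signals. */
static void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!data_ready)
        pthread_cond_wait(&ready, &lock);   /* releases the lock while asleep */
    pthread_mutex_unlock(&lock);
    printf("A: woke up as soon as B signalled\n");
    return NULL;
}

/* "Thread B": produces the data and wakes A immediately. */
static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    data_ready = 1;
    pthread_cond_signal(&ready);            /* wakes A without waiting for a quantum */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, consumer, NULL);
    pthread_create(&b, NULL, producer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}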
Re: (Score:2)
Re: (Score:2)
Obviously, you still need the apps to support it too.
Re: (Score:2)
I believe it's pretty good, and getting better.
Obviously, you still need the apps to support it too.
Not necessarily... one trick Apple is doing is to hide the multithreading code inside its higher-level libraries. As a hypothetical example, say you have a single-threaded application that calls RenderCool3DScene() in Apple's Cool3DGraphics library: on a single-CPU machine, the scene will be rendered in the normal way, but on a multi-CPU machine, Apple'
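A sketch of how a library call like that could hide the threading from a single-threaded caller (RenderCool3DScene and Cool3DGraphics are the poster's hypothetical names; the scene type and row-splitting here are likewise invented):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct scene { int rows; };                 /* hypothetical scene type */

struct job { struct scene *s; int first, last; };

static void *render_rows(void *arg)
{
    struct job *j = arg;
    for (int r = j->first; r < j->last; r++) {
        /* render row r of j->s here (placeholder) */
    }
    return NULL;
}

/* The caller just calls this; whether it runs on 1 thread or N is the
   library's business, decided by how many CPUs are online. */
void RenderCool3DScene(struct scene *s)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu < 1) ncpu = 1;
    if (ncpu > 16) ncpu = 16;

    pthread_t tid[16];
    struct job jobs[16];
    int per = s->rows / (int)ncpu;

    for (int t = 0; t < ncpu; t++) {
        jobs[t].s     = s;
        jobs[t].first = t * per;
        jobs[t].last  = (t == ncpu - 1) ? s->rows : (t + 1) * per;
        pthread_create(&tid[t], NULL, render_rows, &jobs[t]);
    }
    for (int t = 0; t < ncpu; t++)
        pthread_join(tid[t], NULL);
}

int main(void)
{
    struct scene s = { 1080 };
    RenderCool3DScene(&s);                  /* the caller stays single-threaded */
    printf("scene rendered\n");
    return 0;
}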
Re: (Score:2)
Average. Probably at about the same level NT 4.0 and Linux 2.2 were in their day.
OTOH, with the ready availability of multiprocessor machines today, it will likely improve (and already has improved) a lot quicker than they did. I would expect it to be comparable to Windows and Linux by either the next major release, or the one after that.
Re: (Score:2)
Sure .. (Score:2)
As long as you don't mean to suggest it for an office or home use, unless you can also suggest a meaningful Windows / Office / Exchange replacement that won't require retraining a few million people.
Re: (Score:2)
I assume those'll run nearly as well on Solaris as on Linux, being open source.
Re: (Score:3, Insightful)
Proofreading... (Score:1)
The graphs show the Dual Core out-performing the Quad, but the descriptions indicate how much faster the Quad is.
Sigh.. oh well. Moving on..
Re: (Score:2)
The quad-core Core 2 Extreme QX6700 is only showing about a 14% performance advantage over the dual-core X6800 chip in the base CPU test module. We should note that an Athlon 64 FX-62 dual-core processor scores around 5700 in the PCMark05 CPU test module.
The overall score actually shows the QX6700 slightly slower than the dual core Core 2 chip.
Re: (Score:1)
Before the naysayers come out (Score:3, Insightful)
Re: (Score:2)
Oh yeah, a real hotbed of hate for Intel, this is...
Give me a break. AMD, to this day, gets unfairly poor treatment on /. with myths of chip shortages, poor performance, heat problems, crappy motherboards, etc. These myths have been in decline in recent years, but they still persist, even though AMD has been slaughtering Intel until VERY recently.
And your post is even a good examp
Quad Core Gaming (Score:2, Interesting)
Re: (Score:2, Informative)
Re: (Score:2)
Re: (Score:2)
Names (Score:3, Insightful)
So now, not only have they gone back to pointing out the clock speed, they add an NVidia-style product name at the end? Surely there's got to be a simpler way to do this, without even taking into account AMD. I mean, you have:
- Dual Processor Pentium
- Dual Core Pentium D
- Core 2 Solo
- Core 2 Duo
- Core 2 Quad
- Dual Processor Core 2 Quad
Seriously, that's some major word jumble and you haven't even specified anything like clock speed (I know it's not all about clock speed, but uniform naming to differentiate would help).
Re: (Score:2)
I mean, does Intel Core 2 Extreme QX6700 really roll off the tongue so much worse than AMD Athlon 64 3200+ socket 939 (which, if you remember, is important since socket 754s also had a 3200+)?
Re: (Score:2)
Still no good motherboards. (Score:2)
Re: (Score:2)
But PCI-X is a dying standard that is getting replaced by PCI-E x8 and x16 slots. (PCI-E x8 is 2GB/s while PCI-X is only 1.06 GB/s)
There are more and more RAID and other controllers being produced for PCIe, so hopefu
Re: (Score:2)
Did Intel learn *anything* from Java2? (Score:5, Insightful)
I get that they are trying to say "Hey look, it is a totally different architecture!" But calling it Core2 isn't going to do that. People will just end up calling them Dual Core or Quad Core anyways, not Dual Core2 and Quad Core2. It's just going to detract from their branding, not help it.
Re: (Score:2)
Actually, up through Java 2 Standard Edition 1.4, they used "Java 2 Standard Edition 1.x". I'm pretty sure there was and is no "Java 4" product.
The next product version is "Java 5", for which the runtime and development kit
Re: (Score:2)
Ok, maybe they never called Java2 1.4 Java 4, but that's my point: with Java2 1.5, they officially [sun.com] changed this approach. There will never be anything called "1.6" when it is released (well, maybe somewhere buried in the code or in some arcane property)--it will be called Java 6. It's not a guess that they will be Java 6 or Java 7--that's the new naming scheme.
Which is what I was trying to get at--naming something FooBar2 3.4 is absolutely crazy from a branding and public relations perspective. Sure, i
Re: (Score:2)
Intel has different historical problems that they are responding to, particularly, the inability to trademark a number, and the fact that their competitors (including AMD, Cyrix, and others) copied Intel's non-trademarkable numbers to sell competing processors, which is why, starting from the Pentium, Intel hasn't used numbers as the main iden
Re: (Score:2)
I assume you are referring to the inability to trademark 386, 486, etc. But I don't see how that problem has anything to do with them coming up with the brilliant brand name "Core 2". And don't dismiss branding as an issue--the Core 2 line could be the biggest thing to happen to Intel since the original Pentium.
So the
Re: (Score:2)
I'm not dismissing branding. OTOH, it's not much different for the Core than what they did with the Pentium series, except that there are modifiers up and down the line, rather than just at certain places:
Re: (Score:2)
And you think calling it "Java 2 Platform Standard Edition 5.0" is any better?
Re: (Score:3, Insightful)
Core 2 is the second iteration of the "Core" line. There are Core 2 Solo and Core 2 Duo. It's the new "Pentium": stick with a single brand and append numbers to it. It doesn't help that there are *also* Pentium-branded chips still being made.
I agree, though, it's still a mess. I'm pretty experienced, and I get confused by it. Quick, which is newer, a Foofra QXV5024351GL or a Wibble RG188716912B?
It's not Core 2. It's Core 2 Duo (Score:3, Insightful)
Re: (Score:2)
Re: (Score:2)
64-bit benchmarking? (Score:2)
Re: (Score:2)
Who gets them in volume first? (Score:2)
1066MHz FSB? (Score:2)
I'm sure earlier articles were saying it would have a 1333MHz FSB. Has the spec been dropped for some reason, or is it just early models that will have this limitation?
Re:Should have wait... (Score:4, Funny)
How about quad memory capacity? (Score:2)
Re: (Score:2)
But Windows XP is supposed to run just fine on a system with 512mb (ducks, runs for cover).
Seriously, though, AFAIK, the cores don't balkanize the RAM, staking out a 1/cores share and then fencing it off to prevent incursion by other cores, shouting MINE like a 2 year old on steroids. I believe they were taught how to share before Intel sent them off into the big bad world.
- Greg
Re: (Score:3, Interesting)
The apps I run at home (video conversion, maybe a VMware instance) would each use very close to 512MB. I might even run Oblivion on one CPU while turning a DVD converter loose in another process; AFAIK Oblivion will grab whatever it can, so 1GB for that CPU isn't unrealistic.
I can imagine other, more memory intensive apps trying to run in tandem and running into problems if you ha
Re: (Score:2)
Re:How about quad memory capacity? (Score:4, Insightful)
You've got a better imagination than I do, then. I can't see applications forking off copies of themselves and jockeying for position! If you meant "I can see running other, more memory-intensive apps in tandem" then duh, you'll use more memory. Exactly the same as you would on a single-core system. If you've got an app that scales well, it'll still take the same amount of memory no matter how hard it's exercising however many CPUs. Input set sizes are pretty much fixed, whether they're hard-coded or dynamically configured based on system size: your app will allocate the same amount of memory either because it always grabs 32M or because it always grabs 1/16th of total system memory. Number of CPUs has nothing to do with it. Unless there's some software that allocates one thread per CPU, and allocates some fixed buffer size per thread, which now that I mention it actually sounds reasonable for some classes of software, but I've never heard of it actually being done.
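That last pattern does get used (one worker thread per CPU, each with a fixed scratch buffer), and it's about the only common case where memory use scales with core count. A sketch, with an invented 32MB buffer size and error checks omitted:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PER_THREAD_BUF (32 * 1024 * 1024)   /* hypothetical 32MB per worker */

int main(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu < 1) ncpu = 1;

    /* One fixed-size scratch buffer per planned worker thread: memory use
       scales with core count only because the program chooses this layout. */
    void **bufs = malloc(ncpu * sizeof *bufs);
    for (long t = 0; t < ncpu; t++)
        bufs[t] = malloc(PER_THREAD_BUF);

    printf("%ld CPUs -> about %ld MB of scratch buffers\n",
           ncpu, ncpu * (PER_THREAD_BUF / (1024 * 1024)));

    for (long t = 0; t < ncpu; t++)
        free(bufs[t]);
    free(bufs);
    return 0;
}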
My favorite application will (Score:3, Interesting)
Synchronization overhead for us is less than 1% (Score:2)
We did a lot of analysis on the speedup conferred by parallelizing our code. Interestingly enough, for a long while it was actually super-linear! I.e., quadrupling the number of CPUs cut the time to less than a quarter of the original. This was explained by the effect of having a larger total cache size.
Nevertheless, sure, many applications will not benefit from parallelization as much as ours. Neural networks are naturally parallelizable.
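To put numbers on "super-linear" (the timings below are invented, not the poster's): speedup is T1/TN, so anything above N means each CPU is doing its share of the work faster than before, which is what a working set that now fits in the combined caches will buy you.

#include <stdio.h>

int main(void)
{
    /* Invented timings: 1 CPU with only part of the working set cached vs.
       4 CPUs whose combined cache holds the whole working set. */
    double t1 = 400.0;              /* seconds on one CPU   */
    double t4 = 85.0;               /* seconds on four CPUs */

    double speedup    = t1 / t4;    /* ~4.7x: more than the 4x from CPUs alone */
    double efficiency = speedup / 4.0;

    printf("speedup %.2fx, efficiency %.0f%%\n", speedup, efficiency * 100.0);
    return 0;
}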
Re: (Score:2)
Lots of people will find 4 cores very freeing and decide to run even more apps at once than they used to. That 2 gigs of RAM will become very crowded; that's why lots of motherboards support more than 2 gigs of RAM.
Re: (Score:2)
Next to it will be a shop selling computers with slightly used parts.
(3) profit!!!
Re: (Score:3, Informative)
I'm reminded of a cartoon I saw years back, where a computer salesman is showing a customer a selection of computers: "Here we have the ones that will be obsolete in 6 months, and over here are the ones that will be obsolete in 9 months."
Thing is, though Intel is releasing a consumer-grade quad soon, they're only releasing the "Extreme" ver
Re:ExtremeTech has more benchmarks (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
That's a dual-core Opteron and a single-core Opteron in a tri-core system. Highly unstable, but very close to working, and the cores in the processors they used weren't even identical.
A Core 2 Duo 2.66GHz and a Core 2 Quad 2.66GHz would have six identical cores, with the exact same clock speed, bus speed, and instruction set. I'd really like to see so
Re: (Score:2)