Intel's Quad Core CPU Reviewed

Gr8Apes is one of many to let us know that Tom's Hardware Guide has posted a review of Intel's new Kentsfield quad core processor. From the article: "Even expert opinions are deeply divided, ranging from 'more cores are absolutely necessary' to 'why do I need something more than my five-year-old PC system?' Although the Core 2 quad-core processors are not expected to hit retail channels before October, Tom's Hardware Guide had the opportunity to examine several Core 2 Quadro models in the test labs. We would like to make it clear that these samples were not provided by Intel."
  • by siewsk ( 603269 ) on Tuesday September 12, 2006 @01:26AM (#16086772)
    It's the bandwidth, stupid! It doesn't matter how fast the CPU is if it's bandwidth limited.
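
    A minimal sketch of the point, in C++ (not the actual STREAM benchmark; array size and loop are illustrative): a triad loop streams three large arrays through memory, so once DRAM bandwidth is saturated, a faster core barely moves the needle.

        // Bandwidth-bound triad: 2 loads + 1 store per element, with arrays
        // sized to overflow any cache so every element comes from main memory.
        #include <chrono>
        #include <cstdio>
        #include <vector>

        int main() {
            const size_t n = 1 << 24;   // 16M doubles = 128 MB per array
            std::vector<double> a(n), b(n, 1.0), c(n, 2.0);

            auto t0 = std::chrono::steady_clock::now();
            for (size_t i = 0; i < n; ++i)
                a[i] = b[i] + 3.0 * c[i];
            auto t1 = std::chrono::steady_clock::now();

            double secs = std::chrono::duration<double>(t1 - t0).count();
            // printing a[n/2] forces the result to be used, so the loop isn't elided
            std::printf("check=%f %.2f GB/s\n", a[n / 2],
                        3.0 * n * sizeof(double) / secs / 1e9);
        }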
  • by savuporo ( 658486 ) on Tuesday September 12, 2006 @01:55AM (#16086847)
    Coincidentally, Gamasutra just ran two nice feature articles on rearchitecting game engine flow to parallelize tasks so that multiple cores can actually be exploited, using OpenMP:
    "Multithreaded Game Engine Architectures"
    http://gamasutra.com/features/20060906/monkkonen_01.shtml [gamasutra.com]
    "Multi-Threaded Terrain Smoothing"
    http://gamasutra.com/features/20060531/gruen_02.shtml [gamasutra.com]
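
    A minimal sketch of the data-parallel pattern those articles describe, assuming OpenMP (the function and buffer names are illustrative, not taken from the articles):

        // Out-of-place 3x3 terrain smoothing: rows are independent, so OpenMP
        // can hand them to different cores. Compile with -fopenmp (gcc/clang).
        #include <vector>

        void smooth_terrain(const std::vector<float>& in, std::vector<float>& out,
                            int w, int h) {
            #pragma omp parallel for   // each iteration writes a disjoint output row
            for (int y = 1; y < h - 1; ++y)
                for (int x = 1; x < w - 1; ++x) {
                    float sum = 0.0f;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            sum += in[(y + dy) * w + (x + dx)];
                    out[y * w + x] = sum / 9.0f;   // 3x3 box filter
                }
        }

        int main() {
            int w = 512, h = 512;
            std::vector<float> in(w * h, 1.0f), out(w * h, 0.0f);
            smooth_terrain(in, out, w, h);
        }

    Writing into a separate output buffer is what keeps the loop race-free: no thread ever reads a value another thread is writing.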
  • by convolvatron ( 176505 ) on Tuesday September 12, 2006 @02:09AM (#16086882)
    Of course you're right to a large degree. Global memory bandwidth
    is the bottleneck of the day, and given coherency, it's not trivial to
    architect around. The parent may have been a little terse, but
    as you point out, overall throughput doesn't go up if all the
    cores are too starved to issue.

    However, the memory latency picture isn't changing very much, and the
    most compelling way to hide it on general-purpose machines is
    thread parallelism (ignore vectors for a moment; they're kind of a special
    case of the same thing).

    This is what makes the multicore picture interesting. Assuming a workload
    that can exploit it, you can really turn the scale knob pretty far up.
    Unfortunately the whole affair pivots on being able to get past the
    crappy heavyweight thread-and-lock model the software people have grown
    so fond of, and the software community isn't particularly light on its
    feet.
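
    A minimal sketch of the alternative that comment gestures at, in modern C++ terms (the function is illustrative): partition the data so each thread owns a disjoint slice and shares nothing until a cheap serial combine at the end, instead of guarding one shared structure with locks.

        // Lock-free-by-construction reduction: one partial slot per thread,
        // so the only synchronisation is the final join.
        #include <cstdio>
        #include <numeric>
        #include <thread>
        #include <vector>

        double parallel_sum(const std::vector<double>& data, unsigned nthreads) {
            std::vector<double> partial(nthreads, 0.0);
            std::vector<std::thread> pool;
            size_t chunk = data.size() / nthreads;
            for (unsigned t = 0; t < nthreads; ++t) {
                size_t lo = t * chunk;
                size_t hi = (t + 1 == nthreads) ? data.size() : lo + chunk;
                pool.emplace_back([&, t, lo, hi] {
                    partial[t] = std::accumulate(data.begin() + lo,
                                                 data.begin() + hi, 0.0);
                });
            }
            for (auto& th : pool) th.join();
            return std::accumulate(partial.begin(), partial.end(), 0.0);
        }

        int main() {
            std::vector<double> v(1u << 20, 0.5);
            std::printf("%f\n", parallel_sum(v, 4));
        }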

  • Re:Experts? (Score:5, Informative)

    by steveha ( 103154 ) on Tuesday September 12, 2006 @02:36AM (#16086931) Homepage
    The companies that are really serious about servers are particularly interested in CPU power compared to heat dissipation -- thermal density [processor.com]. This new Intel CPU is high performance with high heat -- more of a gamer chip. At least so far; it's a very early sample, and Intel hasn't had time to tune the power management features.

    Intel's latest chips are fabbed at 65nm, while AMD is still only shipping chips fabbed at 90nm. This should give Intel a serious edge in the performance/heat ratio, but AMD's chips are so much more energy efficient that they are still competitive. (The current best performance/heat is the AMD Athlon64 X2 3800+ ADD [lostcircuits.com] chip.) When AMD finally ships 65nm Opterons, those ought to be really great for dense server installations.

    It's telling that even Dell is planning [com.com] to ship servers with AMD chips. They announced a 4-core server: two dual-core Opterons. It wouldn't surprise me if those turn out to be 65nm Opterons by the time the servers actually ship.

    The article says that Intel is going to transition from 65nm to 45nm sometime in 2007, and to 32nm sometime in 2009. They beat AMD to 65nm big-time. They may well be at 32nm before AMD can make it to 45nm! Just imagine some sort of server chip with 16 cores... or more likely, 8 cores and a whole bunch of cache.

    But we shouldn't count those chickens before they hatch. Right now Intel is at 65nm and AMD will be there soon.

    steveha
  • Re:Experts? (Score:3, Informative)

    by julesh ( 229690 ) on Tuesday September 12, 2006 @02:56AM (#16086967)
    I've heard a lot of experts suggest that scaling outwards (i.e. adding more nodes to the cluster) is a better solution than improving the performance of individual nodes. They point to Google as a model of how to build a high-performance database application.

    I'm not convinced, but that's one point of view that's often expressed.
  • Re:Duo 2 Sexo? (Score:3, Informative)

    by james b ( 31361 ) on Tuesday September 12, 2006 @03:03AM (#16086981) Homepage
    I've wondered in the past why multi-core/multi-processor systems usually seem to have a power-of-two number of cores. This quote [sdsc.edu] is interesting:
    Besides, it's very rare for users to need an odd number of processors (in any of the parallel codes I've seen, at least). Most parallel problems are able to work in parallel by decomposing some sort of domain (be it physical, a mathematical matrix, etc.), and this decomposition usually happens in more than one dimension (this is generally done to optimize computation vs. communication). So generally the worst cases are prime numbers of nodes and the best cases are powers of two.
    So perhaps it's a convention born of parallel computing algorithm design? But there could be a more fundamental SMP architecture reason - anyone know?
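
    A minimal sketch of that decomposition argument (the heuristic is illustrative): pick the 2D processor grid Px x Py closest to square. Powers of two always factor evenly, while a prime count degenerates to a 1 x N strip, which maximises the communication surface between nodes.

        // Choose a near-square 2D grid for n cores: px is the largest
        // factor of n that is <= sqrt(n), py is the cofactor.
        #include <cstdio>

        void choose_grid(int n, int& px, int& py) {
            px = 1;
            for (int f = 1; f * f <= n; ++f)
                if (n % f == 0) px = f;
            py = n / px;
        }

        int main() {
            for (int n : {4, 7, 8, 13, 16}) {
                int px, py;
                choose_grid(n, px, py);
                // prints: 4 -> 2 x 2, 7 -> 1 x 7, 8 -> 2 x 4, 13 -> 1 x 13, 16 -> 4 x 4
                std::printf("%2d cores -> %d x %d grid\n", n, px, py);
            }
        }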
  • by adam31 ( 817930 ) <adam31 AT gmail DOT com> on Tuesday September 12, 2006 @03:06AM (#16086993)
    Not so far in the future, there will be more bandwidth, and then the limiting factor could be the speed/size of the memory. Or it could be the power envelope of the entire system. Or it could be back to the raw performance of CPUs.


    Ah, but the future is now. Cell has already addressed these issues: 25.6 GB/s main memory bandwidth, 256 KB of local store per SPE core, out-of-order execution sacrificed to minimize heat, maximal raw performance in FP, integer, and load/store, plus main memory transfer via DMA engines, and no silliness like 8 GP registers (the SPEs have 128). Even when multiple Cells are hooked together, it's over a 35 GB/s IOIF port.

    Also, for onboard multitasking, you forgot about being latency-bound by atomic operations, which is something that would be really bad with separate L2 caches. Cell handles this issue elegantly too, by having only a single bus-snooped L2.

    It must be frustrating for the hardware guys. You address all the bottlenecks in a pretty uniform way, and they still criticize: "But... uh, the software guys need a refresher course in hypertasking..."
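
    A minimal sketch of that atomic-latency point, in modern C++ terms (the thread and iteration counts are arbitrary): every fetch_add on a shared counter drags its cache line between cores, so the loop is bound by inter-core latency rather than ALU speed.

        // Contended atomic counter: correct, but each increment ping-pongs the
        // cache line holding `shared` between cores. Per-thread counters summed
        // once at the end would avoid the ping-pong entirely.
        #include <atomic>
        #include <cstdio>
        #include <thread>
        #include <vector>

        int main() {
            const int nthreads = 4;
            const long iters = 1000000;
            std::atomic<long> shared{0};

            std::vector<std::thread> pool;
            for (int t = 0; t < nthreads; ++t)
                pool.emplace_back([&] {
                    for (long i = 0; i < iters; ++i)
                        shared.fetch_add(1, std::memory_order_relaxed);
                });
            for (auto& th : pool) th.join();

            std::printf("%ld\n", shared.load());   // 4000000, but slowly
        }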

  • Re:FSB (Score:2, Informative)

    by Dersaidin ( 954402 ) on Tuesday September 12, 2006 @04:19AM (#16087143)
    It's just that
    1333 + 4 = 1337

    :/

  • by joto ( 134244 ) on Tuesday September 12, 2006 @05:30AM (#16087286)

    Processor makers really need to work on the energy efficiency of all their desktop parts; these speeds were achieved through sheer increases in heat and power consumption, and that's flatly unacceptable

    Didn't you pay attention in class? All the processor makers have started doing this. For the last year or two, the mantra has been "computing per watt", pretty universally, no matter which company you look at. And by the way, if you compare your 386 with a modern computer, I'll bet the modern computer gives more "computing per watt" no matter how you decide to calculate it, so the improvements can't all have been "achieved through sheer increases in heat and power consumption".

    Maybe if every 3-5 years there was a responsible and substantial leap in computing power people would upgrade in regular phases [SNIP] Of course gaming is to blame for this constant demand

    Ha ha ha. Hi hi hi. Ho ho ho. Hilarious! So you think that if it weren't for the gaming industry, we would somehow magically have every manufacturer on the planet agreeing to hold back their improved products until someone (who?) said it was time for a new "generation"?

    Apart from the fact that such a thing has never happened in any other industry (or would you care to give a counterexample?), that it is incompatible with the idea of a free market, and that it is bad for consumers as well as producers, please explain why you think that would happen AND why you think it would be a good thing.

    So I figure nothing will ever change until we hit that mythical peak at which everyone says "Good enough" and the race to real-time photorealism is over.

    Yeah, because when graphics are suddenly "good enough", nobody needs computers for other purposes. People don't need plain old office computers, workstations, databases, web-servers, B2B-applications, industrial control software, or anything else. Because the only thing driving the computing industry forward is gaming. And when photorealism in gaming has been achieved, all computing innovation will stop. Right!

    And besides, there will never be a "good enough" for graphics. In my view, audio has been "good enough" since long before the invention of the CD. That doesn't mean that there aren't people still working on creating better and/or cheaper audio components.

  • by keesh ( 202812 ) on Tuesday September 12, 2006 @07:00AM (#16087475) Homepage
    Not the compiler. Compilers are complicated enough that making them threaded is just a recipe for disaster. The build system is what should be doing the work -- in a project with lots of files, a decent build system (like, say, make) will be able to use all cores quite happily. Heck, make -j can use all 32 CPUs on our big release box without any difficulties.

    If your build system doesn't support parallelisation, you should look into switching build systems. Developer time is not cheap.
  • by Anonymous Coward on Tuesday September 12, 2006 @07:04AM (#16087490)
    "The question is, do we need this much processing grunt?"

    OK, a lot of people have asked the same thing, and I'm not picking on you. Hell, we all think it and wonder about it, but the answer is so crystal clear no one should ever ask it again.

    I wish my 1 MHz Commodore 64 were a little bit faster. I would have paid good money for an extra MHz or two. If you had told me then that one day computers would be 100 times faster, I would have been shaken but not stirred. It would have been hard to imagine computers that much faster than my 64, and it would have seemed sort of wasteful, but we're talking about the future here, so sure, why not.

    If you'd told me that computers in 20 years would be 3000 times faster than my good old 64, well, I imagine I would have had a hard time wrapping my brain around that one. I'm sure I'd have asked: what would be the point? How could you possibly use 3000 MHz?

    Of course we all know how now. So, if you were to tell me today that next year computers are going to be 10 million times faster than today, I'd say cool.
  • by jtwronski ( 465067 ) on Tuesday September 12, 2006 @12:01PM (#16088946)
    I'm trying to see how having some webpage open 5 milliseconds faster, or something like that, is any sort of huge advantage that I should pay hundreds of dollars for.


    If all you do is surf the web and watch movies, then you'll be fine for some time. I edit video quite often at work, and at 720x480 and 2000 kb/s, I'm screaming for as much horsepower as I can get my hands on. I'm currently on a 2.4 GHz P4 with 1 GB of memory, and I'd like at least 4 times as much power as that. Cost be damned. It'll pay for itself in saved time even before it becomes obsolete.
