
Intel IDF Day 1 - Quad Core, Santa Rosa And More

MojoKid writes "From demos of the new Alan Wake game engine on a 3.73GHz overclocked quad-core QX6700 to design showcases with a wafer of 80-core teraflop-capable chips, Intel's IDF opening day was brimming with tech-wonder from the company affectionately known as Chipzilla. Paul Otellini also showed pics of upcoming fab facilities in Arizona (Fab 32) and Israel (Fab 28). In total, Intel will have three fabs on the 45nm manufacturing process by the end of next year, at an investment of about $9B. Finally, a bevy of quad-core Kentsfield-based systems are shown here, with Dell's and Voodoo's offerings looking especially swank."
  • Grr (Score:5, Funny)

    by in2mind ( 988476 ) on Wednesday September 27, 2006 @11:21AM (#16215059) Homepage
    Paul Otellini, Intel's president and CEO, kicked off this season's IDF by coining the phrase "It's what's inside that counts", and spoke about why processing power matters again.

    All these years, we all thought it was what's outside the processor that matters?

    • ...spoke about why processing power matters again.

      When did it stop mattering, and why wasn't I told?
      • Re: (Score:3, Insightful)

        by TheRaven64 ( 641858 )
        It stopped mattering to Intel Marketing while NetBurst was being completely spanked by the K8. Now that the Core microarchitecture is (finally) giving competitive performance/money and performance/power numbers, it matters (to Intel Marketing) again.
    • Re: (Score:2, Funny)

      Yep, I was thinking more neon case-lights, and the latest Intel sticker promising more metafloppers per parsec. No really, this time around, it's not about marketing, it's about *chokes up* what's inside... No really.

    • Not only that, but he didn't even coin the phrase. Hell, even the Enquirer [enquirer.com] used that phrase in a headline in 2000. Of course, they didn't put an 80-core teraflop chip into the puppy in the article in order to get it to perform its tasks -- now that would have been innovative.
    • "All these years we all thought whats outside the processor that matters?"

      Apparently he has not pulled his head out of his CPU long enough to tell what really matters.
  • http://www.hothardware.com/image_popup.cfm?image=big_keynote_139.JPG&articleid=877&t=a [hothardware.com]

    The image says:
    ANNOUNCING THE
    $1,000,000
    Intel Core 2 Challenge
    What is this challenge they speak of? I want a million dollars...
    • Re:Core 2 Challenge (Score:4, Informative)

      by TheRaven64 ( 641858 ) on Wednesday September 27, 2006 @11:27AM (#16215133) Journal
      Unless they are running two challenges, it is to design a Mac Mini-like box to be marketed under the ViiV brand. The prize money is (as I recall) $600,000 for product development, and $400,000 for marketing. I don't know what this will do for the Apple-Intel relationship; paying people to compete harder with your customers isn't exactly the kind of thing that makes suppliers popular.
      • by NatasRevol ( 731260 ) on Wednesday September 27, 2006 @12:00PM (#16215553) Journal
        I wonder if Apple/Steve Jobs could just walk in to Paul Otellini's office and ask for the $1M check?
        • by frankie ( 91710 )
          Yes, that certainly sounds funny, but I bet it's actually true. If Steve Jobs called Otellini and said he was willing to slap "Intel(r) Core(tm)" and "Intel(r) Viiv(tm)" logos on the mini, Intel would be all over that with a big fat grin. Apple is already forgoing piles of marketing moolah from Intel by not using the logos on the Mac, so it would be a huge score if they gave in.
          • by Amouth ( 879122 )
            I wonder if I could get it by buying a Mac mini, some spray paint, and rebranding it.
          • I know Apple aren't the richest company in the world, but would they really sell out for a measly $1M? The corporation that pimps me out makes $3.5M a day (that's for 3,500 staff) and we're nowhere near as big as Apple.
    • The challenge is, without using AMD technology, to create an Intel-based chip that you CANNOT also use as a barbeque.
  • Sun's UltraSparc T1 [sun.com] has 8 cores, 32 threads. So, will Intel catch up anytime soon?
    • by TheRaven64 ( 641858 ) on Wednesday September 27, 2006 @11:30AM (#16215181) Journal
      Apples to oranges. The T1 is a superb chip for some workloads, and an appalling one for others. The T2, which has an FPU for each core (unlike the shared one in the T1) should do a bit better, but there are still a lot of workloads where the T1 does very badly. This is why Sun still sell UltraSPARC IV+ chips as well, and these are only dual-core.
      • IMO, all non-x86 chips have too small a market share to make a long-term difference (probably). Whether Sun's procs are good or bad, I just don't see a server proc becoming a mass-market standard like x86. Consumers and volume are where it's at. IBM probably has the best chance of staying relevant due to Cell and the fact that it is targeted at high-volume consumer electronics.
        • IMO, all non-x86 chips have too small a market share to make a long-term difference

          I disagree. x86 is a tiny, tiny market. PowerPC and ARM are both much bigger. Not sure about SPARC, but it might be. Have you looked at mobile 'phones recently? They're out-selling x86 CPUs, and most have a PowerPC or ARM chip in them. Take a look at a modern BMW; it has 25 PowerPC-based computers on-board. The three next-generation consoles? All of them are PowerPC based.

          We are coming to the end of the desktop computing era ...

          • by samkass ( 174571 )
            We are coming to the end of the desktop computing era

            Um, no. You might... MIGHT... be able to make that statement if there's ever a year in which the year-over-year growth of desktop PC sales (let alone usable installed base) decreases. For now, we appear to still be in the initial geometric expansion phase of desktop computing.

    • by DrDitto ( 962751 )
      Sun's UltraSparc T1 has 8 cores, 32 threads. So, will Intel catch up anytime soon?

      And my single-threaded program runs 10x slower on a T1 than it does on a P4. I measured this myself.
    • by TheLink ( 130905 )
      The T1 was probably behind the very day it was launched.

      And by now it's way behind as AMD and Intel battle it out furiously.

      Just look at Intel's new CPUs (which beat AMD's chips in many situations by 40-50%; AMD's chips beat Intel's old CPUs by a similar margin too).

      Show me a real world benchmark done by an independent party where the T1 does better.
    • Re: (Score:2, Interesting)

      My impression is that Intel's more worried about the applications keeping up with the cores. If today's apps could utilize all the cores you could throw at them, processors like Sun's T1 and T2 could be a problem for Intel. Intel and Sun need a killer app to promote their multi-cores. To do this, you're going to need to get multi-core machines into the hands of a lot more developers. I think Intel has a better chance of this, and consequently the killer app will be more suited to Intel's processors than Sun's.
  • they're dual-die. There is a difference. First, the dual-die process takes more power. Second, it costs more. Boo.

    Core2Duo is neato; you can overclock them like mad and the ALU/FPU is very efficient. But let's not kid ourselves: dual-die is not the same thing as quad-core.

    Tom
    • Re: (Score:3, Insightful)

      by maynard ( 3337 )
      Those dual die dual-core chips are fabbed at 65nm. The upcoming 45nm facilities should be able to manufacture quad-core chips on a single die.
      • Indeed. A small, single die [wikimedia.org]. For perspective, the small gold dot in the corner is actually the top of a construction worker's hat as he proudly admires the finished product.
    • There is a difference. First, the dual-die process takes more power. Second, it costs more.

      Putting the same number of transistors on one die takes as much power as putting them on two dice. (Unless there is some law of semiconductors that I haven't heard of.) Also, yield is lower (thus cost is higher) on a larger die than on two smaller dice.

      Don't you work for AMD? I guess your attitude is not a surprise.
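
      The yield half of that argument is the standard defect-density model. A minimal sketch in Python, assuming a simple Poisson yield curve and made-up defect and area numbers (these are illustrative assumptions, not Intel's actual figures):

        # Poisson yield model: Y = exp(-D * A), D = defects/cm^2, A = die area.
        # All numbers below are illustrative assumptions.
        import math

        defect_density = 0.5    # defects per cm^2 (assumed)
        big_die_area = 2.86     # cm^2, one monolithic quad-core die (assumed)
        small_die_area = 1.43   # cm^2, one dual-core die (assumed)

        yield_big = math.exp(-defect_density * big_die_area)
        yield_small = math.exp(-defect_density * small_die_area)

        print(f"monolithic quad-core die yield: {yield_big:.1%}")    # ~23.9%
        print(f"dual-core die yield:            {yield_small:.1%}")  # ~48.9%
        # Two small dice are tested and binned individually before packaging,
        # so a defect costs one small die; on a monolithic die it costs the
        # whole quad-core. That is where the cost argument comes from.
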
      • Re: (Score:3, Informative)

        by yabos ( 719499 )
        There's more to it than just the core. There's all the rest of the CPU, which includes cache (usually shared for dual-core, while separate on dual CPUs) and the rest of the interconnects for connecting to the motherboard and whatnot.
      • Well, the pads/pad drivers do consume a very significant fraction of the total die power. A dual-die solution has more pads that run at the same speed as a single-die one, and so should consume more power. In *theory*, it should be possible to leverage that additional I/O for increased performance, since you do have more bandwidth. In practice, it might be damn difficult.

        All of this is implementation dependent. Did they do a true MCM with reduced driver sizes? What sort of package are they using? How is the ...
    • Re: (Score:3, Interesting)

      they're dual-die

      As Intel's CEO said in the press session: "The initial (four-core chips) are multi-chip, but so what? I think you guys are misreading the market (if you think customers care about that)."

      First, the dual-die process takes more power

      I will wait for their quad-core before judging it. The first dual-core Intel CPU was hot and slow because it was based on the Prescott core, not because it was dual-die. This "dual-die" quad-core is based on the Core 2, which is a great platform, IMNSHO.
      • by LWATCDR ( 28044 )
        The real issue will be memory speed.
        The Core 2 is a nice chip, but two of them is really going to be pushing the memory bus. AMD's HyperTransport scales better than Intel's FSB. Not only that, but do the math: Intel's quad-core system is going to run twice as hot and use twice the power of the Core 2. Why? Because it is two of them.
        The speed-per-watt and speed-per-heat ratios will be worse than the Core 2's because it will not be twice as fast. AMD's true quad core may very well beat Intel's two-die solution. It ...
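
        A back-of-the-envelope sketch of that shared-bus arithmetic; the bus numbers here are assumptions for illustration (a 1066MT/s, 64-bit front-side bus), not measured figures:

          # Peak FSB bandwidth is fixed, so each core's share shrinks as the
          # core count grows. Bus parameters are illustrative assumptions.
          fsb_mt_per_s = 1066        # million transfers/s (266MHz quad-pumped)
          bus_width_bytes = 8        # 64-bit data bus

          peak_gb_s = fsb_mt_per_s * bus_width_bytes / 1000.0   # ~8.5 GB/s total
          for cores in (1, 2, 4):
              print(f"{cores} core(s): ~{peak_gb_s / cores:.1f} GB/s each")
          # An integrated memory controller (the HyperTransport approach) adds
          # bandwidth per socket instead of dividing one shared bus, which is
          # the scaling point the parent is making.
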
        • Intel's quad-core system is going to run twice as hot and use twice the power of the Core 2. Why? Because it is two of them.

          Applying your logic: "AMD's quad-core systems are going to run 4x hotter and use 4x the power of a single Opteron core. Why? Because there are four of them."
          • No, because AMD's is 95W at 90nm; the quad-cores will be 65nm parts.

            The quad-core from AMD will be a real 4 cores on a single die, designed to fit in the 95W power envelope. Unlike Intel, who designed a 65W processor and just pasted two of them together.

            Tom
          • by LWATCDR ( 28044 )
            Nope, because AMD is doing a die shrink to get all the cores on a single die. The die shrink should buy you a lower heat output. Until we get them in our hot little hands, we don't know how well this will work.
            Intel, on the other hand, is just sticking two dies in a package. Each die will produce roughly the same heat and draw the same power as a single Core 2 chip, since that is what they are.
            Now when Intel goes for a die shrink, it will also have a lower heat output as well as a true quad-core CPU. The problem ...
      • by pdbaby ( 609052 )
        Unfortunately, the 2.66GHz Core 2 Quadro uses 167W while idle (as much as the Core 2 Extreme 2.93GHz at full load) and 260W at full load. With heat to match, of course... I think I might wait a wee while to upgrade my Core 2 Duo to a Core 2 Quadro, anyway!
    • by dfghjk ( 711126 )
      It's not the same to Intel but it shouldn't matter to you. All that matters is cost, heat and performance.
  • I hope the article makes more sense, because the summary has me staring blankly at the screen.
    • by Zaatxe ( 939368 )
      You are seeing it from the wrong angle...
      • by Zaatxe ( 939368 )
        Oh, never mind... this message was supposed to be in another article. That's what happens when tabbed navigation falls into the wrong hands.
  • Lol..slashdotted already...

    See...haha...pah, Mr. Intel, you fought vat your 80 cores could stand before me? Mwahaha.....behold, my invincible weapon, the slashdot....

    Still, it'll be interesting to see how Intel markets this to the everyday Joe user. I mean, the whole HT thing was marketed as helping you burn a CD while you watched a movie... wonder if they'll use the same line here.

    "See, now you can burn *79 CDs* and watch 1 movie, all without breaking a sweat"...

    Sure, if you pick up Process Explorer, you'd think there's hundreds of threads running, but the truth is, most of those are idling.

    • by TheRaven64 ( 641858 ) on Wednesday September 27, 2006 @11:39AM (#16215317) Journal
      Sure, if you pick up Process Explorer, you'd think there's hundreds of threads running, but the truth is, most of those are idling.

      Pretty much all desktop apps can be split into two categories:

      1. The ones that contribute to the 5-20% load that your CPU generally sits under. (Web browsers, mail clients, music players, etc).
      2. The ones that cause the load to spike at 100% for extended periods. (Audio/Video encoders, compilers, typesetting engines, etc).
      Applications in category 1 will not see any benefit from a CPU that's twice as fast. That 5-20% load may drop to 2.5-10%, but no one cares. Those in category 2 will complete in half the time (assuming that they are CPU-limited and linearly scalable). As CPUs get faster, more and more things fall into category 1. Once you run out of things in category 2, stop upgrading. This happened for a lot of people about five years ago.

      I recently found out about an interesting experiment Intel did a few years back. They have a full-system simulator that allows them to test various things easily. They modified it so that all CPU operations took zero (simulated) time to complete. This gave about a 2.5x speed improvement for most tasks, i.e. an infinitely fast CPU only gave a 2.5x speed boost to most tasks. It doesn't take a huge speed increase before you run out of CPU-limited things and start hitting memory, disk, and network bottlenecks.
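
      That 2.5x figure is just Amdahl's law in disguise. A quick sketch of what it implies, with the numbers taken from the comment above rather than from anything Intel has published:

        # Amdahl's law: speedup = 1 / ((1 - f) + f / s), where f is the
        # fraction of runtime that is CPU-bound and s is the CPU speedup.
        def amdahl_speedup(f, s):
            return 1.0 / ((1.0 - f) + f / s)

        # An infinitely fast CPU (s -> infinity) giving 2.5x overall means
        # 1 / (1 - f) = 2.5, so the CPU accounted for at most 60% of runtime.
        f = 1.0 - 1.0 / 2.5
        print(f"implied CPU-bound fraction: {f:.0%}")                  # 60%
        print(f"a merely 2x-faster CPU: {amdahl_speedup(f, 2):.2f}x")  # 1.43x
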

      • You need to move into the modern era.

        #2 includes web browsers now (mine spikes 100% frequently for Flash / embedded video / javascript on AJAX sites).

        Also, it includes chat programs (which include real-time voice/video communication), and others. Many people are finding their PCs too slow to do 'new' things with, because those 'new' things are hard on the CPU.
      • As CPUs get faster, more and more things fall into category 1. Once you run out of things in category 2, stop upgrading.

        Your argument is based on a faulty assumption: that the limiting factors will not change. Whilst it's true that something like a word processor is basically limited by the typing speed of the user, and throwing CPU at it isn't going to do much, the idea that applications limited by, for example, memory bandwidth will end up in category 1 and stay there is flawed. Memory is getting ...
      • Actually, I find when loading a web page, my web browser works more on the model of:

        1. Wait until it has enough data to do something.
        2. Do it as quickly as possible.
        3. Go back to step 1 until the web page is completely rendered.

        The net effect is that if the web browser has the CPU to itself, it will be constantly spiking the CPU to 100% for short amounts of time. If you are doing something that requires a bit of CPU, like playing a game, you'll notice it skip and jitter when a webpage is loading.
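
        That wait-then-burst loop is easy to picture in code. A toy sketch of the model described above; this is not real browser code, and the names are made up:

          # Toy model of a browser's bursty render loop: block on I/O, then
          # burn CPU as fast as possible on whatever data arrived.
          import queue

          chunks = queue.Queue()

          def render_loop():
              while True:
                  data = chunks.get()      # 1. wait until there is enough data
                  if data is None:
                      break                # page is completely rendered
                  _layout = data.upper()   # 2. stand-in for parse/layout/paint
                  # 3. loop back to waiting; each chunk is a short 100% CPU
                  #    spike followed by idle time.

          for chunk in ("<html>", "<body>hello", "</body></html>", None):
              chunks.put(chunk)
          render_loop()
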
  • Moving fast now, eh? (Score:5, Interesting)

    by tygerstripes ( 832644 ) on Wednesday September 27, 2006 @11:32AM (#16215205)
    It seems the beast has awoken. They were so far behind in the chip-war with AMD for so damn long, it seemed their market share was getting nibbled to death.

    Now they finally seem to have woken up and, by god, they are really moving now, aren't they? $9bn in 45nm fabs? A wafer of 80-core chips already? Speaking as a one-time AMD fanboi, I have to say: the daddy is back.

    (Let the flaming commence)

    • by Shivetya ( 243324 )
      I was thinking the same thing. Very much like America before Japan forced its official entry into the war: very content with themselves and their position in the world.

      Just as it took America years to spin up properly, leveraging her resources, Intel has now come back to the fight more prepared and ready for the long haul. The question that remains is: can AMD keep up with an Intel obviously aware of what the mission is?

      I would hope that with AMD's recent acquisitions they not only keep up but open some new areas ...
      • by dfghjk ( 711126 )
        That's a pretty disrespectful analogy even if you find it accurate. An estimated 62 million lives were lost in WWII. No need to trivialize that by comparing it to a technology competition.
    • by Golias ( 176380 ) on Wednesday September 27, 2006 @12:24PM (#16215865)
      When Apple switched to Intel chips for the Mac, a lot of people were asking "why not AMD?"

      At the time, the answer from Apple was "Intel showed us their future road-map, and we wanted on board."

      Now we are starting to get a glimpse of what they were talking about.
      • by turgid ( 580780 )

        All the other big companies (and even the small ones) that design CPUs are doing the same. Some even have a large head start (Sun with the UltraSPARC T1). You need the interconnect too. Sun, IBM and AMD have that with HyperTransport (Intel's work-alike won't be out until 2008, a full 5 years behind AMD, and 12+ years behind Cray, Sun, IBM and SGI).

        Don't forget software. The Solaris kernel has been there since Solaris 7 in the mid-to-late '90s (SGI Irix too). Linux is catching up on multithreading (2.4 kernels with NP...

  • ChipZilla (Score:1, Funny)

    by Anonymous Coward
    Since when was ChipZilla an 'affectionate' nickname?
    • Re: (Score:1, Funny)

      by Anonymous Coward
      I know. I always referred to them as "Pookey Bear." Much more affectionate.
    • by rsborg ( 111459 )
      Since when was ChipZilla an 'affectionate' nickname?

      Dunno, maybe it has something to do with naming similarity to the friendly Mozilla [mozilla.com]?

  • What's the relevance of Santa Rosa?
    • by paskal ( 150433 )
      They mention the upcoming Santa Rosa platform briefly on page three of the article:
      http://www.hothardware.com/viewarticle.aspx?page=3&articleid=877&cid=9 [hothardware.com]

      He spoke of the upcoming mobile "Santa Rosa" platform, which will incorporate NAND storage, Core 2 Duo CPUs, 802.11n WiFi, and the G965 chipset. Santa Rosa will be a big step up in performance, and will have better battery life through advancements in manufacturing and the use of NAND storage. NAND will be integrated onto the motherboard in Santa Rosa and ...

    • by mi ( 197448 )
      What's the relevance of Israeli Defense Forces?
  • Wow... (Score:3, Insightful)

    by neoprog ( 1006513 ) on Wednesday September 27, 2006 @11:43AM (#16215371)
    I don't really feel like dual-core has even reached any kind of saturation point in terms of how many people own a dual-core processor. I can't believe (but I do thank) Intel for pushing ahead and making good tech affordable to everyone. I spent a small fortune building the box I'm typing this on (with an AMD X2 Socket AM2 4600+). And I know I got a little burned by the AM2, but I think the upgrade path will stay clear for a while. I wish I could have waited, but I needed a computer for college and this was top-end about 4 months ago. I loved AMD when they had the better stuff, and now I love Intel, 'cause the Core 2s kick butt. When AMD catches up, that's when I'll upgrade my 4600+. ;) prog
    • by oc255 ( 218044 )
      Please explain how you got burned, because I'm considering an AM2-based mini-server for home. Burned as in the 939 -> AM2 chipset isn't compatible? Price?
  • by chromozone ( 847904 ) on Wednesday September 27, 2006 @11:57AM (#16215517)
    From the article:

    "Paul Otellini, Intel's president and CEO, kicked off this season's IDF by coining the phrase "It's what's inside that counts", and spoke about why processing power matters again"

    But then there's this in another article covering the same event:

    "Otellini briefly responded to concerns that Intel's first quad-core packages are simply "glued-together" dual-core processors while AMD is working on a native, single-die quad-core chip. "So what?," said Otellini, adding, "The public doesn't care what's inside a processor."

    http://www.tgdaily.com/2006/09/26/intel_core_2_quad_announcement/ [tgdaily.com]

    In yet another article, at Ars Technica, we read that Intel is looking at an 80-core chip. I like the Core 2 Duo a lot, but I hope the Intel megahertz fixation isn't just going to become a "core" fixation.

      http://arstechnica.com/news.ars/post/20060926-7840.html [arstechnica.com]

    Robert Moses built a lot of bridges and roads around New York hoping to relieve congestion, but it had the counter-intuitive effect of creating more traffic. I hope all the increases in the size and power of computers don't just bring more garbage. With all the legacy code bloat, and things like video cards that get as hot as toasters and power supplies that waste energy (the Google thing), I think computing could use a few reductions instead of increases. In that regard, it's nice to see the Core 2 Duo bring down the wattage.

    • "Core" fixation (Score:3, Interesting)

      by benhocking ( 724439 )
      In yet another article, at Ars Technica, we read that Intel is looking at an 80-core chip. I like the Core 2 Duo a lot, but I hope the Intel megahertz fixation isn't just going to become a "core" fixation.
      Speaking as someone who uses code [neurojet.net] that is very parallelizable - I hope it does! (Well, assuming that they also address memory bottlenecks and what not.)
      • No, you shouldn't hope it does. If Intel gets on a core fixation then they'll cram all the cores they can on there without a bus able to keep each core running full bore. So either an 80-core Intel chip that has bandwidth to keep maybe 20 going, or an AMD chip with 20 cores that has no problem with bandwidth.
        • If it arrives within 5 years (as Intel is promising), I'll be happy even if it only has the bandwidth to keep 20 going. I do understand the problem with them not keeping other bottlenecks up to the task of 80 cores, but if I can only get 20, that's better than 2 or 4.
        • The 80-core chip is not a CPU. It is strictly for floating point. It could potentially be used for raytracing or scientific calculations that benefit from extreme parallelism.

          I would guess that this will first be seen in some type of add-on board with its own local memory. I don't think FSB speeds are likely to increase enough in the next few years to keep this monster fed otherwise.
    • That is a good point, but I really don't think the Kentsfield being two Woodcrests "glued" together is a big problem. People have been raising a big stink about it as simply being bad. The thing is, other than for gaming, the benchmarks seemed to show a marked performance advantage of Kentsfield over Conroe, especially for media work, rendering and CAD.
    • "Paul Otellini, Intel's president and CEO, kicked off this season's IDF by coining the phrase "It's what's inside that counts", and spoke about why processing power matters again"

      But then there's this in another article covering the same event:
      "Otellini briefly responded to concerns that Intel's first quad-core packages are simply "glued-together" dual-core processors while AMD is working on a native, single-die quad-core chip. "So what?," said Otellini, adding, "The public doesn't care what's inside a processor."

      • "That said, if Intel can create a quad-core that performs very well and doesn't get too hot, I could care less if it's four cores on one die, 2 glued dual-core, or four single-core dies glued together, or a legion of microscopic gremlins that are really good at math."

        True, of course. When I first wrote the post I was mindful of a recent Tom's Hardware review that had the quad core doubling up in watts used (compared to the Core 2 Duo), and that it will sell for around 1000 USD.

        Core 2 Duo was such an exciting thing to see
    • Yes. I would like a computer which is just as fast and powerful as the one I have now, but that doesn't heat my entire apartment. Thanks to the Athlon 64 and my 450-watt power supply, space heaters are a thing of the past.

      Additionally, it would be nice if we could get past the "my specular lighting and bump mapping is slightly better than yours" pissing contest that causes all these hardware upgrades. I'm perfectly happy playing System Shock 2 at upwards of 200 frames per second because it's an engrossing game ...
      • Lol - my thoughts exactly. Just this week I was reading an article about video cards that were measured for temperature, and plenty of cards were going to or over 200 degrees F. The first quad core will double the wattage of the Core 2 Duo and cost 1000 USD. But Intel people have said in interviews that dual core is the focus for now, so even they aren't pushing the new quads too hard at the mo. Upgrades for power just don't look appealing like they did a few years ago. It's beautiful to see the Core 2 Duo do more with less.
  • Intel chipsets are still behind Nvidia and ATI. The best that Intel has is x8/x8 or x1/x16, while Nvidia has x16/x16; one 590 board even has x16/x8/x16, and most 590 boards come with dual gig-E with IP/TCP offload. (These are for the AMD 590, as the Intel ones are not out yet.)
    This also shows up in the workstation/server chipsets as well. Aka, the Power Mac G5 has more PCIe lanes than the Mac Pro, and it has less bandwidth in the chipset-to-chipset link.
    Also, looking at motherboards from Super Micro ...

    Dual AMD ...
    • by julesh ( 229690 )
      Intel chipsets are still behind Nvidia and ATI. The best that Intel has is x8/x8 or x1/x16, while Nvidia has x16/x16; one 590 board even has x16/x8/x16, and most 590 boards come with dual gig-E with IP/TCP offload. (These are for the AMD 590, as the Intel ones are not out yet.)

      My understanding is that the Nvidia chipsets only have 20 PCIe lanes, so when you have two x16 cards in there, it behaves as though those slots were x8s.

      Still:
      (1) at least the slots are there, which they aren't on Intel-based boards ...
      • The Intel 590 is 48 lanes.

        NVIDIA nForce Professional 3600 and 3050: 56 lanes, 12 links, flexible
        NVIDIA nForce Professional 3600: 28 lanes, 6 links, flexible
        NVIDIA nForce Professional 3400: 28 lanes, 6 links, fixed
        NVIDIA nForce Professional 2200 and 2050: 40 lanes, 8 links, flexible
        NVIDIA nForce Professional 2200: 20 lanes, 4 links, flexible
      • I think the newer nForce 590 has more PCIe lanes. IIRC, the one motherboard I have with an nForce 590 is 16+16+4+1, for a total of 37 lanes (another note says 46 lanes).

        http://www.sudhian.com/ [sudhian.com]

        nForce 550 - 20 lanes
        nForce 570 - 20 lanes
        nForce 570 SLI - 28 lanes
        nForce 590 SLI - 46 lanes

  • One YouTube video can max out a CPU? Sheesh. This is the thing people are talking about: how can you have a multitasking system when every application spikes the CPU? Seems Hyper-Threading was a nice limp-along until dual core; now we need more.

    You need a core just for I/O! I'm using a dual core, and even it's being pushed to its limits. I can't wait for a quad or dual quad core to actually make a system multitasking-friendly.

    BTW, this isn't a Windows-bashing comment; this happens in Linux too...
    • 100% just to play a Youtube video? Man, Intel procs must really suck!

      Seriously though, I think this is just something that Intel marketing decided to spout out in the hopes that nobody would actually check.

      I've got a laptop with a Turion ML-40 [2.2GHz] (32-bit WinXP), and I just tested this. Even with the CPU throttled down to 800MHz, YouTube videos only used 50% CPU, not even enough to bump the processor up to the next speed step.

      If Intel CPUs really pegged at 100% just to play a Flash video that an 800MHz ...
    • Try BeOS then; it's so threaded and slick, you'll think you're on a multicore machine, even with a single core.

      +not a BeOS user, just tried it once
      ++Slashies!
  • a 3.73GHz overclocked Quad-Core QX6700
    mmmm...

    a wafer of 80-core teraflop capable chips
    ooohhhhhh...

    three 45nm fabs
    aaahhhhh...

    swank
    WTF!?
  • When I first saw IDF in the topic, the first thing that came to mind was "Israeli Defence Force".
  • But those look like Intel's 975XBX mainboards in the pictures of the quad-core machines. I thought I read earlier that Intel was hyping the 965G chipset and not the 975X for their quad-core architecture? Unless I'm totally missing something and Intel has started using black PCBs and the blue flame heatsinks on a 965 board.
  • Unfortunately, just putting more cores on a chip will eventually reach the point where memory bandwidth becomes the limiting factor. If it doesn't fit in the core's cache, you have to communicate with the outside world.

    As it stands, a Core 2 is only marginally faster than a single core for the multithreaded scientific computations I work on. I would bet that a quad core would show little to no advantage over a dual core.

    -Tom
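
    Tom's observation is the classic roofline argument: once a kernel is bandwidth-bound, extra cores move nothing. A rough sketch with illustrative numbers; every figure here is an assumption, not a measurement:

      # Roofline model: attainable FLOP/s = min(compute peak, bandwidth * AI),
      # where AI is the kernel's arithmetic intensity in FLOPs per byte.
      peak_flops_per_core = 8e9   # ~8 GFLOP/s per core (assumed)
      mem_bandwidth = 8.5e9       # ~8.5 GB/s shared FSB, in bytes/s (assumed)
      arithmetic_intensity = 1.0  # FLOPs per byte moved (assumed)

      for cores in (1, 2, 4):
          compute_roof = cores * peak_flops_per_core
          memory_roof = mem_bandwidth * arithmetic_intensity  # core-independent
          print(f"{cores} cores: {min(compute_roof, memory_roof) / 1e9:.1f} GFLOP/s")
      # 1 core: 8.0, 2 cores: 8.5, 4 cores: 8.5 -- once the shared bus
      # saturates, additional cores buy almost nothing, which matches the
      # "marginally faster" result described above.
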
    • by julesh ( 229690 )
      Note that the quad cores have a faster FSB, so you probably would see some improvement with them.
  • The article is boring, but I read "IDF" as "Intel Defensive Forces" and imagined a division of Intel-powered robo-jews militarily occupying Santa Rosa, an amusing thought.
  • Our Quantum System Engineering (QSE) Group [mrfm.org] has immediate need of 80-core teraflop/terabyte processing ... it's just what we need to compute the real-time dynamics of imaged biomolecular structures ... on the desktop.
  • 80 cores on a chip and optical interconnects sound great... but I wish they would talk more about the end application goals (i.e., a system that does 1080p ray-tracing and has 100% speech recognition). It's great that they are pushing the design limits as they are, but without a clear vision of how the technology is to be used, it's likely that it will miss the mark.

    I was hoping to hear about a single die with CPU/northbridge/southbridge/GPU all integrated (and for mobile use)... that would certainly turn the ...
  • I haven't been paying close attention to any news about AMD in the last couple of weeks, but I am expecting that they may be looking to the portable market in the near future.

    Why should AMD be concerned with Intel building excellent workstation CPUs when they can (economically) build a portable, all-inclusive board with a high-end IGP and a swift dual-core CPU?

    How much longer until we see dual-core GPUs that aren't two cards glued together?

    If Intel can't get well-coordinated with Nvidia and AMD offers ...

  • It says he started by coining the phrase "it's what's inside that counts". I coulda sworn *I* coined the phrase trying to get girls to go out with me back in high school!

    How the hell can someone claim a phrase is being coined when it's such a generic phrase used everywhere?

"Money is the root of all money." -- the moving finger

Working...