Intel - Market Doesn't Need Eight Cores

PeterK writes "TG Daily has posted an interesting interview with Intel's top mobility executive David Perlmutter. While he sideswipes AMD very carefully ('I am not underestimating the competition, but..'), he shares some details about the successor of Core, which goes by the name 'Nehalem.' Especially interesting are his remarks about power consumption, which he believes will decrease 'dramatically' in the coming years, and about the number of cores in processors: two are enough for now, four will be mainstream in three years, and eight is something the desktop market does not need." From the article: "Core scales and it will be scaling to the level we expect it to. That also applies to the upcoming generations - they all will come with the right scaling factors. But, of course, I would be lying if I said that it scales from here to eternity. In general, I believe that we will be able to do very well against what AMD will be able to do. I want everybody to go from a frequency world to a number-of-cores-world. But especially in the client space, we have to be very careful with overloading the market with a number of cores and see what is useful."
  • by GundamFan ( 848341 ) on Thursday July 27, 2006 @02:35PM (#15793146)
    I don't doubt an "8 core" desktop will exist in the near future. Then again, he has a point... we likely won't need it.
    • Isn't this the same thing they said about 64bit chips?
      • At least 64-bit chips let us address a lot more RAM, and everybody knows that programs are gobbling up more and more RAM these days. Millions of cores aren't quite as useful, at least for the time being, for your typical home PC.
        • I've been doing my part to help increase memory usage with the following handy function:

          #include <stdlib.h>
          #include <time.h>

          /* waste more RAM every year: the factor grows with the years since 2000 */
          void * allocateMemory(size_t bytesNeeded)
          {
              time_t myTime;
              time(&myTime);
              struct tm * myTm = localtime(&myTime);
              unsigned int ramWastingFactor = myTm->tm_year > 100 ? (myTm->tm_year - 100) : 1;

              return malloc(bytesNeeded * ramWastingFactor);
          }
        • by IamTheRealMike ( 537420 ) on Thursday July 27, 2006 @05:58PM (#15795015)

          Almost no desktop programs actually use 4 gigabytes of RAM. Even allowing for rapid expansion, we won't reach that bottleneck anytime soon.

          The Intel guys were right. What are the uses of 64 bit systems? They are removing a bottleneck that very few were hitting. The AMD64 instruction set fixes (more registers etc) are nice but not worth the hassle of losing binary compatibility. Result? Hardly anybody uses a pure64 system. Only enthusiasts.

          • High-end game engines are already at a point where 3-4 GB can be a major improvement over 2 GB. Battlefield 2 was one of the first that showed a difference, and Oblivion's outdoor scenes are really begging for at least 2 GB.

            As with every other technological yardstick of computers, entertainment is driving the platform, not "desktop programs," for which the technology of a decade ago was adequate.
          • ...only enthusiasts?!

            Obviously, you have never tried to simulate or graph the propagation of an organic virus with a 4-million-node set using Matlab x64 on a desktop system.

            We would be pleased to take your enthusiast money and 128 of your gaming buddies' money and build a Linux computational cluster to solve a problem that will likely save your life or the life of someone you know.

      • No, because 64-bit doesn't have the same kind of diminishing returns increasing the number of cores does. We don't need eight cores, at least in the short-to-medium term, because it would require fundamentally rewriting all our software to be more parallel (unlike 64-bit support, which only requires fixing code that assumes 4-byte pointers).

        • We don't need eight cores, at least in the short-to-medium term, because it would require fundamentally rewriting all our software to be more parallel

          I think that's somewhat of an exaggeration. Not all software has to be rewritten, just software where 1) speed is a driving concern and 2) it isn't already multithreaded. In other words, normal office and web software doesn't need rewriting because it runs fine on 1 core. Raytracers and video effects software doesn't need rewriting because it's already multithreaded.

          • Audio processing is the classic example of trivially massively parallel processing on the desktop. Let's say you have 40 tracks with four or more effects slots, plus a few effects on a bus, plus a few effects on the master output. Each one of those is a separate computation unit.

            That's not to say that there aren't dependencies, of course. Within each channel chain, each plug-in has a dependency on data from the previous plug-in. Each of the bus mixers (including the master) has a dependency on having
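
            A minimal sketch of that per-track independence, using one POSIX thread per track chain; the track count, buffer layout, and the trivial "effect" are hypothetical stand-ins for real DSP plug-ins:

            #include <pthread.h>
            #include <stdio.h>

            #define TRACKS  8
            #define SAMPLES 1024

            static float buf[TRACKS][SAMPLES];

            /* stand-in for a per-track effect chain; each thread touches only its own buffer */
            static void *process_track(void *arg)
            {
                int t = (int)(long)arg;
                for (int i = 0; i < SAMPLES; i++)
                    buf[t][i] = buf[t][i] * 0.8f + 0.1f;
                return NULL;
            }

            int main(void)
            {
                pthread_t th[TRACKS];
                float mix[SAMPLES] = {0};

                /* track chains are independent, so they run in parallel */
                for (long t = 0; t < TRACKS; t++)
                    pthread_create(&th[t], NULL, process_track, (void *)t);
                for (int t = 0; t < TRACKS; t++)
                    pthread_join(th[t], NULL);

                /* the master bus depends on every track, so it runs after the joins */
                for (int i = 0; i < SAMPLES; i++)
                    for (int t = 0; t < TRACKS; t++)
                        mix[i] += buf[t][i];

                printf("mixed %d tracks, first sample %.2f\n", TRACKS, mix[0]);
                return 0;
            }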

      • by samkass ( 174571 ) on Thursday July 27, 2006 @04:18PM (#15794204) Homepage Journal
        Isn't this the same thing they said about 64bit chips?

        Good point... yes, Intel said this about 64-bit chips, and they were right. Almost nobody needs 64-bit chips. But now virtually all chips are 64-bit, wasting a lot of die real estate and engineering effort because of perceived benefits driven more by AMD's marketing than by reality. It's quite possible 8 cores could end up in the same boat - AMD pushing it for no valid technological reason and Intel being forced to follow suit.
    • by kfg ( 145172 ) on Thursday July 27, 2006 @02:43PM (#15793231)
      I don't "need" an R sticker and turbo sound synthesizer either, but they sure make my FIAT 500 go faster.

      The Little Mouse that Roars!

      KFG
    • by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Thursday July 27, 2006 @02:44PM (#15793246) Homepage
      If you're basing that on some logical sense of "need", may I remind you that the average consumer doesn't need a quarter of the computer they already have.

      Tom
    • Don't know if it really qualifies as a desktop, but there are motherboards that will support 8 cores. [allstarshop.com]
    • by mrxak ( 727974 ) on Thursday July 27, 2006 @02:47PM (#15793268)
      I frequently run as many as 8 programs at a time, sometimes more, but I seriously doubt each program would know what to do with its own core. With my two-CPU set-up, I find RAM to be almost the biggest limiting factor (although with 2GB, I've never actually run out). There's really no need for 8 cores until my brain can take multitasking to the next level, doing many complex tasks that would benefit from (essentially) unlimited CPU power for each program.

      They say the biggest bottleneck of any modern computer is its user...
      • by avronius ( 689343 ) * on Thursday July 27, 2006 @03:27PM (#15793695) Homepage Journal
        See, here's where I have to disagree.

        Imagine an RPG that has multiples (100's) of 'computer' competitors that are "developing" along the same lines as you and your character(s). Or perhaps an MMORPG with thousands of players competing against hundreds of thousands of virtual characters that are developing along the same lines as yours and the MMORPG's characters. Say goodbye to random encounters with stale NPCs - and hello to enemies with unique names and playing styles - all due to the computer's ability to handle such incredible virtualization.

        Adding more RAM and a minor increase in speed wouldn't help in either of these scenarios. Bring on the cores, man, and don't stop at 8...

        • by vadim_t ( 324782 ) on Thursday July 27, 2006 @06:02PM (#15795031) Homepage
          Actually, I've been discussing this with a friend recently.

          Take NWN for instance. How about making a game where things are REALLY happening? So far most worlds are extremely static. MMORPGs are static in that nothing ever changes, you kill the Lord of Evil and he's back on his dark throne 5 minutes later. And in most RPGs things just stay there and wait for you to appear (say, you never miss a battle in progress, as they just stay there until you appear nearby so that you can conveniently join the battle).

          For example, in NWN it's very clear that there are multiple factions living in the area. How about having kobolds, gnolls, wolves, etc. move around on their own, gather food, kill each other, reproduce, try to invade, etc.? Wouldn't it be neat if you could defeat the gnolls, then wander off for whatever reason, and when you return find the kobolds have taken over the gnoll cave, increased their population, and Tymofarrar has gotten out of the cave and set fire to the town?

          Of course, make it too realistic and it gets a bit weird... imagine having to kill kobold children and walking on gnolls having sex.
      • by guaigean ( 867316 ) on Thursday July 27, 2006 @03:45PM (#15793877)
        The home consumer market isn't exactly the initial target for technology like this, and the price won't be in line with home consumers anyhow. This is the kind of stuff used in high-performance computing, where a single computing node can sustain a large amount of CPU performance with no transfer between nodes. 2GB is nothing in the HPC world, and 8 cores get filled up fast. While it may be easy to assume "I can't fill 1 CPU, what would I do with 8?", you have to remember that there are people out there running huge simulations, which could very easily use up many thousands of CPUs.

        Utility is in the eye of the user.
      • There's really no need for 8 cores until my brain is able to take multitasking to the next level

        Video processing
        Photo processing
        Multitrack digital audio recording with multiple real-time DSP effects

        And that's just what I thought about in 10 seconds. Not to mention what video games could do with all that processing power.

      • I frequently run as many as 8 programs at a time, sometimes more, but I seriously doubt each program would know what to do with its own core.

        Indeed. Even with multiple applications, there is rarely more than 1 CPU-bound thread on a desktop system.

        Not that I doubt the ability of the industry to produce applications that warrant increased numbers of cores, but this will take time, and in the current landscape anything more than 2 goes to waste for most people. There are some applications that need it already.

    • by rbgaynor ( 537968 ) on Thursday July 27, 2006 @02:48PM (#15793278)

      eight is something the desktop market does not need

      So is he the only person on the planet who has not tried the Vista beta?

    • by hackstraw ( 262471 ) * on Thursday July 27, 2006 @03:06PM (#15793466)
      I don't doubt an "8 core" desktop will exist in the near future. Then again he has a point... we won't likely need it.

      My crystal ball is not always crystal clear, but I believe that 8+ cores will exist and are needed in the near future, at least for desktop systems.

      History here. I'm an HPC admin, which translates into: I run Beowulf stuff, where pretty much off-the-shelf computers are connected together to work as one big computer. I'm also a desktop computer user who is anal retentive about having realtime info on the status of my computer with respect to CPU utilization and whatnot.

      Now, in the many years of running desktop systems and anal-retentively monitoring them, I've noticed that CPU utilization is very often bursty. Meaning that it's common for the CPU to hover around zero and spike up when doing something like rendering a webpage, printing, compiling code, etc. But most of the time (>90%, or well more if you include when I sleep and stuff), the CPU is doing nothing.

      So, what is my point? Give me cores out the wazoo, and let them completely power down when not needed and crank up to all 8 or more when needed. This will greatly improve power requirements and improve performance at the same time. Evidence of similar things in nature and in other technologies is plentiful:
      1) Hybrid gas/electric cars. They use both for higher performance when needed, then back off and alternate between the two when that's optimal for efficiency.
      2) Animal tissue like muscles and nerves. Muscles are pretty much idle most of the time and only use a few fibers for a light contraction, but all of the available fibers become active when exerting maximum effort. Similar, but different, with nervous systems.
      3) Human workloads. Some industries are not really constant, and even the seemingly constant ones have bursts; think of seasonal things like retail, taxes, or vacation spots. These jobs bring in more bodies to handle the peak loads and let them go when the peaks are over. It's nuts that in many places in the US, seasonal vacation spots are frequently staffed by people from halfway across the world!

      Now, is my 8+ core pipe dream going to happen tomorrow? No. But I believe this is where computing is going. Another thing that will have to change is that RAM should not be quite so "random": memory, like CPU cores, should go dormant when not needed in order to conserve power, and of course there is the memory bandwidth issue as well.

      • The cores will only help if the action being performed is parallel in nature. Rendering a webpage is not parallel; you have to parse the file serially. Printing is not parallel; the instructions need to arrive in order. Compiling is the only example of a parallelizable action (and there are serial bottlenecks there as well).
        • You can distribute different pages for printing, different frames for HTML rendering, or divs or something. Your browser could be decompressing PNGs and JPEGs on other CPUs while one parses the HTML, too.

          Web browsing is still limited by the network anyway; increasing CPU power to browse the web doesn't make any sense to me. At least with my 400kbps DSL.

          But I at least would not want to increase cpu power for these trivial tasks. I would prefer that it happens when I do something heavier, like a game, or at least so

        • by 2nd Post! ( 213333 ) <gundbear.pacbell@net> on Thursday July 27, 2006 @04:46PM (#15794490) Homepage
          Uh, all your examples are only serial in implementation, not serial in nature.

          A webpage, for example, need not be parsed serially, though the performance of current systems is high enough that you gain nothing by parallelizing the renderer. A printer, however, can trivially be designed to be parallel, especially at unusually high DPI. Think of a printer rendering to paper the same way a graphics card renders to a framebuffer. If you can use multiple pipelines, GPUs, and cards to accelerate video display, why wouldn't the same be possible for printing? The neat thing about printers and printed data is that there is no dependence: the image in the upper right exists independently of the image in the lower right, and so on. In theory you could have a core assigned to every pixel printed on a page, with a corresponding print head for each core, and you would be able to print an entire page in a single CPU cycle. Technically.

          So there are plenty of other things that could be executed on multiple cores:

          Decoding video (playback)
          Encoding video (storage, rendering, chat)
          AI for games (imagine simulating a multitasking AI on multiple cores)
          Physics for games (uncoupled events can be processed independently and coupled events require access to the same data)

          Yes, everything has a serial bottleneck, such as data access, but once properly set up most things can also be set up to be multicore as well. Saving a file, for example, can be multicore if you imagine the write as happening all at once, rather than serially, with each core assigned to a write head, each write head then operating independently... Etc.
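
          For illustration, a minimal sketch of that region independence: one thread rasterizes each horizontal band of a page image, with no locking because no band depends on another (the band count and the toy "rendering" are made up):

          #include <pthread.h>
          #include <stdio.h>

          #define WIDTH  512
          #define HEIGHT 512
          #define BANDS  8

          static unsigned char page[HEIGHT][WIDTH];

          static void *render_band(void *arg)
          {
              int band = (int)(long)arg;
              int y0 = band * (HEIGHT / BANDS);
              int y1 = y0 + (HEIGHT / BANDS);
              for (int y = y0; y < y1; y++)
                  for (int x = 0; x < WIDTH; x++)
                      page[y][x] = (unsigned char)((x ^ y) & 0xff); /* stand-in "rendering" */
              return NULL;
          }

          int main(void)
          {
              pthread_t th[BANDS];
              for (long b = 0; b < BANDS; b++)
                  pthread_create(&th[b], NULL, render_band, (void *)b);
              for (int b = 0; b < BANDS; b++)
                  pthread_join(th[b], NULL);
              printf("rendered %d independent bands\n", BANDS);
              return 0;
          }
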
    • I don't doubt it. We'll see eight cores available for typical workstations at the end of this year: the Clovertown (the next Xeon DP, 2 packages x 4 cores per package) will be available, about the same time as Kentsfield, which is the four-core desktop chip. And late last year / early this year was when dual core came out.

      The market currently doesn't need eight cores in a desktop, but there may be a call for it in high end desktops and workstations.

      I figure more cores is inevitable, but the issue is whether
    • by Junior J. Junior III ( 192702 ) on Thursday July 27, 2006 @03:29PM (#15793725) Homepage
      640 cores ought to be enough for anybody.
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Thursday July 27, 2006 @02:35PM (#15793147)
    If you put 8 core procs in desktop machines, software will be written that will take advantage of them. Which means you'll sell more 8 core procs.

    Are you going to lead or follow?
    • If they double the speed of my CPU, I can take advantage of that just by not trying as hard and letting my code bloat.

      If they double the number of cores, I can only take advantage of that if I have a problem that can be parallelized and then if I work very very hard to multi-thread my project.
      • If they double the speed of my CPU
        Tried and failed.
      • First, the hardest part is going from 1 to 2 cores. For that, you have to figure out the principle of how to split the workload. Going from 2 cores to n cores will usually be easier. And since dual cores are already becoming mainstream, professional programmers will be forced to take the step from 1 to 2 cores anyway.

        Second, the makers of multimedia applications already go ahead with multithreading, because it really works for that type of application. This will drive the market for more cores. In the long
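
        A toy illustration of that 1-to-n split, where the partitioning that works for 2 threads generalizes unchanged to N (the array size, thread count, and names are made up):

        #include <pthread.h>
        #include <stdio.h>

        #define N_THREADS 4
        #define N_ITEMS   (1 << 20)

        static double data[N_ITEMS];
        static double partial[N_THREADS];

        static void *sum_chunk(void *arg)
        {
            int id = (int)(long)arg;
            int chunk = N_ITEMS / N_THREADS;
            double s = 0.0;
            for (int i = id * chunk; i < (id + 1) * chunk; i++)
                s += data[i];
            partial[id] = s; /* no sharing: each thread writes only its own slot */
            return NULL;
        }

        int main(void)
        {
            for (int i = 0; i < N_ITEMS; i++)
                data[i] = 1.0;

            pthread_t th[N_THREADS];
            for (long t = 0; t < N_THREADS; t++)
                pthread_create(&th[t], NULL, sum_chunk, (void *)t);

            double total = 0.0;
            for (int t = 0; t < N_THREADS; t++) {
                pthread_join(th[t], NULL);
                total += partial[t];
            }
            printf("total = %.0f\n", total);
            return 0;
        }

        Bump N_THREADS from 2 to 4 to 8 and the code doesn't change, which is the point.
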
    • They will make money, of course. That is their true motivation (to an extent, the engineers like making cool stuff, the marketers like selling cool stuff, in the end everyone gets a paycheck).

      It is irrelevant whether Intel leads or follows except insofar as it achieves their agenda of technical, marketing, and profit superiority.

      Intel will wait until the technology is mature, the market is ready, and their competition UNready, if they have the choice.
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Thursday July 27, 2006 @02:35PM (#15793153)
    Comment removed based on user account deletion
    • Re:Translation (Score:4, Insightful)

      by ssista537 ( 991500 ) on Thursday July 27, 2006 @03:09PM (#15793497)
      Seems like people don't RTFA. Let me quote: "Will we see eight cores in the client in the next two years? If someone chooses to do that, engineering-wise that is possible. But I doubt this is something the market needs." He is talking about the next two years, not ever. We are only now getting an abundance of dual-core machines in the market and the apps to take advantage of them. Tell me how different the software we had two years ago is from today's. So there is no way the desktop market needs 8 cores two years from now. Geez, we have so many fanbois and script kiddies here with absolutely no knowledge of the industry, it is sickening.
  • Question. (Score:4, Insightful)

    by Max_Abernethy ( 750192 ) on Thursday July 27, 2006 @02:35PM (#15793154) Homepage
    Does having multiple cores do anything about the memory bottleneck? Does this make the machine balance better or worse?
    • Worse. More CPU resources need more data from memory, and memory doesn't magically get any faster from this advancement.
      • by DrYak ( 748999 ) on Thursday July 27, 2006 @04:14PM (#15794163) Homepage
        Not quite. It depends on the brand.

        For Intel, that's exactly the case:
        With the current Intel architecture, memory is interfaced through the NorthBridge.
        In multicore and multiproc systems, all chips talk to the NorthBridge and get their memory access from there.
        So more cores and processors means the same pipe must be shared by more of them, and therefore the memory bandwidth per core is lower.
        Intel must modify their motherboard design: invent a quad-channel memory bus, push newer and faster memory types (that's what happened with DDR-II! They needed the faster data rates, even though those come at the cost of latency), etc...

        But the further they pursue this direction, the more latency they add to the system, which in the end will put them in a dead end (somewhat like how the ever-deeper pipeline of their quest for gigahertz put them in the dead end of the burning-hot, power-hungry P4).

        For AMD it's not quite the same:
        With the architecture AMD introduced with the AMD64 series, memory is interfaced directly with a memory controller that is on-die with the chip.
        The multiple procs and the rest of the motherboard communicate using standardized HyperTransport.
        The rest of the motherboard doesn't even know what's happening up there with the memory.
        And with the advent of HyperTransport plugs (HTX), the motherboard doesn't really need to know.
        Riser cards carrying both memory and CPU (à la Slot 1) are possible (and highly anticipated, because they'll allow a much wider range of specialized accelerators to be plugged in than the current AM2 socket does).

        The most widely publicized advantage of this structure is the lower latency.
        But it also makes it easier to scale up memory bandwidth: just add another on-board memory controller and voilà, you have dual channel. That was the difference between the first generations of entry-level AMD64 (Athlon 64 for socket 7##: one controller, single channel; Athlon FX for socket 9##: two controllers, dual channel).
        By the time 8-core processors come out, and if CPU riser boards with a standard HTX connector appear, nothing will prevent AMD from simply building a riser board designed for 8-core chips with 4 memory controllers (and quad-channel speed). Just change the riser board and the memory speed will scale; the motherboard doesn't need to be redesigned. In fact, the same motherboard could be kept.
        And this won't come at the price of latency or whatever: the memory controller is ON the CPU die and doesn't have to be shared with anything.

        In fact, that's partially already happening:
        In multi-proc systems, instead of all procs sharing the same pipe through the NorthBridge, each chip has its own controller running at full speed.
        And this memory can be shared over the HT bus (albeit with some latency).
        It's basically 4 memory controllers (2 per proc) working together; achieving something quad-channel-like shouldn't be that difficult.
        Especially when Intel is pushing the memory standard toward chips with higher latency: asking for more bandwidth in parallel over the HT bus won't be that much of a penalty.

        So I think AMD will be faster than Intel at developing solutions that scale to higher numbers of cores, thanks to the better architecture.

        Maybe it's no coincidence that AMD is working on technology to "bind together" cores and present them as a single proc to insufficiently SMP-optimized software, while at the same time Intel is telling whoever will listen that 4 cores is enough and 8 is too much. (Yeah, sure, just tell that to the database and Sun Niagara people. Or even to older BeOS users. This just sounds like "640K is enough for everyone.")

    • Re:Question. (Score:5, Informative)

      by Aadain2001 ( 684036 ) on Thursday July 27, 2006 @02:51PM (#15793316) Journal
      It depends on which memory bottleneck you are talking about. There is a memory hierarchy in computers, with the fastest also being the closest to the processor, the level 1 or L1 cache (usually split into separate data and instruction caches). This is then tied into a much larger, but slower, L2 cache (combined instruction and data lines). Some processors use an L3 cache, but not many these days. Current processors have L1 and L2 directly on the chip. If you see those die pictures they show off to the press, the largest areas of the chip are the caches. Finally, the chip can go across the front side bus and access the main system memory, which is very large compared to the L2 and L1 caches, but much slower in terms of number of cycles to access.

      So which bottleneck are you referring to? Intel's new Core 2 Duo chips share the L2 cache and, as far as I can tell from the reviews I have read, this setup works very well. Both cores can share data very quickly, or, when executing a single sequential program, one core can use all of the L2 cache (which in the Extreme Edition version is up to 4MB!). Or are you referring to the main memory? It is possible for both cores to need to access the main memory at the same time, but modern pre-fetching and aggressive speculation techniques reduce how often that occurs and the timing penalties when it does. And of course, the larger the L2 cache, the more memory can be stored on the chip at once, reducing the need to access the main memory very often. According to Intel's own internal testing, they had a very hard time using all of the bandwidth the current front side bus and memory offer, which means the main memory shouldn't be a bottleneck.

      So what is the bottleneck you are referring to?
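
      A rough way to see that hierarchy from a program: chase pointers through working sets sized to fit in L1, in L2, and only in main memory, and watch the time per access jump. The sizes below are just illustrative, and a sequential cycle understates the effect because the prefetcher hides some latency:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      /* walk a cycle of n pointers 10 million times and report the elapsed time */
      static double chase(size_t n)
      {
          size_t *next = malloc(n * sizeof *next);
          for (size_t i = 0; i < n; i++)
              next[i] = (i + 1) % n; /* a shuffled cycle would defeat the prefetcher better */

          size_t p = 0;
          clock_t t0 = clock();
          for (long i = 0; i < 10000000L; i++)
              p = next[p];
          clock_t t1 = clock();

          volatile size_t sink = p; /* keep the loop from being optimized away */
          (void)sink;
          free(next);
          return (double)(t1 - t0) / CLOCKS_PER_SEC;
      }

      int main(void)
      {
          /* ~16KB, ~256KB, ~16MB working sets: roughly L1, L2, main memory */
          size_t sizes[] = { 2048, 32768, 2097152 };
          for (int i = 0; i < 3; i++)
              printf("%8zu entries: %.3f s\n", sizes[i], chase(sizes[i]));
          return 0;
      }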

      • According to Intel's own internal testing, they had a very hard time using all of the bandwidth the current front side bus and memory offers, which means the main memory shouldn't be a bottleneck.

        Was this testing done with one 'Core 2 Duo'? Two? Four?

        My understanding is that the FSB bottleneck only really comes into play with multiple chips, and that AMD's solution (EV7-derived point-to-point connections between each chip and the memory, I think) was better when more chips were involved, because each chip ends up with

      • Finally, the chip can go across the front side bus and access the main system memory, which is very large compared to the L2 and L1 caches, but much slower in terms of number of cycles to access.

        That's the good thing about AMD processors. They don't have to go across the FSB to get at the RAM.
    • Re:Question. (Score:4, Insightful)

      by NovaX ( 37364 ) on Thursday July 27, 2006 @03:12PM (#15793522)
      Worse.

      For Intel, they are currently using a shared-bus approach. It makes sense for a lot of reasons (mainly by being very cost effective), and they are developing a point-to-point bus for the near future. In such a system, each CPU uses the bus to retrieve data. This means they lock the bus, make their calls, finish, and unlock. The total bandwidth available is split between all parties, so if there are multiple active members (e.g. CPUs), their effective bandwidth is split N ways. The only solution to this is to have multiple shared busses, which is expensive.

      A point-to-point bus gives each member its own bus to memory. Thus, there is N×BW effective bandwidth available. As memory cells are independent, the memory system can feed multiple calls. You'll only run into issues if multiple CPUs are accessing the same memory, but models for handling that have been around for a long time. There might be slightly higher latency, but not by much.

      With multiple cores, you may get the benefit of shared caches which could remove a memory hit.

      Overall, I would assume a multi-core system would scale fairly similarly to a multi-processor system.
    • Re:Question. (Score:3, Interesting)

      by larien ( 5608 )
      Well, if you believe Sun's marketing, it's great for throughput. With the new Niagara chips (in the T1000/T2000 servers), each core has 4 compute threads. As thread 1 waits for RAM, thread 2 kicks in; repeat until we get back to thread 1, which now has its data from memory and gets a chance to do some work before passing on to thread 2, etc.

      However, these chips are designed for throughput of multiple threads; for a desktop running a single-threaded app, you will still have the same memory bottlenecks we have now.

      • Re:Question. (Score:4, Insightful)

        by NovaX ( 37364 ) on Thursday July 27, 2006 @04:51PM (#15794533)
        One thing to remember: Sun has a lot more expertise with memory busses than Intel does. The UltraSPARC chips have never been great performers, but they are wonderful at scaling in multiprocessor systems. Intel has never put too much effort into their bus system, because the economics favor cheaper solutions. Their shared-bus approach reduces costs for a mass market, but they even use it for ultra-high-end systems like Itanium. Those systems really need a better system bus, but simply use a tweaked version of the standard Xeon one. I believe Intel is targeting 2007 for the release of their new bus architecture.
  • well, (Score:4, Insightful)

    by joe 155 ( 937621 ) on Thursday July 27, 2006 @02:36PM (#15793156) Journal
    I don't want to insult the person, but saying that 8 is something that will not be needed seems very short-sighted. People were saying only a few years ago "1GB is too big for a hard drive"... Never underestimate the increasing need for power in computers, even for home users.
    • Re:well, (Score:5, Insightful)

      by man_of_mr_e ( 217855 ) on Thursday July 27, 2006 @02:42PM (#15793228)
      I think he was talking about the foreseeable future.

      1 core is really enough for most users. 2 cores are enough for most power users. 4 cores will be enough for all but the most demanding jobs. Workstations are different, however, and are not usually considered part of the "desktop". For example, I could see 3D artists using 4 or 8 cores easily. In fact, there's simply no such thing as a computer that's "too fast" for certain purposes.

      The issue, though, is one of moderation. Why would a desktop user want 8 cores drawing insane amounts of power when they're not even utilizing 4 to full advantage? Word processing, accounting, and surfing the web don't need any of this. Games? I can imagine in 10+ years we'll have some photo-realistic 3D games that run in real time, but the vast majority of the work will likely be handled by GPUs and won't need 8 cores to deal with it.

      I simply cannot fathom a purpose for 8 cores for any "desktop" application that isn't in the "workstation" class.
    • Technically we didn't "NEED" a 1GB HDD, assuming that DOS stayed the primary OS and you didn't do image, audio, or video files - which is what the people who said that back then implied. Any data that wasn't in use was stored on floppy or tape backup, and HDDs were nicely organized and kept clean because space was a limiting factor. I now have 360GB of HDD space (a 200 and a 160) and have had an average of 100GB free... I've gone down to 180 free, and could clean up a lot more if I burned the data to CD... I
    • The summary misrepresented what he actually said.
      Perlmutter: Will we see eight cores in the client in the next two years? If someone chooses to do that, engineering-wise that is possible. But I doubt this is something the market needs.
      In the next two years, we likely won't see eight cores, but he didn't claim anything past that. Reading comprehension is not a strong point of slashdot editors, it appears.
    • Re:well, (Score:2, Insightful)

      by Waffle Iron ( 339739 )

      People were saying only a few years ago "1GB is too big for a hard-drive"

      But this time the new hardware would be dependent on a major overhaul of the software industry. Any programmer can write code to fill up a 1GB hard drive, but effectively using 8 cores usually requires talented programmers who have mastered multithreaded programming. This is a small fraction of the software developer population, so apps that can take advantage of an 8-core CPU will probably be few and far between for a good long while.

      • Re:well, (Score:5, Insightful)

        by koreth ( 409849 ) on Thursday July 27, 2006 @03:32PM (#15793755)
        effectively using 8 cores usually requires talented programmers who have mastered multithreaded programming.

        But ineffectively using 8 cores can be done by any dumbass with a C# compiler or a book on the pthreads library. Which is why we actually will need 8 cores.
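
        For the record, the "ineffective" kind looks something like this sketch: eight threads that serialize on a single lock, so all those cores mostly sit around waiting (the names and counts are invented for the joke):

        #include <pthread.h>
        #include <stdio.h>

        #define N_THREADS 8
        #define N_INCR    1000000

        static long counter = 0;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *hammer(void *arg)
        {
            (void)arg;
            for (int i = 0; i < N_INCR; i++) {
                pthread_mutex_lock(&lock); /* every thread contends right here */
                counter++;
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t th[N_THREADS];
            for (int t = 0; t < N_THREADS; t++)
                pthread_create(&th[t], NULL, hammer, NULL);
            for (int t = 0; t < N_THREADS; t++)
                pthread_join(th[t], NULL);
            /* 8 threads, but thanks to the lock this can run slower than 1 */
            printf("counter = %ld\n", counter);
            return 0;
        }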

    • I wonder if that was a marketing-friendly way of saying "our architecture cannot scale to 8 cores, so desktops have no use for that"
    • which begs the question... "did Perl mutter any gem of wisdom?" Or is he just uttering bitterness toward AMD?

      But, I suppose he's lucky he's not a doctor with one of those names such as:

      bone
      cutter
      sharp
      burns
      butcher
      crusher ...

      But, hopefully, he'll come up with something TRULY prescient, maybe to rival Moore's Law... Any possibility?
  • Every app designer that cares about performance is going into data-parallel design right now, with many media apps able to use 8+ threads already. Run two of these apps in a workflow and we'll see people on the desktop enthusiastic to get ahold of 16+ core chips.
  • by Jerf ( 17166 ) on Thursday July 27, 2006 @02:38PM (#15793179) Journal
    Of course it's a bit of a chicken and egg problem right now, isn't it? If more software used multiple cores, then we'd have a greater need for more cores. Or you could start programming in Erlang and sort of automatically use those cores.

    On the other hand, to be fair, the scaling issues start getting odd. I'd expect that we're going to have to move from a multi-core to a multi-"computer" model, where each set of, say, 4 cores works the way it does now, but each set of 4 gets its own memory and any other relevant pieces. (You can still share the video and audio, though at least initially there will presumably be a privileged core set that gets set as the owner.)

    Still, as my post title says, this does strike me as rather a 640KB-style pronouncement. (The original quote may be apocryphal, but the sentiment it describes has always been with us.)
  • by ackthpt ( 218170 ) * on Thursday July 27, 2006 @02:39PM (#15793197) Homepage Journal

    If the home user can justify 4 cores (even indirectly, due to demands of the operating system or changes in software architecture), then 8 is eminently logical. Seems some minds at Intel are falling back to the dubious position they held regarding home users never needing 64-bit CPUs. Then again, maybe they're just playing dumb and are slaving away, burning midnight oil by the drum, to make 8- and 16-core processors.

    Three Cores for the Clippy, but I don't know why,
    Seven for the Vista kernel which is defect prone,
    Nine for Bloat which will make the cooling fry,
    One for the Screensaver to toil alone,
    In the Land of Redmond where Marketing lies.
    One Core to rule them all, One Core to find them,
    One Core to bring them all and in the darkness bind them
    In the Land of Redmond where Marketing lies.

    • In the Land of Redmond where Marketing lies.

      That's a catchy little rhyme, there, but I have some problems with it.

      1) Marketing lies all the time. That's a given. You don't have to tell us what we already knew.
      2) It's not restricted to Redmond as you seem to imply.
  • Translation (Score:2, Insightful)

    by growse ( 928427 )
    Intel saying "The market doesn't need 8 cores" = Intel saying "We can't really engineer 8 cores right now, we've hit some trouble". Of course the market would like 8 cores. Markets are greedy for new stuff; that's how you keep on making money. Intel's covering their ass for not putting 8 cores on their roadmap anytime soon.
    • Cores don't come cheap. Think about the power requirements for 8 cores, then think about return you're getting in terms of actual utilization of those cores. Remember, we're talking desktop, not server. Sure, there will be people that want the bragging rights, but that's about it.

      I suppose you could transcode a DVD in 5 minutes with that many cores... or a Blu-ray disk in an hour ;)
      • Think about the power requirements for 8 cores, then think about return you're getting in terms of actual utilization of those cores.

        Okay, I just thought about the power requirements for 8 cores, and then I thought about the fact that even the article summary included an indication that CPU power requirements are going to drop "dramatically." Obviously, I can't see into the future (or Intel's labs) to find out what "dramatically" means, but it's a clear indication that they're trying to keep

  • by MarkByers ( 770551 ) on Thursday July 27, 2006 @02:42PM (#15793225) Homepage Journal
    I think there is a world market for maybe five cores.
  • Classic mistake (Score:5, Insightful)

    by ajs ( 35943 ) <ajs.ajs@com> on Thursday July 27, 2006 @02:44PM (#15793244) Homepage Journal
    He's right. Current desktops don't need 8 cores. However, as four cores become widely available, desktops will begin to change. They will become more threaded, and more processing that would previously have been avoided will begin to happen passively. Constantly streaming video in multiple thumbnail-size icons on taskbars, stronger and more pervasive encryption on everything that enters or leaves the machine, smarter background filtering on multiple RSS sources, MUCH beefier JIT on virtual machines, on-the-fly JIT for dynamic languages, more complex client-side rendering of Web content (SVG, etc) - these will all start to become more practical for constant use. Other things that we haven't even thought of because they're impractical now will also spring up. By the time 8-core systems are available, the market will already be over-taxing 4-core systems.
    • Constantly streaming video in multiple thumbnail size icons on taskbars...

      Umm, I already have that in my OS X dock. What else have you got?

      ...stronger and more pervasive encryption on everything that enters or leaves the machine...

      Hmm, I'm not sure we need much stronger, but it doesn't take much processing power now. Between an encrypted home dir, VPN, and SSL/SSH, everything is already pretty much encrypted at least once.

      ...smarter background filtering on multiple RSS sources...

      Maybe a little,

  • by spyrochaete ( 707033 ) on Thursday July 27, 2006 @02:47PM (#15793275) Homepage Journal
    I recently read about a 1024-core chip for small devices like cell phones. Each core ran on a simplified instruction set and specialized in a certain task, like muting the microphone when incoming sounds are too quiet, smoothing text on the low-resolution screen, and other minute tasks. Individual cores could be placed in a low-power sleep mode until the software dictated a need for that instruction set.

    Is it possible to couple CISC and RISC cores on one die? Is this how the math coprocessors of the 386 era worked? This sounds like an ideal solution to me since nobody needs 4 or 8 cores to be fully powered and ready to pounce at all times.
    • by Kjella ( 173770 ) on Thursday July 27, 2006 @06:13PM (#15795093) Homepage
      Is it possible to couple CISC and RISC cores on one die? Is this how the math coprocessors of the 386 era worked?

      It's essentially how all modern processors are. I think the old coprocessors were the last that weren't on the same die (except the fake "coprocessors" that actually took over and completely ignored the old CPU, which were more like a CPU upgrade in drag). Modern processors have a CISC instruction set which gets translated into a ton of micro-ops (RISC) internally, and with parallel execution you in essence have multiple cores on one die - they're just not exposed to the user.

      The limitation, compared to a cell phone with its extremely fixed feature set, is finding workable dedicated circuits that are meaningful for a general-purpose computer. That's essentially what the SSE[1-4] instruction sets are, plus dedicated encryption chips (on a few VIA boards, plus the new TCPA chips), dedicated video decoding circuitry (mostly found on GPUs), and maybe a few more. But on the whole, we've not found very many tasks that are of that nature.

      In addition, there are many drawbacks. New formats keep popping up and your old circuitry becomes meaningless, or CPU technology speeds on and makes it redundant. The newest CPUs can only barely decode 1080p H.264/VC-1 content, but I expect that to be the hardest task any average desktop computer will face. What more is there a market for? I don't think too much.
  • by ausoleil ( 322752 ) on Thursday July 27, 2006 @02:48PM (#15793285) Homepage
    "Need" is subjective.

    Once upon a time, Bill Gates said we would never "need" more than 640K.

    Once upon a time, mainframes only had 32K of RAM -- and that was a vast amount more than their predecessors.

    The '286 came out and was primarily aimed at the server and workstation market. "No one will ever need all of that power."

    Thing is, people always "need" more speed, more RAM and more storage. And they'll pay for it too, so Intel may "need" to sell 8X cores.
  • Just because some desktop users don't need 8 cores doesn't mean that nobody does.

    Outside of my web browser and email client, 3 of the 5 applications I use on a daily basis for very intensive computing take full advantage of multi-processor threading, and all 3 of those would take full advantages of 8 cores (compared to the 2 I currently have and the 4 my next machine will have).
    • Ah, but will those applications truly take advantage of all those cores? Just because a program is multithreaded does not mean you can simply throw more cores at it and get ever-increasing performance. That would require a perfectly linear performance increase, which almost nothing achieves (a raytracer comes close on complex scenes).

      The only real programs that will take advantage of large numbers of cores will be scientific applications. Games? You offload the AI to one core, the sound to one (or the nice

  • Dual-core/processor systems are nice. It's the equivalent of "load balancing" for your PC. You don't even need software that takes advantage of dual processors to benefit, really. Just put half of your processes on one CPU and half on the other. Makes things nice and speedy.

    But once you try to write an individual program to split its *own* load between 2 CPUs, things get complicated fast. And the more CPUs you try to use, the harder it gets.

    The fact is, it's hard to make multi-threaded software
  • There have been a number of effects I tried to use on an image under GIMP. After a minute or so with nothing apparently happening, I would cancel.

    I eventually asked somewhere whether there was a known bug, or whether the effect was just plain computationally intensive, and thus inherently slow. The answer: the latter.

    I would think that image manipulation would often lend itself to divide and conquer, and hence could use as many CPUs as you'd care to throw at the task... and that also, it's a common enough task
  • Yeah, "8 cores is too much" like "8 megs of RAM is too much." Maybe for today's single-core apps and low-core OS stuff, but tomorrow? Tomorrow I want as many cores as you can put on there, why not? What famous last words... now AMD will come out with some cheap fab process, make a 512-core chip and kill Intel to death!
  • by kaoshin ( 110328 )
    "TG Daily: Would you go as far as saying that Core reverses the competitive landscape in the micro processor industry? Does AMD now have" you like totally by the balls?

    "Perlmutter: Yes."

  • Where was Intel's claim that the market doesn't "need" a 100MHz Pentium (when it really didn't)? Where was its claim that the market doesn't "need" 3+ GHz CPUs (when they were faster than AMD's)?

    Now suddenly we're supposedly not "needing" faster technology?

    'scuse me, please, but this is still a free market, no matter what the producers think. And in a free market I decide what I need! And should I decide to need it and someone makes it, I will buy it.

    If the chip isn't from Intel, so be it.
  • by CrazyJim1 ( 809850 ) on Thursday July 27, 2006 @03:03PM (#15793428) Journal
    The 6-pack is tried and true; why try to stuff an additional 2 Coors into it?
  • by AHumbleOpinion ( 546848 ) on Thursday July 27, 2006 @03:13PM (#15793540) Homepage
    What quite a few other posters are failing to understand is that he is referring to diminishing returns. Going from 1 to 2 cores gives you some fractional improvement, 2 to 4 gives a smaller fractional improvement, 4 to 8 an even smaller one, and so on. At some point the cost, size, heat, noise (for the cooling), etc. are not worth the fractional improvement. For most users that point will probably be dual or quad.

    For those extremely rare apps and jobs that are highly parallelizable, 8 and above will be useful. But this will be very rare, and this is why the comparisons to the infamous 640K quote are misguided. Increasing RAM is easy; software naturally consumes RAM with no additional work necessary - just do more of what you are already doing. Multiprocessing is something completely different: the code must be designed and written quite differently, and it is often very difficult to retrofit existing code for multiprocessing. On top of that, not all problems are parallelizable.
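
    Amdahl's law makes those diminishing returns concrete; a quick sketch, where the 90% parallel fraction is just an illustrative assumption:

    #include <stdio.h>

    /* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
       parallelizable fraction. With p = 0.90, 2 cores give ~1.8x, 4 give
       ~3.1x, 8 give only ~4.7x - each doubling buys less. */
    int main(void)
    {
        double p = 0.90; /* assumed parallel fraction */
        for (int n = 1; n <= 16; n *= 2)
            printf("%2d cores: %.2fx speedup\n", n, 1.0 / ((1.0 - p) + p / n));
        return 0;
    }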

    Strangely enough, I think one case where 8 cores could be useful in a home environment would be a bit retro: a multiuser/centralized system. One PC with the computational power for the entire family, dumb terminals for individual users, and connections to appliances for movies, music, etc. Such a machine might go in the basement, garage, closet, or some other location where noise is not an issue. Of course, I'm not sure such a centralized machine would be cost effective.
  • Two are enough for now, four will be mainstream in three years and eight is something the desktop market does not need.
    That... sounds familiar... where have I...

    No one will need more than 637 kb of memory for a personal computer. ~ Bill Gates
    Aha!

    (Yes, I know he didn't actually say that, but it's still a famous misquote.)
  • While an 8 core desktop is gonna be overkill for a lot of people, it still leaves us with a nasty problem.

    Peak CPU speed.

    For now we have topped out on this, meaning our existing software is either gonna have to get more efficient, or it's going to have to change, unless we want to just deal with the level of performance and features we currently have.

    (like that's gonna ever happen --how else would the closed corps sell upgrades then?)

    Additionally, some application areas do not have enough CPU power to fully realize their potential. MCAD is one of these, by way of example. Take the fastest CPUs we have today and they are still not fast enough to fully render a solid model without wasting the operator's time. Current software offerings are all working toward smarter data, creative ways to manage the number of in-memory and in-computation models, better kernel solves, etc...

    But it's just not enough for the larger projects to work in the way they could be working.

    Most of the MCAD stuff currently is built in a linear way. That's largely because of the parametric system used by almost all software producers today. With a few changes to how we do MCAD, I could see many cores becoming very important for larger datasets.

    Peak CPU and RAM are the two primary bottlenecks that constrain how engineering CAD software develops and what features it can evolve for its users. It's not the only example either.

    The bitch is that most of the software we have is more than adequate for most of the people. For those that lie outside the norm, dependence on this software (both for development and for plain use value) constrains their ability to make use of multi-core CPU capabilities...

    Messy.

    Will be interesting to see how this all goes. Will the established players evolve multi-core transitional software that can bridge the gap, or will new players arise, doing things differently to take advantage of the next tech wave?

    IMHO, there is a strong case for Intel doing the "if we build it, they will come" thing. For the higher-demand computing needs, there really isn't any other way to improve but through very aggressive code optimization.

  • by QuantumFTL ( 197300 ) * on Thursday July 27, 2006 @04:03PM (#15794060)
    The custom rendering software I work on at Maas Digital (used for things like the IMAX Mars Rover film) is very cache sensitive. I've been mulling this over recently because, in computer graphics, memory is almost always the bottleneck, and it's led me to conclude we really need some different languages, or at least language constructs.

    Pixar's PhotoRealistic RenderMan (perhaps one of the greatest pieces of software ever written, from an engineering point of view) is odd in that its shading language, while interpreted, is actually much faster at accomplishing its goals than other compliant renderers that compile down to the machine level. I believe this is because of memory bottlenecks, and despite the fact that computer graphics is an "embarrassingly parallel" problem, eight cores are likely to aggravate this much more than help.

    What I think is needed is a more functional-programming approach to a lot of these problems, where the mathematics of an operation is expressed more purely, leaving things like ordering and scheduling up to the compiler/runtime environment. Runtime-compiled languages like Java can sometimes outperform even the best hand-optimized C, because the runtime compiler can optimize for the cache size and the specific chip family.

    Also, this type of language would benefit multi-core processing because it would help expose the most possible parallelization opportunities, and let the compiler (perhaps even through trial and error) determine exactly when and how much parallel code to create.

    Currently all of my parallel supercomputing code uses Fortran and the Message Passing Interface, but it's clear that this approach leads to code that is often very hard to debug and is very programmer-intensive. Hopefully the future of programming languages will help ease us into general purpose computing on highly parallel architectures like Cell.
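
    For reference, the message-passing style being described looks roughly like this in C (a toy parallel sum over MPI; the standard MPI calls are real, the workload is made up):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each rank computes a partial result on its own slice of the range */
        double local = 0.0;
        for (int i = rank; i < 1000000; i += size)
            local += 1.0 / (i + 1.0);

        /* explicit communication step: combine the partial sums on rank 0 */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f\n", global);

        MPI_Finalize();
        return 0;
    }

    All the data movement is spelled out by hand, which is exactly the programmer-intensive bookkeeping being complained about above.
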
  • by iabervon ( 1971 ) on Thursday July 27, 2006 @04:18PM (#15794202) Homepage Journal
    Once you have 8 cores, it becomes advantageous to have memory which is faster for each group of 4. At 8, you're on the edge where the advantage exists, but isn't sufficient to justify the additional architectural complexity. For 16 and up, it's much better to have 4-processor nodes each with its own memory (and slower access to memory on other nodes). It's unlikely that improvements in chip technology will change this. It's also not something about desktop computers; existing large machines use 4-processor nodes.

    So he's right; before it makes sense to have more than 4 cores on a chip, you'll want multiple chips of 4 cores each with separate memory busses, and then system RAM on the processor chip (at which point the architecture is significantly different, because the system is asking the processor for memory values, rather than the opposite), and only then does it become efficient again to put more cores on the chip, as you can have a multiple-node chip.
  • by Jherek Carnelian ( 831679 ) on Thursday July 27, 2006 @07:57PM (#15795662)
    All this discussion over some BS from a marketingdroid? Are you really all such suckers?

    Let me translate from marketing-speak to plain English for y'all:

    droid: two are enough for now, four will be mainstream in three years and eight is something the desktop market does not need.
    translation: We have a two core product available now, we will have a 4 core product available in three years but we don't yet have a plan for an eight core product.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...