Intel - Market Doesn't Need Eight Cores

PeterK writes "TG Daily has posted an interesting interview with Intel's top mobility executive David Perlmutter. While he sideswipes AMD very carefully ('I am not underestimating the competition, but..'), he shares some details about the successor of Core, which goes by the name 'Nehalem.' Especially interesting are his remarks about power consumption, which he believes will 'dramatically' decrease in the coming years, and about the number of cores in processors: two are enough for now, four will be mainstream in three years, and eight is something the desktop market does not need." From the article: "Core scales and it will be scaling to the level we expect it to. That also applies to the upcoming generations - they all will come with the right scaling factors. But, of course, I would be lying if I said that it scales from here to eternity. In general, I believe that we will be able to do very well against what AMD will be able to do. I want everybody to go from a frequency world to a number-of-cores-world. But especially in the client space, we have to be very careful with overloading the market with a number of cores and see what is useful."
This discussion has been archived. No new comments can be posted.

  • by GundamFan ( 848341 ) on Thursday July 27, 2006 @02:35PM (#15793146)
    I don't doubt an "8 core" desktop will exist in the near future. Then again he has a point... we won't likely need it.
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Thursday July 27, 2006 @02:35PM (#15793147)
    If you put 8 core procs in desktop machines, software will be written that will take advantage of them. Which means you'll sell more 8 core procs.

    Are you going to lead or follow?
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Thursday July 27, 2006 @02:35PM (#15793153)
    Comment removed based on user account deletion
  • Question. (Score:4, Insightful)

    by Max_Abernethy ( 750192 ) on Thursday July 27, 2006 @02:35PM (#15793154) Homepage
    Does having multiple cores do anything about the memory bottleneck? Does this make the machine balance better or worse?
  • well, (Score:4, Insightful)

    by joe 155 ( 937621 ) on Thursday July 27, 2006 @02:36PM (#15793156) Journal
    I don't want to insult the person, but saying that 8 is something that will not be needed seems very short-sighted. People were saying only a few years ago "1GB is too big for a hard drive"... Never underestimate the increasing need for power in computers, even for home users.
  • by seanmb15 ( 942069 ) on Thursday July 27, 2006 @02:41PM (#15793212)
    Isn't this the same thing they said about 64bit chips?
  • Translation (Score:2, Insightful)

    by growse ( 928427 ) on Thursday July 27, 2006 @02:42PM (#15793223) Homepage
    Intel saying "The market doesn't need 8 cores" = Intel saying "We can't really engineer 8 cores right now, we've hit some trouble". Of course the market would like 8 cores. Markets are greedy for new stuff; that's how you keep on making money. Intel is covering its ass for not having 8 cores on its roadmap anytime soon.
  • Re:well, (Score:5, Insightful)

    by man_of_mr_e ( 217855 ) on Thursday July 27, 2006 @02:42PM (#15793228)
    I think he was talking about the foreseeable future.

    1 core is really enough for most users. 2 cores are enough for most power users. 4 cores will be enough for all but the most demanding jobs. Workstations are different, however, and are not usually considered part of the "desktop". For example, I could see 3D artists using 4 or 8 cores easily. In fact, there's simply no such thing as a computer that's "too fast" for certain purposes.

    The issue, though, is one of moderation. Why would a desktop user want 8 cores, drawing insane amounts of power, when they're not even utilizing 4 to full advantage? Word processing, accounting, and surfing the web don't need any of this. Games? I can imagine in 10+ years we'll have some photo-realistic 3D games that run in real time, but the vast majority of the work will likely be handled by GPUs and won't need 8 cores to deal with it.

    I simply cannot fathom a purpose for 8 cores for any "desktop" application that isn't in the "workstation" class.
  • Classic mistake (Score:5, Insightful)

    by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Thursday July 27, 2006 @02:44PM (#15793244) Homepage Journal
    He's right. Current desktops don't need 8 cores. However, as four cores become widely available, desktops will begin to change. They will become more threaded, and more processing that would have been avoided previously will begin to happen passively. Constantly streaming video in multiple thumbnail-sized icons on taskbars, stronger and more pervasive encryption on everything that enters or leaves the machine, smarter background filtering on multiple RSS sources, MUCH beefier JIT on virtual machines, on-the-fly JIT for dynamic languages, more complex client-side rendering of Web content (SVG, etc.): these will all start to become more practical for constant use. Other things that we haven't even thought of, because they're impractical now, will also spring up. By the time 8-core systems are available, the market will already be over-taxing 4-core systems.
  • by tomstdenis ( 446163 ) <tomstdenis AT gmail DOT com> on Thursday July 27, 2006 @02:44PM (#15793246) Homepage
    If you're basing that on some logical sense of "need", may I remind you the average consumer doesn't need a quarter of the computer they already have.

    Tom
  • by ausoleil ( 322752 ) on Thursday July 27, 2006 @02:48PM (#15793285) Homepage
    "Need" is subjective.

    Once upon a time, Bill Gates said we would never "need" more than 640K.

    Once upon a time, mainframes only had 32K of RAM -- and that was a vast amount more than their predecessors.

    The '286 came out and was primarily aimed at the server and workstation market. "No one will ever need all of that power."

    Thing is, people always "need" more speed, more RAM and more storage. And they'll pay for it too, so Intel may "need" to sell 8X cores.
  • by mrxak ( 727974 ) on Thursday July 27, 2006 @02:51PM (#15793313)
    At least 64-bit chips let us address a lot more RAM, and everybody knows that programs are gobbling up more and more RAM these days. Millions of cores aren't quite as useful, at least for the time being, for your typical home PC.
  • by Anonymous Coward on Thursday July 27, 2006 @02:54PM (#15793355)
    Previously they said there is no need for 64-bit on the desktop.
    Then AMD made a huge success with it and they had to backtrack.
  • Re:well, (Score:2, Insightful)

    by Waffle Iron ( 339739 ) on Thursday July 27, 2006 @02:59PM (#15793399)
    People were saying only a few years ago "1GB is too big for a hard-drive"

    But this time the new hardware would be dependent on a major overhaul of the software industry. Any programmer can write code to fill up a 1GB hard drive, but effectively using 8 cores usually requires talented programmers who have mastered multithreaded programming. This is a small fraction of the software developer population, so apps that can take advantage of an 8-core CPU will probably be few and far between for a good long while. (Not to mention, not every computing task can even be parallelized in the first place.)

    Changes in software architecture seem to have a huge amount of inertia. It took almost a decade to transition from 16-bit to 32-bit desktops even though it was *easier* to program a 32-bit app than a 16-bit one. Who knows how long it would take to get most apps taking advantage of large numbers of cores when the coding will be much harder than most developers are used to?
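The parent's point that exploiting multiple cores takes explicit programmer effort can be sketched in a few lines. A minimal Python illustration (hypothetical workload, not from the discussion): the programmer, not the runtime, must decide how to carve the problem into chunks.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # Worker: sum of squares over a half-open range [lo, hi)
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # The programmer must decide how to partition the problem;
    # the runtime will not discover the parallelism for us.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    # Threads keep the sketch simple; CPU-bound Python code would use
    # ProcessPoolExecutor instead, since the GIL serializes Python threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

assert parallel_sum_of_squares(1000) == sum(i * i for i in range(1000))
```

Even in this toy case, the decomposition (chunk boundaries, combining results) is hand-written, which is exactly the work that does not appear when you merely fill a bigger hard drive.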

  • by hackstraw ( 262471 ) * on Thursday July 27, 2006 @03:06PM (#15793466)
    I don't doubt an "8 core" desktop will exist in the near future. Then again he has a point... we won't likely need it.

    My crystal ball is not always crystal clear, but I believe that 8+ cores will exist and are needed in the near future, at least for desktop systems.

    Some history here. I'm an HPC admin, which translates into: I run Beowulf stuff where pretty much OTS computers are connected together to work as one big computer. I'm also a desktop computer user who is anal retentive about having real-time info regarding the status of my computer with respect to CPU utilization and whatnot.

    Now, in the many years of running desktop systems and anal-retentively monitoring them, I've noticed that CPU utilization is very often bursty. Meaning that it's common for the CPU to hover around zero, and spike when doing something like rendering a webpage, printing, compiling code, etc., etc. But most of the time (> 90%, or well more if you include when I sleep and stuff), the CPU is doing nothing.

    So, what is my point? Give me cores out the wazoo, and let them completely power down when not needed and crank up to all 8 or more when needed. This will greatly improve power requirements and improve performance at the same time. Evidence of similar approaches, in either nature or other technologies, is plentiful. 1) Hybrid gas/electric cars. They use both for higher performance when needed, then back off and oscillate between the two when it's optimal for efficiency. 2) Animal tissue like muscles and nerves. Muscles are pretty much idle most of the time, and only use a few fibers when doing a light contraction, but all of the available fibers become active when exerting maximum effort. Similar, but different, with nervous systems. 3) Human workloads. Certain industries are not really constant, and even the seemingly constant ones have bursts as well; think of seasonal things like retail, taxes, or seasonal vacation spots. These kinds of jobs bring in more human bodies to handle the peak loads, and let them go when the peaks are over. It's nuts that in many places in the US, seasonal vacation spots frequently employ people from halfway across the world!

    Now, is my 8+ core pipe dream going to happen tomorrow? No. But I believe this is where computing is going. Another thing that will have to change is that RAM should not be as random. In other words, memory, like CPU cores, should go dormant when not needed in order to conserve power as well, and of course there is the memory bandwidth issue as well.

  • Re:Translation (Score:4, Insightful)

    by ssista537 ( 991500 ) on Thursday July 27, 2006 @03:09PM (#15793497)
    Seems like people don't RTFA. Let me quote: "Will we see eight cores in the client in the next two years? If someone chooses to do that, engineering-wise that is possible. But I doubt this is something the market needs." He is talking about the next two years, not ever. We just now have an abundance of dual core machines in the market and the apps to take advantage of them. Tell me how much different software we had two years ago than today. So there is no way the desktop market needs 8 cores two years from now. Geez, we have so many fanbois and script kiddies here with absolutely no knowledge of the industry, it is sickening.
  • Re:Question. (Score:4, Insightful)

    by NovaX ( 37364 ) on Thursday July 27, 2006 @03:12PM (#15793522)
    Worse.

    For Intel, they are currently using a shared bus approach. It makes sense for a lot of reasons (mainly by being very cost effective), and they are developing a point-to-point bus for the near future. In such a system, each CPU uses the bus to retrieve data. This means that they lock the bus, make their calls, finish, and unlock. The total bandwidth available is split between all parties, so if there are multiple active members (e.g. CPUs) then their effective bandwidth is split N ways. The only solution to this is to have multiple shared busses, which is expensive.

    A point-to-point bus gives each member its own bus to memory. Thus, there is NxBW effective bandwidth available. As memory cells are independent, the memory system can feed multiple calls. You'll only run into issues if multiple CPUs are accessing the same memory, but models for handling that have been around for a long time. There might be a slightly higher latency, but not by much.

    With multiple cores, you may get the benefit of shared caches which could remove a memory hit.

    Overall, I would assume a multi-core system would scale fairly similarly to a multi-processor system.
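The N-way split the parent describes is simple arithmetic; a toy Python sketch makes it concrete (the 8.0 GB/s figure is illustrative, not a real bus spec):

```python
def shared_bus_bw(total_bw_gbs, active_cpus):
    # On a shared bus, every active CPU time-shares the single bus,
    # so effective per-CPU bandwidth is split N ways.
    return total_bw_gbs / active_cpus

def point_to_point_bw(link_bw_gbs, active_cpus):
    # With point-to-point links, each CPU has its own path to memory;
    # active_cpus does not reduce anyone's bandwidth (contention on the
    # same memory cells aside).
    return link_bw_gbs

assert shared_bus_bw(8.0, 4) == 2.0        # quartered under load
assert point_to_point_bw(8.0, 4) == 8.0    # unchanged per CPU
```

With 8 cores on one shared bus, each core would see an eighth of the bandwidth under full load, which is why the point-to-point design matters as core counts grow.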
  • by AHumbleOpinion ( 546848 ) on Thursday July 27, 2006 @03:13PM (#15793540) Homepage
    What quite a few other posters are failing to understand is that he is referring to diminishing returns. 1 to 2 gives you some fractional improvement, 2 to 4 gives you a smaller fractional improvement, 4 to 8 gives you an even smaller fractional improvement, etc. At some point the cost, size, heat, noise (for the cooling), etc. is not worth the fractional improvement. For most users that will probably be dual or quad.

    For those extremely rare apps and jobs that are highly parallelizable, 8 and above will be useful. However, this will be very rare, and this is why the comparisons to the infamous 640K quote are misguided. Increasing RAM is easy: software naturally consumes RAM with no additional work necessary, just by doing more of what it is already doing. Multiprocessing is something completely different; the code must be designed and written quite differently, and it is often very difficult to retrofit existing code for multiprocessing. And then there is the practical problem that not all problems are parallelizable.

    Strangely enough, I think one case where 8 cores could be useful in a home environment would be a bit retro. A multiuser/centralized system. One PC with the computational power for the entire family, dumb terminals for individual users, connections to appliances for movies, music, etc. Such a machine might go into the basement, garage, closet, or other location where noise is not an issue. Of course, I'm not sure such a centralized machine would be cost effective.
  • by indy_Muad'Dib ( 869913 ) on Thursday July 27, 2006 @03:21PM (#15793619) Homepage
    they also don't need their $200 shoes, or their multi-million dollar homes, or their $60k cars.

    their $500 clothes sets, their personal shoppers, their $100 haircuts.

    but hey, this is America, the land of capitalism.

    overindulgence is expected here.

    shame we are now the fattest, laziest, most uneducated country in the world, but at least you have that 50" plasma TV, right?

    that's all that matters: you have more "stuff" than somebody else.
  • by 'nother poster ( 700681 ) on Thursday July 27, 2006 @03:26PM (#15793686)
    Well, as we all know, "He who dies with the most toys... Is still dead."
  • by fmoliveira ( 979051 ) on Thursday July 27, 2006 @03:32PM (#15793754)

    You can distribute different pages for printing, different frames for HTML rendering, or divs or something. Your browser could be decompressing PNGs and JPEGs on other CPUs while one parses the HTML, too.

    Web browsing is still limited by the network anyway; increasing CPU power to browse the web doesn't make any sense to me. At least with my 400kbps DSL.

    But I at least would not want to increase CPU power for these trivial tasks. I would prefer that it happen when I do something heavier, like a game, or at least something that takes more than 1 sec.

  • Re:well, (Score:5, Insightful)

    by koreth ( 409849 ) on Thursday July 27, 2006 @03:32PM (#15793755)
    effectively using 8 cores usually requires talented programmers who have mastered multithreaded programming.

    But ineffectively using 8 cores can be done by any dumbass with a C# compiler or a book on the pthreads library. Which is why we actually will need 8 cores.

  • by Lonewolf666 ( 259450 ) on Thursday July 27, 2006 @03:38PM (#15793808)
    First, the hardest part is going from 1 to 2 cores. For that, you have to figure out the principle of how to split the workload. Going from 2 cores to n cores will usually be easier. And since dual cores are already becoming mainstream, professional programmers will be forced to take the step from 1 to 2 cores anyway.

    Second, the makers of multimedia applications already go ahead with multithreading, because it really works for that type of application. This will drive the market for more cores. In the long run, I expect the mainstream market to settle at the number of cores that works best for multimedia applications and games.
    I think this will be at least four cores (in that I agree with David Perlmutter) but it may be more, depending on the progress of computer science in parallelization of the above applications. Personally, I would not be surprised to see 16 core CPUs in mainstream computers someday.
  • by guaigean ( 867316 ) on Thursday July 27, 2006 @03:45PM (#15793877)
    The home consumer market isn't exactly the goal for technology like this initially, and the price won't be in line with home consumers anyhow. This is the kind of stuff used in High Performance Computing, where a single computing node can sustain a large amount of CPU performance with no transfer between nodes. 2GB is nothing in the HPC world, and 8 cores get filled up fast. While it may be easy to assume "I can't fill 1 CPU, what would I do with 8?", you have to remember that there are people out there running huge simulations, which could very easily use up many thousands of CPUs.

    Utility is in the eye of the user.
  • by mrchaotica ( 681592 ) * on Thursday July 27, 2006 @03:53PM (#15793960)
    x86-64 in particular

    Well, AMD did come up with a way to make their 64-bit CPUs immediately useful: they increased the number of registers at the same time (but could only make them available in 64-bit mode, to avoid breaking stuff when running in 32-bit mode). Aside from that, 64-bit isn't intrinsically useful unless you want a virtual memory address space bigger than 4 gigabytes (which, at the moment, tends not to be true for casually-used PCs).

    In any case, i'm really hoping that these multi core consoles translate to more experience in multithreading programming moving to the PC side of things, whether it's games or something else.

    I'm betting on just that -- I'm a CS undergrad, and I took a parallel programming course specifically for that reason.

  • by 'nother poster ( 700681 ) on Thursday July 27, 2006 @03:58PM (#15794006)
    Actually, I could benefit from 4 or 8 cores right now. On my desktop at work I currently have 4 browsers, 5 X windows, a mail client, and pcAnywhere. For large portions of the work day I am at 100% CPU utilization. I could use those cores.

    In the future if they have some decent logic to handle context switching and thread migration then shifting to lots of small parallel operations could make computing even better as far as I'm concerned. Parallel operations on multiple cores could really benefit some types of desktop apps. Others simply wouldn't benefit because of the simple logical linear progression required by their nature.
  • by iabervon ( 1971 ) on Thursday July 27, 2006 @04:18PM (#15794202) Homepage Journal
    Once you have 8 cores, it becomes advantageous to have memory which is faster for each group of 4. At 8, you're on the edge where the advantage exists, but isn't sufficient to justify the additional architectural complexity. For 16 and up, it's much better to have 4-processor nodes each with its own memory (and slower access to memory on other nodes). It's unlikely that improvements in chip technology will change this. It's also not something about desktop computers; existing large machines use 4-processor nodes.

    So he's right; before it makes sense to have more than 4 cores on a chip, you'll want multiple chips of 4 cores each with separate memory busses, and then system RAM on the processor chip (at which point the architecture is significantly different, because the system is asking the processor for memory values, rather than the opposite), and only then does it become efficient again to put more cores on the chip, as you can have a multiple-node chip.
  • by samkass ( 174571 ) on Thursday July 27, 2006 @04:18PM (#15794204) Homepage Journal
    Isn't this the same thing they said about 64bit chips?

    Good point... yes, Intel said this about 64-bit chips, and they were right. Almost nobody needs 64-bit chips. But now virtually all chips are 64-bit, wasting a lot of die real estate and engineering effort because of perceived benefits driven more by AMD's marketing than by reality. It's quite possible 8 cores could end up in the same boat -- AMD pushing it for no valid technological reason and Intel being forced to follow suit.
  • Re:Question. (Score:4, Insightful)

    by NovaX ( 37364 ) on Thursday July 27, 2006 @04:51PM (#15794533)
    One thing to remember: Sun has a lot more expertise in memory busses than Intel does. The UltraSparc chips have never been great performers, but are wonderful at scaling in multiprocessor systems. Intel has never put too much effort into their bus system, because the economics favor cheaper solutions. Their shared bus approach reduces costs for a mass market, but they even use it for ultra high-end systems like Itanium. Those systems really need a better system bus, but simply used a tweaked version of the standard Xeon one. I believe Intel is targeting 2007 for the release of their new bus architecture.
  • by LordKronos ( 470910 ) on Thursday July 27, 2006 @04:54PM (#15794553)
    There's really no need for 8 cores until my brain is able to take multitasking to the next level

    Video processing
    photo processing
    Multitrack digital audio recording with multiple real time DSP effects

    And that's just what I thought about in 10 seconds. Not to mention what video games could do with all that processing power.

  • Re:The point is... (Score:4, Insightful)

    by timeOday ( 582209 ) on Thursday July 27, 2006 @04:59PM (#15794608)
    Those running huge simulations and using far more than 2GB of RAM are not doing so on a desktop.
    That's obviously because a desktop can't do the job. I run cluster jobs, and I assure you I'd prefer to run them on my laptop, if only I could put 100 cores in there.
  • by bomanbot ( 980297 ) on Thursday July 27, 2006 @05:14PM (#15794713)
    Well, I RTFA and for me it looks like he is taking about the near future (emphasis mine):

    But especially in the client space, we have to be very careful with overloading the market with a number of cores and see what is useful. I believe '2' is a good number. '4' will be an interesting number for the high-end. Will we see eight cores in the client in the next two years? If someone chooses to do that, engineering-wise that is possible. But I doubt this is something the market needs.

    and

    I think that it will be two or three years until you are going to see four cores entering the mainstream.

    So according to him, for the next few years anything more than four cores will not be mainstream. Sounds pretty reasonable to me.
  • by IamTheRealMike ( 537420 ) on Thursday July 27, 2006 @05:58PM (#15795015)

    Almost no desktop programs actually use 4 gigabytes of RAM. Not even allowing for rapid expansion will we reach that bottleneck anytime soon.

    The Intel guys were right. What are the uses of 64 bit systems? They are removing a bottleneck that very few were hitting. The AMD64 instruction set fixes (more registers etc) are nice but not worth the hassle of losing binary compatibility. Result? Hardly anybody uses a pure64 system. Only enthusiasts.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday July 27, 2006 @06:05PM (#15795052) Homepage Journal
    do you think 8 cores sharing a memory pool are really going to help you out

    If those 8 cores are from AMD, then they'll be utilizing a NUMA architecture, and provided the OS "does the right thing" then no, you won't be waiting for memory, at least not any more than you are now, and probably less so.

    If those 8 cores are from intel, they'd better have improved their bullshit bus, or no, it won't help.

  • by JamesTRexx ( 675890 ) on Thursday July 27, 2006 @06:27PM (#15795180) Journal
    Although I also have plenty of programs running, I could really use them for all the virtual machines I have running. Having each one run on its own cpu would speed things up considerably.
  • by Squalish ( 542159 ) <Squalish AT hotmail DOT com> on Thursday July 27, 2006 @07:09PM (#15795412) Journal
    High-end game engines are already at a point where 3-4 GB can be a major improvement over 2 GB. Battlefield 2 was one of the first that showed a difference, and Oblivion's outdoor scenes are really begging for at least 2 GB.

    As with every other technological yardstick of computers, entertainment is driving the platform, not "desktop programs," for which the technology of a decade ago was adequate.
  • by WuphonsReach ( 684551 ) on Thursday July 27, 2006 @07:49PM (#15795624)
    Not necessarily. A dual-core system is more expensive, per-core-GHz, than a single-core system. That is, $300 might buy you a 2.0GHz dual-core CPU or a 3.0GHz single-core CPU (apples-and-apples GHz here, so AMD and not Intel).

    $154 - AMD Athlon 64 X2 3800+ Dual Core 2GHz
    $86 - AMD Athlon 64 3200+ 2GHz

    Looks pretty close to a wash in my book.

    $327 AMD Opteron 165 Dual Core 1.8GHz
    $170 AMD Opteron 144 - Box 1.8GHz

    Not much difference here on $ per-core-GHz either.

    Your statement might have been true last week, prior to the AMD price cuts. But things are a lot nicer now (and the low-end dual-cores are almost an automatic choice). $68 for the 2nd core makes a lot of sense, even for a low-end CPU, because it will add a few years of usability onto the lifespan of the machine. Or at least the machine will feel snappier for a few years longer than the single-core.

    And the primary reason that AMD 64bit CPUs get so much goodwill? Unlike the Itanic, AMD came up with a 64bit design that provides for the future while still providing excellent performance for 32bit applications. So why not buy a 64bit chip even if you're still running 32bit? There's no performance hit and if the landscape changes and we all need to move to 64bit, you're already there.

    Pretty much a no-risk decision as a result. You're not betting on 32bit or 64bit, you're simply prepared for either.

  • by Aadain2001 ( 684036 ) on Thursday July 27, 2006 @08:34PM (#15795788) Journal
    I'll give you that the data sets programs are using today are getting gigantic, which can easily lead to constant memory block swapping between main memory and the caches. But when it comes to instruction caches, you obviously haven't heard of the 90/10 locality rule of thumb: a program executes about 90% of its instructions in 10% of its code. That's because of branches, loops, the fact that there are large sections of code that run only once, during initialization, and never run again, etc. So while the Java runtime engine is larger than the L2 cache in all but the most expensive workstation processors, the majority of the instructions that are executed are only a small subset of the actual code, which can fit easily in typical L2 caches.

    If you look at Intel's Core 2 Duo, the cache space is not "divided" as the number of cores increases. Each core, if running at full load, will have 2MB of cache (Extreme Edition, anyway). That is a very respectable cache size for what would be a respectable single-core processor. When one core is not running (like when running only Word), it sleeps while the other core is given all of the cache.

    Past marketing ploys (GHz) were definitely wrong, and trying to directly replace those metrics with the number of cores is also a bad choice. But don't you see that that is exactly what Intel is trying to prevent? The interviewee in the article is saying that more cores != more performance. Hence, desktop users will have no need for 8 cores or more. Most of the posts on this topic are along the lines of "ya right, more cores FTW!", which is a very uninformed mentality.

  • by GaryOlson ( 737642 ) <.gro.nosloyrag. .ta. .todhsals.> on Thursday July 27, 2006 @10:35PM (#15796218) Journal
    ...only enthusiasts ?!

    Obviously, you have never tried to simulate or graph propagation of an organic virus with a 4 million node set using Matlab x64 on a desktop system.

    We would be pleased to take your enthusiast money and 128 of your gaming buddies' money and build a Linux computational cluster to solve a problem that will likely save your life or the life of someone you know.

  • by Mateorabi ( 108522 ) on Thursday July 27, 2006 @10:37PM (#15796225) Homepage
    The problem with the async message passing method is that you have to explicitly send/copy your data to the next process, which causes the bandwidth to skyrocket. (Better to pass by reference, with some mechanism to ensure the producer can't make more modifications after the consumer process gets the pointer.)

    Actually, what you described is a very specific instance of dataflow programming, where the flow can best be described by a directed "dataflow" graph. Technically macro-dataflow, since you pass data between processes; true dataflow reduces the granularity all the way down to individual instructions passing each other operands.

    The reason "applications naturally parallelize" is because the language forces the programmer to be explicit about the parallelism, something that doesn't come naturally to your freshman CS101 coder. Imperative languages like C, Fortran, Java, etc. that students are taught first are geared towards von Neumann machines and are incredibly hard for the compiler to parallelize.

    Interestingly, functional languages like you mentioned (also try 'Id') map quite well to dataflow. This is directly due to their lack of side effects (i.e. manipulating structures in memory, which must be inherently sequential in order for the programmer to reason well about program correctness).

    Dataflow had a lot more following in the 80s and early 90s. One problem was actually an explosion of too much parallelism exposed in the application, more than the functional units could handle. The overflow then had to be shuttled back and forth to memory, making the apps bandwidth limited. Look at the MIT TTDA, Monsoon, *T, TERA, TAM, WaveScalar, and other projects. The ability to put many functional units (cores) and sufficient memory to keep them fed on a single die recently (last 5 years) reduces this limit and may allow the field to have a bit of a resurgence.
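The macro-dataflow style discussed above can be sketched as a pipeline of stages connected by queues. A minimal Python illustration (hypothetical stage functions; threads and in-process queues stand in for separate processes to keep the sketch runnable, but the hand-off pattern is the same):

```python
import queue
import threading

def producer(out_q, items):
    # Each stage hands data to the next explicitly through a queue --
    # the per-message send/copy cost the comment above describes.
    for x in items:
        out_q.put(x)
    out_q.put(None)  # sentinel: end of stream

def stage(in_q, out_q, fn):
    # A dataflow node: fire fn on each token as it arrives.
    while True:
        x = in_q.get()
        if x is None:
            out_q.put(None)  # propagate end-of-stream downstream
            break
        out_q.put(fn(x))

def run_pipeline(items):
    q1, q2 = queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=producer, args=(q1, items)),
        threading.Thread(target=stage, args=(q1, q2, lambda x: x * x)),
    ]
    for t in threads:
        t.start()
    out = []
    while (x := q2.get()) is not None:
        out.append(x)
    for t in threads:
        t.join()
    return out

assert run_pipeline([1, 2, 3]) == [1, 4, 9]
```

Each edge in the dataflow graph becomes a queue, which makes the parallelism explicit in exactly the way imperative, shared-memory code does not.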
