Intel's "Terascale" Vision

Vigile writes, "Intel is pushing the envelope with its latest vision — 80 cores on a single processor. Dubbed 'Terascale' computing, Intel aims to bring low-powered, massively interconnected cores and unleash a new era in data-mining, media creation, and entertainment." For balance, read Tom Yager over at InfoWorld imploring AMD to stop at 8 cores while everybody gets the architecture right.
This discussion has been archived. No new comments can be posted.

  • Good (Score:4, Funny)

    by betterunixthanunix ( 980855 ) on Wednesday September 27, 2006 @04:18PM (#16219967)
    Now I can run 80 instances of Doom at the same time. Nothing quite like heavy multitasking.
    • Re: (Score:3, Insightful)

      by networkBoy ( 774728 )
      Heh,
      Not really.
      This chip (as designed) would be one CISC CPU core and 80 Mini cores (kinda like Cell?)
      Anyway, where this will be awesome is in rendering &&|| cryptography, where the memory bandwidth requirements are not as high as the CPU compute requirements.

      I personally hope these come out in a 4xPCIe expansion card:)
      -nB
      • This is gonna be interesting. Can you imagine Intel touting Cell's no-out-of-order, no-cache architecture? They'd have to disavow the last twenty years' marketing efforts. The mind reels...
        • Wouldn't surprise me. Intel does that. Consumers have very short memories when it comes to marketing.
        • The average buyer will not understand "out of order architecture" anyway. The MHz race was different, because even non-techies could see how the computers got faster with increasing clock speed.
          But now?
          Maybe it will be "number of cores". Otherwise Intel and AMD will have to use meaningless slogans like "Intel inside" to suggest a sense of security when using their particular brand.
          I expect a mixture of touting lots of cores and almost-fraudulent crap like "the Pentium III will make your internet faster".
    • by rts008 ( 812749 )
      Heh, add Dual Quad SLI, then maybe I can get 600fps in Tux^H^H^H PPracer!
    • oh oh oh, so THAT is what they are waiting for before releasing Duke Nukem Forever? Eye candy that requires a minimum of 80 cores MUST be good!
    • Didn't we all argue about this already yesterday.
      • by dc29A ( 636871 )
        Didn't we all argue about this already yesterday.

        Thank $Deity$ for dupes. I forgot to cancel my pre-order for a 10 GHz P4 CPU. But thanks to this dupe, I did cancel it and instead placed a pre-order on an 80 core CPU. YAY!
    • Please. 2 cores, 4 cores, 8 cores, it will never take off until someone invents multithreaded porn.
  • by Compaq_Hater ( 911468 ) on Wednesday September 27, 2006 @04:18PM (#16219971)
    We are on our way to LCARS computers, I can feel it.
    CH
  • by nycsubway ( 79012 ) on Wednesday September 27, 2006 @04:19PM (#16219991) Homepage
    This processor must already be submitting stories... If it is there should be 78 more dupes just like it.

    I like the idea of an 80 core processor. Multithreaded applications will work better. Why are people afraid of multiprocessors? Systems with dozens of processors are not uncommon. I don't see why it would be bad for the desktop.
    • by Goblez ( 928516 )
      How about due to the lack of code that takes advantage of the multiple processors? If you mainly use one heavy application that doesn't take advantage of more than one or two cores, then those other 78 are going to be bored (and not submitting their dupes to ./)!
      • Re:80 Submissions (Score:5, Insightful)

        by nycsubway ( 79012 ) on Wednesday September 27, 2006 @04:34PM (#16220187) Homepage
        That is true. A lot of applications do not heavily use multithreading. But, in the scientific community a lot of applications require it. Where I work, we process several GB of MRI data a day. We are able to parallelize the overall processing, so the more processors, the better. However, I wish Matlab would become multithreaded! Our servers have 4 processors and if matlab used them all, we could process 1 dataset in 1/4 the time, instead of processing 4 datasets at once to utilize the CPUs. Processing one dataset at a time would reduce disk I/O.
        • I could've sworn I saw something about a version of Matlab compiled with MPICH a while back, but I'll admit it could've just been a way to run multiple instances of it at once that I saw. If your situation does not require a whole lot of fiddling, you could put a lot of the code into a compiled .mex file and make that parallel.

          Ironically, the nature of MATrix LABoratory's design goals is particularly suitable for multi-processor implementations. The language is expressly designed to allow/coax users into
        • Re: (Score:3, Interesting)

          by kabocox ( 199019 )
          That is true. A lot of applications do not heavily use multithreading. But, in the scientific community a lot of applications require it.

          If anything, this will be the one great thing to come out of 8+ core desktop systems. I honestly don't think "most" apps even pretend to use more than 1 core very well. Once 8+ cores are on your bare-bones Dell home PC then I'd expect to see everything under the sun start to be multithreaded. With the expectation of 32+ or 64+ cores in a decade's time, then I could see a lot
          • Right now, everyone is thinking of hey you can't break every problem into parallel tasks. Well, what if we make an algorithm breakthrough that says we can?

            Good luck, I believe that particular problem (generalization, when some parallelism can be extracted within tasks) has been shown to be NP-hard. Even if you come up with a golly-gosh-whizzbang algorithm or pattern that takes maximum advantage of a multi-cpu system, you're still going to have a hard time forcing problems to fit it. My guess is that even

        • However, I wish Matlab would become multithreaded! Our servers have 4 processors and if matlab used them all, we could process 1 dataset in 1/4 the time, instead of processing 4 datasets at once to utilize the CPUs. Processing one dataset at a time would reduce disk I/O.

          It sounds like you are using a fairly high-end computer system. I have two suggestions for upgrades that would help reduce disk I/O times and increase the speed.

          1) RAM Drive
          2) Solid State Hard Drives

          Depending on your system and
        • Matlab's good for testing and small calculations, but if you want to do some serious computing, you should consider learning C++ and writing the code yourself. Then, you can create as many threads as you want.
      • Re:80 Submissions (Score:4, Interesting)

        by m0nstr42 ( 914269 ) on Wednesday September 27, 2006 @05:26PM (#16221003) Homepage Journal
        How about due to the lack of code that takes advantage of the multiple processors? If you mainly use one heavy application that doesn't take advantage of more than one or two cores, then those other 78 are going to be bored (and not submitting their dupes to ./)!
        I'm curious - supposing that the software existed to take advantage of it, would it be possible to design an operating system that used a vast number of cores in a radically different (and advantageous) way than we use one or two (or a few more) today? i.e. the kernel spawns several sub-kernels on different processors or clusters of processors, with each one set up to handle a very specific task. Is there really any advantage? In nature, large scale systems of simple agents tend to be able to accomplish complex tasks more efficiently than single agents or small groups.
        • Re: (Score:2, Informative)

          by archen ( 447353 )
          I think that was part of what the article was implying. Assuming you had a perfectly optimized kernel and a zillion cores, performance still isn't going to scale all that well. There are just too many bottlenecks all over the way the general purpose PC is designed today. And let's not forget how far behind hard drive tech is dragging compared to the rest of the system. It's funny because everyone acts like this is so new despite the fact that high end stuff like supercomputers have been dealing with the
          • Supercomputers are devices for turning Compute bound problems into I/O bound problems?
            If every desktop is bound by I/O, then that's an interesting place to be.
        • I wonder if something like GNU's Hurd [gnu.org] where "The Hurd is a collection of servers that run on the Mach microkernel to implement file systems, network protocols, file access control, and other features that are implemented by the Unix kernel or similar kernels (such as Linux)." might benefit from a massively multi-core environment?

          Granted it still needs a lot of work, but it might provide a faster system than the monolithic kernels of Linux and Windows.
        • Another question along these lines is whether it is even remotely cost effective to build general-purpose OS architectures that can handle your proposition, and how much overhead is required for your OS/meta-OS to supervise the whole affair. I've got a buddy who does some of this kind of work at a big research lab, but they are not writing a general OS kernel; they've been writing software to model a teeny sliver of quantum-scale physical phenomena, and they've been working on it for 10-15 years.

          My hunch i
    • Re: (Score:3, Insightful)

      by NerveGas ( 168686 )
      It's not the number of processors or cores that they're afraid of, it's the fact that, with the exception of a very few cases, your performance does not scale linearly with the number of CPUs; it is less than 1:1. To make it worse, as the number of CPUs rises, the cost to intelligently and quickly deliver sufficient bits to and from all of the CPUs gets exponentially higher.

      Recently, some of our managers wanted to see what it would cost to purchase a system that would significantly outperform our 8-way Opteron
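The sub-linear scaling described above is roughly what Amdahl's law predicts. A quick sketch (the 5% serial fraction is an assumed figure for illustration, not from the comment):

```python
def amdahl_speedup(n_cores, serial_fraction):
    # best-case speedup when serial_fraction of the work cannot be parallelized
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for n in (2, 8, 80):
    print(f"{n:3d} cores -> {amdahl_speedup(n, 0.05):.2f}x")
```

Even with only 5% of the work serial, 80 cores deliver about a 16x speedup, nowhere near 80x, and the curve flattens hard well before that.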
      • Throwing hardware at a badly performing application is usually the wrong way to go about getting better performance. The performance gains you're going to get are usually marginal unless there's some gross configuration mistake. The biggest performance gains are made by making changes to the application. Count that as another reason to investigate free software.

         
    • I like the idea of an 80 core processor. Multithreaded applications will work better

      Multithreading models from the Windows/Unix/Linux community all assume equal access to system resources such as memory across all threads. They like Uniform Memory Architecture models.

      An 80 core system can't really provide a uniform memory access model, as it runs into severe switching and coherency problems. (You want to snoop HOW MANY L1 caches?!??). Fancy interconnects like hyperchannel and Monte Carlo stochastic scheme
  • by Aqua_boy17 ( 962670 ) on Wednesday September 27, 2006 @04:19PM (#16219995)
    Anyone else first read that as "Intel's Testicle Vision"?

    Man, it's been a long day.
  • by chroot_james ( 833654 ) on Wednesday September 27, 2006 @04:24PM (#16220053) Homepage
    When you can have 80 underfed chickens?
    • Re: (Score:2, Funny)

      Hell yeah, now Micro$oft can write Windows BBQ edition that will fix those OX and Chickens.
      CH
    • Actually, I am guessing that someone at Intel has been taking the "imagine a beowulf of those" jokes on slashdot too seriously, and decided to put it on one chip.

      Though it *seems* that if the OS was written specifically for 80 cores (ie: 64 bit, one bit per core or something) then *if* they synced up nicely, you could do some cool stuff with games at the least. My guess is that getting the OS to work with 80 cores in near real time is going to produce some serious overhead, however. For what it is worth, $10 s
      • by mikael ( 484 )
        If it is possible to get Linux or Windows to run on a two-core system, it shouldn't be too much trouble to run on a 64 or 80 core system. Although, I'd be worried about getting 80+ shell terminal messages:

        CPU0: Temperature above threshold
        CPU0: Running in modulated clock mode ...
        CPU80: Temperature above threshold
        CPU80: Running in modulated clock mode

        • Let me rephrase that then: running *well*. Not just booting, but actually efficiently using the 80 cpus.
      • by fotbr ( 855184 )
        So...imagine a beowulf cluster of THOSE.

        Actually, as far as uses for it go, I think games will be pretty low on the list. Scientific computing will probably be far and away the number one use for it -- IBM will probably come up with a new supercomputer they can sell based on racks of those instead of racks of opterons, etc.

        I'd also guess that NSA type agencies around the world would like them as well - X years of CPU time doesn't seem like much when you've got 80 of them in one box, and you've got acres of
        • But if Intel wants to sell any reasonable QUANTITY of them, which is necessary to get the economies of scale working for them, they need to appeal to a broader market. Porn and games are about as broad as it gets; I'm just not sure why you need 80 cores to watch porn.
    • Re: (Score:2, Insightful)

      Because the chickens actually work better now and most oxen are now really just three chickens yoked together--hell, most chickens are three chickens yoked together!
    • by Yokaze ( 70883 )
      Because you can't breed oxen from a pair of chickens and you can only put maybe a single ox in a hen-coop for 80 chickens? Figuratively speaking.

    • by RingDev ( 879105 )
      But in this case, those 80 underfed chickens will have lasers strapped to their heads.

      -Rick
    • I guess you were going for funny here, but:
      Why have 8 Oxen doing speculative operations (and failing most of the time) when you could have 80 chickens doing only what you needed.
      i.e. if you accept memory bandwidth is your limit, then only do the instructions you have to. 200 processes across 80 cores doing only the things they have to vs. a few cores doing branch prediction and guessing like hell; the chickens sound like a better deal to me.
  • by Hahnsoo ( 976162 ) on Wednesday September 27, 2006 @04:29PM (#16220109)
    Gillette is releasing a new shaver called the "Plutonium Mach80", a razor with 80 blades. Each blade has a separate distinct function, and you can get even closer shaves with the synergistic Cuisinart action. Also comes in a "For Women" model for "sensitive areas". "Basically, 5 blades isn't enough. I mean, really, more is better, right?", says Gillette CEO James Kilts. Schick is reportedly working on a competitor blade that may exceed the legendary "100 blade barrier".
  • 80 cores... (Score:4, Funny)

    by windowpain ( 211052 ) on Wednesday September 27, 2006 @04:31PM (#16220151) Journal
    And slashdotters will still be overclocking the sumbitch.

  •     Didn't Sun try that sort of idea with the UltraSparc T1? If I recall correctly, while the concept of lots of light cores was cool, the real-world performance didn't do any better than Intel- or AMD-based systems.

    steve
    • If my conversations with "average" purchasers of hardware are any indication, real-world performance is not a closely examined metric. How many of us know somebody that spent a metric butt ton of cash on a Duo laptop for browsing the web and word processing? I wish I had a block diagram (like the old old von Neumann architecture picture from early comp sci text books) that shows how insanely out of whack the interconnecting lines have gotten between modern computer sub components and especially including the
  • by CaptKilljoy ( 687808 ) on Wednesday September 27, 2006 @04:35PM (#16220207)
    If they succeed, does this mean the tera-rists have won?
  • by foxtrot ( 14140 ) on Wednesday September 27, 2006 @04:35PM (#16220211)
    A lab prototype like this can help them with something important: Given multi-core processors look to be the way future computers will be built, how do you feed them data? The current paradigm won't scale past 4 cores on a single chip's worth of FSB, and there are folks who don't think that even 4's going to be a useful increase over 2.

    Even if Intel never sells a chip bigger than 16 or 32 ways, an 80 core lab mule will teach them many things about how to get information to a processor and keep those caches full of appropriate data.

    -F
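"Keeping those caches full of appropriate data" is largely a question of access order. A toy sketch of blocked (tiled) traversal, the standard trick for letting each chunk of data stay cache-resident before moving on (the block size here is arbitrary; the result is identical to a plain sum):

```python
def sum_blocked(matrix, block=4):
    # visit the matrix tile by tile: each block x block tile is touched
    # completely before moving to the next, improving cache locality
    n = len(matrix)
    total = 0
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            for i in range(bi, min(bi + block, n)):
                for j in range(bj, min(bj + block, len(matrix[i]))):
                    total += matrix[i][j]
    return total

m = [[1] * 8 for _ in range(8)]
print(sum_blocked(m))
```

In Python the payoff is invisible, but the same loop structure in C over a large array is what cache-aware compilers and hand-tuned kernels actually emit.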
  • by ncc05 ( 913126 ) on Wednesday September 27, 2006 @04:39PM (#16220283)
    [A] teraflop is approximately 1000 Megaflops.
    Is there such a thing as a gigaflops? What happened to that?
  • by dtfinch ( 661405 ) * on Wednesday September 27, 2006 @04:41PM (#16220319) Journal
    Practicality and usefulness problems aside, you can fit over 6,000 6502 processors in the space of a P4, each running at several ghz.
  • The first 80-core chip will actually look like a conventional kitchen hotplate. You add a pot of cold water on top of the chip, then with a dial on the unit you determine how much heat you want to produce. The CPU will automatically run the correct number of instances of Seti@Home to generate the desired level of heat.

    The 4 X 80 "stove top" model will come out later that year. It will include an "oven" that has its own chip and convectional cooling.
    • by rco3 ( 198978 )
      That's a funny comment, true. But it's not the worst idea in the world. People who use electric ranges spend hundreds of watts for nothing but heat - why not get some computation done in the process? SETI@Home, hack some Diebold machines, protein folding - whatever!
  • You know the arguments in the yard at lunch:

    AMD: We now have two cores, so there!
    Intel: Oh yeah, well we now have four cores- losers!
    AMD: Oh yeah, well we're coming out with eight cores next. Ha beat that!
    Intel: We can and will! We're going to come out with, with EIGHTY cores! Yeah that's right, eighty cores!

    Disclaimer: I've not kept up on the Core War, so any inaccuracies are for dramatic effect...

    • Isn't there something called an Arm processor? Core vs. Arm - that brought back a lot of fond gaming memories.
  • The 80 cores are all simple floating point cores. A lot like the IBM/Sony Cell.
    It is of interest for, say, supercomputers and video cards. It isn't the prototype of the Octodec80Core that will be in the new 72" iMac.
    Yea it is a dupe alright.

    What I think a lot of people are missing is that it almost looks like Intel is going to repeat the mistakes with Netburst all over again.
    Now instead of a clock speed race Intel is starting a core race.
    Intel is sticking more and more cores onto its current FSB. This is go
    • The 80 cores are all simple floating point cores. A lot like the IBM/Sony Cell.

      Nope. That's what they made for the proof-of-concept demo, but ultimately they'll use something else. From TFA:

      These cores will be low power and probably based on a past-generation Intel architecture that has been refined and perfected.

      I'm betting Pentium Pro. Good, solid architecture, decent performance, still runs a lot of software. Wasn't the P-II basically just the PPro with MMX (and a half-speed cache...)? Hmmm, now tha

      • by LWATCDR ( 28044 )
        Actually yes.
        From the CNet story "Intel's prototype uses 80 floating-point cores, each running at 3.16GHz, "
        So these are just floating point cores.
        And from the TFA you cited:
        "the cores in a terascale processor will be much simpler (kind of like we are seeing in the Cell processor design)."
        In fact it was the line above the one you cited.
        Just like TFA said. A Super Cell.
    • by dfghjk ( 711126 )
      My guess is that Intel understands this seeing how they are already planning integrated memory controllers in future products. Funny how people here think they understand processor design better than the most dominant processor design company in the world. Memory bandwidth is something they've been dealing with for some time now.
      • by LWATCDR ( 28044 )
        "Funny how people here think they understand processor design better than the most dominant processor design company in the world."
        Then why did they stick with Netburst for so long?
        I don't think you can call Intel the most dominant processor design company. IBM is killing them in the very high end with the Power 5 and beat them out with design wins in the XBox 360, Gamecube, and PS/3.
        Intel doesn't produce the fastest CPUs, and if you count all the PPCs, it is very possible they aren't even the most popular.
        They just h
  • by gsfprez ( 27403 ) on Wednesday September 27, 2006 @04:50PM (#16220453)
    I'm a video guy. I can't render video fast enough. I can't do transcoding fast enough. My video is getting larger and deeper in color, and I need more power.

    All of that is threadable.

    So is photographic processing. You can divide a picture 80 ways and have each processor do whatever it is you want to do on it.

    Gamers? Fscking a.... I'm so SICK of hearing how everything is for them. Just because something isn't going to help Halo Life 3 run faster is not any of my concern.

    There are lots of people working on their computers that want to see more cores because it will make our lives better.
    • I agree. You can never have enough processors. I do 3D rendering and that is massively parallelizable. Also once the power is there, new things will emerge that use it in a constructive way. For example, when computers became powerful enough, suddenly people were using them for video editing, and that really does require the power.
  • by tygerstripes ( 832644 ) on Wednesday September 27, 2006 @04:50PM (#16220459)
    I'm sorry but, well... didn't you guys do this with processor speed a while ago?

    That didn't work because AMD worked out that architecture can trump speed. They innovated, and then did it again with decent dual-core (as in NOT the two-dies-on-one-chip cack that you churned out at first).

    So, you improved your architecture and implemented dual-core properly, to produce the fantastic Duo. You got back in the race.

    And then there was talk of more cores. And you went "Fuck that, bitches, stay DOWN - we is gon' fuck you up good with 80 cores, bitch, an' dat hard!". Yes, you decided to try and dominate the pissing contest of multi-core instead of megahurtz.

    Jesus guys, didn't you learn a fucking thing? STOP trying to turn out something that little bit "more" than the competition, just get on with innovating and coming up with damn good chips. That's how AMD threatened you and, if you go on with this "anything you can do" shit again, you'll be back to square one.

    • by Psiren ( 6145 )
      I'm taking the 80 cores thing with a huge pinch of salt, as I'm sure everyone else is. Many have pointed to the whole "Pentium 4 will scale to 10GHz" debacle as an example of Intel making dubious claims.

      However, I've not read anything that would suggest Intel isn't looking at improving memory and bus throughput. They'd be mad to think they can stick 80 cores into an existing system and have it do anything useful. I don't think even Intel are stupid enough to try and get away with that one.

      I don't think 80 c
  • by Sloppy ( 14984 ) on Wednesday September 27, 2006 @04:57PM (#16220567) Homepage Journal
    Tom Yager writes:
    If I had a vote, I'd have both vendors stop at four cores and focus on fat and fast busses that give those cores something to fill instead of something to wait for

    What's a memory bus? Oh right, that thing you use to access the DDR4 swap device when the page you want to access is no longer in the on-CPU RAM. ;-)

    Seriously, look at the growth of L2 caches, and tell me the day isn't coming when they just call it "RAM" instead of "cache." If Intel and AMD want to keep piling transistors onto their chips, this'll give 'em something to do.

  • 640 cores (Score:5, Funny)

    by ion_ ( 176174 ) on Wednesday September 27, 2006 @05:06PM (#16220721) Homepage
    640 cores should be enough for anyone.
  • Arrgghhh (Score:4, Interesting)

    by Usquebaugh ( 230216 ) on Wednesday September 27, 2006 @05:12PM (#16220817)
    Why is it that, with Intel talking about a radical change in consumer hardware, the level of comments on /. is barely higher than that on AOL?

    We have had multiprocessor machines for ages. This is not a sudden unknown. Look up the Transputer, the Connection Machine, Beowulf, Cray. There is still ground to be covered but it's not unknown territory. The difference is this is Intel, and Intel needs a big market to sell to.

    This is not going to make significant difference to the end user, most of them will still write letters, calculate spreadsheets and browse the web. It might be enough to finally expose MS et al for what they have always been, the parasites.

    Where this is going to hit home is in the realm of programming and OS.

    Want to run an OS primarily designed for uniprocessing on a multi-way architecture? Look at the issues Win & Lin have with SMP, limited to 16 processors I believe. NUMA and Beowulf are a different kettle of fish. So what will we have on these massive SMP architectures?

    Programming, at last we might be getting out from under von Neumann. Progress might be possible after 30+ years of stagnation. The symbolic/functional languages are going to start to move forward. Hell, we might even get to run on stack-based CPUs with energy reclamation automated :-) Of course a nice message-passing symbolic language might score big.

    But given the history of software we'll have a bunch of ignorant, loud-mouth idiots running around telling everybody the one true way is Java with mutexes and semaphores. PHBs will grab at the first thing that has enterprise written on it and is 'guaranteed'. Most programmers will code how they have always coded: head down, ass up. The number of processors will double every two years and the speed of software will continue to halve in the same period.

    Of course nobody will suggest that a staged conversion should take place. There will be all these reasons to throw everything away and start over. Because this time we'll get it right!
    • by dfghjk ( 711126 )
      "Want to run an OS primarily designed for uniprocessing on a multi way architecture? Look at the issues Win&Lin have with SMP, limited to 16 processors I believe."

      Neither Windows (NT+) nor Linux were "primarily designed for uniprocessing". Both platforms run on much larger than 16-way systems. Licensing issues are a different matter.

      "So what will we have on these massive SMP architectures?"

      Applications that won't use them for a while?

      "Programming, at last we might be getting out from under VonNuman."

      W
      • Which large-scale SMP system do Linux & Win run on? I thought the HAL for Windows was designed for 16-way and that Linux SMP is being added as needed. The issues of SMP are prevalent on both systems; when developing the OS, multiprocessing was not high on the priority list.

        Von Neumann is the uniprocessor architecture: a single bottleneck through which all data and instructions must pass.

        I remember a time before x86 and the current x86 chips are just x86 on the surface. The x86 instruction set is
    • Yes, but in the background, CS will continue to move forward, in fits and false starts as it always has of course, but forward, by the real researchers working in the background and totally ignored by the mainstream. The maths will be better and better understood over time and significant proofs are completed. Once in a while one of the enterprisey languages will pick up one of their ideas, call it the holy grail, and introduce it to the ass up programmers forcefully, which they will find uncomfortable for
      • Although I tend to agree with most of the above, I am concerned that nowhere is blue-sky CompSci research being done. Where is the Princeton of the 40s, the Burroughs of the 50s, the Bell Labs of the 60s, the Xerox PARC of the 70s? Yes, I did sort of make up the dates etc. to make a point, but it's still pretty close.

        Today it seems that all research must show a profit/product. Is anywhere looking to hire the cream of post-docs, those with radical ideas and the skills to implement them? How about tackling a big problem e.g.
        • We knew how to get to the moon, roughly. We know how to build quantum computers, roughly. We haven't the foggiest idea of how to make a strong AI. No amount of money in the world will make that stroke of genius happen... it just will.

          To answer your questions though, the research is being done in the universities mostly, supplemented by the big corps. Microsoft has some excellent research going on in this field that I have seen first hand, as do IBM and Google, to name a few. The missing piece is still t
          • Which university, which papers? I have yet to see anything with MS on it that makes me do anything other than yawn. IBM got mired in process refinement during the 90s: make the same stuff, better. Google I'm not too sure about; they have a lot of people with credentials but they seem to go strangely silent when they work for Google.

            Nothing wrong with academic research but it's rarely blue sky, and if it's being funded by a business it's usually difficult to get info. Perhaps I'm looking in the wrong arenas
            • Re: (Score:3, Insightful)

              by Procyon101 ( 61366 )
              The complexity of the research is due to the fact that it is so very, very refined. There isn't much out there that's revolutionary, so instead of "Look at this great concept... a wheel" we are getting "The benefits of XYZ fractal tread on a vulcanized rubber tire under wide temperature variations spanning water phase changes." The second paper is much more educated on the subject, but much drier and not at all revolutionary ;)

              In genetic programming, we had hierarchical GP a couple years ago, breaking thro
  • I expect we're going to need some progress from the fan guys.
  • ... unleash a new era in data-mining, ...

    Data mining? How does having 80 cores improve I/O?

  • I think the important thing in the announcement is not the 80-core thing, but the idea of a memory chip sandwich. What was described is attaching the chips with what would be several thousand connection points giving more than a terabyte per second aggregate bandwidth. I heard (I watched the presentation) each core would have 256 megabytes dedicated memory.

    Assuming this memory could be used smartly, segregating incoherent memory spaces (it seems rather obvious the dedicated memory would not be a coherent im
  • They have a slide that matches successive levels of application demand with: Text, Multimedia, Video & 3D, RMS.

    Okay, so I understand that AI is more compute-intensive than video. And I understand that it could be easier (tera instead of peta) if social reasoning isn't included. But really, Intel, I just don't want RMS on my computer.

    Also, the jump from nanoscale to terascale may be impressive, but I don't think it'd be useful to have a transistor with a 310-million-mile-wide gate. Your device isn't goin
  • Intel (and AMD for that matter) need to design some sort of application layer that handles parceling out tasks to the various cores regardless of the number of them. The biggest problem with multi-core applications right now is that many, many programs simply don't take multiple cores into account. In addition, this is going to become a huge hassle for future programmers unless this is done: "Well, how many cores are we going to write this program to take advantage of?".

    Also, this is something that intel/amd ar
  • from http://www.informationweek.com/shared/printableArticle.jhtml?articleID=191901844 [informationweek.com] "I've always been amazed at the Apollo spacecraft guidance system, built by the MIT Instrumentation Lab. In 1969, this software got Apollo 11 to the moon, detached the lunar module, landed it on the moon's surface, and brought three astronauts home. It had to function on the tiny amount of memory available in the onboard Raytheon computer--it carried 8 Kbytes, not enough for a printer driver these days. And there wouldn
