Intel Pledges 80 Core Processor in 5 Years
ZonkerWilliam writes "Intel has developed an 80 core processor that it claims 'can perform a trillion floating point operations per second.'" From the article: "CEO Paul Otellini held up a silicon wafer with the prototype chips before several thousand attendees at the Intel Developer Forum here on Tuesday. The chips are capable of exchanging data at a terabyte a second, Otellini said during a keynote speech. The company hopes to have these chips ready for commercial production within a five-year window."
Apple and Microsoft and BSD better hurry and scale (Score:5, Interesting)
Are all cores created equal? (Score:2, Interesting)
In an 80-core environment, there will likely be inequalities due to chip-real-estate issues and other considerations. The question is, will these impacts be felt at the code level, or will the chips be designed to make these differences invisible? If the former, will the OSes be designed to use the cores efficiently, or will they simply see "80 cores" and, out of ignorance, make poor decisions when allocating tasks to various cores? If the latter, what performance penalty will be incurred?
Re:76 too many cores? (Score:4, Interesting)
A basic strategy would be for the OS to devote each process to its own processor.
This would reduce the need for TLB/cache flushes or eliminate context switches entirely. The whole machine would be really snappy.
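You can already experiment with this today: on Linux, sched_setaffinity() pins a process to a chosen core, which is a crude, manual version of what an 80-core scheduler could do per process. A minimal sketch (the core number is an arbitrary pick):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(3, &set);              /* pin ourselves to core 3 (arbitrary pick) */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = this process */
            perror("sched_setaffinity");
            return 1;
        }

        /* From here on the scheduler never migrates us, so our TLB and
           cache working set stay warm on that one core. */
        printf("now running on core %d\n", sched_getcpu());
        return 0;
    }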
That said, for a desktop machine, this is a huge amount of overkill, but with economies of scale being what they are, we'll probably have this power available soon.
What I'd like to see, though, is new functionality in hardware rather than just more of the same. Wouldn't it be great if hardware could handle some of the things an OS is now used for, like memory (de)allocation? Or if we could tag memory according to type? Or if there were finer-grained protection controls than page-level?
nVidia should be worried.... (Score:5, Interesting)
The other thing is that, with that many cores and all the SIMD and graphics instructions already built into current processors, it looks to me like the obvious reason to have 80 cores is to get rid of graphics coprocessors. You do not need a GPU and a bunch of shaders if you can throw 60 processors at the job. You do need a really good bus, but hey, that's not much of a problem compared to getting 80 cores working on one chip.
With that kind of computer power you can throw a core at anything you currently use a special chip for. You can get rid of sound cards, network cards, graphics cards... all you need is lots of cores, lots of RAM, a fast interconnect, and some interface logic. Everything else is just a waste of silicon.
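As a toy illustration (the shade() function, dimensions, and 60-way split are all invented for the example), here's what "throw 60 processors at the job" might look like for a purely software framebuffer fill, using plain pthreads:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    #define WIDTH  640
    #define HEIGHT 480
    #define NCORES 60                     /* the "60 processors" from above */

    static uint32_t fb[WIDTH * HEIGHT];   /* shared software framebuffer */

    /* Invented per-pixel "shader": any pure function of (x, y) works here. */
    static uint32_t shade(int x, int y)
    {
        return (uint32_t)((x ^ y) & 0xff) * 0x010101u;
    }

    struct span { int y0, y1; };          /* the rows one worker owns */

    static void *render(void *arg)
    {
        struct span *s = arg;
        for (int y = s->y0; y < s->y1; y++)
            for (int x = 0; x < WIDTH; x++)
                fb[y * WIDTH + x] = shade(x, y);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NCORES];
        struct span spans[NCORES];
        int rows = HEIGHT / NCORES;

        /* Carve the frame into horizontal bands, one per core. */
        for (int i = 0; i < NCORES; i++) {
            spans[i].y0 = i * rows;
            spans[i].y1 = (i == NCORES - 1) ? HEIGHT : (i + 1) * rows;
            pthread_create(&tid[i], NULL, render, &spans[i]);
        }
        for (int i = 0; i < NCORES; i++)
            pthread_join(tid[i], NULL);

        printf("frame done, pixel(0,0) = %08x\n", (unsigned)fb[0]);
        return 0;
    }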
History has shown that general purpose processing always wins in the end.
I was talking to some folks about this just last Saturday. They didn't believe me. I don't expect y'all to believe me either.
Stonewolf
Re:76 too many cores? (Score:2, Interesting)
- Dedicate 45 cores to the opponent AI (which would run on simple neural nets)
- Dedicate 20 cores to physics (because physics is the next-big-thing)
- Dedicate 8 cores to keeping the former fed with usable data (like game logic, asset management, etc)
- Dedicate 4 cores to 3d sound (because with so many cores it's cheaper for me to develop the sound myself than license the latest EAX from Creative, or whatever's hip at the moment.)
- Dedicate 1 core to networking and voice-chat (because the better the compression, the better the experience)
- Dedicate 1 core to coordinating the rest.
- Leave 1 core for the OS and any parallel tasks. (A rough sketch of this kind of core budgeting follows below.)
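Here's what that pinning could look like with the Linux-specific pthread affinity calls. The subsystem functions and core budget are purely hypothetical (the split from the list above), and of course it only runs as-is on a machine that actually has that many cores:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Hypothetical subsystem entry points; each would spawn its own workers,
       which inherit the affinity mask set below. */
    static void *ai_main(void *arg)      { (void)arg; return NULL; }
    static void *physics_main(void *arg) { (void)arg; return NULL; }

    /* Start a subsystem confined to cores [first, first + count). */
    static pthread_t spawn_on(void *(*fn)(void *), int first, int count)
    {
        pthread_t tid;
        pthread_attr_t attr;
        cpu_set_t set;

        CPU_ZERO(&set);
        for (int c = first; c < first + count; c++)
            CPU_SET(c, &set);

        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
        pthread_create(&tid, &attr, fn, NULL);
        pthread_attr_destroy(&attr);
        return tid;
    }

    int main(void)
    {
        /* The budget from the list above: cores 0-44 for AI, 45-64 for physics. */
        pthread_t ai      = spawn_on(ai_main,       0, 45);
        pthread_t physics = spawn_on(physics_main, 45, 20);

        pthread_join(ai, NULL);
        pthread_join(physics, NULL);
        return 0;
    }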
Oh and not having to make my code terribly efficient would cut my development costs a lot.
So that's that for using 80 cores. Sure could use more in the AI department.
And the advantage of an 80-core chip over forty 2-core chips? It saves a hell of a lot of physical space.
Re:Apple and Microsoft and BSD better hurry and scale (Score:3, Interesting)
Re:nVidia should be worried.... (Score:5, Interesting)
I don't think that general purpose processors will ever completely replace special purpose hardware. There is simply too much to be gained by implementing certain features directly on the chip.
Re:Not enough demand (Score:3, Interesting)
Personally, I also believe that the people blaming software for the failures of AI are wrong, and that multi-core computing will finally enable some interesting applications like usefully robust speech recognition, object recognition in images, 3D reconstruction from video footage, stereo-vision-based navigation for robots, and other cool stuff we haven't thought of yet. All that's still a little farther out though.
basically what FPGA is about (Score:1, Interesting)
Re:But how do they interconnect? MPI (Score:3, Interesting)
What you're asking for is pretty standard stuff in the high end, where hundreds of processors are quite common. Cache coherency is a killer, though, and fully cache-coherent designs died out in the high end long ago. When you think about it, CC basically requires a crossbar-switch-style memory architecture, which expands with the square of the number of processors (the number of point-to-point paths among n processors grows as n(n-1)/2, so going from 8 to 80 processors means roughly 100x the paths), plus much higher-speed logic to resolve conflicts. So eventually it doesn't scale. Instead, applications spanning large numbers of processors tend to use only small groupings (say 8 or so, though they can go up to 30-odd) with shared-memory/CC access, and then MPI for anything bigger.
Clusters have been using MPI for years for this sort of thing. All the custom interconnects for supercomputing have customized implementations in their MPI libraries to take advantage of one-sided communications. Most use a facility which can loosely be termed RDMA (remote direct memory access); another term sometimes used is OS-bypass. The idea is that for this sort of communication you want to skip the TCP/IP stack and other OS buffering overhead, and just have straight memory-to-memory copies going on, under userland library control.
Folks generally don't directly invoke things on other processors; instead they fire off jobs on blocks of processors and have them communicate with one-sided primitives. This is the sort of thing done on hundreds or thousands of processors today. It will just gradually percolate down to normal applications.
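To give a flavor of those one-sided primitives, here's a minimal sketch using the standard MPI-2 RMA calls (MPI_Win_create / MPI_Put); the buffer size is arbitrary. Rank 0 writes straight into rank 1's exposed window without rank 1 ever posting a receive, which is exactly the memory-to-memory pattern described above:

    #include <mpi.h>
    #include <stdio.h>

    #define N 1024                        /* arbitrary buffer size */

    int main(int argc, char **argv)
    {
        int rank;
        double buf[N];                    /* every rank exposes this buffer */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Publish buf as a "window" that remote ranks may read/write
           directly -- on capable interconnects this maps onto RDMA. */
        MPI_Win_create(buf, N * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);            /* open the access epoch */
        if (rank == 0) {
            double data[N];
            for (int i = 0; i < N; i++)
                data[i] = (double)i;
            /* One-sided: copy straight into rank 1's window; rank 1
               never posts a matching receive. */
            MPI_Put(data, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win);
        }
        MPI_Win_fence(0, win);            /* close the epoch; data has landed */

        if (rank == 1)
            printf("buf[10] = %g\n", buf[10]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }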
Why Erlang is far ahead (Score:1, Interesting)
But from the programmer's point of view, Erlang has supported multiple cores for as long as it has existed.
In Erlang, when you send a message to another process, you don't know whether that process is executing within the same OS process, in another OS process, or even on a distant machine. That is very good, because you don't have to rewrite anything to support distribution.
Concurrency is one of many areas where Erlang makes things easier for you.
On the other hand, if you look at Java or Perl, green threads or real threads (pthreads) are just slapped on like any other library. The language has no native support for processes/threads, nor is it oriented around them. That doesn't stop you from doing Erlang-like stuff; it's just that a 10-line Erlang hello expands into a 100-line pthread beast with sharp mutexes slashing around.
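To make that concrete, here's a rough sketch, in plain POSIX C rather than any particular framework, of the mutex/condvar plumbing you end up writing by hand for the one-line "Pid ! Msg" that Erlang gives you for free:

    #include <pthread.h>
    #include <stdio.h>

    /* A one-slot mailbox: roughly "Pid ! Msg" built by hand. */
    struct mailbox {
        pthread_mutex_t lock;
        pthread_cond_t  ready;
        const char     *msg;              /* NULL means "empty" */
    };

    static void mbox_send(struct mailbox *mb, const char *msg)
    {
        pthread_mutex_lock(&mb->lock);
        mb->msg = msg;
        pthread_cond_signal(&mb->ready);
        pthread_mutex_unlock(&mb->lock);
    }

    static const char *mbox_receive(struct mailbox *mb)
    {
        const char *msg;
        pthread_mutex_lock(&mb->lock);
        while (mb->msg == NULL)           /* guard against spurious wakeups */
            pthread_cond_wait(&mb->ready, &mb->lock);
        msg = mb->msg;
        mb->msg = NULL;
        pthread_mutex_unlock(&mb->lock);
        return msg;
    }

    static void *worker(void *arg)
    {
        struct mailbox *mb = arg;
        printf("worker got: %s\n", mbox_receive(mb));
        return NULL;
    }

    int main(void)
    {
        struct mailbox mb = { PTHREAD_MUTEX_INITIALIZER,
                              PTHREAD_COND_INITIALIZER, NULL };
        pthread_t tid;

        pthread_create(&tid, NULL, worker, &mb);
        mbox_send(&mb, "hello");          /* in Erlang: Pid ! hello */
        pthread_join(tid, NULL);
        return 0;
    }

And this still handles only one message of one type between two threads on one machine; the Erlang version distributes across nodes unchanged.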
References:
New SMP feature in Erlang:
http://www.erlang.org/doc/doc-5.5.1/doc/highlight
Look at chapter 3; you don't have to understand Erlang to follow:
http://erlang.org/doc/doc-5.5/doc/getting_started