48 Core Vega 2 in the Making
TobyKY76 writes to tell us The Inquirer is reporting that upstart Azul Systems is planning to integrate 48 cores on their next generation chip. From the article: "The first-generation Vega processor it designed has 24 cores but the firm expects to double that level of integration in systems generally available next year with the Vega 2, built on TSMC's 90nm process and squeezing in 812 million transistors. The progress means that Azul's Compute Appliances will offer up to 768-way symmetric multiprocessing."
Oh wow!! (Score:1)
Re:Oh wow!! (Score:3, Funny)
Re:Oh wow!! (Score:2, Interesting)
Re:Oh wow!! (Score:2, Funny)
Re:Oh wow!! (Score:2)
Re:Oh wow!! (Score:1)
I will be designing and selling an Earth Simulator on one chip. Only $50,000,000,000...paid in advance.
Re:Oh wow!! (Score:2)
The Inquirer is taking Azul's word for it at the moment, which is probably why the article is so light on details. About a billion questions pop in my mind when I hear a story like this. The only answer I get is that, "Sun is banging on Azul's door for IP infringement."
Sure.
Does anyone have any real info on these guys? About all
Re:Oh wow!! (Score:2)
Inquiring minds want to know!
Re:Oh wow!! (Score:2)
So what exactly is "network attached processing?"
BTW, thanks for responding. Sorry to bug you during your relaxing downtime in JFK.
Re:Oh wow!! (Score:2)
http://www.oracle.com/corporate/press/2005_dec/multicoreupdate_dec2005.html [oracle.com]
You would see that it depends on your architecture. For example, when running on UltraSPARC you pay for one processor license for every 4 cores you have; with AMD/Intel multicore you pay for one for every 2 cores. Either way, they would have to sit down and devise a per-processor license for you. Though you can always purchase per user instead of per processor. This would probably be the best route if you're running hundreds or thousands of processors.
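The per-core factors described above can be sketched as a quick calculation. A minimal sketch: the 0.25 and 0.5 factors and the "round up partial processors" rule are assumptions taken from this comment's description of Oracle's 2005-era policy, not authoritative pricing.

```python
import math

# Per-core licensing factors as described in the comment above (assumptions,
# not official Oracle pricing): UltraSPARC counts 1 processor per 4 cores,
# AMD/Intel multicore counts 1 processor per 2 cores.
CORE_FACTOR = {"ultrasparc": 0.25, "x86_multicore": 0.5}

def licenses_needed(arch: str, total_cores: int) -> int:
    """Round up: a partial processor still needs a full license."""
    return math.ceil(total_cores * CORE_FACTOR[arch])

# A hypothetical 48-core Vega-class part under each scheme:
print(licenses_needed("ultrasparc", 48))     # 12
print(licenses_needed("x86_multicore", 48))  # 24
```

For odd core counts the rounding matters: 6 UltraSPARC cores would still bill as 2 processors, not 1.5.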
Re:Oh wow!! (Score:2)
Though you can always purchase per user instead of per processor. This would probably be the best route if you're running hundreds or thousands of processors.
That assumes you don't also have 80K+ users. -nB
Re:Oh wow!! (Score:2)
woo hoo... at last... something that has processors left over after allocating one to every task running on this Kubuntu box... ;)
Take that pop ups! (Score:1)
AutoCAD (Score:5, Funny)
Yeah, yeah... my Karma is SUPER negative...
Ghostbusters (Score:4, Funny)
There is no Vega, only Azul.
(Thank you, thank you, I'll be here all week.)
Re:AutoCAD (Score:2)
Sorry for being one of those lame-asses who corrects others' jokes. It just rankled me a bit to think of a million copy-and-pastes. Array is a nice command if you're a drafter; the polar array option comes in handy more often than you might think.
48 cores is a nice start, but.... (Score:5, Informative)
Re:48 cores is a nice start, but.... (Score:1)
Re:48 cores is a nice start, but.... (Score:2)
The wiki link says 80 not 160 - read (Score:4, Informative)
Re:The wiki link says 80 not 160 - read (Score:2)
Re:48 cores is a nice start, but.... (Score:1)
That equates to: 160 threads per processor package.
For instance: even though the Montecito Itanium is capable of four-threads per processor package, it is only a dual-core (each core is capable of two threads).
Re:48 cores is a nice start, but.... (Score:2)
Re:Cisco/IBM SPP (Score:2)
Re:Cisco/IBM SPP (Score:2)
Finally! (Score:5, Funny)
Vista? (Score:2, Funny)
Re:Vista? (Score:2)
Is more likely than Windows Vista supporting the hardware!
Are they x86 compatible? (Score:2)
Re:Are they x86 compatible? (Score:3, Informative)
That said, it's still very impressive to get that many cores working together, though not as impressive as x86.
Re:Are they x86 compatible? (Score:2)
Cassatt was discussed on Slashdot a few weeks ago [slashdot.org].
-Steve
Not for nothing... (Score:4, Insightful)
"Home Discuss on our Forum Flame Author
Recommend this article Print"
Sounds to me like someone issued a press release and wants a share of the excess VC floating around... and the Inquirer took the bait. They did a good job of not loading the buzzwords, though -- they didn't say they would 'leverage their experience with graphics chip design' or anything like that.
I'd expect this company to turn around and sell out to AMD or Intel at the earliest opportunity, if given the chance.
Re:Not for nothing... (Score:2)
Finally! (Score:1)
768 cores, why? (Score:2, Insightful)
Re:768 cores, why? (Score:2)
To answer your question, anyways: More cores == better efficiency == less heat == lower electric bill. Desktop users may not need a 48-core chip (yet), but server farms love designs such as this.
Re:768 cores, why? (Score:1)
Re:768 cores, why? (Score:4, Insightful)
Any task that can be split into multiple processes. An example is an array of data, where a single algorithm is going to be applied to each element. An array of data can represent anything - an image, or stock prices, or DNA, etc.
for any single task, the thing would not be efficient.
It depends on what you really mean by a single task - a given process consists of multiple sequential tasks, where a task may be as fine-grained as a single CPU operation; or, due to the overhead of communication between tasks, a tuning effort can be made to define a "task" as some multiple of operations.
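The array-of-data example above can be sketched with Python's standard multiprocessing pool. A minimal sketch: `transform` here is just a stand-in for whatever single algorithm is applied to each element.

```python
from multiprocessing import Pool

def transform(x):
    # Stand-in for the "single algorithm" applied to each element:
    # the element could be an image pixel, a stock price, a DNA base, etc.
    return x * x

if __name__ == "__main__":
    data = list(range(16))
    with Pool(processes=4) as pool:
        # Each worker process handles elements of the array independently;
        # with enough cores, the chunks run truly in parallel.
        result = pool.map(transform, data)
    print(result)  # squares of 0..15, ending in 225
```

The tuning point from the comment shows up here too: `Pool.map` accepts a `chunksize` argument precisely so that a "task" can be some multiple of operations rather than a single element, amortizing the communication overhead.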
Graphics work (Score:2)
Re:768 cores, why? (Score:2)
Think "real-time raytracing". Think computer games with real shadows from multiple lights, with curved mirrors, with glass that actually bends light.
Think about a first-person shooter where you can notice someone sneaking up behind you when you spot his reflection off the hood of the car you're using for cover.
Re:768 cores, why? (Score:2)
Simulations of weather and bio/chemical/nuclear reactions, searching for extraterrestrial signals in the radio spectrum (okay, kidding about that one)... sciency stuff.
Re:768 cores, why? (Score:2, Funny)
Re:768 cores, why? (Score:2)
Remember, the human brain has 100 billion cores, each in itself very inefficient, and yet it is pretty powerful.
There are huge amounts of unexploited parallelism in the tasks our computers perform. The problem is mainly that most of the tools we use (programming languages etc.) are very serial in how they describe and handle problems and solutions. This is natural, of course.
Weather Simulation (Score:2)
My users.... (Score:2)
It is called supercomputing.
Re:768 cores, why? (Score:2)
Re:768 cores, why? (Score:2)
Re:768 cores, why? (Score:2)
For desktops, video rendering can use as many CPUs as you can get. Many desktop applications are now written to take advantage of multiple CPUs. Check out Apple's iTunes; I've seen iTunes run a dozen threads.
Huge numbers of CPUs, as in "thousands", will likely be required for true AI.
chicken and the egg problem (Score:2)
A few years ago only expensive servers had multiple processors. Now every major CPU maker produces a multicore chip as their flagship product. Many of these chips have hit
Wouldn't wanna... (Score:3, Funny)
Re:Wouldn't wanna... (Score:2)
Blade servers (Score:5, Funny)
So, chip manufacturers have adopted the Gillette approach to marketing chips. I guess it was inevitable after they went from one core to two. The only difference, I expect, is that they'll increase by powers of 2. Soon we'll have an Intel Mach 512 Core Sensor Extreme or something :P
I don't know much about CPU internals but (Score:2)
I would suppose (but am not sure) that extra cores reduce the number of transistors sitting idle at any one moment. The downside would seem to be that each extra core reduces the capability to process
Re:I don't know much about CPU internals but (Score:5, Informative)
The number of transistors can go up for a variety of reasons. Chief among them is designs that utilize complex performance enhancements. To name a few:
The secondary source of transistor usage is coprocessors like Floating Point Units and SIMD Units.
The latest craze in processor design is to simplify the microprocessor back down to the most basic level. From there, the processors are ramped up through sheer numbers of parallel pipelines (i.e. threads) and cores, as opposed to ramping up individual CPU horsepower. These multi-core chips typically share coprocessors among pipelines or cores, and may even have entire cores dedicated to specific tasks like SIMD. As a result, a properly designed program will be able to execute within a very short period of time, thanks to the parallel nature of the multi-core architecture.
Now the only problem is in finding these "properly written programs".
Re:I don't know much about CPU internals but (Score:2)
Almost.
Sooner or later you have to branch, or speculate, or go to main memory (or *gasp* I/O). So exploiting parallelism in arbitrary code is quite a challenge. Having worked on optimizing the linpack benchmark for a small "supercomputer" (64 CPUs) during college, it is quite a bit of work to decompose a problem this way.
However, the trend is not "well written" parallel apps, but rather parallel process scheduling: antivirus + network stack + device drivers + MP3 player + etc.
Re:I don't know much about CPU internals but (Score:2)
So if you don't know CPU internals, why make these statements:
- NO, number of transistors has nothing to do with it.
- NO, transistor c
Re:I don't know much about CPU internals but (Score:2)
But it would seem to me, that for the same sized chip, regardless of number of cores, the process (90 nm in this case) limits the number of transistors.
Re:I don't know much about CPU internals but (Score:2)
Correct, that would set the upper limit if everything was constant. But the article does not state what process was used in the first-generation chip:
The first-generation Vega processor it designed has 24 cores but the firm expects to double that level of integration in systems generally available next year with the Vega 2, built on TSMC's 90nm process
Re:I don't know much about CPU internals but (Score:2)
Beyon
Re:I don't know much about CPU internals but (Score:2)
Your statement is just as wrong as his. Yes, you can design a faster CPU given more transistors - up to a point. (If more transistors didn't help, why do you think CPUs have had increasing transistor counts for the entire history of digital computing?) Only recently have designers failed to find new ways to use more transistors.
Re:I don't know much about CPU internals but (Score:2)
Um.... hello? You think those extra cores didn't require transistors?
C//
Re:I don't know much about CPU internals but (Score:2)
Did you even read the article?
Still not quite right (Score:2)
In a way that's true, but most CPUs are nowhere near the bound of what you can get from the transistor count. The general-purpose nature of them makes that true.
If you've ever used a hardware description language like Verilog, it becomes apparent that you can design silicon that can do FAR more per clock cycle than most
A new beginning (Score:3, Interesting)
Re:No it isn't (Score:2)
-nB
Cost of Hardware Failure (Score:2)
Are we putting too many eggs in one basket? I thought modular design was good.
Oh well, back to setting up Linux on old dell boxes. Maybe I will get a real server one day. *grin*
So True! (Score:2)
Especially if you are the sysadmin who did not order a spare X-core chip or, worse, did not budget for the inevitable cost of replacing one or more of these behemoths!
Re:Cost of Hardware Failure (Score:2)
extreme (Score:3, Funny)
Trying to be humorous, not seriously comparing the two chips.
Re:extreme (Score:2)
tm
Re:extreme (Score:2)
Re:extreme (Score:2)
It sounds like you already knew the joke bombed.
I bet this happens often....you must be a blast at office parties.
So what's the memory model? (Score:5, Insightful)
There's no way you can feed that many processors over a single bus, and if you've got symmetric access to a bunch of buses, that's one heck of a crossbar switch - and I don't see that it's any easier to program than NUMA. Instead of making sure the data you need fast is local, you have to make sure you load-balance - that has to be harder much of the time.
Re:So what's the memory model? (Score:2)
First off, not every core needs to be as powerful as an AMD or Intel core. There are some problems that are easier to solve using a lot of low-power cores versus one or two uber cores.
You could also reduce the memory bandwidth to each core. You could keep a fairly powerful core, while only feeding each core a limited bandwidth.
You could also completely change the idea behind how the proc works. Maybe they c
Re:So what's the memory model? (Score:2)
I've never heard of these companies or projects, so until they demonstrate something or show some credible people in the project, I'll file it in the same category as Infinium Labs and Duke Nukem. I know the current well-known chip m
"However... (Score:1, Troll)
Ah, the more things change, the more they stay the same!
Neat stuff. (Score:4, Informative)
It's a very neat concept, and the careful wording ("virtual machine accelerator") indicates that they aren't tied to just Java... Azul's Compute Pool could be something future Parrot-lovers can use to sneak LAMP into places where Java rules all.
They're using some serious silicon know-how to fuel an innovative and original product... gives me hope we aren't doomed to a Wintel-only world, after all.
Re:Neat stuff. (Score:2)
Re:Neat stuff. (Score:2)
So Which is Better? (Score:2)
Unhappy Math (Score:2)
Wow!... (Score:2, Funny)
Ah, never mind.
Heat, and power (Score:2)
The issue I see with this is:
Multiple processors generate more heat and consume more power. Would it not be the same for multiple cores, thus making such a machine a power-chugging space heater? Are special cooling devices required?
Re:Heat, and power (Score:2)
It Runs Java not X86 Code!!! (Score:4, Informative)
It is going to run only a Java Virtual Machine, so anything written in Java will run on it. Windows will not run on it. I took some operating systems courses in college, and the Intel architecture is a huge mess of backwards-compatibility hideousness that luckily only operating system implementers have to deal with. By only running Java, these guys get to sidestep the whole mess and focus on massively optimizing the hardware architecture for running Java code.
http://www.azulsystems.com/products/nap.html [azulsystems.com]
traditional programming languages obsolete? (Score:4, Insightful)
x = zeta(y)
w = gamma(z)
print(x+w)
The code explicitly states that x should be calculated before w, although they could certainly be calculated concurrently. Of course a smart compiler could figure out the dependencies, but the programming language shouldn't force the programmer to specify an order when none exists.
I predict that non-procedural languages will dominate the future of programming. Some currently used languages already seem well-suited to taking advantage of multiple cores: HDLs, functional languages, LabVIEW-style dataflow languages.
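The independence in the pseudocode above can be made explicit with futures. A minimal sketch: `zeta` and `gamma` here are placeholder implementations (the originals are unspecified), but the structure shows how submitting both calls before asking for either result removes the artificial ordering.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder implementations standing in for the pseudocode's zeta and gamma.
def zeta(y):
    return y + 1

def gamma(z):
    return z * 2

def main():
    y, z = 10, 20
    with ThreadPoolExecutor() as pool:
        # Neither result depends on the other, so the runtime is free to
        # schedule the two calls concurrently instead of in textual order.
        fx = pool.submit(zeta, y)
        fw = pool.submit(gamma, z)
        # The order constraint appears only where it's real: at the addition.
        print(fx.result() + fw.result())  # 51

main()
```

A dataflow or functional compiler does essentially this analysis automatically; the futures API just makes the dependency graph visible by hand.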
Size (Score:3, Interesting)
TFA does not mention anything about this new processor's die size. But if we scale up the Cell processor's transistor density, the Vega 2, with 812 million transistors, would come out to a die size of about 800 mm^2 - more than one square inch. In the processor industry, that kind of die size is just plain ridiculous. I wonder what the yields are?
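The scaling the parent describes works out roughly like this. A sketch with assumed inputs: the Cell figures (~234 million transistors on a ~221 mm^2 die at 90nm) are taken from contemporary reporting and are not from TFA.

```python
# Assumed Cell figures (approximate, 2005-era reporting; not from TFA):
cell_transistors = 234e6
cell_die_mm2 = 221.0

density = cell_transistors / cell_die_mm2   # transistors per mm^2

# Scale the Vega 2's reported 812M transistors at the same density:
vega2_transistors = 812e6
vega2_die_mm2 = vega2_transistors / density

print(round(vega2_die_mm2))  # 767 -- in the ballpark of the parent's ~800 mm^2
```

The estimate is crude: cache packs far more densely than logic, so the real die could be meaningfully smaller or larger depending on the cache/logic mix, which is exactly the point the reply below raises.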
Re:Size (Score:2)
If those transistors are mostly cache, then the yields are pretty good. If they are logic, then there needs to be more clarification. Firstly the
To paraphrase... (Score:2)
"The desktop is the computer."
Re:Chevy Vega Returns! (Score:2)
What a beauty!
Re:Chevy Vega Returns! (Score:2)
Re:Chevy Vega Returns! (Score:2)
Re:OK, but... (Score:2, Informative)
Refer to:
http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/ [uni-sb.de]
and for a screenshot with multiple reflections:
http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/screenshots/mutlipleReflectiveSpheres.JPG [uni-sb.de]
Re:OK, but... (Score:2)
Re:Vega (Score:3, Funny)
Re:Vega (Score:2)
It was a scary car.
Re:Points-of-Failure (Score:2)
Re:Memory interface (Score:5, Informative)
The cores are our own design - not MIPS, not ARM, etc. Simple, short in-order pipeline, decent (not huge) caches.
Power consumption is very low compared to the equivalent stack of P4 blades or other main-frame solution.
The first-gen box (368 cores) is about 2700 watts in an 11U rack mount.
The next-gen box isn't much bigger, nor does it draw much more power (a little more of both, I believe).
Re:Azul longevity (Score:3, Informative)
"The scalability it is showing is attracting big-name early adopters, including Credit Suisse - and even enough to have Sun Microsystems lawyers hammering at the door, alleging intellectual property infringements."
Basically Sun are saying that Azul are infringing on Sun's patents and have illegally obtained Sun's trade secrets. Sun have tried to take part ownership of Azul and charge ongoing
Re:Azul longevity (Score:2)