Sun's MAJC vs Intel's IA-64
Shauna writes "Ars Technica has an informative article on the differences and similarities between Sun's new MAJC and Intel's IA-64 plans at the design level. I think it shows in a roundabout way that MAJC is going places while Intel sleeps." The article is definitely designed for the chip fetishists in the audience.
Good article. Sun rules. (Score:1)
Maintains Status Quo. (Score:1)
up substantially from there. The question Sun needs to answer is "why does IA-32 have more apps running on it than all the other platforms?"
It's about price and openness, and when Sun starts letting other people in at all levels of its market it can start to close that app-support gap. I.e. license the motherboard and bus architecture, rent out the chip masks, open the complete API (SCSL isn't OSS, but it does this).
In short, realize that Intel's success has more to do with the likes of VIA, AMD and Compaq than it does with Microsoft. 'Cause it sure ain't about performance. Sun already has some segments of the high-performance corporate market sewn up. Just ask Oracle.
Easy Read (Score:1)
I especially liked the concept of Space-Time Continu... err, Computing, which allows the processor to spawn a speculative instruction stream when the main execution stream is working on a loop or whatnot. This speculative instruction stream can run ahead, and after the main execution stream gets its job done, both of them can join (possibly; I'll assume this is not always the case). The speculative execution stream can be handed to an entirely different processing unit in the case of a processor cluster (it seems the idea is that there are several MAJC processors per die; mmm, truly parallel threads).
Clustering MAJC processors on a single die opens up some interesting prospects for threads. One MAJC processor has enough registers to hold the contexts of four different threads at once (according to the author). That should make a real difference to context-switch cost.
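For the curious, here's a rough software-level sketch of the speculative-thread idea (this is not Sun's actual STC mechanism, and the class and variable names are made up): one thread grinds through the work we know we need while a second thread runs ahead on the work we expect to need, and the two join at the end.

    // Toy analogy only: real STC speculates in hardware and can roll back
    // when the speculation turns out to be wrong; here we just assume the
    // speculative work is always valid.
    public class SpeculativeSketch {
        static long speculativeSum;   // result of the run-ahead stream

        public static void main(String[] args) throws InterruptedException {
            // Speculative stream: runs ahead on the chunk we *expect* to need next.
            Thread speculative = new Thread(() -> {
                long sum = 0;
                for (long i = 1_000; i < 2_000; i++) sum += i;
                speculativeSum = sum;
            });
            speculative.start();

            // Main stream: the loop we know we must execute.
            long mainSum = 0;
            for (long i = 0; i < 1_000; i++) mainSum += i;

            // Join: wait for the speculative stream, then merge results.
            speculative.join();
            System.out.println("total = " + (mainSum + speculativeSum));
        }
    }

On a multi-core MAJC die the second stream would presumably land on another processor unit, which is where the "truly parallel threads" part comes in.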
Fascinating stuff.
language specific chips have failed in the past (Score:2)
Bus Architecture (Score:1)
-Spazimodo
Fsck the millennium, we want it now.
More info (Score:3)
MAJC developers connection [sun.com]
A chat with the lead designer for the MAJC architecture [sun.com]
MAJC 5200 chip press release [sun.com]
From the MAJC docs page [sun.com]: First MAJC implementation presentation [sun.com]
Some comments... Firstly, I don't think it was made clear that the MAJC architecture can execute compiled C/C++ as easily as Java. You'd also probably be using a dynamic compiler like HotSpot rather than a JIT compiler. I think the guy got it wrong about the functional units being data-agnostic; the registers definitely are, though. Still, you can (pretty much) execute 4 of any type of instruction at once; the first block is slightly different compared to 2-4 (which are identical), though I don't know how.
The very interesting STC concept is much easier to do with the Java programming model (because of certain issues with data) than with C/C++. They don't say how easy it would be to apply STC to C/C++, though; it might be impossible in the general case, though possible in some "limited" cases. MAJC also has scoreboarding of instructions for dynamic execution times, e.g. loads, though I'm not sure if this applies to things like FP mul/div/sqrt etc.
There are a couple of other interesting things the guy missed: the cross-bar data switch and the streaming data ports, for example. Apart from that, though, I think it was a pretty decent review.
With regards to the first chip, the MAJC 5200: it's supposed to "tape out" (get its first physical implementation) by the end of the year and sample in Q2 next year. The 5200 has 2 CPU units on chip (with a shared L1 data cache and separate L1 instruction caches), will be made on a 0.22-micron process, run at 500 MHz and consume 15 W.
Here is some "here's how fast it is" stuff from the PR:
BTW, Sun has a 2000-processor array for simulating their UltraSPARC-III chip, and they really do some pretty accurate simulations, including things like booting Solaris. In the MAJC 5200 PDF/PS file, they also quote some (estimated) speed-ups gained from using the STC technique on the SPECjvm98 suite of programs; they get 40-60% or so, which I think is very impressive. The MAJC 5200 also has a graphics pre-processor (it's going to be used in Sun's new high-end graphics systems) and they quote some triangle-processing figures at different levels of lighting detail. I don't really get what they mean, but they quote from 60 million triangles/s to 90M/s or so, which is in about the same region as the PlayStation 2, or about 4x faster than the fastest current mainstream PC graphics card. However, that doesn't mean you can get 60M-90M in real-world stuff...
In general, the chip is aimed at "low end" (though for Sun, "low end" equates to less than $100,000 generally) embedded solutions.
Re:Bus Architecture (Score:2)
STC speedup correction (Score:2)
compress: +51%
jess: +56%
db: +62%
javac: +67%
jack: +64%
mpegaudio: +55%
Personally, I think that's very impressive.
Re:AMD's x86-64 bit extensions (Score:1)
Threaded programming not dominant? (Score:2)
Re:Interesting, but.. (Score:1)
Re:"agnostic" function units (Score:1)
Re:Threaded programming not dominant? (Score:1)
Multi-threaded programming is not trivial, and can be hellish to debug. If you're referring to the usual Java version, which is "throw a mutex around a function and you're done", then it is not difficult, but when you have to take deadlocks into account things get trickier.
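A minimal, made-up example of the kind of thing that bites you: two threads taking the same pair of locks in opposite order. Each one is individually "correctly synchronized", yet together they can hang forever, and the wrap-it-in-synchronized habit does nothing to warn you.

    // Classic lock-ordering deadlock: thread 1 holds A and wants B,
    // thread 2 holds B and wants A. Run it and it will usually just hang.
    public class DeadlockSketch {
        private static final Object lockA = new Object();
        private static final Object lockB = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (lockA) {
                    pause();                              // widen the race window
                    synchronized (lockB) { System.out.println("t1 done"); }
                }
            }).start();

            new Thread(() -> {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) { System.out.println("t2 done"); }
                }
            }).start();
        }

        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        }
    }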
More scaleable? (Score:1)
However, given that, an on-chip crossbar for a chip with a small number of requestors and LOTS of layers of metal for routing may be a better solution than trying to build something with an on-chip tri-state bus (we silicon guys try to avoid on-chip tri-state things if we possibly can).
Woolly (Score:1)
There are some big new concepts with MAJC. It needs code optimisation done for each implementation, uses speculative threads (which will be competing with the known and proven magic of Vector processing) and has a lump of registers with local/global separation.
I can't imagine any of these optimisations are going to kick butt in useful applications until more information comes out. Intel withheld information on the Pentium (Appendix H) and this resulted in compilers producing slower code when Pentium optimisations were turned on (for about a year).
Meanwhile, general purpose and graphics processors will continue to evolve with the advantage of mature tools supporting them. Embedded chips can also go to tightly coupled SMP on chip at low cost.
Exciting times.
PS: Context switching on cache misses has been around for at least ten years. When everyone thought that superconductors would take over the world, there was discussion on how to deal with the monster memory latency, and one architecture (I've forgotten the name, but it may have come from Japan) proposed running 5 threads concurrently and swapping execution on memory accesses.
Re:Good article. Sun rules. (Score:2)
Would you risk the future of your company on a processor that is optimized for an interpreted language (ahh, memories of old Wang minis with BusinessBASIC CPUs!)?
They had a shot at the embedded market with the microSPARC. They missed badly.
Sun needs to pick a direction, and quickly. SGI is hot on their tail in the high-end workstation / super-server market. Linux / FreeBSD is eating away at their mindshare in the ISP / internet market. Motorola is killing them in the embedded market.
Oh yeah, the Javastation is going to save them. Sure.
Just about as likely as me going to my boss and suggesting building a system around a mutant CPU from this drowning company.
Re:Good article. Sun rules. (Score:2)
Also, Java has been rather unsuccessful on the desktop, but it is quite successful on the server and is also gaining momentum in the embedded world. JIT is not the same as interpreted (which is what you are suggesting); JIT stands for just-in-time compilation. Another thing Sun is working on is dynamic compilation, which is JIT compilation taking advantage of profiling information collected during execution. One major advantage of dynamic compilation over static compilation is that you can use information that is only available at runtime for optimizations. One of those optimizations could be discovering parallelism, and it's exactly this type of optimization that is beneficial on a MAJC architecture.
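As a rough, made-up illustration of what runtime information buys you: the call site below could receive any Shape as far as a static compiler is concerned, but at runtime only one concrete class ever shows up, so a dynamic compiler like HotSpot can devirtualize and inline the call inside the loop, something an ahead-of-time C compiler cannot safely do when the target isn't known until runtime.

    // Toy example: statically this loop is a virtual call through an
    // interface; dynamically it is monomorphic (only Circle ever arrives),
    // which is exactly the kind of fact a profiling JIT can exploit.
    interface Shape { double area(); }

    class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public class ProfileSketch {
        public static void main(String[] args) {
            Shape[] shapes = new Shape[1_000_000];
            for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i % 10);

            double total = 0;
            for (Shape s : shapes) total += s.area();   // candidate for inlining
            System.out.println(total);
        }
    }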
Also the ability to combine a MAJC core with a DSP on one die seems like a killer feature to me for embedded machines where every extra chip increases the price of the product.
You seem to be under the impression that it's not going well with Sun at the moment. Truth is, Sun is one of the more successful companies of the last few years, especially in the server market.
SGI, on the other hand, has been through quite a few reorganizations recently.
Linux doesn't have to be a threat to companies like Sun. Sun gets most of its revenue from hardware, not from software.
"Just about as likely as me going to my boss and suggesting building a system around a mutant CPU from this drowning company."
Probably your boss won't consult you for this. In case you haven't noticed, I responded to each argument in your post, if you think I forgot one or don't agree: do reply.
The Drowning Companies (Score:1)
Vertical Multithreading (Score:2)
In fact, this is the basis of Tera Computer's [tera.com] MTA (Multithreaded Architecture) processors, which are already being evaluated at the San Diego Supercomputer Center.
Threaded programs (Score:2)
IMHO multiprocessing is easier to debug than multithreading since you don't have the potential problems of shared data structures unless you're explicitly using shared memory.
Stanford (Score:1)
Curious.
err whats wrong (Score:1)
Sorry, I was at work. I know that a lot of people are not familiar with HP and IA-64; I was trying to point out the fact that HP has worked on IA-64 more than Intel has. Most of the research has been done with them.
It's interesting to notice that a lot of data centers that process data run HP PA-RISC and HP-UX.
HP-UX is a dog, if only because nothing you would think would be standard actually is; the manual might as well read
THE STANDARD IS
A good reference is the Trimaran project:
http://www.trimaran.org/ [trimaran.org]
which has been ported to Linux, so now you too can mess with compilers!
I am under NDA over the compiler, but I can tell you that they have been very clever with the registers, making full use of the hardware, and that:
A) the compiler sees most of what the programmer wants to do, so it can perform most of the checking
B) the hardware has very good prediction built in, and the compiler tells it what it thinks is going to happen
The compiler is the most important part of the system. This was learnt from OS/2 and from GCC; look how much the GNU Compiler Collection gets used!
regards
john
(bit upset about the troll)
a poor student @ Bournemouth Uni in the UK (dyslexic, so please don't moan about the spelling, just the content)
JIT compilation is the future (Score:1)
Whether it is Java, or building "GCC" into the kernel loader so that you can execute a tarball file (and have it compiled on the fly and cached), we need binary independence.
Now, it's tempting to say "just compile the source", but if the source is a 20-megabyte monster like Mozilla that takes a loooong time to compile, you have an end-user problem.
The future is going to be awash in processors of all kinds, and we need a way of porting binaries across CPUs. If we don't, we won't have competition in the CPU market anymore, and one architecture will dominate, just like the Windows API dominates.
Also, once you get to IA-64 like devices, code will have to be recompiled for each new generation of IA-64 depending on how many execution units it has. If one generation can do 4-way execution, and the next can do 8-way, a JIT compiler is needed if the code is to be optimal for each CPU.
One approach would be to simply take the middle-layer output of a compiler like GCC, write it out in a bytecode format, and then have operating systems back-end-compile and peephole-optimize this code when loading it.
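A crude, hypothetical sketch of that shape of system (nothing to do with GCC's real intermediate representation): the program is reduced to a tiny portable "bytecode", and each host is free to interpret it, or compile it down to whatever the local CPU happens to be, at load time.

    // Made-up, minimal stack-machine bytecode, purely to illustrate a
    // portable intermediate form. A real system would translate this to
    // native code at load time instead of interpreting it.
    public class TinyVm {
        static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3;

        static void run(int[] code) {
            int[] stack = new int[64];
            int sp = 0;
            for (int pc = 0; pc < code.length; pc++) {
                switch (code[pc]) {
                    case PUSH:  stack[sp++] = code[++pc]; break;
                    case ADD:   stack[sp - 2] += stack[sp - 1]; sp--; break;
                    case MUL:   stack[sp - 2] *= stack[sp - 1]; sp--; break;
                    case PRINT: System.out.println(stack[--sp]); break;
                }
            }
        }

        public static void main(String[] args) {
            // (2 + 3) * 4 -- the same byte stream would run unchanged on any host.
            run(new int[]{ PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT });
        }
    }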
Whatever mechanism is used, JIT compilation (it's not just Java) has several benefits:
1) binary independence, register independence
2) can perform optimizations that even C/C++ can't. C's linker can't inline polymorphic or indirect function calls. A JIT compiler could actually inline calls from a shared library that isn't known until runtime. If this library is OpenGL or GTK, rendering may get a huge improvement.
3) can adjust code and data locality depending on the size/capability of the CPU's cache
4) security. Depending on the mechanism used, can be used to "sandbox" code.
Try not to focus too much on the "interpreted" aspect of bytecode. Instead, just think of bytecode as simply a compressed version of source code. For instance, Java's bytecode format is so verbose, you can pretty much decompile it back to the original source without much loss.
Java is just the first step, but in the future, when the world is filled with ubiquitous CPUs everywhere, we're going to need ways to prove assertions about untrusted code, and we're going to need ways to run it wherever you are, so you aren't tied to any one device.
Re:Good article. Sun rules. (Score:1)
In the same period of time, Sun has gone from $4 to $112.
SUN has stayed on a clear course of providing workhorse servers and workstations based on the SPARC architecture and SunOS/Solaris OS.
SGI has followed a confusing track on the high end (splitting their offerings by buying Cray, which competed with their own Origin line, then recently announcing they would spin it off) and is now deciding to move over to the unproven IA-64 technology (whose scalability is seriously questioned). In the workstation market they dabbled in and then withdrew from the IA-32 NT market. Over the last few years SGI has had "focus" on 4 architectures (MIPS, IA-64, IA-32, Cray) and 4 operating systems (IRIX, UNICOS, NT, Linux). They have made a series of bad business moves (ex: the bus technology for the Sun Ultra Enterprise servers is based on technology SGI sold Sun) and have been laying off employees for the past several quarters.
True, SUN has not swept the world by storm with Java, but they have been very successful at their core business: selling servers.
Re:Vertical Multithreading (Score:1)
Also, NASA did a great review of the Tera architecture, HERE [esgeroth.org]. For many scientific computing tasks an MTA can blow away even a high-end Origin or cluster. Let's face it: we can make our CPUs as fast as we want, but the memory bottleneck is just getting worse. I believe that low-level multithreading could solve a lot of these problems.
Re:Maintains Status Quo. HUH? (Score:1)
HUH?
Sun already licenses SPARC, motherboards, and just about all their other technology. You can go out and buy third party Sun clones right now and they are significantly cheaper. I bought one myself.
Why isn't there more "software" for Solaris (apart from the 15,000 apps already available)? Because Solaris is a server OS, not a consumer OS, that's why.
Re:Stanford (Score:1)
Re:More scaleable? (Score:1)
Re:JIT compilation is the future (Score:1)
You are right ... (Score:1)
As much as others seem to dislike it, I think this will be the future for platform/CPU/language-independent code porting. I'm not referring only to Java here; every compiler can be *easily* modified to produce such PCODE-based output. Even further, the idea of parsing some text file (C++/Java/HTML/...) and getting PCODE output (or something like it) can also be generalized. There are also some commercially available products that parse languages of various syntaxes. I've seen one in DDJ, but I forgot its name.
MAJC = E2K = Transmeta (Score:1)
can't you see?
it's so simple
Re:Vertical Multithreading (Score:1)
While a compiler could register allocate in the way you suggest to keep software-based thread context switches cheap, it couldn't address the cache miss issue which needs to be implemented in the CPU itself.
Re:Good article. Sun rules. (Score:1)
Embedded machines are not the only domain for this chip.
First off, I was replying in the context of the original message on this thread, which said Sun was going to take over the embedded world with this chip (this also happens to be my domain of expertise). And I'll stand by my arguments in that context. As for the others...
Java
People keep saying that; perhaps I'm overlooking the obvious, but I sure don't see this. Can you offer examples? (Really, I'm not trying to be a smartass here; the server apps I see tend to be large database apps with a SQL backend and various frontends (Perl/Tcl/HTML/Visual Basic).) I'd love to know where these Java apps are going.
JIT is not the same as interpreted (which you are suggesting).
I'm aware that this isn't a JVM on a chip (that would be the picoJava chip line). However, JIT, by definition, doesn't care what the back end is. So if JIT is the solution, why do we need this beast at all?
One of those optimizations could be discovering paralellism. And its exactly this type of optimizations that is beneficial on a MAJC architecture.
If you're looking for parallelism in the instruction stream, we call that pipelining. This is not a new idea. If you're looking at a higher level, that becomes a function of the compiler, which again is independent of the back end.
Sun's claims that this architecture will benefit from multi-threading seem to revolve around their catchily named "Space-Time Computing" (which translates into speculative execution; again, not a new idea) and high cache coherency (always a good idea, which is why Intel and Motorola already have solutions; they are however honest about them and don't pretend that they will scale to hundreds of processors).
In short, all the claims they make about benefitting from multi-threading can be equally applied to other CPUs. What's left is marketing spew, positioning this as "Java-optimized". Which sure isn't going to help them in the market I know.
You seem to be under the impression that its not going well with SUN at this moment.
Things look good right now (hell, I'm typing this on a Solaris workstation!). But I don't see a good future; as I said in my original message, they're under attack on all sides, and if this is their response (let's take on Intel and Motorola!), I don't see a rosy future.
Linux doesn't have to be a threat to companies like SUN. SUN gets its most revenue from hardware, not from software.
It's Solaris that sells the hardware. Price/performance for Sun hardware is miserable. As an example, my development group recently built a proof-of-concept 16-node Linux cluster using off-the-shelf parts (el-cheapo K6 motherboards) which costs 10% of the price of the Sun Enterprise server it's competing with, and performs about 25% faster. If that's not a threat, I don't know what is.
Probably your boss won't consult you for this.
Truth in that; I'm more of a software guy.
Re:Good article. Sun rules. (Score:2)
"People keep saying that; perhaps I'm overlooking the obvious, but I sure don't see this."
OK, I can't pull any examples out of my big hat and I'm too lazy to go and search for them. But considering the huge investments made by Sun, IBM and others in getting Java to work on any platform you can name, they must expect some return on their investment. You do the math, but that tells me Java is increasingly successful on the server.
"So if JIT is the solution, why do we need this beast at all? "
As I explained later on in my post, a dynamic compiler like HotSpot can do something a static compiler (typically used for C) cannot: use information gathered at runtime to perform optimizations. This is a major advantage on architectures that require the compiler to optimize the instruction stream for the processor. Furthermore, Java and threads are a good combination (one of the reasons for Java's success on the server), and MAJC is very good at running multithreaded stuff.
"If you're looking for paralleism in the instruction stream, we call that pipelining."
VLIW chips like MAJC and IA-64 do the pipelining in software (at least the discovery of parallelism and the optimization of the instruction stream). It's not new, I know. I just explained why a dynamic compiler has an advantage over static compilers in doing so.
"In short, all the claims they make about benefitting from multi-threading can be equally applied to other CPUs. What's left is marketing spew, positioning this as "Java-optimized". Which sure isn't going to help them in the market I know."
The market you are working in is changing rapidly. With fast chips getting dirt cheap, the rules of the game are changing. Coding everything in assembler is not really an option anymore because that slows down time to market. Similarly, more and more is implemented in C++ rather than C on many embedded platforms. I work together with Axis (a Swedish company building embedded machines) for my research, and I know their main problem is the fact that they have to maintain a huge source tree of C++ code (100K+ LOC). This slows them down in introducing new products.
So because of this, Java will become an option in your market too.
"In short, all the claims they make about benefitting from multi-threading can be equally applied to other CPUs."
I don't see that; the article was quite convincing in comparing IA-64 and MAJC. Only time will tell which is the faster processor, since neither is available at this moment. Something tells me IA-64 will be a major disappointment. Maybe Sun will screw up on MAJC, but the architecture doesn't sound half bad. More likely, Intel and Motorola will 'borrow' some of the ideas from this processor in their next-generation CPUs.
As has been pointed out before, Linux is not yet a good match on high-end server platforms because it lacks certain features. Probably this will be resolved in due time, but for now Linux is not competing in that area.
"Price/ performance for Sun hardware"
Hardware and OS licenses are only small portion of the cost on big servers. These things have to be maintained by expensive staff and they have to run expensive, tailored software. If price performance of the hardware/OS was the only consideration, they would have been out of business long time ago.
On the long term I can see Linux replacing Solaris. Obviously all the UNIX giants (Sun, IBM, SGI) know this and are generally cooperative towards Linux (at least more then a certain Redmond based company).