Sun Microsystems

Sun's MAJC vs Intel's IA-64

Shauna writes "Ars Technica has an informative article on the differences and similarities between Sun's new MAJC and Intel's IA-64 plans at the design level. I think it shows, in a roundabout way, that MAJC is going places while Intel sleeps." The article is definitely designed for the chip fetishists in the audience.
This discussion has been archived. No new comments can be posted.

  • Oh the times, they are a-changin'. I foresee it now: Sun is going to make serious claims on the embedded market. I enjoyed the article.
  • It has always been the case that Sun made boxes that started just below the iNTEL peak and went
    up substantially from there. The question Sun needs to answer is "why does IA32 have more apps
    running on it than all the other platforms?".

    It's about price and openness, and when Sun starts letting other people in at all levels of its market it
    can start to close that apps-support gap. I.e., license the motherboard and bus architecture, rent out
    the chip masks, open the complete API (SCSL isn't OSS, but it does this).

    In short, realize that iNTEL's success has more to do with the likes of VIA, AMD and Compaq than
    it does with Microsoft. 'Cause it sure ain't about performance. Sun already has some segments of
    the high-performance corporate market sewn up. Just ask Oracle.

  • I don't know much of anything about processor design, but this article was easy enough for even me to read. It's all very fascinating.

    I especially liked the concept of Space-Time Continu.. err, Computing, which allows the processor to spawn a speculative instruction stream when the main execution stream is working on a loop, or whatnot. This speculative instruction stream can run ahead, and after the main execution stream gets its job done, both of them can join (possibly; I'll assume this is not always the case). The speculative execution stream can be given to a whole other processing unit in the case of a processor cluster (it seems the idea is that there are several MAJC processors per die, mmm, truly parallel threads).

    Clustering MAJC processors on a single die opens some interesting prospects for threads. One MAJC processor has enough registers to hold the contexts of four different threads at once (according to the author). This should make some real difference regarding context switches.

    Fascinating stuff.
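The speculative-stream idea described in this comment can be sketched in software. This is only a rough analogy (MAJC's Space-Time Computing does this at the hardware level, with register-file rollback); the function names and the commit test here are hypothetical, purely for illustration:

```python
import threading

# Rough software analogy of a speculative instruction stream (illustrative
# only; MAJC implements this in hardware). A second thread runs ahead on
# the assumption that the main stream's outcome will not invalidate it.
def speculative_run(main_work, speculative_work, commit_ok):
    result = {}

    def spec():
        result["spec"] = speculative_work()  # runs ahead, may be discarded

    t = threading.Thread(target=spec)
    t.start()
    result["main"] = main_work()             # main stream does its loop
    t.join()                                 # the two streams join here
    if commit_ok(result["main"]):
        return result["main"] + result["spec"]   # speculation committed
    return result["main"] + speculative_work()   # mispredicted: redo serially

# Example: main stream sums 0..99 while the speculative stream sums 100..199.
total = speculative_run(
    main_work=lambda: sum(range(100)),
    speculative_work=lambda: sum(range(100, 200)),
    commit_ok=lambda m: True,  # assume the speculation held
)
print(total)  # 19900, same as sum(range(200))
```

When the speculation holds, the two halves of the work overlap in time; when it fails, you pay the sequential cost anyway, which is why the hardware can afford to be aggressive only when rollback is cheap.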

  • Lisp machines, Intel's iAPX 432(?) [OOP], etc. have failed because their abilities were emulated in faster-evolving general-purpose chips. A special-purpose chip may only be upgraded every three years or so, versus one or two times a year for Intel, PowerPC, etc. The solution for Sun is to stay RISC-ish, but adjust the instruction mix to favour that used in Java code.
  • Is anyone familiar with what this bad boy's gonna sit on in terms of bus architecture? Right now that's the most significant limiter to the celerity of Intel boxen. Having a bigass processor is wonderful, but it can't do much without a way of pumping a large amount of data in and out of it.
    -Spazimodo

    Fsck the millennium, we want it now.
  • by ChrisRijk ( 1818 ) on Tuesday November 09, 1999 @05:28AM (#1549296)

    MAJC developers connection [sun.com]

    A chat with the lead designer for the MAJC architecture [sun.com]

    MAJC 5200 chip press release [sun.com] From MAJC docs page: [sun.com] First MAJC implementation presentation [sun.com].

    Some comments... Firstly, I don't think it was made clear that the MAJC architecture can execute compiled C/C++ as easily as Java. You'd also probably be using a dynamic compiler like HotSpot rather than a JIT compiler. I think the guy got it wrong about the functional units being data-agnostic; the registers definitely are, though. Still, you can (pretty much) execute 4 of any type of instruction at once. The first block is slightly different compared to blocks 2-4 (which are identical), though I don't know how. The very interesting STC concept is much easier to do with the Java programming model (because of certain issues with data) compared to C/C++. They don't say how easy it would be to apply STC to C/C++, though; it might be impossible in the general case, though possible in some "limited" cases. The MAJC does also have scoreboarding for instructions with dynamic execution times, e.g. loads, though I'm not sure if this applies to things like FP mult/div/sqrt etc. There are a couple of other interesting things the guy missed, for example the cross-bar data switch and the streaming data ports. Apart from that, though, I think it was a pretty decent review.

    With regards to the first chip, the MAJC 5200: it's supposed to "tape out" (get its first physical implementation) by the end of the year and sample in Q2 next year. The 5200 has 2 CPU units on chip (with a shared L1 data cache and separate L1 instruction caches), will be made on a .22 micron process, run at 500MHz and consume 15W.

    Here are some "here's how fast it is" stuff from the PR:

    • This chip is expected to be able to handle over one hundred voice-over-IP channels while enabling encryption and decompression of the packets over a 10 gigabit-per-second Ethernet connection. This means that a very large number of simultaneous phone conversations could be supported in a small-footprint gateway server device. In the area of image processing, the JPEG 2000 test set is anticipated to run at 78 MB/s while encoding 8-bit sample images. For networked video, two streams of MPEG-2 data representing five Mbps interlaced sequences have the ability to be simultaneously decoded, with additional processor capacity still available. For teleconferencing under the H.263 standard, six decodes and one encode are expected to be handled in real time. In advanced audio applications, an AC-3 decode of 5.1 channels at 384 kbps is predicted to use only seven and one-half percent of the capacity of one of the 5200's two processors.

    btw, Sun have a 2000-processor array for simulating their UltraSparc-III chip, and they do some pretty accurate simulations, including things like booting Solaris. In the MAJC 5200 PDF/PS file, they also quote some (estimated) speed-ups gained from using the STC technique for the SPECjvm98 suite of programs: they get 40-60% or so, which I think is very impressive. The MAJC 5200 also has a graphics pre-processor (it's going to be used in Sun's new high-end graphics systems) and they quote some triangle-processing figures with different levels of lighting detail. I don't really get what they mean, but they quote from 60 million triangles/s to 90M/s or so, which is in about the same region as the PlayStation2, or about 4x faster than the fastest current mainstream PC graphics card. However, that doesn't mean you can use 60M-90M in real-world stuff...

    In general, the chip is aimed at "low end" (though for Sun, "low end" equates to less than $100,000 generally) embedded solutions.

  • The first MAJC chip, the 5200, has a 32-bit 400MHz embedded Rambus 'bus' for main memory, though it's actually a point-to-point link. If you read the details you'll find that (like with most new high-end stuff) the MAJC uses a cross-bar switch instead of a bus, which is much faster and more scalable, but more expensive. It also has a 66MHz (64-bit?) PCI connector and 2 250MHz UPA connectors (UPA is Sun's equivalent of PCI). You can also line up a load of 5200s in a row, connecting one chip's output port to the next's input port, so you can pipeline a complex algorithm (e.g. graphics) across multiple chips.
  • I just re-checked. The figures for the % speedup on the SPECjvm98 benchmark are between 51% and 67%, depending on the sub-benchmark in the SPECjvm98 suite. Here are the actual values for each of the sub-benchmarks:
    compress: +51%
    jess: +56%
    db: +62%
    javac: +67%
    jack: +64%
    mpegaudio: +55%

    Personally, I think that's very impressive.
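For reference, the quoted per-benchmark figures work out as follows (this is just arithmetic on the numbers listed above):

```python
# Per-benchmark STC speedups quoted above, in percent.
speedups = {"compress": 51, "jess": 56, "db": 62,
            "javac": 67, "jack": 64, "mpegaudio": 55}
mean = sum(speedups.values()) / len(speedups)
print(f"min {min(speedups.values())}%, max {max(speedups.values())}%, "
      f"mean {mean:.1f}%")  # min 51%, max 67%, mean 59.2%
```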

  • It won't do you a damn bit of good to have those specs, since (afaik) there's no way around the microcode.
  • It's hard to believe threaded programming is used as little as this guy says it is. Every hiring manager who hears the word "thread" turns green and starts vomiting cover letters from people who write threaded programs; it's just so trivial. Parallel programming is so simple your development times drop and your productivity shoots up.
  • ? The R10000 is pretty old news... the R12K has been out for a while. Neither particularly kicks ass.
  • no, PPC has type bound registers (int, fp, vec)
  • Is this a bit of sarcasm that I am missing?

    Multi-threaded programming is not trivial, and can be hellish to debug. If you're referring to the usual Java version, which is "throw a mutex around a function and you're done", then it is not difficult, but when you have to take deadlocks into account, things can get trickier.
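The deadlock trap mentioned above is worth spelling out: two threads taking the same pair of locks in opposite order can each end up waiting on the other forever. A minimal sketch of the standard fix, a global lock-acquisition order (the helper name here is made up for illustration):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Deadlock-prone pattern: thread 1 takes lock_a then lock_b while thread 2
# takes lock_b then lock_a; each can block forever waiting on the other.
# Fix: impose one canonical acquisition order so no wait cycle can form.
def with_both(first, second, work):
    lo, hi = sorted((first, second), key=id)  # canonical order by identity
    with lo:
        with hi:
            return work()

# Both call sites now acquire in the same internal order: no deadlock.
print(with_both(lock_a, lock_b, lambda: "done"))
print(with_both(lock_b, lock_a, lambda: "done"))
```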
  • Ahem... cross-bars are NOT more scalable than buses: their complexity goes up with the square of the number of requesters, while buses go up linearly.

    However, given that, an on-chip cross-bar for a chip with a small number of requesters and LOTS of layers of metal for routing may be a better solution than trying to build something with an on-chip tri-state bus (we silicon guys try to avoid on-chip tri-state things if we possibly can).
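The scaling argument above in numbers (a toy count, nothing vendor-specific):

```python
# A full crossbar needs one crosspoint per (requester, target) pair, so
# its wiring cost grows as n^2; a shared bus adds roughly one tap per
# requester, growing linearly. For small n the crossbar's cost is modest,
# which is why it wins on-chip where the requester count is tiny.
def crossbar_crosspoints(n):
    return n * n

def bus_taps(n):
    return n

for n in (2, 4, 8, 16, 32):
    print(f"n={n:2d}  crossbar={crossbar_crosspoints(n):4d}  bus={bus_taps(n)}")
```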

  • Information from Sun is a little woolly at present. They claim amazing things but, as far as I can tell, aren't going into much detail on the architecture.

    There are some big new concepts with MAJC. It needs code optimisation done for each implementation, uses speculative threads (which will be competing with the known and proven magic of Vector processing) and has a lump of registers with local/global separation.

    I can't imagine any of these optimisations are going to kick butt in useful applications until more information comes out. Intel withheld information on the Pentium (Appendix H) and this resulted in compilers producing slower code when Pentium optimisations were turned on (for about a year).

    Meanwhile, general purpose and graphics processors will continue to evolve with the advantage of mature tools supporting them. Embedded chips can also go to tightly coupled SMP on chip at low cost.

    Exciting times.

    PS: Context switching on cache misses has been around for at least ten years. When everyone thought that superconductors would take over the world, there was discussion of how to deal with the monster memory latency, and one architecture (I've forgotten the name, but it may have come from Japan) proposed running 5 threads concurrently and swapping execution on memory accesses.

  • As an embedded developer, I can heartily say: not bloody likely.

    Would you risk the future of your company on a processor that is optimized for an interpreted language (ahh, memories of old Wang minis with BusinessBASIC CPUs!)?

    They had a shot at the embedded market with the microSPARC. They missed badly.

    Sun needs to pick a direction, and quickly. SGI is hot on their tail in the high-end workstation / super-server market. Linux / FreeBSD is eating away at their mindshare in the ISP / internet market. Motorola is killing them in the embedded market.

    Oh yeah, the Javastation is going to save them. Sure.

    Just about as likely as me going to my boss and suggesting building a system around a mutant CPU from this drowning company.
  • Embedded machines are not the only domain for this chip. Its scalability should make it the ideal choice for a server with hundreds of thousands of threads. So it's not a bad thing at all to produce something like this.

    Also, Java has been rather unsuccessful on the desktop but is quite successful on the server, and is also gaining momentum in the embedded-machines world. JIT is not the same as interpreted (which you are suggesting); JIT stands for just-in-time compilation. Another thing Sun is working on is dynamic compilation, which is JIT compilation taking advantage of profiling information collected during execution. One major advantage of dynamic compilation over static compilation is that you can use information that is only available at runtime for optimizations. One of those optimizations could be discovering parallelism. And it's exactly this type of optimization that is beneficial on a MAJC architecture.

    Also the ability to combine a MAJC core with a DSP on one die seems like a killer feature to me for embedded machines where every extra chip increases the price of the product.

    You seem to be under the impression that it's not going well with Sun at this moment. Truth is that Sun is one of the more successful companies of the last few years, especially in the server market.

    SGI, on the other hand, has been through quite a few reorganizations recently.

    Linux doesn't have to be a threat to companies like Sun. Sun gets most of its revenue from hardware, not from software.

    "Just about as likely as me going to my boss and suggesting building a system around a mutant CPU from this drowning company."

    Probably your boss won't consult you on this. In case you haven't noticed, I responded to each argument in your post; if you think I forgot one, or you don't agree, do reply.
  • It seems to me that SGI is the drowning company here, not Sun. SGI last month reported losses much worse than expected [cnet.com] (37 cents per share, when the street expected only 7 cents per share) and recently Sun announced workstations aimed directly at SGI's market [cnetinvestor.com] (visualization and simulation).
  • The Ars review incorrectly claims that Vertical Multithreading (switching threads at the hardware level when there's a cache miss) is unique to the MAJC architecture.

    In fact, this is the basis of Tera Computer's [tera.com] MTA (Multithreaded Architecture) processors that are already being evaluated at the San Diego Supercomputer Center.
  • Threading is fine if/when you need it, but if used inappropriately it can slow rather than speed development. If you need true concurrency, then you have no choice, but if you're using it as a way to "get around" blocking I/O, you're better off switching to a single-threaded, event-driven approach. Debugging explicit state representation is easier than debugging multithreaded apps.

    IMHO multiprocessing is easier to debug than multithreading since you don't have the potential problems of shared data structures unless you're explicitly using shared memory.
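A minimal sketch of the single-threaded, event-driven alternative described above, using Python's selectors module purely as an illustration (the original debate predates this API; the callback name is made up):

```python
import selectors
import socket

# One thread, one loop: per-connection state lives in explicit data and
# callbacks rather than per-thread stacks, so there are no shared-data
# races to debug; the loop dispatches whichever socket is ready.
sel = selectors.DefaultSelector()
left, right = socket.socketpair()
for s in (left, right):
    s.setblocking(False)

received = []

def on_readable(sock):
    received.append(sock.recv(1024))  # explicit state, not a blocked thread
    sel.unregister(sock)
    sock.close()

sel.register(right, selectors.EVENT_READ, on_readable)
left.sendall(b"hello")

for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)  # dispatch the ready callback

left.close()
print(received)  # [b'hello']
```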
  • Hmm, wasn't Stanford the university where Sun got the basis for their HotSpot technology as well? Could there be a connection here between MAJC's Space-Time Computing and the fact that they're relying on dynamic (or just-in-time) compilation?

    Curious.
  • all it takes ...

    Sorry, I was @ work, and I know that a lot of people were not familiar with HP and IA-64. I was trying to point out the fact that HP has worked on IA-64 more than Intel has; most of the research has been done with them.

    It's interesting to note that a lot of data centers, where they process data, run HP PA-RISC and HP-UX.

    HP-UX is a dog, if only because you find that nothing you would think would be standard actually is; the manual might as well read:

    THE STANDARD IS ... but it was SLOW so we did THIS ...

    A good reference is the Trimaran project:

    http://www.trimaran.org/ [trimaran.org]

    which has been ported to Linux; now you too can mess with compilers!


    I am under NDA over the compiler, but I can tell you that they have been very clever with the registers, making full use of the hardware, and that:
    A) the compiler sees most of what the programmer wants to do, so it can perform most of the checking;

    B) the hardware has very good prediction built in, and the compiler tells it what it thinks is going to happen.

    The compiler is the most important part of the system. This was learnt from OS/2 and from GCC; how much does the GNU Compiler Collection get used!!

    regards

    john

    (bit upset about the troll)



    a poor student @ Bournemouth Uni in the UK (a deltic, so please don't moan about the spelling, but the content)

  • Whether it is Java, or building "GCC" into the kernel loader so that you can execute a tarball file (and have it compiled on the fly and cached), we need binary independence.

    Now, it's tempting to say "just compile the source", but if the source is a 20-megabyte monster like Mozilla that takes a loooong time to compile, you have an end-user problem.

    The future is going to be awash in processors of all kinds, and we need a way of porting binaries across CPUs. If we don't, we won't have competition in the CPU market anymore, and one architecture will dominate, just like the Windows API dominates.


    Also, once you get to IA-64 like devices, code will have to be recompiled for each new generation of IA-64 depending on how many execution units it has. If one generation can do 4-way execution, and the next can do 8-way, a JIT compiler is needed if the code is to be optimal for each CPU.


    One approach would be to simply take the middle-layer output of a compiler like GCC, write it out in a bytecode format, and then have operating systems back-end-compile and peephole-optimize this code when loading it.

    Whatever mechanism used, JIT compilation (it's not just Java) has several benefits:

    1) binary independence, register independence

    2) can perform optimizations that even C/C++ can't. C's linker can't inline polymorphic or indirect function calls. A JIT compiler could actually inline calls from a shared library that isn't known until runtime. If this library is
    OpenGL or GTK, rendering may get a huge improvement.

    3) can adjust code and data locality depending on the size/capability of the CPU's cache

    4) security. Depending on the mechanism used, can be used to "sandbox" code.


    Try not to focus too much on the "interpreted" aspect of bytecode. Instead, just think of bytecode as simply a compressed version of source code. For instance, Java's bytecode format is so verbose, you can pretty much decompile it back to the original source without much loss.

    Java is just the first step, but in the future, when the world is filled with ubiquitous CPUs everywhere, we're going to need ways to prove assertions about untrusted code, and we're going to need ways to run it wherever you are, so you aren't tied to any one device.
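The "bytecode as compressed source" point above is easy to see in Python, which uses the same idea (an illustration of the general concept only; exact opcode names vary by interpreter version):

```python
import dis

# A bytecode format is essentially pre-parsed, compressed source: each
# interpreter or JIT maps these portable stack operations onto whatever
# the host CPU provides, which is what gives binary independence.
def square(x):
    return x * x

dis.dis(square)  # shows e.g. LOAD_FAST x; LOAD_FAST x; a multiply; a return

opnames = [ins.opname for ins in dis.get_instructions(square)]
print(opnames)
```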


  • I would disagree that SGI is hot on anyone's tail these days, especially Sun's. Look at the stock prices: SGI has basically been going down from a height of $40 in 1995 to its present level of around $8.
    In the same period of time, Sun has gone from $4 to $112.

    Sun has stayed on a clear course of providing workhorse servers and workstations based on the SPARC architecture and the SunOS/Solaris OS.

    SGI has followed a confusing track on the high end (splitting their offerings by buying Cray (which competed with their own Origin line), then recently announcing plans to spin it off) and is now deciding to move over to the unproven IA-64 technology (whose scalability is seriously questioned). In the workstation market they dabbled in and then withdrew from the IA-32 NT market. Over the last few years SGI has had "focus" on 4 architectures (MIPS, IA-64, IA-32, Cray) and 4 operating systems (IRIX, UNICOS, NT, Linux). They have made a series of bad business moves (ex: the bus technology for the Sun Ultra Enterprise servers is based on technology SGI sold Sun) and have been laying off employees for the past several quarters.

    True, Sun has not taken the world by storm with Java, but they have been very successful at their core business: selling servers.
  • This technology was also used in other places. The first low-level multi-threading I know of was on one of the I/O controllers for Seymour Cray's CDC 6600. It could interleave 10 threads to hide latency.

    Also, NASA did a great review of the Tera architecture. HERE [esgeroth.org]. For many scientific computing tasks an MTA can blow away even a high-end Origin or cluster. Let's face it: we can make our CPUs as fast as we want, but the memory bottleneck is just getting worse. I believe that low-level multithreading could solve a lot of these problems.


  • HUH?

    Sun already licenses SPARC, motherboards, and just about all their other technology. You can go out and buy third party Sun clones right now and they are significantly cheaper. I bought one myself.


    Why isn't there more "software" for Solaris (apart from the 15,000 apps already available)? Because Solaris is a server OS, not a consumer OS, that's why.

  • HotSpot is Urs Hölzle's work. He's a professor at UC Santa Barbara, but is a visiting scholar at Stanford, as well as being the CTO of a company and working with Sun.
  • Go re-read what you replied to. He never said crossbar archs don't scale; he said their complexity increases faster, which is exactly what you just said. *sigh* I wish they taught more reading comprehension in schools.
  • JIT-compiled bytecode is basically software-based microcode. Direct execution has less overhead, and I seriously question whether all the JIT hacks in the world could compensate for the CPU time and cache footprint of JIT compilation (though, I admit, I know little about JIT compiling). And I know someone out there is thinking: "so what? cache the compiled bytecode to a file, for faster execution later". Now all you've done is turned bytecode into nothing more than compressed logic/pre-parsed source. Places where JIT compilation does have an advantage are when you don't know what could run the code (such as a browser, where speed isn't critical), when you don't want to execute native instructions for security reasons, and/or when the platform that runs it doesn't have a place to store the code in its own native instructions.
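Caching compiled bytecode to a file is, incidentally, exactly what CPython does with .pyc files; a small demonstration (the module name and path here are temporary and purely illustrative):

```python
import os
import py_compile
import tempfile

# Write a tiny module, then compile it to a cached bytecode file: the
# "compressed logic / pre-parsed source" described in the comment above.
src = os.path.join(tempfile.mkdtemp(), "mod.py")
with open(src, "w") as f:
    f.write("def triple(x):\n    return 3 * x\n")

cached = py_compile.compile(src)  # returns the path of the .pyc it wrote
print(os.path.exists(cached))     # True: later imports skip the parse step
```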
  • Posted by cookieman.k:

    As much as others seem to dislike it, I think that this will be the future for platform/CPU/language-independent code porting. I'm not referring only to Java here; every compiler can be *easily* modified to produce such PCODE-based output. Even further, the idea of parsing some text file (C++/Java/HTML/...) and getting a PCODE output (or something like this) can also be generalized. There's a commercially available product that parses various syntax languages; I've seen it in DDJ, but I forgot its name... I'm also working (design phase) on some sort of parser that will implement much of what's described above. I do not mention more because I would have to kill you :)
    You are right, we must take some steps to not depend on binaries. Any alternative solution? Nope, for now... Greets:
  • Posted by Nr9:

    can't you see?
    it's so simple ...
  • The whole idea of Vertical Multithreading as implemented by MAJC or the MTA is to prevent pipeline stalls due to cache misses; if the current thread would miss, then the CPU switches to another thread that wouldn't. In the case of the MTA, it can do this between every instruction, at zero cost! This addresses the growing disparity between CPU and memory access speeds, while keeping cache requirements reasonable.

    While a compiler could register-allocate in the way you suggest to keep software-based thread context switches cheap, it couldn't address the cache-miss issue, which needs to be handled in the CPU itself.
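The benefit described above is easy to see in a toy timing model (the cycle counts and the "op"/"miss" instruction encoding are made up for illustration; real hardware is far more involved):

```python
import heapq

MISS_LATENCY = 10  # cycles a load waits on memory (made-up figure)

def run(threads, switch_on_miss):
    # Each thread is a list of instructions: "op" costs 1 cycle; "miss"
    # issues a load whose data returns MISS_LATENCY cycles later.
    progs = [list(t) for t in threads]
    ready = list(range(len(progs)))   # runnable thread ids
    blocked = []                      # heap of (cycle data returns, id)
    cycle = 0
    while ready or blocked:
        while blocked and blocked[0][0] <= cycle:
            ready.append(heapq.heappop(blocked)[1])
        if not ready:
            cycle = blocked[0][0]     # nothing runnable: the CPU idles
            continue
        tid = ready.pop(0)
        while progs[tid]:
            ins = progs[tid].pop(0)
            cycle += 1
            if ins == "miss":
                if switch_on_miss:    # hardware context switch on miss:
                    heapq.heappush(blocked, (cycle + MISS_LATENCY, tid))
                    break             # another thread hides the latency
                cycle += MISS_LATENCY # no switching: the pipeline stalls
    return cycle

workload = [["op", "miss", "op"]] * 4
print(run(workload, switch_on_miss=False))  # 52 cycles: stalls dominate
print(run(workload, switch_on_miss=True))   # 19 cycles: latency is hidden
```

With switching, the four misses overlap in time; without it, every miss is paid for in full, which is the disparity vertical multithreading attacks.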
  • Sorry about the slow response...

    Embedded machines are not the only domain for this chip.

    First off, I was replying in the context of the original message on this thread, which said Sun was going to take over the embedded world with this chip (this also happens to be my domain of expertise). And I'll stand by my arguments in that context. As for the others...

    Java ... is quite successful on the server

    People keep saying that; perhaps I'm overlooking the obvious, but I sure don't see this. Can you offer examples (really, I'm not trying to be a smartass here)? The server apps I see tend to be large database apps with a SQL backend and various frontends (Perl/Tcl/HTML/Visual Basic). I'd love to know where these Java apps are going.

    JIT is not the same as interpreted (which you are suggesting).

    I'm aware that this isn't a JVM on a chip (that would be the picoJava chip line). However, JIT, by definition, doesn't care what the back end is. So if JIT is the solution, why do we need this beast at all?

    One of those optimizations could be discovering paralellism. And its exactly this type of optimizations that is beneficial on a MAJC architecture.

    If you're looking for parallelism in the instruction stream, we call that pipelining. This is not a new idea. If you're looking at a higher level, that becomes a function of the compiler, which again is independent of the back end.

    Sun's claims that this architecture will benefit from multi-threading seem to revolve around their catchily named "Space-Time Computing" (which translates into speculative execution; again, not a new idea) and high cache coherency (always a good idea, which is why Intel and Motorola already have solutions; they are however honest about them and don't pretend that they will scale to hundreds of processors).

    In short, all the claims they make about benefiting from multi-threading can be equally applied to other CPUs. What's left is marketing spew, positioning this as "Java-optimized". Which sure isn't going to help them in the market I know.

    You seem to be under the impression that it's not going well with Sun at this moment.

    Things look good right now (hell, I'm typing this on a Solaris workstation!). But I don't see a good future; as I said in my original message, they're under attack on all sides, and if this is their response (let's take on Intel and Motorola!), I don't see a rosy future.

    Linux doesn't have to be a threat to companies like SUN. SUN gets its most revenue from hardware, not from software.

    It's Solaris that sells the hardware. Price/performance for Sun hardware is miserable. As an example, my development group recently built a proof-of-concept 16-node Linux cluster using off-the-shelf parts (el-cheapo K6 motherboards) which costs 10% of the price of the Sun Enterprise server it's competing with, and performs about 25% faster. If that's not a threat, I don't know what is.

    Probably your boss won't consult you for this.

    Truth in that; I'm more of a software guy.
  • "Java ... is quite successful on the server

    People keep saying that; perhaps I'm overlooking the obvious, but I sure don't see this."

    OK, I can't pull any examples out of my big hat and I'm too lazy to go and search for them. But considering the huge investments made by Sun, IBM and others in getting Java to work on any platform you can name, they must expect some return on their investment. You do the math, but that tells me Java is increasingly successful on the server.

    "So if JIT is the solution, why do we need this beast at all? "

    As I explained later on in my post, a dynamic compiler like HotSpot can do something a static compiler (typically used for C) can't: namely, use information gathered at runtime to perform optimizations. This is a major advantage on architectures that require the compiler to optimize the instruction stream for the processor. Furthermore, Java and threads are a good combination (one of the reasons for Java's success on the server). And MAJC is very good at running multithreaded stuff.

    "If you're looking for paralleism in the instruction stream, we call that pipelining."

    VLIW chips like MAJC and IA-64 do the pipelining in software (at least, discovering parallelism and optimizing the instruction stream). It's not new, I know. I just explained why a dynamic compiler has an advantage over static compilers in doing so.

    "In short, all the claims they make about benefitting from multi-threading can be equally applied to other CPUs. What's left is marketing spew, positioning this as "Java-optimized". Which sure isn't going to help them in the market I know."

    The market you are working in is changing rapidly. With fast chips getting dirt cheap, the rules of the game are changing. Coding everything in assembler is not really an option anymore because that slows down time to market. Similarly, more and more is implemented in C++ rather than C on many embedded platforms. I work together with Axis (a Swedish company building embedded machines) for my research, and I know their main problem is the fact that they have to maintain a huge source tree of C++ code (100K+ LOC). This slows them down in introducing new products.

    So because of this, Java will become an option in your market too.

    "In short, all the claims they make about benefitting from multi-threading can be equally applied to other CPUs."

    I don't see that; the article was quite convincing in comparing IA-64 and MAJC. Only time will tell which is the faster processor, since neither is available at this moment. Something tells me IA-64 will be a major disappointment. Maybe Sun will screw up on MAJC, but the architecture doesn't sound half bad. More likely, Intel and Motorola will 'borrow' some of the ideas from this processor for their next-generation CPUs.

    As has been pointed out before, Linux is not yet a good match on high-end server platforms because it lacks certain features. Probably this will be resolved in due time, but for now Linux is not competing in that area.

    "Price/performance for Sun hardware"

    Hardware and OS licenses are only a small portion of the cost of big servers. These things have to be maintained by expensive staff, and they have to run expensive, tailored software. If price/performance of the hardware/OS were the only consideration, they would have been out of business a long time ago.

    In the long term I can see Linux replacing Solaris. Obviously all the UNIX giants (Sun, IBM, SGI) know this and are generally cooperative towards Linux (at least more than a certain Redmond-based company).
