Sun Microsystems

Looking at UltraSPARC III

argonaut writes, "I saw a cool article about the UltraSPARC 3 at Ace's Hardware. They have some of the usual intro stuff about Sun in the beginning, but then get more in-depth about the technical specs. The best part is the second page, where they talk about ILP, pipelining, and scalability (up to 1000 CPUs!). There are some excellent examples of ILP and load latency."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    I'm still using an IPX. How things have changed.
  • by Anonymous Coward
    > Remind me again why 8Mb of L2 is needed when programs have 95+% cache hit rates with 1Mb (often less; hmmm...)?

    "Programs" (ie executables) main code fit's easily into a Meg, but try fitting a very large database, where it's very easy to line up Megabyte's upon Megabytes of sequential data, onto a 1Mb cache without having to update your cache every second. That's what that 8Mb L2 is intended for, and not running a quake server.

    > Wow! You got Sun to give you free copies of Solaris for Sparc? Last I checked you still had to pay a hefty $90k (!) for an OS with nearly equivalent functionality as Linux. I call that a bad deal.

    Wrong. Solaris is free for ANYBODY on either SPARC or Intel, up to 8 processors. And Apache, gcc, etc. compile and run just as well on Solaris as they do on Linux. And Apache and the GNU tools are included starting with Solaris 8.

    > Forget laptops to E10ks, how about Linux on a Palm to Linux on an IBM/390 Mainframe?

    And you can run the same executables from an IBM/390 on Palm Linux? And even if you could, it would at least require a recompile. Not true for SPARC: same binary compatibility from an Ultra 1 to an E10000.

    My money's on Linux on the front end and Solaris on SPARC on the back end.
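
    A minimal sketch of the working-set point above, in C (the sizes and the 64-byte-line assumption are illustrative, not from the thread): time a fixed number of passes over arrays of growing size, and the time per pass jumps once the working set stops fitting in cache, which is exactly why a database wants 8Mb of L2 while a small executable doesn't.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    /* seconds since the epoch, as a double */
    static double now(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void) {
        size_t size;
        /* working sets from 256 KB (cache-friendly) to 32 MB (database-like) */
        for (size = 256 * 1024; size <= 32UL * 1024 * 1024; size *= 2) {
            char *buf = calloc(size, 1);
            long sum = 0, pass;
            size_t i;
            double t0, t1;
            if (!buf) return 1;
            t0 = now();
            for (pass = 0; pass < 32; pass++)
                for (i = 0; i < size; i += 64)   /* one touch per 64-byte line */
                    sum += buf[i];
            t1 = now();
            printf("%8lu KB: %.3f s (checksum %ld)\n",
                   (unsigned long)(size / 1024), t1 - t0, sum);
            free(buf);
        }
        return 0;
    }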
  • I fail to see how 1000 CPUs is of any advantage. A few, maybe (up to 8 or so). Go overboard and they'll burn cycles just waiting for access to memory, etc. And if your task is modular enough to keep CPUs and their memory mostly separated, then why not go with some sort of clustering of cheaper machines (Beowulf)? Besides, I'm sure you could get much more than 1000 machines for the price of a single 1000-CPU machine.
  • by Anonymous Coward
    Take a look [slashdot.org].

    You say in one post that Jesus uses vi, now you say that you use emacs. Which is it?

    You might be the way, the truth, and the light, but you damn sure don't know what editor you use. Somebody should nail you to a... oh wait, never mind.
  • when did anyone say this was going to run linux?

    Well, whether anyone cares is another matter, but it almost certainly will be an option. Linux already runs on everything up to an E6500 (known), and theoretically supports the E10k (nobody's ever gotten hardware to test on, though). I would certainly consider running Linux on US-3-based systems. Up to about 8 CPUs it beats the shit out of Solaris. Past there people assume it would lose, but it's hard to know for sure, as the most we've ever seen tested is 14. (shrug) Maybe nobody talks about running Linux on these things, but it's certainly possible.

  • Try again and gcc is terminating with some obscure error.

    You may have a broken CPU or mobo. I actually have this behaviour on one of my systems. Put one CPU in: works fine. Put the second CPU in: breaks. Remove the first CPU: still broken. Switch the second CPU to the known-good first slot: still broken. Thus we conclude that the second CPU is bad. Since CPUs these days have cache on board, it's entirely possible that the CPU itself is fine but the cache is bad. This seems reasonable, since cache is still memory, and bad memory is the known cause of random sig11s. HTH.
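
    If you want to chase that kind of fault yourself, here's a hypothetical quick-and-dirty soak test along the same lines (my own sketch, and no substitute for a real memory tester): write address-dependent patterns through a chunk of memory and read them back; a bad DIMM or a bad on-board cache tends to show up as flipped bits.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t words = 16UL * 1024 * 1024 / sizeof(unsigned long); /* ~16 MB */
        unsigned long patterns[] = { 0UL, ~0UL, 0xAAAAAAAAUL, 0x55555555UL };
        unsigned long *mem = malloc(words * sizeof *mem);
        size_t i;
        int p;

        if (!mem) return 1;
        for (p = 0; p < 4; p++) {
            for (i = 0; i < words; i++)
                mem[i] = patterns[p] ^ i;             /* mix in the address */
            for (i = 0; i < words; i++)
                if (mem[i] != (patterns[p] ^ i)) {
                    printf("mismatch at word %lu (pattern %d)\n",
                           (unsigned long)i, p);
                    free(mem);
                    return 1;
                }
        }
        puts("no errors seen; a pass proves little, but a failure proves a lot");
        free(mem);
        return 0;
    }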

  • There were no 100 MHz microSPARC[-II] or TurboSPARC CPUs. 110, perhaps? Also, AFAIK there is no possible way to put 164MB in an SS5. (64+64+32+4??? There were no 4MB modules...) I know I'm nitpicking, but... Now, getting rid of CDE, there's an idea I think we can all agree on. :)
  • On the systems that I deal with on a daily basis (Enterprise 250s and 450s), they have interlocks, like Compaq ProLiant servers, that will power off the server once the case is opened (unless you short it out). Therefore hot-swapping PCI would be pointless.
  • There are multiple levels of Enterprise systems. The lowest level is the Enterprise 220, 250, 420, 450 level. These are more like desktops on steroids; they have much of the same deep-down architecture as the Ultra 30s and 60s, with some hot-swap components like SCSI drives.

    Next is the Enterprise 3x00 - 6x00 series. They have from 4 to 16 "Boards," which are like small motherboards that plug into a main backplane. Each system must have at least one I/O board and one populated (with at least 1 CPU and one bank of memory) CPU/Memory board to stay up, but you can do what you wish with the others.

    Finally, there is the Enterprise 10,000. It has 16 boards that can handle I/O and CPU/Memory. These, too, can be swapped out.

    Hope this clears it up a bit.
  • I'd also note that I work for a LARGE isp, and a certain Microsoft partner that I can't name uses all Microsoft / Compaq computers for the front end web servers and Sun E450's for the Oracle Database backend.
  • Gee that's funny, I don't remember anything in the PCI spec about having to have PROMs... ;P

    It's in the Open Firmware (IEEE 1275) bindings for PCI.

    The reason is so that the firmware can initialize and use the card (ethernet, video, etc.).

    Is it reasonable to expect a UltraSPARC to have to run x86 code to initialize a PCI card?

    Because if it's not a FCode (OF's byte-compiled variety of Forth) driver, then that's the only other option.

    This is bad for two reasons. First, I hate it when vendors screw with the PCI specs. It was adopted as a spec for a reason, not so vendors can then change it so it only works with their HW. Just ask linux-kernel how much they love broken PCI workarounds...

    Sun has not "screwed with the PCI specs", FCode and PC BIOS drivers can co-exist in the same declaration ROM. PCI UltraSPARC systems can use PCI cards without FCode drivers, but such cards may be less useful (consider a video card for instance, waiting for the kernel to be loaded before any video is seen is undesireable).

    Reason 2 is that "plug and play" (a Micro$soft term BTW) can be had for PCI without having those PROMs on board. The reason Sun uses those PROMs is to get licensing fees from hardware vendors to get that "Sun Compatible" moniker. Creative revenue generation no doubt, but it prevents PCI interoperability, which is a Bad Thing.

    If "interoperability" is defined as "is either x86 or emulates it", then yes, it's prevented.

    It's not Sun's fault that most vendors of PCI cards are x86-centric and don't see the value of FCode drivers on their cards.

    It's also sad that only two major architectures use Open Firmware (SPARC and PowerPC).

    Intel has even gone so far as to reimplement OF poorly in its "Extensible Firmware Interface" rather than simply defining IA64 and IA32 bindings for OF.

    I'm not familiar with Sun's branding program, but it's not just "creative revenue generation".

  • by jd ( 1658 )
    I don't believe this article! What rubbish! 1000 CPUs in a box, indeed! Whatever will they say next? I think these should be put next to the UFO guys and the Loch Ness Monster.

    Of course, if they want to change my mind, they could always deliver one of these to me, with, oh, a gig or two of RAM, a SCSI RAID array and a 21" monitor. A decent T1 connection would be nice, too. That might convince me.

  • They use an SCI network to access memory. Think of it as a Beowulf system with 1000 individual workstations networked together with an amazingly fast network (Dolphin Interconnect's SCI). The operating system handles making the 1000 workstations look like a single box. I'm sure Linux could be hacked to work the same way.

    BTW, Dolphin do SCI for Linux if anyone's interested:
    http://www.dolphinics.com/

  • 1000 workstations, each with its own I/O, so yup, the I/O will scale. The difference is that the OS makes it look like a single machine.
  • Personal experience - 3 DOAs out of 4 boxes; 2 then went on to die again within 3 months of being fixed. E450s and an E10K, so we're not talking el-cheapo workstations here. I'm still pissed off about it.

    NEVER had this kind of trouble with IBM R6K systems. Given a choice, I wouldn't touch Sun hardware again.

  • http://www.dolphinics.com/
  • Sun just aren't there yet. IBM are.

  • My web/mail server is an LX with maxed-out RAM (96M).

    I ain't changing anytime soon. That little lunchbox is built like a ROCK.
    Older SPARCs make great personal domain servers (cheap too!)
  • IBM's S/390 is. absolutely. there's more than UNIX out there

    Don't forget, the S/390 runs at least three Unix-like OSes -- AIX, UTS, and Linux (and I expect there is an AOS for it as well).

    Plus, the S/390 can run more than one OS at once (one of the more popular OSes for it is/was the single-user CMS; just run one per user). That is because the S/390 runs a very low-level OS (VM/SP) which offers virtual disk controllers, virtual timers, and the like (including a virtual MMU) to the guest OSes.

    As for reliable, sure. There were S/370s with multi-year uptimes. I would expect the same from the S/390.

  • by morven2 ( 5718 )
    I wouldn't say that Solaris is yet QUITE so stable as VMS, however. In terms of hardware, though, I'd reckon about the same. The legendary VAX/VMS uptimes were as much to do with a stable OS as anything else.

    IBM? I really don't see an RS/6000 as being any better than a Sun, reliability wise.
  • Yes, and Microsoft sells 85% of the world's operating systems. And McDonald's sells a lot of hamburgers. What's your point?
  • Does Sun "community source" the code to parts allowing this mass use off SMP CPUs?

    Can we "learn" something from their code and (hehe) "clean lab it" into the Linux kernel or into the BSDs? What are the legal ramifications?
  • quoting Ace's article, pg. 2 [aceshardware.com]: Since they also wanted to get 1000+ CPUs in a single system, they also had to design the on-chip memory system and interfaces to be able to handle this. When you have a multi-processor system, you need to keep the data coherent in all caches (having the same data at the same time, or else you get bad results) one way or another, and this becomes increasingly difficult as more CPUs are added.

    I guess there are a lot of issues which are specific to Sun's hardware, so maybe my thought of learning from Sun's SMP code was a moot point. Moot is a fun word to say. Moot. Almost like saying Meept.
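
    The coherence cost is easy to feel even on a 2-CPU box. A hypothetical toy example (my own sketch, not Sun's code, and 64 bytes is just a common line-size assumption): two threads bumping counters that land on the same cache line force that line to bounce between CPUs on every write; pad the counters onto separate lines and the same work runs far faster.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 100000000L

    /* same cache line: every increment invalidates the other CPU's copy */
    static struct { volatile long a, b; } shared;

    /* separate lines: no coherence traffic between the two counters */
    static struct { volatile long a; char pad[64]; volatile long b; } padded;

    static void *bump(void *arg) {
        volatile long *p = arg;
        long i;
        for (i = 0; i < ITERS; i++)
            (*p)++;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        /* swap in &shared.a / &shared.b and time the difference with `time` */
        pthread_create(&t1, NULL, bump, (void *)&padded.a);
        pthread_create(&t2, NULL, bump, (void *)&padded.b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("a=%ld b=%ld\n", (long)padded.a, (long)padded.b);
        return 0;
    }

    Compile with -lpthread; the padded version finishing much sooner than the shared one is the coherence protocol at work, and it only gets worse as the CPU count grows.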


  • Looking at UltraSPARK III

    Sun Pyrosystems | Posted by Grogg [humboldt1.com] on 12:15 PM February 25th, 12000 B.C.
    from the ooga-booga dept.

    Kragga [no.link] write, "I saw good article about UltraSpark 3 [believe.this] at Ace's Rock Field. It goes into depth about technical specs of this fire-burning technology. Best part is second page where they talk about mammoth cooking, roasting, and flamability (up to 1000 BTUs!) There are some excellent examples of mammoth cooking, and fire-starting latency."

  • Electronic design is done mostly on Suns here. There are no production tools for Linux, for chip or board design. So who cares......
  • I would think that there's much more bandwidth inside a machine than through any of its external connections (any form of Ethernet, SCSI, FireWire). So long as the tasks that each CPU is doing take a short time to finish, they'd probably enjoy the added bandwidth. For tasks that act like SETI@home or any of the distributed.net tasks, there'd probably be no advantage, since each CPU gets a chunk of work, works on it for a while, and then asks for more.

    Also, applications need to be written for Beowulf, as far as I know. If the operating system natively supports up to 1000 processors (I'd assume 1024 would be logical), then that means you can run the same exact binary on a single-CPU workstation all the way up to a supercomputer. It'd probably be great for developers of supercomputing applications. They could test their apps in the same exact environment that they'll ultimately be running in.
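
    As a trivial illustration of that point, a sketch under one assumption: that the OS exposes the online CPU count via sysconf(). _SC_NPROCESSORS_ONLN is a common extension on both Solaris and Linux (not strict POSIX). The same binary can then size its worker pool at run time, whether it lands on a 1-CPU desktop or a big SMP box.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* ask the running kernel how many CPUs are online */
        long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
        if (ncpu < 1)
            ncpu = 1;   /* fall back if the OS can't say */
        printf("sizing worker pool to %ld CPUs (same binary everywhere)\n", ncpu);
        /* ... spawn ncpu threads or processes here ... */
        return 0;
    }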
  • Ummm, Solaris is free (except for the $75 "media kit") for ANY use, so long as you use 8 CPUs or fewer.

    Sun hardware is much more finely made than Intel hardware. It's meant for running enterprise applications for months and years at a time.

    Besides that, the argument once again shifts to application availability. So long as there are tons of applications available for Solaris that aren't available for Linux, Sun has a purpose in life. Is there a 64-bit database available for Linux? One that's supported by a major vendor? I don't think so... Oracle's there, but it appears to be available for Intel only.

    And lastly... you get what you pay for. Who cares if it's overpriced to start with if it lasts much longer with fewer headaches?
  • Solaris is much slower than Linux on single-CPU configurations, but slap them both into a 64-CPU box and watch which one actually scales higher... Linux will drop off around 8 CPUs.
  • Now that's lasting value. Not a cutting edge system any more by any means, but it's quite something to still be using a system that old for a production server ...

    Exactly. We still use quite a few SS5s and 20s (for those not needing any fast graphics) where I work. Sure, they take a lot longer to boot, and logins are slow, but they do the job, running the same OS as the U2s, U60s, and U80s we use for higher-end work. Try taking a measly low-end 486 or 386 from 5-6 years ago and running the same version of NT on it for people to use, with the same disk and memory that they used with Win 3.1 or whatever back then.
  • Well, you would need 10,000 UltraSPARCs to get any decent performance. Long live the Alpha EV67.

    "However, from what I can tell, they have mostly hit their design targets, and for the SPEC95 benchmarks, a 600MHz US-3 (the initial clock speed) will be about the same as a 700 MHz Alpha 21264A (EV67)."

    Considering how fast the 21264s are, that's one hell of an improvement over the current UltraSPARC chips. I for one can't wait. I admit, though, that Alphas will probably still lead a little; it's too bad they've not yet gained as widespread an acceptance as Sun, HP, and x86 CPUs have.
  • What's interesting to me about Sun is how well they've done by keeping control of their key technology instead of relying on either Microsoft or Intel to supply pieces. Sun has control over their own operating system(s), they're successfully pushing their own language, Java, and they don't have to depend on Intel for their processors.

    If this is such a strategy for success, how do you explain the sheer force of Apple in the server or serious workstation marketplace? They've pursued basically the same strategy, yet I can't think of a single company that's bet their datacenter on Macintoshes. Sun's success probably owes more to being in the right markets at the right time as the Internet woke up than anything else.

    HP originally entered into the alliance with Intel because of fears it could not afford the next generations of chips, only to be caught in the nightmare situation of having to extend the product lifecycle while waiting for Intel to deliver a product that every other competitor will be able to use anyway.

    I always thought the deal for HP was that they had an automatic leg up on all the other vendors by having such intimate access to IA64 before the other vendors did. HP will likely have complete, highly optimized systems for sale on IA64 long before MSFT and the rest of the Wintel crowd does, plus a percentage of competitors' IA64 profits.
  • You're comparing software to hardware. That makes no sense. It's like saying cars are better than gasoline.
  • Those PROMs don't violate any PCI spec, and there's nothing stopping you from putting one in an x86 box. The PROM contains arch-independent Forth drivers (as do Mac cards) instead of arch-dependent x86 BIOS drivers (which are why there are x86 real-mode emulators for PPC and Alpha).
  • Nitpick: 64-bit registers don't give you more bandwidth; I think you're thinking of the system bus. 64-bit registers eat cache, but are great for long longs.
  • Most of ftp.gnu.org's software library comes precompiled. For some reason, that's a HUGE selling point of Linux.

    ;-)

  • I'll tell you why I prefer Solaris/SPARC over Linux/x86 when building a distributed, tightly integrated computing facility.

    1. Linux NFS still blows goats.
    2. Linux NIS+ support still blows goats.
    3. Solaris JumpStart installs are way better than Red Hat Kickstart (it blows too).
    4. You cannot boot the x86 machine off the network.
    5. With Solaris/SPARC, you never have to worry about hardware support: the OS will find it, detect it, and configure it. I laugh at all you Linux zealots saying Linux is easy to install; try installing Solaris on an UltraSPARC and you will see what I am talking about.
    6. Points 3, 4, and 5 mean Linux distributions lack support for completely unattended installs.

    I admin such computing facilities (hundreds of hosts), and supporting Linux would be a nightmare because it was never designed for that. It is OK as a stand-alone server or workstation, though. But in our environment I prefer Solaris.

    Remember Sun's "Network is the computer" slogan?
    They weren't lying about that ..
  • Right, but you can't really use all of the 3D capabilities of a TNT Ultra in Linux either. The GLX drivers that NVIDIA released are still broken.

    Also, the kind of video boards that are used in Sun's graphics workstations (read: not the U5), such as the Elite3D m6, are used for real work, not for games. I'd like to see you install an Elite3D board in your Linux box.
  • the Be fare is $0, too. ;)

  • The article says that it would use a ccNUMA-like design. SGI used this technology to scale to 512 processors with their Origin line.
  • I'm on an Ethernet LAN and Slashdot is a dog. And yes, the Ethernet is the slowest link.

    Ryan
  • Actually, both SGI and Sun pick kernels for a particular architecture (for example, IP27 or UltraSPARC II). They aren't actually identical. Sure, it comes from the same codebase, and in most cases it's possible to run identical kernels on different machines, but usually it isn't done that way. The best example I've seen of actually doing it that way is UltraLinux, which can do all of sun4c, 4d, and 4m, including SMP, with a single kernel (Solaris _has to_ use separate kernels for 4c -- not that anyone cares, since 4c is essentially useless now). Even they can't do 4m and 4u from the same kernel though, nor can SunOS. :)
  • but 2 Terabyte memory support under Linux is IMHO much more interesting than the latest rumormongering from Sun.

    There's nothing special about 2TB support. First, you can already do that on UltraSPARC, and second, Alpha is a fairly obscure platform (right or wrong, it is). What would be real news is if

    working for a company which has nearly completed the process of dumping Sun in favor of FreeBSD and Linux solutions[...]run an operating system which has no compiler included

    Linux runs great on Sun systems. You imply that Sun and Linux are mutually exclusive; they are not. JMHO of course, but I find that Linux on UltraSPARC is far superior to Linux on peecees. If you like Linux on peecees, you'll like it on Sun hardware too, and in that realm the hardware isn't nearly as dodgy as peecees. If your budget makes Sun hardware impossible, fine. But don't imply that Linux on peecees can touch Linux on Suns. It can't even come close, and the hardware is clearly the limiting factor. It may meet your needs, and that's fine, but there's no reason to imply that a good, complete, low-cost OS and nice hardware are mutually exclusive. It sounds to me like you should have used your existing hardware and simply switched operating systems if the OS was giving you trouble. Oh well, more used Sun equipment for me to buy cheap.

  • Yes indeed. However, that's not the whole story. SGI's high-end systems (and the E10k for that matter) blur the line between cluster and single machine. As a low-end example from the SGI world, you can attach two Origin 200s using the Craylink and to the OS it appears as one machine. Is that a cluster, or one system? Well... Similarly, the Craylink technology is used with the O2k series to interlink systems as well as build composite systems. There even exist routers, hubs, and so forth. So it's really just an exceedingly fast networking technology, but with some clever hardware and OS support can also be used to make multiple systems tied together look like one. Very clever, very nice, and very fast. Expect to see more like this.
  • But I've been stuck working on our Sun Ultra 5 workstations far too often, and they are TERRIBLE. Sooooo sloooooow. They're configured with 128M of RAM and, if I remember right, a 300 MHz SPARC. My PII-350 with Linux and the same amount of RAM is much more responsive. Not to mention that they go down with alarming frequency, and they cost four times what I paid for my Intel box around the same time period.

    Yep. We have those too, and they are complete shit. As people say, "the only thing Sun about an Ultra 5 is the price." I'm sure the Sun technical people are sick of being hated over the U5/10. If you look at it, the decision to develop that kind of system could only have come from marketing. There's no way anyone at Sun really believes an U5 is worth having. If you want to judge Sun's workstations, get an Ultra 2 or an Ultra 80. These machines typify Sun's capabilities. Expensive but worth it.

    Will SparcStations be able to survive the onslaught?

    Well, the most recent machine to carry that name is obsolete, and only the fastest versions of it are still useful. So I'd say no. :) If you mean "will SPARC survive?" then the question is more difficult. There will always be a market for something that isn't Intel (i.e., doesn't carry the baggage of the 4004 along with it). Whether that will be SPARC, I don't know. Your question about McKinley destroying its competition is likewise unanswerable. Intel is betting a lot on what is really unproven technology, while the traditional RISC makers are improving their technology one step at a time. By the time Intel finally ships their sooper-dooper new processors, they may be well behind the "older" technology of other vendors like Decompaq and TI/Sun. If so, SGI would be foolish to ditch MIPS. It'll be interesting to see how this plays out.

  • You're thinking too small. Most PeeCee type systems rarely run more than, say, 60 or so processes at once.

    You haven't seen my workstations... :-)

    New XFMail home page [slappy.org]

  • I've seen a couple outages this year on the SPARC cluster that I help support, mostly relating to I/O problems. (Disks and controllers "going bad.")

    My suspicion is that IA-64 is going to have less impact on server configuration than one might expect, certainly less than the "Gartner cheerleading" used to indicate. After all, on servers, the important thing is not the CPU, but rather the combination of I/O subsystems, whether:

    • RAM, or
    • Disk

    I could readily see "workstations" getting "killed off," what with PCs getting more and more powerful.

    But the big deal for anything higher-end is the buses, and not merely the CPUs, which makes the bluster about CPUs pretty moot...

  • I'm still undecided if Linux/*BSD is more stable

    Gee, I'd better tell my financial services and telecommunications clients to put all their mission critical application development on hold while you make up your mind.

    I mean, any OS that uses CDE and comes with csh and ksh as the shells

    Right, because these are integral parts of the operating system. I see.

  • "Wish they weren't so secretive sometimes though."

    That "sometimes" was carefully put in ^-^ On Sun's site I once came across a policy document explaining that they don't think pre-announcing things too much is a good idea. (of course, they don't always do this, particularly with completely new products, though this is more understandable - particularly when you want 3rd party developers to get on board).

    I value this, but on the other hand, it still does get annoying sometimes ^-^

    At their 4th-quarter 1999 results announcement, they were asked when US-3 systems were going to become available. They said that the final US-3 design had been finished -- i.e., it was completely tested and ready for production. They also quite clearly said that they won't give release dates (even vague ones), partly so that competitors won't get a chance to start laying on the FUD beforehand...

  • Things might be reasonably easy for up to 128 CPUs -- the most you'll see (for a bit) in a single box. This wouldn't require a huge effort to get working under Linux, I think, because it would mostly be evolutionary from the software point of view. However, I don't know if Linux will work on Sun's 64-CPU Starfire at the moment. (I heard of NetBSD running on a Starfire at NASA about 2 years ago, but I don't remember hearing about Linux...)

    To get to 1024 CPUs as one system, you'd use clustering, using special interconnects, which would also require a fair amount of custom software. Some of this would be a bit like Beowulf, but not quite.

    Still, as general members of the public, you'll be able to get the source code to Solaris 8 in about 2-3 months, and this will include their clustering software. So, you'll be able to see how Sun do it at least.

    Besides, given that Linux (currently) doesn't scale that well (certainly well behind Solaris), there isn't a great deal of point, from a technical perspective, in doing a "port".

    I've no idea about the legal side of doing clean room versions either. The license for the Solaris source code isn't available yet.

  • Try reading the whole article. Near the end is:
    • Later in the year, Sun's replacement for the Starfire, code-named Serengeti, will be launched. This will have up to 128 CPUs in one box, and you can cluster them (so that they appear to be 1 machine) using a special fiber interconnect, for up to 1024 CPUs, probably using a COMA memory structure (similar to ccNUMA), something Sun have been working on for many years [sun.com].
  • Maybe I should have said "custom edge-triggered flip-flop". I know it's a basic component, but there are lots of ways of physically making them. Doing this sort of thing is fairly common, though... I think I was being too brief in the article.

    If you're interested, here's the relevant paragraph from the IEEE Micro paper:

    • With clock rates continuing to increase, the part of the cycle time allocated to flip-flops comes under great pressure. To improve this, we designed a new edge-triggered flip-flop. This partially static output, dynamic input flip-flop does not require setup time and has one of the lowest D-to-Q delays for the power and area in use today. The flip-flop design's dynamic input stage effectively allows us to tuck in a full logic stage without increasing the D-to-Q delay. The noise immunity increases by an input shutoff mechanism that reduces the effective sample time, allowing the design to be used as though it were fully static.
  • What, coloured tables not good enough for you? (just kidding)

    When the first US-3 samples came out, there was a pic with Scott McNealy holding a US-3 in his hand, but it was never posted on Sun's site and the original copy of it has long since gone. Since the part isn't actually shipping to customers yet, there are no "official" pictures yet.

    I did think about doing some graphics for the article, but couldn't think of something that would really help...

  • I don't know if Linux will work on Sun's 64-CPU Starfire at the moment.

    See /usr/src/linux/arch/sparc64/kernel/starfire.c:

    /* $Id: starfire.c,v 1.2 1998/12/09 18:53:11 davem Exp $
    * starfire.c: Starfire/E10000 support.
    *
    * Copyright (C) 1998 David S. Miller (davem@dm.cobaltmicro.com)
    */

    I've personally had it running on an E4000. Apparently, Sun gave davem access to a Starfire to allow him to add the support.

  • I don't have an IPX, but I do have a nice little LX at home. And the other day I got hold of an SS5 too :-) Our sysadm is trying to get rid of a lot of older Suns, and you just can't let them go to waste... A few Ultra 1's are next to go, but I don't think I can get one of those. They are still too useful - they will go to other departments instead :(

  • What's greatly needed for larger systems is often a lot of main memory. The standard PC limit of 1 GB main memory is a major pain in the @$$ when you need some serious computation done.

    At my department we need machines with lots and lots of memory. Currently we have a 4-CPU Sun with 4 GB of memory, but that's really too small. Around 24 GB would be more along the lines of our needs. (Coding and crypto research eats lots of MBs.)

    You can get a lot of memory by using clustered machines (or an Origin 2000, SP/2, or whatever), but it's kind of silly to use a parallel computer just to get the memory but not the parallelism...

    Anyway, the sooner we can get more than 1-4 GB into a standard PC the better. The need is already here.

  • What's interesting to me about Sun is how well they've done by keeping control of their key technology instead of relying on either Microsoft or Intel to supply pieces. Sun has control over their own operating system(s), they're successfully pushing their own language, Java, and they don't have to depend on Intel for their processors.

    I think that in the long run Sun is going to do a lot better than, say, HP, who has spread themselves really thin trying to be all things to all people. HP originally entered into the alliance with Intel because of fears it could not afford the next generations of chips, only to be caught in the nightmare situation of having to extend the product lifecycle while waiting for Intel to deliver a product that every other competitor will be able to use anyway. And Intel leveraged the alliance into getting HP to give away compiler and other technology for free. Nice for Intel, not so nice for HP.

    DEC had the Alpha but got caught trying to rely on Microsoft for NT. Too bad NT on the Alpha was always an unwanted stepchild, but that's what happens when a company is dependent on another company. You're screwed if you're not a priority to them.

    SGI same thing, total failure trying to sell their own NT workstations on Intel hardware.

    I don't get it, it just seems common sense to me for companies to keep control of technology. That's why Sun is beating Microsoft like a drum in court today over Java, because they own it.
  • That's a license for non-production use.

    A commercial license of Solaris 7 for Sparc costs more than a month's rent for me . . .

  • your 95% cache hit rates are on workstation-class systems, running workstation-class process loads, from where i sit.

    You're thinking too small. Most PeeCee type systems rarely run more than, say, 60 or so processes at once.

    I've seen Sun E450s spawn thousands of processes at once and, instead of bogging down, saturate the hell out of a Fibre Channel RAID array and still have enough CPU time left over to fight for more.

    8 megs of on-die cache means that a lot more programs can have that 95% cache-hit ratio than could hope to on a lesser processor.

  • AMEN! Geeze, yeah, if anyone wants to play with sun4c hardware, I'll give you my old SPARCstation IPC for free. Just come over to my house and ask; it's yours . . .

  • I hope that's a typo, because I wouldn't want to be the guy paying the utility bills for an UltraSPARC datacenter. I think the SPARC is being outperformed by other, cheaper processors, but I also think a lot of people are forgetting something: SPARC boxes running Solaris will crawl at times with a single processor, but as you add processors the system's speed increases dramatically. Try building a 16-processor Xeon box running Linux; without a major kernel overhaul it won't run very well. Solaris runs right out of the box on as many processors as you want.

    Anyway, back to the processor. That thing is a monster, built on a .25 micron process. Using such a huge die (compared to PPC and x86 processors) has the disadvantage of producing a lot of heat. Sun needs to invest in shrinking down their die sizes to get more computational power for the same price instead of just making things more complex. Maybe even a .22 or .18 micron SPARC? If it were my datacenter, I think I might go with the UltraSPARC III.

    Oh, and for you people calling Sun/SPARC/Solaris slow because you used an Ultra 5: grab some more RAM and a graphics adapter add-on, use something other than CDE, and I think you'll see a performance boost.
  • My school uses exclusively Sun servers, and all the admins I've talked to sing high praises of them. But I've been stuck working on our Sun Ultra 5 workstations far too often, and they are TERRIBLE. Sooooo sloooooow. They're configured with 128M of RAM and, if I remember right, a 300 MHz SPARC. My PII-350 with Linux and the same amount of RAM is much more responsive. Not to mention that they go down with alarming frequency, and they cost four times what I paid for my Intel box around the same time period.
    I can understand why Intel-based machines (both Win32 and Linux) are making so much market headway. It'll be interesting to see what RISC workstations really survive after McKinley comes out and people like SGI start producing the kind of first-rate hardware (graphics, bus, etc.) that has been differentiating SPARC/MIPS/PA-RISC workstations up until now. Will SparcStations be able to survive the onslaught? Should Sun really care if they do (especially since workstations are a low-growth market while the server-side growth potential is enormous)?
    --JRZ
  • But then, what is the reliability gain over slightly bigger clusters of el-cheapo hardware?

    Reliability is more than just having redundant hardware. For disks it works to a certain extent (RAIDs are popular), but you pay a price. Bigger clusters of hardware might be cheaper when it comes to buying the hardware, but building a reliable system out of them is more complicated and requires more maintenance. Besides, those systems aren't readily available.

    -- Abigail

  • Wish they weren't so secretive sometimes though. If you actually look at Sun's site, there's almost nothing about the US-3 technically.

    In a way. On the other hand, I think it's kind of cool that Sun doesn't make all kinds of promises and delivery dates, only to ship something with errors or get scorned for not keeping their promises; instead, they just work on it with an "it's ready when it's ready" attitude. And just as you wrote in your article that for Sun reliability is more important than performance, reliability is also more important than fanfare.

    Makes you think there's still some hacker culture not taken over by marketing droids left.

    -- Abigail

  • Hell, just water-cool the suckers. The last time I saw a "bare" SPARC chip, it was topped with a metal plate and two cylinders (like pins, kinda fat though). Onto those pins you attached a pair of round heatsinks (stacked discs). Replace the heatsinks with some kind of water "jacket" and away you go. Fewer problems than with refrigeration. Might want to use a non-conductive liquid rather than water, now that I think about it, unless it's pure H2O.
  • Don't make opinions without the data to back it up.

    Oohhh, you opened yourself up on this one... :)

    Fallacy #1:

    The CPUs are *MORE EXPENSIVE*, yes; overpriced, no. Look at a comparison of the CPUs on just a very simple level. The CPU has 8 megs of L2 cache. Not 256k, not 512k, not 1 meg: 8 megs. That cache is running at CPU speed. If there's anything at all that's slowing their speed down, it's the large amounts of L2 cache they run with their servers.

    Remind me again why 8Mb of L2 is needed when programs have 95+% cache hit rates with 1Mb (often less; hmmm...)?

    They really are overpriced. I am certain an Alpha 21264 can be had for a fraction of the price of these things, and its SPECmarks are int 27, fp 58, which is too close to make a big deal out of.

    Fallacy #2:

    Now, getting back to PCI cards being overpriced: Sun's specifications dictate that all hardware MUST have a PROM with the drivers on it to be certified as Sun Compatible. At boot time, all of the PROMs are polled and all of the drivers are loaded at the hardware level. Plug and play that really works, imagine that...

    Gee that's funny, I don't remember anything in the PCI spec about having to have PROMs... ;P

    This is bad for two reasons. First, I hate it when vendors screw with the PCI specs. It was adopted as a spec for a reason, not so vendors can then change it so it only works with their HW. Just ask linux-kernel how much they love broken PCI workarounds...

    Reason 2 is that "plug and play" (a Micro$soft term BTW) can be had for PCI without having those PROMs on board. The reason Sun uses those PROMs is to get licensing fees from hardware vendors to get that "Sun Compatible" moniker. Creative revenue generation no doubt, but it prevents PCI interoperability, which is a Bad Thing.

    Fallacy #3:
    > The OS is waaay overpriced.

    Free, yeah way too expensive.


    Wow! You got Sun to give you free copies of Solaris for Sparc? Last I checked you still had to pay a hefty $90k (!) for an OS with nearly equivalent functionality as Linux. I call that a bad deal.

    Fallacy #4:

    First, all of the workstations and servers have TRUE plug and play. There processors scale from Laptops (anyone remember Tadpoles) all the way up to Mainframe-sized computers (E10k). Also - hot-swappable I/O and CPU/Memory in the Enterprise systems. The E10K can scale up to 64 450 MHz processors with 8 megs of L2 Cache, 64 Gigs of RAM, and can run 4 Virtual Machines that can be dynamically allocated on the fly.

    Don't be fooled into thinking only Sun has hot-swappable drives and I/O. Geez, Compaq ProLiants have had hot-swap SCSI since 1997. Hot-swap I/O? That has IBM written all over it as well. However, the best argument is scalability. Forget laptops to E10ks, how about Linux on a Palm to Linux on an IBM/390 Mainframe? What two extremes could you possibly supply that are wider than that? (E10ks are toys compared to S/390s.)

    My money's on Linux. If you want scalability and interoperability, Linux is the answer. As for reliability, Linux has a little ways to go to catch up to Solaris/VM/MVS/BSD, but it's getting there.

    With all that said, the UltraSPARC III looks like a very good design from Sun. You rarely see an appropriate amount of thought applied to the reality of processor shortcomings these days, and they hit the right aspects.
  • You made some good points, but some really bad ones. Here's a few corrections:

    6) What does Sun do that Lintel cannot?

    A lot of things.
    Nothing.

    First, all of the workstations and servers have TRUE plug and play.
    If you're buying pre-installed, you don't care. You can actually say that everything in a pre-installed Linux box is plug-and-play; you just don't have to plug. We're talking about Suns vs. PC clones, so Plug 'N Play(tm) as a strict definition does not enter into it. Suns do not adhere to that standard, so the only measure of plug-and-playness is the convenience of your devices being recognized and supported.

    There[sic] processors scale from Laptops (anyone remember Tadpoles) all the way up to Mainframe-sized computers (E10k).
    Laptops. Like Dell, Toshiba, etc. Mainframe-sized computers like the Cluster City from VA/Linux and other large lintel cluster arrangements.
    Intel does not scale gracefully, but we were discussing capabilities not grace, and once you buy it in a package, you really don't care how hard it was to get there.

    Also - hot-swappable I/O and CPU/Memory in the Enterprise systems.
    If you're using a cluster like the Cluster City, then entire systems are hot-swappable.

    The E10K can scale up to 64 450 MHz processors with 8 megs of L2 Cache, 64 Gigs of Ram, and can run 4 Virtual Machines that can be dynamically allocated on the fly.
    Let's see: a Cluster City with 20 2x2's (ignoring the admin server) means 40 600MHz (700 available?) processors with 20MB of L2 cache, 40GB of RAM, and is 20 machines that can be dynamically re-allocated on the fly.

    So, the question is still: What can Suns do that Lintel cannot? The answer of course is nothing. The only stumbling block to total Linux acceptance is the application porting. I still can't get most of the high-end application servers for Linux, even though most of them are based on Java. This sort of thing will change, and has been for years.

  • 2) Motherboards are overpriced.
    I honestly can't say I've ever priced a Sun Motherboard. There is no such animal.

    Then what is the SPARCengine Ultra AXe-300 [sun.com]? :) It is "A Low Cost, High-Performance Motherboard for Thin Servers, Server Appliances and Configured Servers."

    You can find a tech manual in PDF here [sun.com].

    Noel

    RootPrompt.org -- Nothing but Unix [rootprompt.org]

  • Thanks for your reply. ctcm measures `movsd`, which is a load and a store for each word. If I really had 4112 MB/s @ 539 MHz, that's 7.6 bytes read and 7.6 bytes written each clock. Not very likely unless I've got a 128-bit path or dual-ported SRAM. But you are right: ctcm is more a measure of bandwidth than latency.

    I dug out my pseudorandom-access asm timer. I measure 10.7 Mreads/s from DRAM (9.1 bus clocks), 20.0 Mreads/s from L2 (27 Celeron CPU clocks), and 525 Mreads/s from L1 (1.03 CPU clocks). So L1 seems single-cycle, but L2 looks oddly slow, perhaps due to unintended thrashing.

    As for the power/die budget, I'm afraid I don't know enough about chip feature design. But from all the micrographs I've seen, L1 is a fairly small portion of the die, so doubling it wouldn't be too painful. It also appears disproportionately large compared to L2, so something like this has probably been done.
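
    For the curious, here's a hypothetical version of that pseudorandom-access timer in plain C rather than asm (my own reconstruction, not the poster's code): chase a pointer chain laid out as a single random cycle, so each load's address depends on the previous load and the reads can't be overlapped. The time per step then approximates true load-to-use latency at whatever level of the hierarchy the array fits in; shrink the array to probe L2 or L1.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    int main(void) {
        size_t n = 4UL * 1024 * 1024;      /* pointer array; shrink to fit in cache */
        size_t *next = malloc(n * sizeof *next);
        size_t i, j, steps = 20UL * 1000 * 1000;
        struct timeval t0, t1;
        double secs;

        if (!next) return 1;

        /* build one random cycle through the array (Sattolo's shuffle) */
        for (i = 0; i < n; i++)
            next[i] = i;
        for (i = n - 1; i > 0; i--) {
            size_t k = (size_t)rand() % i, tmp;  /* 0 <= k < i keeps it one cycle */
            tmp = next[i]; next[i] = next[k]; next[k] = tmp;
        }

        gettimeofday(&t0, NULL);
        for (i = 0, j = 0; i < steps; i++)
            j = next[j];                         /* serialized, dependent loads */
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%.1f ns per load (ignore: %lu)\n",
               secs / steps * 1e9, (unsigned long)j);
        return 0;
    }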

  • First, high compliments on an outstanding and insightful article on one high-end of the computing business. It is easy to forget there are other aspects than the max-CPU performance sought by hobbyists.

    The discussion of architectural performance benefits was very clear and insightful. There are obvious limits to multi-issue architectures.

    A few corrections, if indeed I am correct: main memory fetch is _not_ the oft-quoted "hundreds of CPU cycles". Typical SDRAM timing is 6-1-1-1, or 9 bus cycles per 32-byte cache line. For a 600 MHz CPU with a 6x multiplier, this is 54 cycles, plus perhaps a few for page misses, etc.

    Also, AFAIK at least Intel's P6 x86 core has a 1-CPU-cycle-latency L1 cache. Such a fast cache is necessary to make up for the risible shortage of x86 registers, and it helps considerably with stack-based operations such as those often generated by C code. I do know that I can realize three RISC-type uops per clock cycle when 33-50% of the uops are loads from L1.
  • I think the latencies you gave are for L2 cache. As for power budgets, I hardly think the P6 core is that great. But it _is_ a lot better than an Alpha 21264 at 633 MHz with 107W (47 amps @ 2.35V).

    As for SDRAM latency, I've measured ~9 Mreads/sec for pseudorandom P6 addresses. Now that _is_ 11 bus clocks, but remember the P6 always accesses DRAM by full cache lines, so the leading-edge latency is (11-3) = 8 bus clocks (more for later bytes). So 48-66 CPU clocks if the multiplier is 6x.

    As for L1 latency, I don't recall my read rates. But they'd have to be _very_ fast to allow ctcm to report 2700 MB/s @ 539 MHz in `movsd` to L1. This sounds like 1 clock/transfer to me, and the L1 might even be double-ported (read & write).

    -- Robert

  • What do you see in the crystal ball?

    1000 CPUs. 500 of them each serving their dumb little SunRays. 20 of them serving web content. I see network congestion and a bankruptcy.

    Moral of the story: yes, you've got 1000 CPUs. Can your I/O handle it?
  • You can do far worse than that. Just take a P166, say, like my computer. Then run a copy of xlock on it (-delay 0); you can find modes that'll use >10,000 context switches per second. Then run a few hundred copies of 'cat /dev/zero >/dev/null', niced to 10 or 15. Under X it does get a little annoying to use, but from a console you don't even notice the load. (And with amp's realtime playback, your MP3s go through with nary a pop or stutter.)

    True, I haven't also tried a forkbomb [while (fork()>=0) ;] concurrently with all of the above, but I expect it to handle that too.

    Linux can handle that load fine, even on a little old P166, so your anecdote doesn't carry much weight, at least with me. :)
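
    A hypothetical load generator along those lines, if you want to reproduce the test without hunting for xlock (my own sketch; the child count, nice level, and 60-second run are arbitrary choices): fork N niced busy-loops, see how interactive the box still feels, then clean up.

    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv) {
        int n = (argc > 1) ? atoi(argv[1]) : 200;   /* how many busy children */
        pid_t *kids = malloc(n * sizeof *kids);
        int i;

        if (!kids) return 1;
        for (i = 0; i < n; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                nice(15);        /* stay out of interactive work's way */
                for (;;)
                    ;            /* burn cycles, like cat /dev/zero >/dev/null */
            }
            kids[i] = pid;
        }
        sleep(60);               /* go poke at the machine meanwhile */
        for (i = 0; i < n; i++)
            if (kids[i] > 0)
                kill(kids[i], SIGKILL);
        while (wait(NULL) > 0)
            ;                    /* reap the children */
        return 0;
    }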
  • Even if you could, there's no I/O to get that much video bandwidth out of the computer; your screen won't do more than, what, 120 Hz? Your eyes barely do 60...
  • The OS is waaay overpriced.

    Be fair. I don't see how "free" (+ media charge) can be considered overpriced.

  • There have been machines that almost worked that way. One of the early hypercube machines (N-Cube? IIRC) had a master node, and 2**N small nodes with a CPU, RAM, and communication connectors. It took care of virtual memory by assigning each process as many nodes as it needed, rather than assigning blocks from a shared memory space.
  • Gee, I'd better tell my financial services and telecommunications clients to put all their mission critical application development on hold while you make up your mind.

    Your clients can do whatever the hell they like. If they want to run them on Solaris, fine. If instead Linux/*BSD, fine. NT, IRIX, or SCO, fine with me. Since they are not my clients, I really don't give a flying fuck.

    Right, because these are integral parts of the operating system. I see.

    No, ksh and csh are not integral parts of the OS (much less CDE, as I've gotten on quite fine without it for most of my Unix life, thank you). A SHELL is an integral part, and ksh and csh are, IMO, not very good choices for those particular integral parts (when tcsh, bash, and zsh are far better). A C compiler is often considered an integral part of the OS; what would you think of a Linux distro that shipped gcc 1.0 as the system compiler? As far as I'm concerned, that's about equivalent.
  • Solaris 8 ships with GCC. A Shell is an important part of your interaction with an OS. If you can't manage that yourself (writing your own aliases, as well as choosing your own shell), you shouldn't be wasting your time with Solaris.

    Well, that's wonderful! When I have time, I'll go and upgrade all of the ~400 machines in the department to Solaris 8 and we can all celebrate. Anyway, the new commercial Sun compilers work pretty well and I'll stick with them on Solaris.

    As far as the shells are concerned, I do choose my own damn shell, which is bash. That's it. No arguments. I just don't see why Sun feels it's really necessary to ship obviously outdated and obsolete tools with their system. And I'm annoyed that I had to suffer with csh up until I got bash all nice and cozily installed this afternoon.

    In any case, I'm hardly "wasting my time with Solaris" - I get paid pretty decent money to admin these machines. I can think of at least a half-a-dozen OSes I would choose over Solaris for home use (easily: Linux, FreeBSD, OpenBSD, BeOS, NT, 2000).
  • What's your point?

    It's fairly obvious: anyone who passes off ridiculous statements like "x86 has no place in production environment" clearly hasn't been in one, ever.

    Before you reply, decide whether you consider half the web companies running multibillion-dollar operations on Linux/BSD on x86 not to be in "production".

  • Uh oh, there's a counter example. Guess that burns my whole argument.

    I guess all the multibillion dollar web operations I saw running on x86 boxes over at globalcenter and exodus were just illusions.

  • I've found that the extra quality you get with a Sun box is irrelevant: you upgrade it due to performance issues inherent in any system long before you deal with MTBF and other issues that may have "quality" ramifications.

    Like it or lump it, disposable computing is the way to go. If you're going to upgrade a box in 18 months, why get fleeced on the price?

    As it stands, Sun boxes at the high end do have nice features - at the low end, the quality is typically far inferior to what you get in name brand PCs.

  • Running x86 in a production environment is laughable

    Don't tell that to nearly every company running a server farm at any colocation I've ever been to in Silicon Valley, or to nearly any Fortune 500 company that invariably uses Intel boxes in almost all environments.

    Intel sells 85% of the world's CPUs. They're everywhere. Deal with it.

  • Sun equipment continues to eat away at the mainframe market, and Lintel equipment continues to eat away at the Sun market.

    Sooner or later Sun will have to combat the Lintel market directly; the low end is where it's at for web companies in particular (no, no one runs Apache on an E10k).

    Sun's current strategy is to continue to go higher up the food chain, but they're soon going to find out that IBM is defending their mainframe turf vigorously, with uptimes and sustainability that even Sun boxes can't touch.

    Meanwhile, companies like VA are eating Sun's lunch at the low end.

    I predict that pressures from both directions will invariably force Sun to choose the weaker opponent (VA) and attack the low end vigorously. That's going to mean lower prices for the same equipment. Look for lower Sun profits as the Linux freeware brigade takes its toll on Sun's fat margins.

  • What's interesting to me about Sun is how well they've done by keeping control of their key technology

    No, you're confusing "control" with "closed". Sun used to actually be about open systems; now it's about Sun end-to-end solutions that are out of step with trends in open computing.

    Sun has control over their own operating system(s), they're successfully pushing their own language, Java

    You don't follow standards proceedings, do you? Sun's recent double-talk attempt at "opening" Java was met with deserved jeers: Sun wants to control the code in a closed fashion while having the moral legitimacy of a blessed standard. Thankfully, other companies joined with ISO and ECMA to derail this ludicrous strategy. Sun's moves with Java smack of pure McNealy arrogance.

    SGI same thing, total failure trying to sell their own NT workstations

    SGI was already doomed when they took this step. Their downfall had little to do with their strategy with regards to NT.

    I don't get it, it just seems common sense to me for companies to keep control of technology.

    Like Microsoft keeping undocumented calls in its API?

    If the existence of the Internet hasn't convinced you of the value of open standards, then really there is no hope for you.

  • Excuse me for going into the petty details of... graphical gratification... but three pages of specs and no PICTURE!? Is there a pic of the damn thing? Did I miss it?
  • by Anonymous Coward on Friday February 25, 2000 @10:32AM (#1245727)
    1000 processors... that's enough to spell-check an article that Hemos wrote! Maybe there is hope after all.
  • by sjames ( 1099 ) on Friday February 25, 2000 @02:55PM (#1245728) Homepage Journal

    WOW! And people think that Intel chips (and Alphas) consume a lot of power!

    They are a bit power hungry, but for applications where you need them (bad enough to cough up $10,000+), you won't care! Let's face it, these are not PCs we're talking about here.

    The large die size is required to cram everything they want (for performance reasons) onto a single die. I imagine that they're spec'd at .25 because it's a lot easier to move to a finer process than to a coarser one. Also, nobody minds if you come in better than spec.

  • I fail to see how 1000 CPUs is of any advantage. A few, maybe (up to 8 or so). Go overboard and they'll burn cycles just waiting for access to memory, etc.

    In an SMP machine, that is absolutely true. On a bus, 4-8 is about the limit. A crossbar connection can scale to more like 32 or 64 (but the OS becomes a mess with all the locks). After that, NUMA (Non-Uniform Memory Access) is in order. In those systems, CPU-memory access is kept off the common path as much as possible (sort of like splitting an overcrowded Ethernet segment in half with a brouter).

    The 1000-CPU machine will be less tightly coupled than SMP, but more tightly coupled than Beowulf. (On that scale, a uniprocessor is trivially the most tightly coupled, and a sort of distributed net over floppies would be the loosest.)

    The 8M cache is a big help in any event.

  • by ChrisRijk ( 1818 ) on Friday February 25, 2000 @11:33AM (#1245730)
    Later on, my article suggests that they'll be moving to 0.18 much quicker than indicated by the IEEE paper. It currently seems to me that they'll start at 0.18 micron instead of 0.25 (partly because it's late, so it's easier to start at 0.18). This'll help reduce power consumption.

    Sun's high-end kit doesn't take a standard mains socket either ^-^ But no prob - most places you're likely to install them will have the required power supplies. The Starfire can have up to 5 redundant power line cords, each of which has to be able to handle 24 amps...

    The reason why the power consumption is so high is that there are so many pins on the packaging, so many high-bandwidth data pipes, etc. I.e., it's both because they're using slightly out-of-date fabs from TI, and because of the design. The UltraSPARC-IIs consume much, much less power; they're a lot smaller and were originally designed for a 0.45 micron process, I think it was.

  • by slothbait ( 2922 ) on Friday February 25, 2000 @11:14AM (#1245731)
    It's good to see a decent review of a chip from an architectural standpoint. Sites like Ars are starting to address such things, but don't go into much technical detail.

    The cache discussion is very interesting. It's true that most academic papers make large simplifying assumptions. (You spend that much time running hardware sims, and you'll look for ways to simplify your life, too.) It's interesting that other companies maintained those assumptions in their designs, even when they weren't particularly valid.

    This paper is also good for illustrating the simple fact that processor performance relies on a hell of a lot more than just MHz. I think any serious computer user should learn at least some basics of computer architecture, so that they will be better informed when comparing different hardware systems.

    Most software folks I know (except the compiler guys) are fairly ignorant of computer architecture as a field. Articles like this are good for drawing people in a bit. Many techies are drawn to Linux because they can see what's "under the hood". It's also good to know a bit about what's "under the hood" of your hardware.

    --Lenny
  • by morven2 ( 5718 ) on Friday February 25, 2000 @11:53AM (#1245732)
    However, make sure you're comparing like for like. It's easy to say 'Well, I can buy a 450 MHz processor, 18GB of disk and 256MB of RAM as a PC for ~$1000, and as a Sun for ~3000, so Suns are overpriced' but that's not the full story.

    Sun systems are made to a much higher quality than any PC I've ever found, even the high end servers from Compaq et al. [this doesn't mean that a few products of theirs haven't been total dogs, but in general ...] Also, Sun systems generally have better memory bandwidth, IO bandwidth, etc. than PCs of seemingly equivalent spec. And they last *forever*.

    I'm involved in running the web site for a public radio station, running on hand-me-down Sun equipment obtained from the affiliated university.

    We're serving a web site, doing audio streaming in both GTS's Java technology and Shoutcast, DNS service, plus email and interactive logons for about 50 staff members ...

    On what hardware?

    One SPARCstation 5. Single SPARC processor, I think 50 (50!) MHz, 128MB memory, old SCSI disk. The system must be six years old at least.

    Now that's lasting value. Not a cutting edge system any more by any means, but it's quite something to still be using a system that old for a production server ...
  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Friday February 25, 2000 @10:44AM (#1245733) Homepage Journal
    The things that they do don't require as much CPU as they need disk and memory speed. Sun delivers in that department.

    I'm working at IBM, and our AIX servers are pretty much the same: slow CPUs, but pretty good disk storage and plenty of RAM. This is exactly what we need to run DB2 and Apache. And we've got the 2nd-biggest web site (dollar-wise) on the Internet. These are the things that are important.

    Microsoft has a serious problem in this department. Their OS only runs on Intel platforms, and for sheer IO power, the Intel platforms lag behind the others. Even if W2K is a sweet reliable OS, it still can only go as fast as the hardware.
  • by FreeUser ( 11483 ) on Friday February 25, 2000 @12:05PM (#1245734)
    Why was this posted and the article on 2 TB memory support on Alpha Linux by SuSE that I submitted rejected, not once, but twice? SPARC is very cool, but the article isn't all that exciting IMO.

    I have to concur. I am generally not one to complain about editorial choices here, but 2 Terabyte memory support under Linux is IMHO much more interesting than the latest rumormongering from Sun. At the very least, both stories could have been linked.

    However, a story I forwarded from the mp3.com mailing list a while back (about the RIAA suit against them) was also dumped in favor of a movie review, mere days after the Motion Picture Association of America had begun thoroughly stomping the testicles of the Open Source community in the form of lawsuits against DeCSS, etc. Even something as dramatic as that didn't seem to have much effect on /. content (I mean, come on, helping the very crooks market their product through reviews days after they've declared war on the community you purport to support?). Given that editorial history, I doubt your complaining, or mine, will have any significant effect.

    However, all is not lost. Commander Taco, Hemos, et al. have been kind enough to release the sources to slashdot under the GPL, so you and I both are free to take our sour grapes and ferment them into the wine of another, parallel open source site. :-) And despite all of the flaws, there is still sufficient good content here for me to keep coming back, reading the stories that interest me, and posting comments (most of them much more on topic than this).

    As a final aside, working for a company which has nearly completed the process of dumping Sun in favor of FreeBSD and Linux solutions, I found the entire story rather amusing. While there are certainly specialized applications which will demand 1000 processors running in parallel, just about any job can be done far less expensively, and with far more flexibility, simply by using a Beowulf or similar cluster of inexpensive PCs on the Open Source operating system of your choice. Of course, Sun Marketing will undoubtedly convince some that they absolutely cannot live without the latest UltraSparc Millennium Parallel Honking Machine From Hell/1000, which can be yours for a mere $8.7 x 10^16 and will even run an operating system which has no compiler included (such "add-on" parts sold separately at still greater cost) and still, to this day, defaults to "ed" whenever an unfortunate user attempts a "crontab -e".[1]

    [1] Setting the EDITOR environment variable to "vi" or "emacs" will override this, but that doesn't make the default any less inane.
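    For the curious, the logic behind that footnote is the old Unix convention, sketched below in C. This is my own illustration of the usual behavior, not Sun's actual crontab source, and the scratch filename is just a placeholder: honor $VISUAL, then $EDITOR, and only fall back to ed(1) when neither is set.

        #include <stdio.h>
        #include <stdlib.h>

        /* Pick the editor the way crontab -e conventionally does. */
        static const char *pick_editor(void)
        {
            const char *ed = getenv("VISUAL");  /* preferred on full terminals */
            if (ed == NULL || *ed == '\0')
                ed = getenv("EDITOR");          /* the usual override */
            if (ed == NULL || *ed == '\0')
                ed = "ed";                      /* the infamous default */
            return ed;
        }

        int main(void)
        {
            char cmd[256];
            /* A real crontab would hand the editor a private copy of your
               crontab; /tmp/crontab.scratch is a hypothetical stand-in. */
            snprintf(cmd, sizeof cmd, "%s /tmp/crontab.scratch", pick_editor());
            return system(cmd);
        }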
  • If the operating system natively supports up to 1000 processors (I'd assume 1024 would be logical), then that means you can run the same exact binary on a single-CPU workstation all the way up to a supercomputer.

    Yes, this has always been one of the good points of Sun. I used to work for a company where developers had single-CPU workstations (from Ultra 5's all the way down to Sparc Classics), but production machines would be multi-processor machines (up to 32 processors at some clients). No recompilation needed. Sun hardware really scales well - of course, kudos should go to the kernel as well, because if the kernel doesn't scale well to multiple processors, the hardware won't do you much good.
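    A minimal C sketch of why no recompile is needed - illustrative only, not anyone's production code: the binary asks the OS at run time how many processors are online and sizes its thread pool to match, so the identical executable uses one CPU on a workstation and all 64 on an E10000.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        static void *worker(void *arg)
        {
            /* ... a slice of the real work would go here ... */
            printf("worker %ld online\n", (long)arg);
            return NULL;
        }

        int main(void)
        {
            /* 1 on a Sparc Classic, 64 on an E10000 -- same binary. */
            long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
            if (ncpus < 1)
                ncpus = 1;

            pthread_t *t = malloc((size_t)ncpus * sizeof *t);
            if (t == NULL)
                return 1;
            for (long i = 0; i < ncpus; i++)
                pthread_create(&t[i], NULL, worker, (void *)i);
            for (long i = 0; i < ncpus; i++)
                pthread_join(t[i], NULL);
            free(t);
            return 0;
        }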

    -- Abigail

  • by randombit ( 87792 ) on Friday February 25, 2000 @08:04PM (#1245736) Homepage
    My school uses exclusively Sun servers, and all the admins I've talked to sing high praises of them. But I've been stuck working on our Sun Ultra 5 workstations far too often, and they are TERRIBLE. Sooooo sloooooow. They're configured with 128MB of RAM and, if I remember right, a 300 MHz SPARC. My PII-350 with Linux and the same amount of RAM is much more responsive. Not to mention that they go down with alarming frequency, and they cost four times what I paid for my Intel box around the same time period.

    Yeah, the department where I study (CS) and the one where I work (Physics) both run Suns, and I've had pretty much the same experience, except the Sun Enterprise 1 [formerly the NFS server] on my desk only has 96 megs of RAM. :( However, there are a pair of Ultra 10s across the hall that I can pop over and use if I like, which run pretty well. And for big servers, at this point, Sun is still the way to go, despite the usual zealots (yes people, I run Linux at home too) who claim that anything can be done on i486 Beowulf clusters.

    Now if only Solaris didn't suck so much... OK, it scales well and is pretty stable (I'm still undecided on whether Linux/*BSD is more stable), but it's a real pain at times. I mean, any OS that uses CDE and comes with csh and ksh as the shells just sucks (I just installed bash this afternoon).

    Damn, it's a pain in the ass to get used to using a PC keyboard after using a Sun one all afternoon... oh, on the subject of hardware - Sun stuff may cost a lot, but it is quality stuff. Before they were replaced last month, the CS department had a bunch of old SPARCstations (mostly SPARC 5s, I think), which actually ran pretty well despite being who-knows-how-old (about as fast as a Pentium II-200 with 96 megs of RAM, if I were guessing at a PCish equivalent). And Ultra 2s are fucking awesome... spec one out at Sun's website sometime, you'll be amazed at how cool (and how insanely expensive) they are.
  • by Alpha_Geek ( 154209 ) on Friday February 25, 2000 @10:46AM (#1245737) Homepage
    People may hate to admit it, but Sun hardware is probably the most reliable hardware out there. That is what you are paying for. Their stuff is designed for very large companies who will pay a premium for reliable systems. The problem with PC hardware is that so many different people make different parts that compatibility issues can and do arise, and that is not acceptable for critical servers. Also as much as people gripe about the OS it is also the most scalable OS out there, way better than NT, Netware, Irix or even (*gasp*) Linux for massive systems.
    -
  • Let me take it upon myself to defend Sun one line at a time from your complaints.

    1) The CPU's are overpriced.

    The CPUs are *MORE EXPENSIVE*, yes; overpriced, no. Look at a comparison of the CPUs on just a very simple level. The CPU has 8 megs of L2 cache. Not 256k, not 512k, not 1 meg: 8 megs. That cache is running at CPU speed. If there's anything at all slowing their speed down, it's the large amount of L2 cache they run with their servers.

    2) Motherboards are overpriced.

    I honestly can't say I've ever priced a Sun Motherboard. There is no such animal.

    3) Memory is overpriced.

    Yes, yes it is. Buy Kingston.

    4) The funky hot-swap PCI cards are overpriced.

    First off, I'm Sun Hardware Certified, and I don't know of a single system in which you can hot-swap PCI cards. You can do this with drives and I/O boards (on the Enterprise 3500+ systems), but not with individual cards. Now, getting back to PCI cards being overpriced: Sun's specifications dictate that all hardware MUST have a PROM with the drivers on it to be certified as Sun compatible. At boot time, all of the PROMs are polled and all of the drivers are loaded at the hardware level. Plug and play that really works, imagine that...

    5) The OS is waaay overpriced.

    Free, yeah way too expensive.

    6) What does Sun do that Lintel cannot?

    A lot of things. First, all of the workstations and servers have TRUE plug and play. Their processors scale from laptops (anyone remember Tadpole?) all the way up to mainframe-sized computers (E10k). Also - hot-swappable I/O and CPU/memory boards in the Enterprise systems. The E10K can scale up to 64 450 MHz processors with 8 megs of L2 cache each, 64 gigs of RAM, and can run 4 virtual machines that can be dynamically allocated on the fly.

    7) Even a Farm of Lintel boxes can be had for less than that sun.

    Sometimes, true. A farm of 386 Linux boxen (~$5 apiece) will cost less than a fully loaded E10K (~$10,000,000). Realistically, though, the cost/performance is about 50/50. UltraPenguin runs better, IMHO, than Alpha Linux or x86 Linux.

    Don't form opinions without the data to back them up.
  • by Masker ( 25119 ) on Friday February 25, 2000 @11:07AM (#1245739)
    So this is a 600 MHz RISC processor using a .25 micron fabrication process; that should be pretty fast. However, it consumes 75W of power? AND the 750 MHz version will consume an estimated 90W (at .25 micron)?!?!

    WOW! And people think that Intel chips (and Alphas) consume a lot of power! The heat dissipation of these puppies will be monstrous! If you had a dual-CPU workstation with two 600MHz US-3s, the CPUs alone would require (at most) 150W of power. What sort of power supply would that need? 300W+, right? I'd really rather not have one of these sitting under my desk, considering the fan noise from the power supply, case and CPU fans.

    Why can't they use a smaller die size (which should reduce the power requirements and heat dissipation)? Is it just Sun's fabs, or is there some architectural reason? Or are the power consumption specs they quote just OFF?
  • by ajiva ( 156759 ) on Friday February 25, 2000 @02:05PM (#1245740)
    It's all about how fast the system can service requests, not how fast a single app runs. My Ultra 10 at work is very responsive even under heavy load (loads of > 1.0). Plus, Sun machines are very balanced: you don't have the CPU waiting for the memory, disk, etc., unlike PCs today, where the CPUs are fast but hindered by ATA disks and high-latency caches and memory.
  • by ChrisRijk ( 1818 ) on Friday February 25, 2000 @11:05AM (#1245741)
    Just thought I'd let you all know that I used emacs to write the whole article in HTML (though the webmaster for Ace's Hardware did some final formatting to fit with the rest of the site). Written on a FreeBSD box, too...

    I've already started writing a 2nd article, this time on Sun's MAJC chips, which have lots of interesting features. Yummy. The reason I'm doing a bit about Sun hardware is that (a) I tend to follow what they're up to, because they do occasionally do pretty interesting stuff, and (b) nobody else has written much...

    Wish they weren't so secretive sometimes, though. If you actually look at Sun's site, there's almost nothing technical about the US-3. We still have to wait until Sun starts actually selling US-3 hardware before we can be certain of anything...

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...