Sun Microsystems Technology

Sun To Release 8-Core Niagara 2 Processor 214

An anonymous reader writes "Sun Microsystems is set to announce its eight-core Niagara 2 processor next week. Each core supports eight threads, so the chip handles 64 simultaneous threads, making it the centerpiece of Sun's "Throughput Computing" effort. Along with having more cores than the quads from Intel and AMD, the Niagara 2 has dual on-chip 10G Ethernet ports with cryptographic capability. Sun doesn't get much processor press, because the chips are used only in its own CoolThreads servers, but Niagara 2 will probably be the fastest processor out there when it's released, other than perhaps the also little-known 4-GHz IBM Power 6."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Trust me... (Score:4, Insightful)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Friday August 03, 2007 @04:17AM (#20098443) Homepage Journal
    ...If they put THESE under the GPL, along with the T1, they'd be getting more press than they could imagine. If they used these a bit more aggressively - such as using them as a graphics processor on a PC - they'd be getting some amazing press. It's only if they keep them locked in a server closet that nobody will care.
    • by utnapistim ( 931738 ) <<moc.liamg> <ta> <subrab.nad>> on Friday August 03, 2007 @05:04AM (#20098615) Homepage
      ... will a beowulf cluster of these run linux, or blend?
      • Re: (Score:3, Interesting)

        by cyphercell ( 843398 )

        Can it blend? Yes, I'm sure it can; the iPhone blended.

        Speaking of which, how much does this processor cost? And why doesn't Sun Microsystems make laptops? I was looking for Unix machines recently and decided to go with the MacBook Pro rather than the Linux laptops from Dell, because of the hardware and the general lack of processing power, which doesn't seem to lend itself to virtualizing other operating systems.

        • Re: (Score:2, Informative)

          by Anonymous Coward

          why doesn't Sun Microsystems make laptops

          They do. Ultra 3 Mobile [sun.com].

          There are also the units from Tadpole [tadpole.com], and I'm sure others

          • by yoder ( 178161 ) *
            Looks like the Ultra 3 Mobile might have been a nice laptop, but it's no longer orderable. I didn't look to see if they have anything to replace it.
            • Last I heard, they don't. However, the UltraSPARC line isn't really what you want in a laptop anyways. Much better to get an X64 laptop and run Solaris10/x86 on it. You can use the list of tested and proven hardware for Solaris x86 [sun.com] to make sure it'll run without fiddling.

              There's been a lot of (justified) doubt in the past about Sun's commitment to Solaris x86, but it clearly is the future of consumer-directed Solaris. And it rocks.
      • Only one silly meme per customer please.
        • Re: (Score:2, Funny)

          by TheBOfN ( 1137629 )
          ...In Soviet Russia Linux running Beowulf clusters blends You!
          • "...In Soviet Russia Linux running Beowulf clusters blends You!"

            I for one welcome our new Soylent Green producing, waterfall powered overlords..
    • Re:Trust me... (Score:5, Informative)

      by LarsWestergren ( 9033 ) on Friday August 03, 2007 @05:45AM (#20098791) Homepage Journal
      ...If they put THESE under the GPL, along with the T1, they'd be getting more press than they could imagine.

      http://www.opensparc.net/ [opensparc.net]

      They are openly discussing making the Niagara 2 available as open source as well, but note that there are some roadblocks such as the US government's restrictions [opensparc.net] on crypto technology.

      • by BuR4N ( 512430 )
        It would be interesting to know if there is any actual hardware out there generated from OpenSPARC.
        • Re: (Score:3, Informative)

          by afidel ( 530433 )
          There are tons of research chips made from the OpenSparc designs and Simply RISC [opensparc.net] claims to have an embedded processor made from a single core T1 design.
        • Re: (Score:3, Informative)

          by mhall119 ( 1035984 )
          I believe Motorola makes SPARC-compatible processors; not sure if they're based on OpenSPARC or if they licensed it from Sun.
    • Re: (Score:3, Informative)

      by wild_berry ( 448019 )
      using them as a graphics processor on a PC

      Good enough for raster graphics, not so good for vector graphics or 3D, due to there being only 8 FPUs on the die, with only twice the floating-point throughput of the terrible-at-floating-point T1. Unless you trade some of that thread throughput for soft floating point.
    • Re:Trust me... (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Friday August 03, 2007 @07:30AM (#20099207) Journal

      If they used these a bit more aggressively - such as using them as a graphics processor on a PC - they'd be getting some amazing press
      A modern GPU is fairly similar in design to the T2, but there are a few key differences:
      • The T2 is mainly focussed on integer ops with only one floating point pipeline per core. A GPU typically is close to 100% floating point pipelines, and doesn't bother with integer arithmetic.
      • The T2 uses multiple contexts to hide memory latency, mostly caused by incorrectly predicted branches. A GPU typically doesn't bother much with branch prediction, since it runs code that is very light on conditional branches (on average, branches happen every 7 ops in general purpose code. In GPU code, they happen every few hundred).
      • GPUs usually focus on 4-way vector instructions, since most of their data is of this form (RGBA colours, XYZW vertexes). The T2 only has scalar instructions.
      I posted in my journal recently suggesting that it would be easier to produce a modern GPU than an older card, since modern GPUs have much less application-specific logic and do more in software, relying on just having lots of cores / pipelines to give speed.
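      The scalar-vs-vector distinction in the parent's list can be sketched with a toy example (Python purely for illustration; a real GPU would use shader code or SIMD hardware, and these function names are made up): a 4-way vector unit conceptually blends all RGBA lanes of a pixel in one operation, while a scalar pipeline like the T2's must issue one lane at a time.

      ```python
      def blend_vector(src, dst, alpha):
          """One conceptual 4-wide SIMD op: all RGBA lanes at once."""
          return tuple(s * alpha + d * (1 - alpha) for s, d in zip(src, dst))

      def blend_scalar(src, dst, alpha):
          """Scalar pipeline: one component per instruction, four issues."""
          out = []
          for lane in range(4):  # R, G, B, A handled one at a time
              out.append(src[lane] * alpha + dst[lane] * (1 - alpha))
          return tuple(out)

      px, bg = (1.0, 0.5, 0.25, 1.0), (0.0, 0.0, 0.0, 1.0)
      assert blend_vector(px, bg, 0.5) == blend_scalar(px, bg, 0.5)
      ```

      Same result either way; the difference is that the scalar version needs four times the instruction issues, which is the throughput gap the parent is pointing at.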
      • Re: (Score:2, Interesting)

        I posted in my journal recently suggesting that it would be easier to produce a modern GPU than an older card, since modern GPUs have much less application-specific logic and do more in software, relying on just having lots of cores / pipelines to give speed.

        Which makes me wonder ... if most of the work of a video card these days is done in software, and one of the biggest complaints about Linux is the lack of good free/open source drivers for high-end NVIDIA/ATI graphics cards, then why, exactly aren't FOS

        • Which makes me wonder ... if most of the work of a video card these days is done in software, and one of the biggest complaints about Linux is the lack of good free/open source drivers for high-end NVIDIA/ATI graphics cards, then why, exactly aren't FOSS developers working on one? Get some chip fab to produce some cards based on an open GPU design, write our own drivers and -- bam -- the LinuXtreme3D Graphics Accelerator! Screw NVIDIA and ATI.

          Because the sunk cost to get involved in graphics cards is huge

          • Where are you, Google... in our time of deepest need? Geeks around the world are on their knees, crying out to play Grand Theft Auto without any WINE or Virtualised Politically Corrects; to have the freedom to accelerate in threedee that which can otherwise only be seen through the Windows... please help us to feel lucky today!
        • Re: (Score:3, Informative)

          by CryoPenguin ( 242131 )
          You're looking for the Open Graphics Project [opengraphics.org]. But hardware is hard to design and expensive to fab, you're not going to get an Xtreme3D Graphics Accelerator competitive with the latest from NVIDIA or ATI.
    • by rbanffy ( 584143 )
      I would love to see Sun going back to the "distinctive look" "macintosh-beautiful" workstation business. Their current lineup looks like dull (if serious) PCs. I miss the Frog Design look.

      Of course, I know that generic x86 boxes (running Linux or, gulp, NT) killed the workstation market and that it would be hard to justify any development in this direction.

      It seems the Niagara 2 is better suited to desktop workloads than the first one was. Maybe they can do it again. I would love to see it.
      • by gig ( 78408 )
        If their boxes were distinctive people would link to their porno shots and blog about how drool worthy Sun's stuff is. Generate buzz. If the guts are unique then you can explain that with unique external features. Every bit of investment in this pays off as free advertising. Watch the buzz on the next iMac.

        The white box PC look is like a disguise.
        • by rbanffy ( 584143 )
          Yet the guts of a current Mac are more or less exactly like the guts of my HP notebook, which is not ugly but is certainly far less pretty than any MacBook.
    • Re: (Score:3, Interesting)

      by MoxFulder ( 159829 )

      ...If they put THESE under the GPL, along with the T1, they'd be getting more press than they could imagine. If they used these a bit more aggressively - such as using them as a graphics processor on a PC - they'd be getting some amazing press. If they keep them locked in a server closet, it's only then that nobody will care.

      I for one wish that they'd slap the UltraSPARC Niagara and its chipset on a standard ATX motherboard with PCI and PCI-Express support.

      There'd be a Linux port in practically no time, and I know a bunch of us Linux power users would adopt that setup in no time... cheap commodity hardware coupled with a high-throughput RISC processor would be great for desktop multitasking, software development, file serving, etc.

      • Re: (Score:3, Informative)

        by allenw ( 33234 )
        Linux is already running and certified [ubuntu.com] for Niagara.
        • Indeed! I'm saying it'd be nice if I could take that chip and slap it in my commodity PC, using my existing drives, PSU, wireless card, etc. Because right now, you can only get a Niagara system by buying it complete from Sun, which means paying a lotta markup on all the other components.

          It'd be great if you could buy drop-in ATX boards with SPARC/MIPS/PowerPC/whatever processors. That might lead to the rapid demise of x86 if they performed well!
  • by imroy ( 755 ) <imroykun@gmail.com> on Friday August 03, 2007 @04:22AM (#20098457) Homepage Journal

    This processor will also have a floating-point unit for each core, unlike the UltraSPARC T1 (Niagara) which only had one shared amongst all 8 cores. This should make it much more suitable than the T1 for a wide variety of applications. The T1 did great on multithreaded server-type tasks (e.g web, email, database) but would have been pretty hopeless for anything doing more than a bare minimum of FP work.

    • Re: (Score:3, Informative)

      by dread ( 3500 )
      Correct. At my last employer we found this out the hard way. Most servers were getting great performance but the one that actually did some (and it wasn't much really) FP work was horrible. This should really remedy that problem.

      On the other hand, SUN still suffers from the fact that ETCA is getting more and more mindshare in the telco arena which has been one of their major cash cows. It will be real interesting to see how that pans out in the end.
    • Re: (Score:2, Informative)

      by Anonymous Coward
      It has a cryptographic unit per core too. The PDF prezo linked by the page below says that bandwidth of the 8 crypto units is enough to run the on-chip 10 GbE ports encrypted. Sounds like an opportunity for some interesting applications -- VPN, SSL, SAN/NAS encryption, anyone?

      All that and the 64 threads run at 84 watts maximum (not TDP).

      http://sun.systemnews.com/articles/108/3/hw/17688 [systemnews.com]
  • Yes, but.. (Score:2, Funny)

    by aerthling ( 796790 )
    Yes, but will it run Vista?
  • Interesting (Score:5, Interesting)

    by ShakaUVM ( 157947 ) on Friday August 03, 2007 @04:40AM (#20098521) Homepage Journal
    I like it. In my work with high performance computers, a significant limiting factor in a lot of our tasks was the interprocessor bandwidth. The Niagara 2 has a crossbar, with a huge amount of bandwidth available between the different cores and their L2 caches.

    I'd like to see some benchmarks, and more technical specs, on these babies.
    • Re: (Score:3, Interesting)

      by Jasin Natael ( 14968 )

      If anybody is planning to benchmark this running common apps, I'd also be very interested to see how the approach to hiding memory latency works on more pedestrian applications like video encoding and pattern recognition (and maybe even thread-heavy GUI's).

      IIRC (I researched this proc years ago for a University paper), it tries to hide latency by switching thread contexts whenever there is a cache miss or branch misprediction. The crossbar should help a little with cache-related stalls, but the core would
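      The latency-hiding scheme described above can be sketched with a toy cycle model (Python, purely illustrative; the miss rate and penalty are made-up parameters, not Niagara's real numbers):

      ```python
      def simulate(n_contexts, instrs, miss_every, penalty, switch=True):
          """Toy model of one in-order pipeline. Each context runs `instrs`
          instructions; every `miss_every`-th instruction misses in cache and
          stalls its context for `penalty` cycles. With switch=True the core
          picks any ready context each cycle (Niagara-style fine-grained
          multithreading); otherwise it blocks on the stalled context."""
          done = [0] * n_contexts      # instructions retired per context
          ready_at = [0] * n_contexts  # cycle when context is next runnable
          cycle = 0
          while any(d < instrs for d in done):
              if switch:
                  runnable = [i for i in range(n_contexts)
                              if done[i] < instrs and ready_at[i] <= cycle]
                  pick = runnable[0] if runnable else None
              else:
                  # Blocking core: stays with the oldest unfinished context,
                  # inserting pipeline bubbles while it waits on memory.
                  pick = next((i for i in range(n_contexts)
                               if done[i] < instrs), None)
                  if pick is not None and ready_at[pick] > cycle:
                      pick = None
              if pick is not None:
                  done[pick] += 1
                  if done[pick] % miss_every == 0:  # cache miss
                      ready_at[pick] = cycle + 1 + penalty
              cycle += 1
          return cycle

      fast = simulate(4, 20, 5, 20, switch=True)
      slow = simulate(4, 20, 5, 20, switch=False)
      assert fast < slow  # switching contexts hides most of the stall time
      ```

      With other contexts covering stall cycles, total runtime approaches the raw instruction count; the blocking core instead pays every miss penalty in full, which is the behaviour the parent describes the T1/T2 avoiding.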

  • by Rob Simpson ( 533360 ) on Friday August 03, 2007 @04:45AM (#20098539)
    (nt)
  • by Eukariote ( 881204 ) on Friday August 03, 2007 @04:48AM (#20098549)

    Along with having more cores than the quads from Intel and AMD...
    What quad from Intel/AMD? Intel is selling two dual cores on a cracker. The "quad" bit is just marketing, the actual silicon chips are pure dual core designs that have to talk across the front side bus just as in a two-socket server. And AMD has so far only been previewing their quads, you can't buy them yet.
    • by OrangeTide ( 124937 ) on Friday August 03, 2007 @04:54AM (#20098569) Homepage Journal
      Customers just want to fit 4 cores in one socket. That's all that matters. That you can get a 1U with two sockets and put 8 Intel cores in it for under $2k is a big deal right now.

      That said I've always wanted to get my hands on some of these new multicore UltraSparcs. I think they have a lot of potential, and the new ones seem extremely powerful.

      Now if only Sun would put the low-end one in a Mac mini form factor and sell it as a Java developer's kit, then maybe I could play with one. The low-end Sun Fires are something I could almost afford, but I don't really want to keep a 1U on my desk just to try out the technology.

      I think the big 64-bit address space and the ability to run lots of threads fit well with Sun's Java. Not that I am a Java developer; I just think it's a good match, and it seems that's why people were using the older CoolThreads systems: enterprise Java.
      • Re: (Score:3, Interesting)

        by Eukariote ( 881204 )

        Customers just want to fit 4 cores in one socket. That's all that matters.

        Note that the post was about the number of cores/threads in the Niagara chip design. In terms of chip design, the circuitry on the silicon is what matters, not how you package, integrate, or market it. Moreover, it does matter to a customer if marketing speak fobs him off with two dual-core chips on a cracker instead of an integrated four-core design.

        Performance does not scale purely with the number of cores, it also matters how efficie

        • by brucmack ( 572780 ) on Friday August 03, 2007 @06:17AM (#20098909)

          In terms of chip design, the circuitry on the silicon is what matters, not how you package, integrate, or market it.

          I agree with you on this point.

          Moreover, it does matter to a customer if marketing speak fobs him with two dualcore chips on a cracker instead of an integrated four core design.

          I don't agree with you here. What matters to the customer are costs and performance. They shouldn't have to care about how the package works, as long as it works correctly.

          From Intel's perspective, they had two options:

          1. Start with a new design that integrates all four cores on a single chip.
          2. Put two existing chips onto one package. Chips that they've been manufacturing for quite some time, so yields are good and there's headroom for higher clock speeds or lower power consumption.

          From the customer's perspective, those two options correspond to:

          1. A chip that performs a bit better, but probably costs more and definitely comes on the market later.
          2. A package that's got some performance drawbacks in certain situations, but is available now at a reasonable price.

          What do you think Intel and their customers prefer?

          • by Sycraft-fu ( 314770 ) on Friday August 03, 2007 @07:15AM (#20099147)
            Also, Intel seems to have shown that having two units that need to communicate across the FSB doesn't really cause any problems. It worked fine for their Pentium Ds (2 single cores) and works fine for the quads. While bus contention assuredly becomes a problem at some point, with just two units it doesn't seem to be for normal tasks.

            Thus it makes it a worthwhile design to go with. I could see it continuing, too. Maybe their next-gen chips are 4 cores on a single unit, which goes mainstream, and then an 8-core, 2-unit job for higher-end stuff. At some point there may be too many cores per unit to do that without bus contention, but then maybe not, since the speed of the bus keeps getting increased. Also, I could see OSes being made aware of this, if it continues: knowing that each X number of processors is a unit, the OS could shuffle all it likes within that unit, but shuffling across units incurs more penalties and thus isn't done unless it has to be. So if a process had 4 threads, and a unit was 4 cores, it'd make sure all the threads were running on the same unit.

            Regardless, you are correct that at this point it is an excellent idea. Doesn't matter if it is the most technically correct solution or not, what matters is that it works well and is cheap.

            We make concessions like that all the time in the computer world. Memory would be a good example. For a good while on desktops, memory, the FSB, and the processor ran at the same speed. If you had a 30MHz 386, you were running 30MHz memory. Multipliers weren't a thing you worried about. Then we started to run into the limits of what memory could do. We could scale processors faster than RAM, or at least faster than RAM could be made cheaply. Thus the start of clock-multiplied chips. This works, but at some point the memory is just too slow. So then we start getting into tricks like DDR RAM, which transfers twice per clock cycle, and interleaved RAM, so that the processor has two channels for faster access, and so on. Currently you can have a CPU at one speed, an FSB at another, and memory at a third. Right now I've got a 2.66GHz CPU, a "1333MHz" FSB (it's not really 1333MHz; FSBs are quad-pumped, so it really runs at 333MHz) and "667MHz" RAM (again not really; it's DDR, so the actual memory clock is 166MHz and the bus clock is 333MHz, it just does 667 million data transfers per second, hence the rating), and this is not an uncommon setup.

            None of this is an ideal setup. Ideally, the FSB would run at the same speed as the processor and so would the RAM. This would lead to the processor having almost no wait time for memory data and very little need for trickery to try and prefetch data and such. However alas, if it were possible at all it would be too expensive to do. Thus we have this somewhat hacked solution. However in reality it matters little, though a hack it may be, it works real well. It has given us memory that can get the data to the CPU in a timely fashion and doesn't break the bank.
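            The clock arithmetic above can be checked directly (the 166.67 and 333.33 values are the nominal clocks behind the rounded marketing names):

            ```python
            # "DDR2-667" memory: the name is a transfer rate, not a clock.
            memory_clock_mhz = 166.67            # actual DRAM core clock
            bus_clock_mhz = memory_clock_mhz * 2 # DDR2 I/O bus: ~333 MHz
            transfers_mt_s = bus_clock_mhz * 2   # double data rate: ~667 MT/s

            # "1333MHz" FSB: the underlying clock is 333 MHz, quad-pumped.
            fsb_clock_mhz = 333.33
            fsb_rate_mt_s = fsb_clock_mhz * 4    # ~1333 MT/s

            assert round(transfers_mt_s) == 667
            assert round(fsb_rate_mt_s) == 1333
            ```

            So the "667" and "1333" figures are million-transfers-per-second rates derived from much slower underlying clocks, exactly as the parent says.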
          • Re: (Score:3, Insightful)

            by TeknoHog ( 164938 )

            I'd be glad to have any kind of 4-way SMP system. Whether they're all on different chips or all on the same, I'd still get 4 CPUs of processing power. Of course, inter-CPU communication makes a difference in certain applications, but people have worked with traditional SMP systems for decades, and we know how to make good use of them. Putting them on the same die won't solve the basic problems of parallelization.

      • by dbIII ( 701233 )

        That you can get a 1U with two sockets and put 8 intel cores in it

        It's even better that you can have two boards like that side by side in 1U, had a few since early this year. For some things this Sun is really going to walk over a pile of processing nodes like that - but it won't be as cheap.

    • by ricky-road-flats ( 770129 ) on Friday August 03, 2007 @08:52AM (#20099785) Homepage
      It's already been said, but that's a big glossy load of poop.

      The quads from Intel provide four physical cores per socket. That is the definition of a quad in this context. The exact workings of how many bits of silicon there are, how they talk to each other and to the rest of the system is, to 99.999% of users and computer buyers, background fluff.

      This was the same as when Intel put two single-core chips into a package to release a 'dual core'. Lots of people like you jumped up and down and pointed out it wasn't *real* dual core, and how the FSB issue would cripple performance. Amazingly, it wasn't the case - they sold in droves, and real-world performance was good enough to carry Intel through to the 'true' dual core, the Core 2 Duo.

      If the competition had anything out that was the same cost and performed significantly better than the 'fake' quad cores, you would have an argument. But they haven't and you don't. Bear in mind I'm talking about the huge x86/x64 market, not the relatively low volume non-x86 server market.

      What Intel did back then and again now is perfectly sensible. They have millions of high yield, robust dual core chips being churned out, and they have built into the infrastructure the ability to put two into a package, lower the speed a bit to drop the per-core heat output, and sell reasonably priced (now) quad core chips. When the drop to 45nm happens, they will release their 'real' quad cores, and pretty quickly put two of those into a package to start selling oct-core (whatever we're going to call them). And so it goes.

      What's the alternative? Not sell quads until 45nm comes out? Not working out too well for AMD, is it? I've asked the question before here and on realworldtech.com - at what point will the FSB problem actually become a painful problem for the Intel chips? Well, not yet (4 cores) is the answer, despite dire predictions from the AMD camp for years. My guess is that, shock of shocks, Intel have actually thought it through - and that's why CSI is coming. When the number of cores gets to the point where the FSB will actually hurt performance relative to the AMD architecture, that's when CSI will kick in. Maybe at 8 cores, maybe at 16.

      What, you don't need quad core yet? Fine, stop your bitching and choose what's right for you. Vive la difference, and 3 cheers for a market that gives us the choice.

      • Re: (Score:3, Insightful)

        by Eukariote ( 881204 )

        The quads from Intel provide four physical cores per socket. That is the definition of a quad in this context.

        Well, yeah, and if I say the context is the motherboard, then I can define a four-socket board holding single-core CPUs to be a quad core chip. It would be equally ludicrous in the context of chip design.

        ... the FSB issue would cripple performance. Amazingly, it wasn't the case - they sold in droves

        I take it you haven't looked at proper SMP benchmarks. Sitting on the same FSB sucked rather badl

    • by LWATCDR ( 28044 )
      I like AMD.
      But you are wrong from the consumer point of view.
      I can get an inexpensive Intel system with four cores in a socket. That is a selling point.
      I can get a two socket system with eight cores for not that much money as well.

      What you will not see are people pushing four-socket, sixteen-core systems that way. At that point, yes, the Intel two-duals-on-a-cracker approach falls apart.
      BUT and this is a big BUT for a lot of people the Intel hack works and works well.

      I do hope that AMD does well with their four core cp
  • huh? ethernet ports where? anyone care to explain?
    • by Cheesey ( 70139 ) on Friday August 03, 2007 @05:44AM (#20098779)
      High-speed CPUs are all limited by a bottleneck - getting data on and off chip. Putting the Ethernet controllers on chip helps to offset this.

      In the future, it is likely that all the wired buses in your motherboard will be replaced by an internal Ethernet-like network. We are already seeing a trend towards simpler and faster interconnects such as SATA. The next step is to use Ethernet-style connections for every chip-to-chip link, and within the chips themselves too. If this seems unlikely, consider that your PC's memory bus already is basically a network connection. The device at one end (CPU) is in a different clock domain to the device at the other (memory). Data is sent in packets (called bursts) to offset the latency of setting up a transfer.
    • Not "ports" but controllers. The controllers for two 10GbE Ethernet ports are built into the chip itself, so you don't have to go to an off-chip device to control them.
  • by Anonymous Coward on Friday August 03, 2007 @05:27AM (#20098717)
    Am I the only person who read the headline as "Sun to Release 8-Core Viagra 2 Processor"?
  • by zeromemory ( 742402 ) on Friday August 03, 2007 @05:58AM (#20098835) Homepage
    Sun donated one of the original T2000 (based on the original 8-core, 4-thread/core Niagara processor) systems to a campus organization where I'm a volunteer system administrator, so I think I have quite a bit of experience with this processor. Here's my take on the Niagara2, based upon my experiences with the Niagara1:
    • No, this processor is not going to be the 'fastest' processor out there; this processor is designed primarily for workloads that don't require floating-point calculations (web servers, mail, etc), so it's not going to be the go-to processor for places like rendering farms. In fact, floating-point performance on the Niagara1 was so terrible that Sun included a special cryptographic accelerator to help with SSL performance (the primary consumer of floating-point calculations on most web servers).
    • This processor architecture absolutely rocks for the purpose it was intended, though. It consumes very little power, but handles service loads amazingly well. We also have a Sun v40z (8-core Opteron server) that would barely be able to keep up with our T2000 (that's saying a lot), and our T2000 consumes only a little more than half as much power as our v40z (2.6A @ 120VAC compared to 4.6A @ 120VAC).
    • The inclusion of 10GbE support is going to be absolutely essential and will help make servers based upon the Niagara2 stand out compared to servers from competing vendors. Why is 10GbE so important? I mean, we already have GbE, and most places barely have an infrastructure for that in place, right? The answer is SAN. 10GbE is going to be necessary if you're going to be using iSCSI to consolidate storage and deliver reasonable performance, and most places are heading in that direction, especially the target market for these systems.
    • Solaris Logical Domains (not to be confused with Sun Containers or Zones) is a hardware-based virtualization technology that was packaged with the Niagara1 and will probably be included with the Niagara2. Using Logical Domains, you can create independent virtual servers running different operating systems and divide hardware resources up between them, down to the individual CPU thread and PCI Express bus leaf level. Unlike software virtualization solutions, all your virtual servers are never dependent on any single virtual server (global zone, dom0, etc). This technology is making hardware virtualization a possibility for many places.

    I think the Niagara is a pretty solid design, but it's not the processor to end all processors. For service workloads, I don't think you can get a better processor, but you probably don't want one of these processors in your workstation. Sun Microsystems is also headed in the right direction, establishing an open community around these processors and Solaris.
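    As a rough illustration of the resource-partitioning idea behind Logical Domains (a toy sketch only, not Sun's actual LDoms interface; the domain names and thread counts are made up):

    ```python
    def partition_threads(total_threads, wants):
        """Assign disjoint ranges of hardware thread IDs to named domains.
        `wants` maps domain name -> thread count; raises if the requests
        oversubscribe the chip, since domains don't share threads."""
        if sum(wants.values()) > total_threads:
            raise ValueError("more threads requested than the chip has")
        domains, next_id = {}, 0
        for name, count in wants.items():
            domains[name] = list(range(next_id, next_id + count))
            next_id += count
        return domains

    # Carve a 64-thread Niagara 2 into three independent guest domains.
    doms = partition_threads(64, {"web": 32, "db": 24, "build": 8})
    assert len(doms["web"]) == 32 and doms["db"][0] == 32
    ```

    The point is that each domain owns its threads outright, which is why no guest depends on any other guest the way software-virtualized systems depend on a dom0 or global zone.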
    • by Alioth ( 221270 ) <no@spam> on Friday August 03, 2007 @06:09AM (#20098873) Journal
      The floating point performance of the new processor should be like night and day compared to the old one you had: the old one apparently only has 1 FPU for the entire device - the new one has an FPU per core.
      • Re: (Score:2, Informative)

        by TheRaven64 ( 641858 )
        Note that this is per core, and not per context. With eight contexts per core, it's still going to be a bottleneck if your code is more than 1/8th floating point calculations. On the other hand, a big part of the performance problem came from register copying from the individual cores to the FPU and back on the T1, and this should be fixed with the T2. It's still not going to be a great floating point chip, but it should be a bit better.
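        The 1/8th break-even point falls out of a one-line occupancy model (a deliberate simplification that ignores pipelining and FPU latency, included only to show the arithmetic):

        ```python
        def fpu_bound(n_contexts, fp_fraction):
            """Rough model of one FPU shared by n_contexts threads: each
            thread issues fp_fraction of its instructions to the FPU.
            Returns the fraction of peak throughput the core sustains."""
            demand = n_contexts * fp_fraction  # FP ops wanted per cycle
            if demand <= 1.0:
                return 1.0                     # the single FPU keeps up
            return 1.0 / demand                # threads throttled by the FPU

        assert fpu_bound(8, 1 / 8) == 1.0      # exactly the break-even point
        assert fpu_bound(8, 0.25) == 0.5       # 2x oversubscribed: half speed
        ```

        With eight contexts per core, anything past one FP instruction in eight saturates the shared FPU and everything slows proportionally, which is the bottleneck described above.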
    • Re: (Score:2, Interesting)

      by BDeblier ( 155691 )
      In normal circumstances public key cryptography doesn't touch floating point. It's multi-precision integer calculations that are required for this. But UltraSparc cpus have such a bad integer multiplier that you need to resort to floating point trickery to get a slightly better performance. It's no miracle they had to add a dedicated crypto processor to the Niagara line.
    • by phoebe ( 196531 )

      This processor architecture absolutely rocks for the purpose it was intended, though. It consumes very little power, but handles service loads amazingly well. We also have a Sun v40z (8-core Opteron server) that would barely be able to keep up with our T2000 (that's saying a lot)

      A $16,000 machine barely keeps up with a $21,000 machine is saying a lot?

      Sun Fire V40z Server
      $ 16,995.00
      4 Dual-Core AMD Opteron - Model 885

      Sun Fire T2000 Server
      $ 21,495.00
      1 x 1.2 GHz UltraSPARC T1 - 8 Core

    • by slew ( 2918 )

      Sun included a special cryptographic accelerator to help with SSL performance (the primary consumer of floating-point calculations on most web servers).

      Interesting comment about the crypto-processor, except SSL servers usually don't use much floating point (RSA key stuff is large-integer multiply/modulus work, not floating point), so I don't think this factoid is related to the FP performance of the T1 (or T2). I always thought Sun had their crypto-accelerators as add-in system boards; hmm, I'll have to take

  • Niagara (Score:2, Funny)

    by Cctoide ( 923843 )
    Niagara? I don't want to know what happens when one of these has to compute an integer overflow, do I?
  • To me the most exciting part is that they're putting 2x10Gb ethernet ports directly on the CPU. The crypto is cool too: I hope it's not encapsulated entirely in the ethernet, so apps can call it directly.

    If they made these CPUs cheap enough, we could put them on PCI-e cards in a Xeon, and run a Linux cluster over the PCI-e, coordinated by apps running on the Xeon. Or maybe stuff a Niagara/PCI-e box with extras, like we used to do with Mac Quadra 950/NuBus cards. But this time with 20Gbps ethernet per node,
  • The new Sun Moto: (Score:3, Insightful)

    by teknopurge ( 199509 ) on Friday August 03, 2007 @09:35AM (#20100263) Homepage
    "Do No Evil"

    It's like it's 1999 all-over again, except this time Sun actually has revenue in-line with expectations. I continue to maintain Sun is this century's Bell Labs and Xerox PARC all rolled into one.
  • I have been actively interested in the T1 and T2 series for a while. Currently, my backup server at work is a v880 (Sparc III) with 8 GigE interfaces.
    I could replace it, and get more throughput from a T2000, but the issue was that restores would lose that edge due to poor single-thread performance.

    The Niagara 2 series is set to have 1.4X the single-thread performance, plus the higher simultaneous thread count (though with a slightly longer pipeline).

    Since I am moving away from tape and going to Virtual Tape Library te
  • by MOBE2001 ( 263700 ) on Friday August 03, 2007 @02:54PM (#20105245) Homepage Journal
    Each core supports eight threads, so the chip handles 64 simultaneous threads, making it the centerpiece of Sun's "Throughput Computing" effort.

    Wow! Only 64 threads, eh? That's the problem with threads, you can't have too many of them because switching from one thread to another is very expensive, cycle-wise. In other words, as long as threads remain the only multitasking mechanism used by the computer industry, super fast, fine-grained multiprocessing will remain a dream. It gets worse. There is another problem with threads that is even worse than this. Threads are inherently asynchronous. Until and unless the computer industry comes to its senses and realizes that asynchronous processing makes it impossible to implement programs with deterministic timing, we will continue to pay the heavy price of software unreliability. Switch to a non-algorithmic, signal-based, synchronous software model (with the supporting CPU architecture), and the problem will disappear. Threads suck! Period. One man's opinion.
