Silicon Graphics

U.S. Helps Finance New Cray Development (61 comments)

Durinia writes "SGI has announced a few details on their next Cray vector supercomputer. The press release is mostly about them getting government support for the R&D. It does, however, mention that it will be combining the powerful Cray vector processors with SGI's ccNUMA architecture for big-time scalability."

  • Now that Belluzzo is on the Dark Side of the Force, I believe SGI management may be planning to revive Cray.
  • by Troy Baer ( 1395 ) on Wednesday September 22, 1999 @10:09PM (#1666093) Homepage

    A supercomputer doesn't have what you'd consider an operating system. It's a front-end computer that does all I/O, provides the usual operating system services, and controls the supercomputer. Linux is perfectly practical for the front-end. It would be nice to see a Linux in there.

    Bruce, what are you talking about? I suspect you haven't seen a Cray machine for quite some time.

    Most of the current Cray vector machines (like our T90) have their own OS (UNICOS) and I/O subsystem. The I/O is in a physically separate box, like in a classical mainframe, but it's still part of the machine. There are generally a couple of workstations (usually Suns) attached directly to the machine, but those are system consoles and monitoring stations. A Linux box might be appropriate for one of these monitoring stations, but that's about it. And if you think a Linux machine could handle the I/O that a Cray's capable of, you're insane. We're talking multiple GB/s.

    The idea for the SV2 (as it was explained to us at a Cray User Group workshop last fall) is that it piggybacks on a MIPS-based SN1 (the next-generation SGI Origin ccNUMA machine). That implies IRIX (with features ported from UNICOS), not Linux. I doubt Linux on Intel or MIPS will be ready for the kind of prime time SGI is going to be selling the SV2 for by the time it ships.

    (Before you flame me for putting down Linux in this particular context, consider the following: I've been using Linux as my primary OS at home since '93, and I'm one of the guys working on the Beowulf cluster of SGI 1400Ls at OSC. I'm rooting for Linux too, but it's not always the right answer.)

    --Troy
  • You need to remember that all these assumptions about the hardness of cracking keys depend on NP-hard problems not being tractable in polynomial time...

    This is only a conjecture for now.

    laurent

    ---
  • OK, I'm willing to accept that it's not done that way any longer. I did write a lot of microcode for a baby supercomputer, a 4-datum SIMD called the Pixar Image Computer that was driven by a Sun or SGI, but that was 10 years ago. My surmise is that the vector processor is really a separate device from a general-purpose CPU, even if it's in the same box, but I'd be willing to be enlightened on that detail as well. Bruce
  • The NSA is probably the #1 supercomputer customer in the world. There used to be a Cray field office in Laurel, Maryland (near Fort Meade). I'm not sure if it is still there. The NSA has been a major supporter of high-end computing since the 1950s.
  • Well, you are half right. A Linux Beowulf is really good for latency-tolerant, high-bandwidth processing, such as a render farm. In that environment, a cluster will perform as well as or better than a Cray. However, on latency-sensitive work like floating-point calculations spread across 1,024 processors, the Cray will leave a cluster sitting in the dust.

    Steve Ruyle
  • The Arctic Region Supercomputing Center does use Cray/SGI systems (currently powered on are a Cray T3E and a J932). One previous machine was a Y-MP. I don't remember any non-Cray supercomputer ever being powered on over at ARSC.

    You can read more about the current hardware at http://www.arsc.edu/resources/Hardware.html [arsc.edu]
  • Right on! Price-performance is of course an interesting factor, but at the very high-end it becomes very much secondary.

    I sat through a Beowulf-type cluster presentation just yesterday, and the upshot is that they simply cannot touch some problems. If there is significant communication going on between nodes, especially if it comes in bursts, present-day Beowulf does not cut it.

    We are seeing 95%+ efficiency on 64 or even 128 CPUs on a Cray T3E, while an Intel cluster connected through switched 100BaseT shows something like 75% at only 16 nodes. Going to Gigabit Ethernet, this figure does not improve much, mostly due to inefficient Gigabit drivers. One is then forced to look into much faster, but much costlier, interconnects.
    Then there are also the CPU-to-memory bandwidth and the latency of disk I/O to consider.

    Simply stated, Beowulf-type clusters today cannot touch 'real' supercomputers for many types of applications. Those applications do not decompose into embarrassingly parallel tasks the way rendering frames or brute-force attacks on encryption do.
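
    (For anyone unsure what those percentages mean: parallel efficiency is just measured speedup divided by processor count. Here is a quick C sketch with invented timings, not our actual benchmark data:)

    #include <stdio.h>

    /* Parallel efficiency = (T1 / Tn) / n, i.e. achieved speedup over the
     * ideal speedup.  All timings below are invented for illustration. */
    int main(void)
    {
        double t1 = 1000.0;          /* hypothetical serial run time, seconds */
        double t_t3e_128 = 8.2;      /* hypothetical T3E time on 128 CPUs     */
        double t_cluster_16 = 83.0;  /* hypothetical cluster time on 16 nodes */

        printf("T3E, 128 CPUs:     %.0f%% efficient\n",
               100.0 * (t1 / t_t3e_128) / 128.0);
        printf("Cluster, 16 nodes: %.0f%% efficient\n",
               100.0 * (t1 / t_cluster_16) / 16.0);
        return 0;
    }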

    Now, if only SGI/Cray would develop a T3F based on the 21264 Alpha CPU.
    We'll keep dreaming and hoping ...
  • No money for education, but plenty of handouts for corporations in their death throes.
  • by Anonymous Coward on Wednesday September 22, 1999 @05:44PM (#1666103)
    Think of a vector processor as being super, super pipelined with dedicated hardware for jamming stuff into the pipe as quickly as possible. Similar to, but not exactly like an array processor. Both can be classified as examples of the Single Instruction-Multiple Data stream (SIMD) architecture, although they're implemented very differently.

    Vector processing has one big advantage over massively parallel processing. Essentially, if you have a uniprocessor vector machine, and any appreciable amount of your code is vectorizable, you reduce total CPU time as well as wall-clock time. With massively parallel systems, you always pay for reduced wall-clock time with increased total CPU time due to synchronization overhead. A search on Amdahl's Law should turn up some interesting reading (and there's a toy calculation further down this post).

    Looking back at some of my old documentation (~1990), I have these stats for a Cray Y-MP with a memory cycle time of 6 ns, and 2 FPUs/CPU:

    1 CPU : peak throughput 333 MFLOPS
    8 CPUs: peak throughput 2667 MFLOPS

    These throughputs assume 100% vectorizable code. A single-CPU Y-MP running LINPACK with a vector size of 300 gets you 187 MFLOPS. However, with a LINPACK vector size of 1000, the throughput is 308 MFLOPS, which approaches the peak throughput pretty closely.

    I wish I had something that stated just how deep the vector registers are on a Y-MP. I suspect it's somewhere close to 100 (i.e., 100 stage pipeline!).
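
    To put a number on the Amdahl's Law point above, here's a toy calculation in C (the vectorizable fractions and speedup factors are invented for illustration, not measured on any real machine):

    #include <stdio.h>

    /* Amdahl's Law: speedup = 1 / ((1 - p) + p / s), where p is the fraction
     * of the work that can be vectorized/parallelized and s is the speedup
     * of that fraction.  Both numbers below are made up. */
    static double amdahl(double p, double s)
    {
        return 1.0 / ((1.0 - p) + p / s);
    }

    int main(void)
    {
        /* 90% vectorizable code on a vector unit 20x faster than scalar... */
        printf("90%% vectorizable, 20x vector unit: %.1fx\n", amdahl(0.90, 20.0));
        /* ...barely loses to the same 90% spread across 1024 processors,
         * and that is before any synchronization overhead. */
        printf("90%% parallel, 1024 CPUs:           %.1fx\n", amdahl(0.90, 1024.0));
        return 0;
    }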

    Anyway, this post is too long. Bye now.

    Bill Wilson
    The Keeper of Cheese [netdoor.com]

  • by LL ( 20038 ) on Wednesday September 22, 1999 @06:01PM (#1666104)
    I'm a little amused by people asking how the cost of a Cray SV2 compares with a Beowulf. For a certain class of applications, vector machines remain dominant, and some groups are willing to pay the extra premium for that niche, regardless of the absolute costs (and don't forget the cooling/storage/manpower multipliers). Much as a train (vector computers) is suited to different terrain than buses (shared-mem) or trucks (SMP), vector computers provide very cost-effective REAL computing power, often obtaining 50-60% of peak performance, whereas you'd be lucky to see beyond single digits for MPPs (and before I get roasted, I'd qualify that by noting that decent compilers and reworked algorithms often overcome initial technical limitations).

    As for the US support of Cray, well, jaded veterans of comp.sys.supercomputer and HPCC practitioners are well aware of the historical situation with federal funding, technical advantages, and bang-for-buck comparisons with Fujitsu and NEC vector computers. For people interested in what the Japanese are doing, I believe NEC are planning on introducing a 1 Teraflop machine with the goal of hitting 5 Teraflops for their Whole Earth Simulator project [nasda.go.jp] . Some scientists' idea of heaven is a dedicated vector box, and for their purposes and types of code, it is a valid desire.

    The SV2 is a curious beast, effectively the first stage in the merging of the Origin cc-NUMA memory subsystem and vector chips. You can think of it as a hybrid box allowing various combinations of graphics pipes, MIPS/Merced nodes, and vector nodes. The gripe of some people is that they are looking for a successor to the top-end T90 and they are impatient. However, developing at the high end is always trickier than people expect (witness Merced), as you need to increase capabilities along a multitude of dimensions (memory latency, I/O subsystem, heat dissipation, networking) rather than relying on the automatic boost from Moore's Law. Unfortunately there are very few applications which demand absolute performance regardless of actual cost.

    To paraphrase crass consumerism, if you have to ask about the price, you can't afford it :-).

    LL
  • For *some* problems, Cray is not the answer. For others, it is the *only* answer. Any *big* problem using lots of "vector operations" (see Bruce's post; it's kind of like the extra instructions that are just now getting added to PC-style computers for multimedia, but *much* better) and lots of RAM at once is made for a supercomputer. Oh, and anything that scales well to multiple processors; they have techniques for doing that too.

    Supercomputer != Beowulf! Network latency sucks in comparison, for tasks like this. Only readily parallelizable tasks that *don't* need lots and lots of RAM at once, and *don't* benefit greatly from the vector operations, are better for Beowulf. In fact, some cryptography problems have been mostly solved on regular computers and then finished on a Cray, *because* the Cray did that stage of the problem so much better.

    Of course, it won't be Cray without Seymour. :|

    It must have been nice to build a supercomputing couch, though, back in the day. :)
  • Cray were the first what?
  • Unless someone comes up with new algorithms, big vector computers won't crack all keys. The algorithms we know of are exponential: when you add one digit to the key, you need a computer that is K times faster (K being the base of that digit) to crack it in the same time.
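
    To make that concrete, here is a throwaway sketch in C, counting in bits so each extra bit doubles the work (the keys-per-second figure is invented, not a claim about any real machine):

    #include <stdio.h>
    #include <math.h>

    /* Toy illustration of exponential key growth: each extra bit doubles the
     * brute-force search space.  keys_per_sec is invented for the example. */
    int main(void)
    {
        const double keys_per_sec  = 1e12;               /* pretend machine */
        const double secs_per_year = 3600.0 * 24 * 365;
        int bits;

        for (bits = 40; bits <= 128; bits += 8) {
            double years = pow(2.0, bits) / keys_per_sec / secs_per_year;
            printf("%3d-bit key: about %.1e years to exhaust\n", bits, years);
        }
        return 0;
    }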

    Some other things would profit much more from new supercomputers. What they are really good at is solving partial differential equations, either on really big domains (weather forecasting) or with high precision (nuclear weapons simulation), and I can understand why the US government might be interested in that.

    Laurent

    ---
  • I'm one of the other posters who was wondering about the price/performance comparison.

    One could very well chalk up the price difference to reduced communications/messaging latency between computing units, cooling solutions, memory architectures, etc.

    And hasn't it already been shown, at least for brute-force cracking techniques, that massively parallel computing does work, so that a Beowulf is okay for such a task?

    Of course this is different than using a more elegant and efficient algorithm to handle encryption/decryption attacks, but that is out of my/our ken as well.


    -AS
  • You're thinking of the OTHER Cray. Sure, Seymour founded Cray, but when he passed away it was after leaving and starting up ANOTHER Cray company, not the one SGI bought.
  • It's also called "SIMD" (Single Instruction Multiple Data), and yes, it's been around forever.
    It was discovered by some chip makers a couple of years ago (*grin*).
  • Some programs cannot be structured well enough to be easily parallelized, and for them the time lost resynching the machines in a Beowulf cluster makes it a net loss.

    For example, some data structures are irregularly organized (unlike the average matrix inversion) and are used by algorithms that alter their structure in mid-run. All you can do is parallelize some of the for(;;) loops and then resynch constantly.
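
    A trivial illustration (not from any real code): the first loop below parallelizes beautifully, while the second depends on the previous iteration at every step, so spreading it across nodes just buys you synchronization overhead.

    #include <stddef.h>

    /* Illustrative only.  scale() has independent iterations and is easy to
     * vectorize or farm out; chase() carries a dependency (and a pointer
     * dereference) from one iteration to the next. */
    struct node { double val; struct node *next; };

    void scale(double *a, double s, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            a[i] *= s;               /* every iteration independent */
    }

    double chase(const struct node *p)
    {
        double sum = 0.0;
        while (p != NULL) {
            sum += p->val;           /* each step needs the node found */
            p = p->next;             /* by the previous step           */
        }
        return sum;
    }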

    For those you have no choice but better big iron.

  • Not true. Crays have had Unix-based systems (called UNICOS) for well over a decade. The Cray 2, released around 1986, had a full SysV system as its sole OS. It also had its own I/O facilities with disks, tapes, and all the typical stuff. UNICOS had everything you would expect from a Unix system, such as X, Motif, emacs, and more.


    The confusion may center around three points.

    • The T3D, which wasn't a self-hosted machine and required a Cray front-end. The T3E, however, was self-hosted.
    • More recent Crays use a couple of workstations as operator and maintenance stations.
    • Older Crays, running their OS called COS, were more dependent on front-end systems.

    I hope this clears up some of the confusion.

    -Dean
  • ...based on what I've heard from Steve Jobs we don't even NEED a Cray anymore.



    "Thank you Seargent...you may put away your tank."

  • Although I might agree that this post was a *bit* of a troll, I do wonder... SGI made noises about Cray as a separate unit, selling Cray, or some such. Have those "noises" changed?

    For once in my measly life I am not being sarcastic; I think it's a question that bears asking. Here are some good things happening to a market that others (and, by the way, SGI) have said was "dead." (Do not flame me for that; SGI states in their Q3 1999 report that "The Company believes that the decline in the UNIX workstation and vector supercomputer markets are long-term trends...")

    But that was under Belluzzo. To quote "Hopper": "Oh, I see, under new management."

    Things have been quiet at SGI since they changed CEOs, at least from an outsider's perspective. Maybe they'll get noisy again in a good way.

  • by Anonymous Coward
    When are we going to see a Beowulf cluster of these babies? I'm sorry... I couldn't help it.
  • Hmm
    Plans call for the system to scale to peak performances of multiple tens of teraflops, many times faster than any supercomputer in existence today. One teraflop is equivalent to a trillion calculations per second.
    So how many Linux'ed Pentiums networked with Beowulf [beowulf.org] would be needed to give it a run for the money?
  • Is there someone out there who knows something about encryption (I don't, particularly) who can tell me if the thought of the NSA possessing, say, oh, 50 of these machines means that I need to use a longer key?

    Or will it take 20 billion years for even these to crack something?

    Please enlighten me, I lack knowings.
  • A supercomputer doesn't have what you'd consider an operating system. It's a front-end computer that does all I/O, provides the usual operating system services, and controls the supercomputer. Linux is perfectly practical for the front-end. It would be nice to see a Linux in there.

    Thanks

    Bruce

  • by Anonymous Shepherd ( 17338 ) on Wednesday September 22, 1999 @05:12PM (#1666133) Homepage
    Well, how about a Beowulf cluster of G4s? Say, using the publicized 1 gflop performance of the Velocity Engine, you'd need 1,000 of them to get 1 teraflop, 10,000 to hit the tens of teraflops, and multiples of 10,000 to hit comparable multiples of tens of teraflops.

    That's $1600 each, or $16,000,000 per ten teraflops.

    It may be cheaper on PIIIs, but it would also take more PIIIs.

    I'm assuming it's a headless network at $1600 each, btw.
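
    Back-of-the-envelope version in C, in case anyone wants to plug in their own numbers (the 1 gflop/node and $1600/node figures are just the assumptions above):

    #include <stdio.h>

    /* Back-of-the-envelope cluster sizing; all inputs are the assumptions
     * stated above (advertised 1 gflop per G4, $1600 per headless box). */
    int main(void)
    {
        double gflops_per_node = 1.0;
        double cost_per_node   = 1600.0;
        double target_gflops   = 10000.0;   /* 10 teraflops */

        double nodes = target_gflops / gflops_per_node;
        printf("nodes needed: %.0f, cost: $%.0f\n", nodes, nodes * cost_per_node);
        return 0;
    }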


    -AS
  • Oh, the government is helping? What for? Oh, "support ... from several U.S. government agencies, including the National Security Agency (NSA)." The NSA, eh?

    "In addition to critical government applications..."... this doesn't sound good.


    echelon: FBI CIA NSA IRS ATF BATF Malcolm X
    Militia Gun Handgun MILGOV Assault Rifle Terrorism Bomb
  • 512-bit keys can be broken easily (been done). 768: no way in hell (now). 1024: no way in hell (for a long time). >=2048-bit keys: no way in hell (ever!). This is for RSA; DH seems to be even safer, and elliptic-curve isn't analysed enough yet (anyone?). You should really be more worried about the RNG used to create the key, having a good passphrase, making sure nobody sees you type your _, making sure not to type on a monitored terminal, etc.
    Me? Oh, I use a 4096-bit DH key (the world can always be invaded by aliens that have a ray gun that can turn entire galaxies into large computron accelerators that connect in a near-infinite number of dimensions |) ).

    LINUX stands for: Linux Inux Nux Ux X
  • by Zico ( 14255 ) on Wednesday September 22, 1999 @04:58PM (#1666136)
    See this NASA link [nasa.gov] for more information about vector processing, and how it relates to Crays. Here are a few paragraphs from it:

    Cray Supercomputers perform arithmetic and boolean operations in segmented functional units that divide the operation into a set of substeps or segments, with each segment being performed in one CPU cycle. In a nonvector (scalar) operation, only one segment is performed at any given time because only one set of values is available to the functional unit. The time needed to complete a scalar operation is equal to the number of segments in the functional unit times the CPU clock period of the computer.


    The CRAY C90's functional units in the CPUs are dual units. That is, they are capable of producing two results per CPU cycle during vector operations.

    Vector processing produces high computational speeds by applying pipeline techniques to arithmetic operations. In vector operations, the ability to access sets of data items allows the system to place new values into the functional unit as soon as the previous values have cleared the first segment. This allows every segment in the functional unit to operate simultaneously. The system thus produces two results every clock period (4.2 nanoseconds) once the first set of values in the vectors has moved through all segments in the functional units. The use of vector functional units also reduces the number of instructions that the system must interpret because multiple sets of values are processed by a single instruction.

    A vector can be defined as a set of floating-point data items that the computer accesses as a unit, with the same operations being performed for each value.
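
    Putting rough numbers on that, here is a little C model (the 4.2 ns clock and two results per clock come from the passage above; the segment count is a made-up example):

    #include <stdio.h>

    /* Rough pipeline timing model.  The 4.2 ns clock and 2 results/clock come
     * from the C90 description above; the segment count is invented. */
    int main(void)
    {
        double clock_ns = 4.2;
        int    segments = 8;          /* hypothetical functional-unit depth */
        int    n        = 1000;       /* vector length                      */

        /* Scalar: every operation walks through all the segments by itself. */
        double scalar_ns = (double)n * segments * clock_ns;

        /* Vector: once the pipe is full, two results pop out per clock. */
        double vector_ns = (segments + n / 2.0) * clock_ns;

        printf("scalar: %.0f ns   vector: %.0f ns   speedup: %.1fx\n",
               scalar_ns, vector_ns, scalar_ns / vector_ns);
        return 0;
    }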

    Cheers,
    ZicoKnows@hotmail.com

  • It's what is otherwise called an "array processor". You give it a big array (a vector) and it performs the same operation on every element of the entire array, very quickly. I've seen them used for image processing, but they are usable for many other sorts of problems. The concept has been around forever, of course they keep getting faster.
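
    In code terms, the pattern an array/vector processor is built for is as simple as this toy image-processing loop (purely illustrative):

    /* One operation applied uniformly across a whole array of data, e.g.
     * blending two images pixel by pixel.  Toy example only. */
    void blend(const float *a, const float *b, float *out, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            out[i] = 0.5f * (a[i] + b[i]);   /* same op for every element */
    }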

    Bruce

  • by teraflop user ( 58792 ) on Wednesday September 22, 1999 @07:24PM (#1666139)
    The way the market has been going recently, it was beginning to look as though US vector supercomputers were dead, and only the Japanese were still advancing them. The T90 was the last major vector machine, but had memory synchronisation problems with more CPUs. The SV1 had some interesting specs, but I don't know of any site which actually installed one - certainly press releases were thin on the ground.

    Parallel machines, such as the Cray T3E, the IBM SP2+, and to a lesser extent Beowulf clusters, just give so many more Gflops/$. But as has been pointed out, they are completely unsuited to some problems, for which you simply need all your power concentrated in a small number of CPUs.

    My guess is that there is not enough of a market for a new US vector supercomputer, and the US government is stepping in so as not to become dependent on imported hardware. If most SV1 installations are government, it might explain why we've heard so little about them.

    BTW, many older supercomputers were single-user machines, which required a front end running a multi-user operating system to schedule jobs. However, all recent machines, including the C90, T90, T3E, and I suspect the SV1 and SV2, run their own operating system (they are self-hosting). In this case it is UNICOS, a Cray Unix which is gradually being merged with SGI IRIX.
  • I cannot pretend to understand all the technology, so I won't. But I still have an opinion from a historical point of view. Just before Mr. Cray died, a university in Alaska procured some funding for a Cray. After the funding was given, they decided that an alternative supercomputer would be more appropriate (Japanese, I believe; I apologize for the lack of information). At the point of learning this I was indignant, for I believe it is a matter of heritage and technical ground-breaking on the information frontier. It may be just a bit of national pride, but damnit, they were the first. I personally felt a loss the day that he died, even though I literally had no ties to Cray. I assumed that day the company would die. If they can find a way to be the pioneer that Mr. Cray once was, I am very happy for them and everyone, as we will all benefit. That is, if they truly are pioneering and not just giving their last dying gasp.
  • Linux advocates - please refrain from posting things which are utter nonsense. You just make the rest of us look bad.

    The original Cray 1 and 1/S were like this, being parasitic on an IBM or VAX host. Modern supercomputers (Cray, Fujitsu, NEC, ...) all run a Unix OS natively. Cray's UNICOS is at version *10*.

    Similarly, the first Cray parallel machine, the T3D, was parasitic off the back of a Cray vector machine, a J90 or C90. The T3E, however, devotes some of its Alpha CPUs to being I/O processors.
  • by RallyDriver ( 49641 ) on Wednesday September 22, 1999 @08:41PM (#1666142) Homepage
    Actually, I think the vector pipes on a Y-MP are something like 8 or 10 units deep.

    These computers are designed for one type of calculation: matrix algebra, which requires simple processing (multiply, add, invert) on enormous 2D arrays of numbers. The important point: the working set of these problems is often the size of the matrices, so caches are ineffective. Cray vector machines do not have ANY data cache between the vector units and main memory.

    The feature of most vector machines that no one has really pointed out yet is the way the memory system keeps those vector units fed. Unlike a microprocessor, which relies on 2 or more layers of cache, the vector machine is quite capable of streaming data from memory fast enough to keep the processor 100% busy.

    The vector instruction results in a number of vector fetches being issued to the memory controller, which is told to fetch a strided vector from memory (start at address x, every ith word, n words in total). The memory controller issues requests to individual banks in an overlapping fashion. Like the vector FPU, after it gets over its latency it starts banging the words out once per clock cycle.

    The way this is done is to have a huge interleave in the memory; Cray T90s use either 1024 or 2048 banks. So long as any one bank is not hit more than once per 100 ns or so (the cycle time of the memory), the memory controller is capable of delivering multiple streams of 64-bit words at full clock speed (T90s can have up to 32 CPUs). Typically, to ensure this, arrays in programs are structured to stride along the first dimension (FORTRAN, remember) or, where this isn't possible, the array dimensions are chosen to be prime numbers.
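
    Here is a toy C model of why that works and why strides matter (1024 banks as above; the stride values and access count are just examples):

    #include <stdio.h>

    /* Toy model of interleaved memory banks: element i of a strided access
     * stream lands in bank (i * stride) % NBANKS.  Illustrative only. */
    #define NBANKS 1024

    static int banks_touched(int stride, int n)
    {
        char hit[NBANKS] = {0};
        int i, count = 0;
        for (i = 0; i < n; i++) {
            int b = (int)(((long)i * stride) % NBANKS);
            if (!hit[b]) { hit[b] = 1; count++; }
        }
        return count;
    }

    int main(void)
    {
        /* Stride 1 (first-dimension access) spreads over all the banks; a
         * power-of-two stride keeps hammering the same few banks; a prime
         * stride (prime array dimension) avoids the problem. */
        printf("stride   1: %4d banks used\n", banks_touched(1, 4096));
        printf("stride 512: %4d banks used\n", banks_touched(512, 4096));
        printf("stride 521: %4d banks used\n", banks_touched(521, 4096));
        return 0;
    }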

    The thing that sets these machines apart is not processor speed; the peak speed in MFlops on an NEC SX-5 is only about 2-3 times that of an Alpha AXP. The thing that makes them special is the memory bandwidth to sustain that performance.

    To use my favourite automotive analogy, if a PC is a small hatchback, then a supercomputer is an 18-wheeler, not a Ferrari.
  • Just wondering, because another post wanted to compare one to, say, a Beowulf cluster.

    If you use a G4 PowerMac and their highly advertised 1 gflop rating as a base, at $1,600 each, to reach a 10 tflop rating you would need 10,000 networked machines, so for each 10 tflops you would be spending $16,000,000. I don't know how a comparable PIII or Celeron would perform, though, or at what price.

    16 mil is a lot to spend. But for government purposes it may just be a drop in the bucket.


    -AS
  • by Anonymous Coward
    According to my calculations, it would take 1,700,000 386sx-16's to be competitive with that supercomputer. The system would have 27 megawatts of waste heat, enough to vaporize lead.
  • It was always said in the supercomputing community that the US government would never let Cray die. Buying in supercomputers from Japan, the only other source of big iron, just wouldn't wash with the likes of the NSA.

    First it was blocking the NCAR deal, and now this.

  • For those who take a passing interest in the history of computing, or who find people like Richard Stallman entertaining, the story of Seymour Cray the man is quite interesting. His methods of designing hardware (consulting the elves) were definitely unique.

    I don't think we'll see anyone quite like that again.
  • No, vector processors are CRAY; ccNUMA is SGI's forte.
  • It just encourages them.

    Sigh.

    *breath*
    *breath*
    *breath*


    -AS
  • I doubt that the NSA uses Crays for brute-force attacks on keys. There are cheaper and more efficient machines for that sort of problem. The NSA has an in-house chip fabrication facility. They could crank out custom chips to do brute-force key searches.

    From the unclassified bits of information available on the NSA, they are very interested in doing sophisticated statistical analysis on encrypted data. I suspect that is the main task of the vector supercomputers.

    The NSA is the biggest employer of mathematicians in the USA. They have probably developed techniques of attacking ciphers that are considerably better than brute force attacks.

  • Certain Japanese companies were attempting to flood the market with their vector processors by selling them in the US at below cost. This prompted a number of companies to feel that the Japanese presented a better solution. They were taken to court over it and lost, resulting in a 400% or so tariff on their vector-processing supercomputers. I'm wondering if this Alaska case was one such case, and what the end result was.
