Supercomputing Education

Supercomputer On-a-Chip Prototype Unveiled

An anonymous reader writes "Researchers at University of Maryland have developed a prototype of what may be the next generation of personal computers. The new technology is based on parallel processing on a single chip and is 'capable of computing speeds up to 100 times faster than current desktops.' The prototype 'uses rich algorithmic theory to address the practical problem of building an easy-to-program multicore computer.' Readers can win $500 in cash and write their names in the history of computer science by naming the new technology."
This discussion has been archived. No new comments can be posted.

  • Name ? (Score:2, Insightful)

    by Hsensei ( 1055922 )
    What's wrong with Supercomputer On-a-Chip (c) ?
    • Re: (Score:2, Funny)

      by Anonymous Coward
      What about people-ready chip?
    • by account_deleted ( 4530225 ) on Thursday June 28, 2007 @10:20PM (#19684719)
      Comment removed based on user account deletion
      • by mikael ( 484 )
        But be careful not to get confused with:

        Spearmint Oil Administrative Committee
        Sons of Alpha Centauri (band)
        State of the Art Car
        Submarine Officer Advance Course
        System-On-A-Chip

        (From SOAC Acronym [thefreedictionary.com])
    • I think SOC would SUCK as a product name.
    • Re:Name ? (Score:5, Funny)

      by OctoberSky ( 888619 ) on Thursday June 28, 2007 @11:04PM (#19685037)
      Babywulf Cluster
    • Re:Name ? (Score:4, Funny)

      by hAckz0r ( 989977 ) on Thursday June 28, 2007 @11:24PM (#19685223)

      What's wrong with Supercomputer On-a-Chip (c) ?

      Oh great, I can hear the PR advertisements already; "Put a SOC in it".

      • 'Oh great, I can hear the PR advertisements already; "Put a SOC in it"'

        Better say that instead of "Computer On-a-Chip"

        ServiceDesk Tech: "Sir, I think your COC is over heating and needs to be replaced."
    • I like, for obvious reasons and it's quite appropriate here, "Deep Thought"
    • My vote is for CLustered Units of Multiple Processors.

      CLUMP :P
    • by moeinvt ( 851793 )
      Huh???

      Looks to me like it's a "supercomputer" on a PCB? They wired a bunch of processors together on a circuit board (the size of a license plate). That isn't a "chip". How about SOB?
      • by moeinvt ( 851793 )
        I did RTFA, or more like "skimmed through it" before my comment.

        With a more careful read however, I noticed that they explicitly called this a "prototype". Not many universities have their own wafer fabs, so it makes sense. More importantly, they didn't give all of the specs on the processors used. If they're small enough, maybe this could be implemented on a single chip.
  • "Cell" (Score:4, Insightful)

    by Doc Ruby ( 173196 ) on Thursday June 28, 2007 @09:48PM (#19684509) Homepage Journal
    I call the "supercomputer on a chip" the "Cell microprocessor [wikipedia.org]". Of course, next year, it won't be so super. But there will be a new one that's really super.
    • by julesh ( 229690 )
      To be fair, if this crowd had a version of their chip implemented on 65nm silicon, it would probably outperform the Cell in several key areas. For a start, it has a maximum parallelism of 64 simultaneous instructions -- I believe the Cell can only reach 10 (?). Of course, writing a real program that takes advantage of that much parallelism is a little tricky...
      • Re: (Score:3, Insightful)

        by Doc Ruby ( 173196 )
        How is that "fair"? By the time this new chip is even properly named, IBM will have Cell chips in 45nm silicon. Partly because their engine is simpler. And the Cell is designed for scalable multicore/chip parallelism. Its main magic is its coherent, superfast "elements" bus, which retains coherency even at 1.6Tbps across multiple cores and chips. IBM has 4-core chips in pairs already deployed in public, and 128-core chips in the lab, where a massive new top-predator supercomputer is being built on the new a
  • Taken? (Score:4, Funny)

    by bryan1945 ( 301828 ) on Thursday June 28, 2007 @09:52PM (#19684539) Journal
    "Readers can win $500 in cash and write their names in the history of computer science by naming the new technology."

    Is "Clippy" taken?
    • Re:Taken? (Score:4, Funny)

      by trolltalk.com ( 1108067 ) on Thursday June 28, 2007 @11:37PM (#19685353) Homepage Journal

      Chipzilla would be good, except that's what everyone calls Intel. I guess we'll have to settle for "CowboyNealOnAChip". Or "theChipThatCanActuallyRunJavaProgramsWithinTheUniversesLifetime"

      What gets me is that there's a dropdown in the entry form to choose your country, as well as asking you for your state or province, but the rules state:

      WHO MAY ENTER: Open to all legal residents of the 50 United States (including the District of Columbia) who are 18 years or older in their respective US state at time of entry. Individuals employed by the University of Maryland, College Park. ("University") as faculty, exempt or non-exempt employees, and members of their immediate family or persons living in the same household, are not eligible to enter or win.

      I hope their chip design is better thought out than the contest form.

    • by crgrace ( 220738 )
      There actually was an innovative microprocessor called "Clipper". It was a nice architecture...

      http://en.wikipedia.org/wiki/Clipper_architecture [wikipedia.org]
  • WTF? (Score:5, Insightful)

    by msauve ( 701917 ) on Thursday June 28, 2007 @09:54PM (#19684559)
    We have microcomputers and supercomputers and nothing in between? Seems to be a bit of hyperbole involved here.
    • Re:WTF? (Score:4, Funny)

      by gardyloo ( 512791 ) on Thursday June 28, 2007 @10:02PM (#19684615)

      We have microcomputers and supercomputers and nothing in between? Seems to be a bit of hyperbole involved here.
      Most. Insightful. Post. Ever. ;)
    • by booch ( 4157 ) *
      Damn. WTF is a much better name than I was going to suggest.
  • My Name (Score:5, Funny)

    by the eric conspiracy ( 20178 ) on Thursday June 28, 2007 @09:57PM (#19684581)
    'Space Heater'

  • Future Slashdotting in the Waiting (FSW).
  • I RTFA... It seems to handwave so much about parallel computing that it seems they haven't discovered anything. All I see is "clock frequency can't increase, so we're going parallel"... Surely, this can't be the extent of their research. The article claims it's 'easy to program', but there are zero specifics about why that would be the case. Can anyone tell me what they've done here (if anything)?
    • Re: (Score:3, Interesting)

      by Holi ( 250190 )
      Well, you should learn to follow links.
      It was quite easy from the article to find more information [umd.edu] about the project.
    • by James McP ( 3700 ) on Thursday June 28, 2007 @11:51PM (#19685473)
      Here's the deal.

      Up 'til now, Parallel Random Access Model (PRAM) computing has been a thought model: a theory of parallel processing that had never been built in hardware. Some people had written programs to emulate a PRAM computer, but they were not complete versions.

      It could work at a snail's pace and still be a technological accomplishment, as it is the very first complete, working, hardware PRAM computer. It's on par with the Z3, Colossus and ENIAC, the first programmable computers (German, British, American, in historical order).

      Fortunately, they made the algorithms work well, or at least, if the press release is to be believed, work so that 64 processors at 75 MHz could produce 100x the performance of a current desktop on at least one particular function. Which is pretty impressive in first-time hardware even if it turns out to be an obscurely used math function known only to about a dozen coders.
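For readers who haven't met the model: a textbook PRAM algorithm is the O(log n)-round prefix sum. The sketch below is not the UMD hardware or its software, just a serial Python simulation of what each PRAM round would do if every inner-loop update ran on its own processor simultaneously.

```python
# Serial simulation of a classic PRAM algorithm: inclusive prefix sum
# in O(log n) rounds. On a real PRAM, every update inside one round is
# independent and would execute in parallel on its own processor.
def pram_prefix_sum(values):
    a = list(values)
    n = len(a)
    step = 1
    while step < n:
        # One PRAM round: read the old values, write the new ones.
        nxt = a[:]
        for i in range(step, n):
            nxt[i] = a[i] + a[i - step]
        a = nxt
        step *= 2          # log2(n) rounds in total
    return a

print(pram_prefix_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

Four elements need only two rounds instead of three serial additions; the gap widens quickly as n grows.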

    • I read most of the article, until it started repeating itself and I still hadn't read anything new. I see in some other comments references to PRAM so I'll check that out, but they sure didn't do themselves any favors with that article.
  • by Lije Baley ( 88936 ) on Thursday June 28, 2007 @10:01PM (#19684605)
    Vaporac. Vaporlon. Vaporium. Whatever...
    • This brings up a good point. Will Duke Nukem Forever require this chip? It's likely to be on the minimum specs for Windows 2012.
      • Only if you have the Smokum Mirrorum add-on.
      • by Dunbal ( 464142 )
        It's likely to be on the minimum specs for Windows 2012.

              The header says it's 100 times faster than current desktops, so I doubt this chip will be powerful enough to run Windows 2012 anyway.
    • Re: (Score:2, Funny)

      Or you could add in a temperature joke and call it the Vaporizer.
    • i860? (Score:3, Interesting)

      by Evil Pete ( 73279 )

      Anyone remember the hype of the i860 [wikipedia.org]? Great on paper, but not so great in reality. I really hope this works though, von Neumann architecture was always supposed to be a stop-gap (even vN said so I think).

      • Re: (Score:3, Interesting)

        by julesh ( 229690 )
        Anyone remember the hype of the i860? Great on paper, but not so great in reality. I really hope this works though, von Neumann architecture was always supposed to be a stop-gap (even vN said so I think).

        As far as I can tell, there's no really significant departure from von Neumann architecture here. They have a processor capable of executing 64 concurrent threads, 'fork' and 'join' instructions, and a version of C that has been extended to be able to use them. I'm not sure I really see what's so revolutio
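The fork/join idiom the parent describes can be sketched in ordinary Python threads. This is not the project's extended C (their syntax isn't shown in the article), just the general pattern: fork workers over independent slices, then join before combining results.

```python
# Not the UMD chip's language -- just the generic fork/join idiom:
# fork N workers over disjoint slices, join, then combine.
import threading

def parallel_sum_squares(data, workers=4):
    data = list(data)
    chunk = (len(data) + workers - 1) // workers
    partial = [0] * workers          # one slot per worker, no locking needed

    def worker(w):
        lo, hi = w * chunk, min((w + 1) * chunk, len(data))
        partial[w] = sum(x * x for x in data[lo:hi])

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(workers)]
    for t in threads:
        t.start()   # "fork"
    for t in threads:
        t.join()    # "join"
    return sum(partial)

print(parallel_sum_squares(range(10)))  # 285
```

The point of a hardware fork/join instruction would be making those two steps cost a few cycles instead of the thousands a thread library spends.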
    • by inKubus ( 199753 )
      Cirrus, Cumulonimbus, Stratus. Uh, clouds. Made of vapor.
  • I name it (Score:4, Funny)

    by Kohath ( 38547 ) on Thursday June 28, 2007 @10:12PM (#19684681)
    Bob
  • All the processors in the world won't do you any good if you can't write the software to harness them, and conventional lock-based techniques are really really easy to screw up. I'm really curious to see what those 'rich algorithmic' solutions they've got are.
    • by Ayanami Rei ( 621112 ) * <rayanami AT gmail DOT com> on Thursday June 28, 2007 @10:26PM (#19684765) Journal
      You know, autovectorization looks good on paper. But for most tasks, it really doesn't net you any benefit unless you can separate all your work into non-overlapping chunks. You can't have any interdependencies on your working set (or risk expensive, non-scalable locking), and if you're all pulling from a single data source to split up the analysis work you'll spend a lot of time in contention for the pipe to that resource.

      For example, it wouldn't make searching a database (scratch that, searching any data set) any faster unless the index was already pre-split among the processing units.

      In this architecture the processing units have the same bus to RAM and disk on the front and back ends and have to deal with contention.

      Your system is only as fast as the slowest serial part. Typically this is storage media, a network connection, or a memory crossbar. Processors really are fast enough for the non-embarrassingly parallel stuff. They are at the right ratio with respect to the other slower buses to do most general purpose work.

      If you want to do more than that then it's other things -- storage media, memory, I/O buses -- that need to be multiplied in density and number. Only then can we see higher throughput.

      Autovectorization is only good for things we already have offloading for anyway (TCP encryption, graphics, sound)... and for those general purpose cases like in Game AI where you might want a linear algebra boost NVidia has beaten these guys to the punch with the GP stream processing in the newest chips and the very flexible Cg language/environment.
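The "only as fast as the slowest serial part" observation is Amdahl's law, and the numbers are sobering. A quick calculation (the 90%-parallel figure below is an arbitrary illustration, not a claim about this chip):

```python
# Amdahl's law: if fraction p of the work parallelizes perfectly,
# speedup on n processors is 1 / ((1 - p) + p / n).
# The serial remainder (1 - p) puts a hard cap on the gain.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# With 90% parallel work, 64 processors deliver under 9x,
# and even infinitely many processors can never exceed 10x.
print(round(amdahl_speedup(0.9, 64), 1))     # 8.8
print(round(amdahl_speedup(0.9, 10**9), 1))  # 10.0
```

That is why a 100x claim implies the benchmarked workload was over 99% parallelizable.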
    • At the moment, our software is mostly designed as a script. 1, 2, 3 we push the instructions onto the CPU. As you say, sequential.

      But we already have a different way of thinking about getting information, client/server. With the Internet, millions of people get the information they need by asking a server somewhere. Instead of applications running sequentially on a cpu, shouldn't they be parallel by default, little bits of client code querying and updating little bits of server code.

       
      • Re: (Score:3, Insightful)

        by booch ( 4157 ) *
        Wow. You got half way with your idea, but didn't make it all the way.

        Right now, with most programming languages, we tell the computer how to compute the result. We generally do this with a linear list of steps for the computer to take. But that's not the only way to write a program. Another way is to tell the computer what we want it to compute, and let it figure out the best way how to do that. This sounds pretty crazy at first, but it's actually been done. Take a look at the Prolog and Haskell programming
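The "how" vs. "what" contrast the parent is driving at can be sketched even in Python (rather than Prolog or Haskell); the function names below are made up for illustration.

```python
# Imperative "how": spell out every step, one after another.
def largest_even_how(numbers):
    best = None
    for x in numbers:
        if x % 2 == 0 and (best is None or x > best):
            best = x
    return best

# Declarative-ish "what": state the property of the result and let
# the runtime decide the steps. In a pure language like Haskell,
# the independent pieces could even be evaluated in parallel.
def largest_even_what(numbers):
    return max((x for x in numbers if x % 2 == 0), default=None)

print(largest_even_how([3, 8, 5, 12, 7]))   # 12
print(largest_even_what([3, 8, 5, 12, 7]))  # 12
```

The second form gives a compiler far more freedom to parallelize, because it promises nothing about evaluation order.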
  • Overhyped (Score:5, Insightful)

    by rivenmyst137 ( 467812 ) on Thursday June 28, 2007 @10:20PM (#19684723)
    Oh, for god's sake. I don't understand why this is getting so much press. It was stupid when it went up on Digg, and it's stupid that it's showing up here. This isn't substantially different from any of the other parallel architecture and programming work that's been going on for the last two decades. Their benchmarks are against embarrassingly parallelizable algorithms like matrix multiplies and randomized quicksort, things that any half-intelligent lemur (with a math and cs class or two) could get to run quickly. The hard part is speeding up your average desktop application which, I guarantee you, is not spending the majority of its time doing matrix multiplies.

    On top of that, their "parallel extension of von Neumann" amounts to adding primitives to start and stop threads into the language. Again, any half-intelligent lemur (with a slightly different skill set from the first) could have done that. And I think a few actually have (at the risk of comparing language researchers to lemurs). It doesn't solve the underlying problem.

    Oh, and did we mention no floating point and the lack of any memory bandwidth to get data into and out of this thing?

    This is over-hyped research and shameless self-promotion, and for some weird reason the press seems to be buying it. Stop it.
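For context on why matrix multiply is the "embarrassingly parallelizable" poster child: every output cell depends only on one row of A and one column of B, so all cells can be computed independently with no locking. A sketch (CPython's GIL means threads here show the structure, not a real speedup):

```python
# Why matrix multiply parallelizes trivially: C[i][j] reads only
# row i of A and column j of B, so every cell is an independent task.
from concurrent.futures import ThreadPoolExecutor

def matmul_parallel(A, B):
    n, m, p = len(A), len(B), len(B[0])

    def cell(ij):
        i, j = ij
        return sum(A[i][k] * B[k][j] for k in range(m))

    with ThreadPoolExecutor() as pool:
        flat = list(pool.map(cell, [(i, j) for i in range(n) for j in range(p)]))
    return [flat[i * p:(i + 1) * p] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_parallel(A, B))  # [[19, 22], [43, 50]]
```

A typical desktop app has nothing like this structure, which is the parent's complaint.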
    • This is over-hyped research and shameless self-promotion, and for some weird reason the press seems to be buying it

      Because it's a contest. Free publicity. Hooray!

      Their benchmarks are against embarrassingly parallelizable algorithms like matrix multiplies and randomized quicksort, things that any half-intelligent lemur (with a math and cs class or two) could get to run quickly

      Dang what kind of lemurs do they have where you're from? We must find them and make them our president! Oh wait, you say we
    • Re:Overhyped (Score:5, Informative)

      by Doppler00 ( 534739 ) on Thursday June 28, 2007 @11:36PM (#19685345) Homepage Journal
      Yeah, this article is pretty weak. "Woohoo! Look, we took a picture of a last-generation FPGA development board and wrote some nifty programs for it that prove our pet project!" Very little of this sort of thing makes it outside of academia. I'm not saying this research is unworthy, just not newsworthy.

      And a "parallel extension of von Neumann" already exists. It's called OpenMP, and it still takes a skilled programmer to use well.

      Look at that board... it uses "SmartMedia" yeah... that means that:

      1. This is OLD research
      2. The board developers didn't have a clue
      3. A very old development board is being used.
    • by uarch ( 637449 )
      After skimming through the whitepapers I have to agree with you.

      It reminds me a little of the dataflow architectures of the 70's. A quick google search will probably give you several reasons why it wasn't very effective in the real world. This design will suffer from many of the same problems.

      These are the types of white papers we used to tear apart for fun when I was in grad school. They boast all these breakthroughs that aren't very different from anything else that's done (not uncommon even when great
    • Is how it benchmarks against, say, an nVidia Tesla (a GeForce 8800, with more, faster memory and no DVI connectors). I mean ok, you want to limit to just parallel kinds of benchmarks I can live with that, after all it is ok to design more specialized chips. However then let's see it go against a chip designed for that. Ya, an 8800 will eat shit on a calculation that's a single thread with a lot of branching. However you give it a task that can be highly parallelized and is straight through computation (like
  • "Suppose you hire one person to clean your home, and it takes five hours, or 300 minutes, for the person to perform each task, one after the other," Vishkin said. "That's analogous to the current serial processing method. Now imagine that you have 100 cleaning people who can work on your home at the same time! That's the parallel processing method."


    Brilliant! Even my mother had not thought of such an idea.
    • Suppose you had 100 cleaners in your house. They'd all be tripping over each other and unplugging each other's vacuum cleaners to plug in their own. And all their minivans would cause a traffic jam in your driveway.

      Pretty much the same with any multi-processor technology: shared resources like buses are the major limitation.

      • Re: (Score:3, Interesting)

        by rbanffy ( 584143 )
        Sun had something with tiny radio interconnects between chips. This way, they could have thousands of "pins" on the chip and the only metal pins you would need would be power and ground. If I remember correctly, I had a server whose memory had to be upgraded about 8 (or 9) modules-with-lots-of-pins at a time, so, wide buses are nothing new.

        Intel also had something about optical interconnects, which are also nice, since you can place your "connectors" anywhere in the chip and not just around the borders and, if
    • by Repton ( 60818 )

      "Suppose you hire one person to clean your home, and it takes five hours, or 300 minutes, for the person to perform each task, one after the other," Vishkin said. "That's analogous to the current serial processing method. Now imagine that you have 100 cleaning people who can work on your home at the same time! That's the parallel processing method."

      The kitchen cleaner will grab the bucket and the bathroom cleaner will grab the mop, and neither will be able to get any work done. The rest will be tripping

    • It's also retarded (Score:3, Insightful)

      by Sycraft-fu ( 314770 )
      Since of course that breaks down. Actually maybe it isn't so retarded since the same thing is true in many computing problems.

      For example if you take the cleaning situation sure, adding a second cleaner will nearly double the speed it gets cleaned at. Adding four will probably close to quadruple it. However, it starts to break down after a while. At first the gains just start slowing down, as there's more people they have to spend more time talking and dividing up who does what than actually working, as wel
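The breakdown the parent describes can be put in a toy cost model: shared work divided among n workers, plus a coordination cost that grows with each worker added. The 0.5-minute-per-worker overhead below is an arbitrary illustration, not a measured figure.

```python
# Toy model of the cleaning analogy: 300 minutes of work split across
# n workers, plus coordination overhead that grows with n. Past the
# sweet spot, adding workers makes the whole job *slower*.
def cleaning_time(work=300.0, coord_per_worker=0.5, n=1):
    return work / n + coord_per_worker * n

times = {n: round(cleaning_time(n=n), 1) for n in (1, 4, 25, 100)}
print(times)  # {1: 300.5, 4: 77.0, 25: 24.5, 100: 53.0}
```

In this model the optimum is about 25 cleaners; Vishkin's 100 cleaners take twice as long as 25 do.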
  • "OMG I gotta have It (TM)" or Deep Silicon :)
  • Second paragraph of the rules:

    THE FOLLOWING CONTEST IS INTENDED FOR PLAY IN THE UNITED STATES AND SHALL ONLY BE CONSTRUED AND EVALUATED ACCORDING TO UNITED STATES LAW. DO NOT ENTER THIS CONTEST IF YOU ARE NOT LOCATED IN THE UNITED STATES.

    Even though there is a country field in the form. WTF?

    They don't mention that on the form page, either. It peeves me just a little bit that they would do that, I mean, how many people actually read these conditions things, anyway? Can't say I'm surprised, though.
  • But I doubt that's worth $500...
  • by cashman73 ( 855518 ) on Thursday June 28, 2007 @11:43PM (#19685407) Journal
    I will either nominate the name, "Giant Douche," or, "Turd Sandwich," depending on which one slashdotters vote for.
  • http://www.dinigroup.com/index.php?product=DN8000k10pci [dinigroup.com]
    There you go! It's just a Virtex-4 development board. Nothing special. I mean, if they would have used this graphic http://www.dinigroup.com/DN9000k10PCI.php [dinigroup.com] it would have been a little more impressive.
  • "VaporWire"

    "Parallel Lies Processor"

    "iProcessor"
  • How about "Wishful Thinking"?

    They describe the same old massively parallel computing idea but gloss over the problems involved. This old chestnut keeps coming to the surface every few years but nobody ever seems to show any working hardware...

  • Transputer? (Score:4, Informative)

    by MadMidnightBomber ( 894759 ) on Friday June 29, 2007 @01:00AM (#19685879)
  • FPGAs (Score:3, Informative)

    by CompMD ( 522020 ) on Friday June 29, 2007 @01:56AM (#19686149)
    It appears to be a few FPGAs. With FPGAs, you can optimize the logic to represent algorithms for faster execution than on general-purpose processors. Simply, you use more of the gates available on the chip. That appears to be what these guys are doing. It also appears that there is a single memory controller (I think that is what the QuickLogic chip is) and there is only one DRAM module installed on the board. It would be interesting if the board had a unified memory architecture. There is a separate Xilinx Spartan FPGA on the board that does who-knows-what, but I wouldn't be surprised if it was involved in communication with the processing chips. Of course, this is speculation, but it would seem logical for a board layout.

    Just my thoughts.
  • The stupid web form always complained about illegal characters in a field without specifying which one.
  • Worst Analogy Ever (Score:2, Insightful)

    by FuzzyDaddy ( 584528 )
    From TFA:

    Suppose you hire one person to clean your home, and it takes five hours, or 300 minutes, for the person to perform each task, one after the other," Vishkin said. "That's analogous to the current serial processing method. Now imagine that you have 100 cleaning people who can work on your home at the same time! That's the parallel processing method.

    100 people trying to clean my house at the same time would be slower than 1, because no one would be able to move or breathe. Which is exactly what make

  • From their PDF introduction [umd.edu]:

    The number of cores is expected to double every 18 months for the next decade and reach 256 in a decade.

    Right. Not sure I'm with you there. 256 cores is a lot, and I doubt that the infrastructure of (e.g.) memory bandwidth and power supply would be able to keep up with such demands.

    Clock rates of commodity processors have stopped improving since mid-2003. This followed several decades in which clock rates have doubled every 18 months.

    Right. You know, I'm sure the fastest desktop

  • I've always wanted a computer named Steve...
  • Chuck Norris does not sleep. He waits.

    This could be the bestest thing in supercomputing EVAR!!1!one!1
  • Here's the website of a class at Berkeley that is designing a totally new chip architecture, something actually innovative and quite interesting in my opinion. http://research.cs.berkeley.edu/class/fleet/ [berkeley.edu] It's still a few years away from being practical, but they are hoping to have in-silicon test chips very soon now.
  • All that really matters here is how fast it runs Microsoft Word and Excel. You may not like it. You may want to mod me Troll or Flamebait, but to 80%+ of the population, as long as their PC brings up e-mail faster than they can type, shows movies without dropped frames, and quickly runs Word and Excel, that's all they care about. Blazing Folding@Home scores simply don't translate to a computing experience improvement. It's either faster enough in MSOffice, or it isn't. Sad, but very true.
