Education

Beowulf Pioneer Lured From Cal Tech to LSU 163

An anonymous reader writes "Thomas Sterling, a pioneer of clustered computing, including /.'s beloved Beowulf cluster, has accepted a fully-tenured professorship at Louisiana State University's Center for Computation and Technology, ditching his old post at Cal Tech. From TFA: "At LSU, he hopes to develop the next generation of high-performance computers that will give birth to true artificial intelligence. By making computer chips more efficient, Sterling believes he can change computing by "one to three orders of magnitude" that will transform how humans interact with technology.""
This discussion has been archived. No new comments can be posted.

  • Waterpower (Score:5, Funny)

    by DoofusOfDeath ( 636671 ) on Monday August 29, 2005 @11:03AM (#13427500)
    I think for now he'd better focus on developing sea-water powered computers :)
    • Looks like I missed my chance to post the obvious joke of the morning. ;) C'est le /.
    • Actually, I'd love to see how much power he could get out of a Beowulf cluster of hurricanes!
    • by Anonymous Coward on Monday August 29, 2005 @01:02PM (#13428490)
      LSU sounds like some backwater 2-bit university that can accomplish almost nothing. Most of you geeks are thinking that nasty thought at this very moment.

      Allow me to clear up your thinking. Consider Proteus [lsu.edu]. It is a high-performance simulator written at MIT for MIPS. Some graduate student at LSU ported it to SPARC.

      This work is stunningly brilliant and egalitarian.

      In the late '80s and early '90s, the eggheads at MIT and Stanford felt that they need only develop simulators for their clique-ish processor: MIPS. Yet the rest of the world was using SPARC. In this way, the eggheads cornered multiprocessor research for themselves.

      LSU actually opened up multiprocessor research to the rest of the world by building a simulator that actually runs on the SPARC machines.

      To be fair, I should note that a small team at Stanford did the same thing with ABSS, another simulator that runs on SPARC machines.

      • Projects like this should (almost) never be abandoned. Why? Because:
        • It can be used as a starting point for SE programming exercises
        • When the maintainer moves on, it can be offered up for someone else to take over
        • There's a lot of work involved in designing parallel processing systems, and a tool like this would savagely cut development times - particularly for non-experts building their own clusters

        Projects that fold often do so not because they're no good, but because the right people never heard about them

    • There you go....

      I was about to say wind-powered supercomputers.

      Imagine a beowulf cluster of Cat-5 hurricanes.
  • Whoa. (Score:3, Funny)

    by Seumas ( 6865 ) * on Monday August 29, 2005 @11:04AM (#13427505)
    Imagine a beo.... oh fuck it. Nevermind.
  • by Anonymous Coward on Monday August 29, 2005 @11:05AM (#13427514)
    ...was last seen moving northeast towards Mississippi at a brisk pace. Sterling should adjust his trip accordingly.
    • Not to detract from your joke (it deserves the 5, Funny rating), but to clarify...

      First, hurricanes don't cause that much flooding. Most of the building damage results from the 50-100 mph winds. You might see a couple of feet of water in the lower-lying areas of New Orleans (below sea level).

      Second, LSU is in Baton Rouge, about an hour's drive from NO. BR is pretty much okay, except for the winds. I'm in Lafayette, another hour away, and you couldn't even tell there was a hurricane (except that the clouds a
  • by Comatose51 ( 687974 ) on Monday August 29, 2005 @11:06AM (#13427519) Homepage
    A nerd improves his chances of getting laid...
  • by Anonymous Coward
    Forget any tech achievements this guy has. If he's the Beowulf [wikipedia.org] pioneer, that means he has to be something like a thousand years old. I want to know what sort of anti-aging techniques this guy uses.
  • Ummm (Score:3, Insightful)

    by $RANDOMLUSER ( 804576 ) on Monday August 29, 2005 @11:06AM (#13427524)
    Isn't this the wrong week to be moving to Louisiana?
    • Re:Ummm (Score:5, Insightful)

      by tgd ( 2822 ) on Monday August 29, 2005 @11:11AM (#13427574)
      Real estate is probably going to be cheap.

      • Right, like here in Florida where housing costs have dropped..

        What's that? Huge price increases? What kind of free market economy is this?
        • by tgd ( 2822 )
          Well it was meant as a joke. I'm not the one who moderated it +5 Insightful.

          It's a sad state of affairs on /. when an obvious attempt at black humor gets modded as insightful instead of funny.

      • Yeah, but you won't be able to afford insurance for it.
      • Real estate is probably going to be cheap.
        Why? I wouldn't mind having the federal government pay to renovate my luxury home on the beach every few years.

        (OK, that was really aimed at Florida more than Louisiana).

    • ...the land is going to be cheap, seeing as it's all swamp now.
      • seeing as it's all swamp now

        That is no change from the past.

      • These days, we call that "wetlands". It's much more fuzzy and cuddly than "swamp". People don't give money unless they're convinced that they're protecting fuzzy and cuddly things.
        • A swamp is a specific type of wetland. And they're useful beyond the fuzzy-cuddly factor (seriously, couldn't that also be called 'habitat'?). I know you're just trying to be sarcastic, but if New Orleans hadn't drained their wetlands for expansion and creation of their useless-in-this-scenario levees, they would act as an effective natural buffer for the storm surge that's tearing up the city right now. Belittle it all you want, but wetlands offer many functions and values that benefit urban areas (see
          • Yeah, then they could get flooded by the Mississippi each and every year. I mean, to heck with making the most important river in America navigable. Why not just let it change course constantly, block grain shipments from the Midwest, and let all those lazy foreigners starve because they don't have access to surplus American food supplies.
    • Re:Ummm (Score:3, Interesting)

      by hrieke ( 126185 )
      At least he'll be able to get water cooling done on the cheap.

      On a serious note, my father teaches at Lafayette U. (Petrochemical Engineering), and near one of his offices the school is building a state-of-the-art VR system. Very much cutting edge, high tech, and downright cool.

      So, while LA has the illusion of being backwater, they do some fairly high tech stuff there. After all, isn't that where id got their start?
  • ... comes production and life.
  • interface? (Score:5, Funny)

    by Red Flayer ( 890720 ) on Monday August 29, 2005 @11:10AM (#13427563) Journal
    FTA: [Sterling]:"We'll finally stop interfacing with a computer with a keypad," he said. "It's a truly science fiction dream of talking to computers and computers talking back to you."

    Great, like I need my computer talking back to me -- I'll be getting enough sass from my teenage daughters by then.
  • Apt (Score:4, Interesting)

    by mfh ( 56 ) on Monday August 29, 2005 @11:10AM (#13427564) Homepage Journal
    At LSU, he hopes to develop the next generation of high-performance computers that will give birth to true artificial intelligence.

    2theadvocate was down when I tried to read their story, so mirrors please?

    I'll comment briefly (WRTFA):

    I am sick of the term next generation: it irks me. I think if you're talking about devoting the next twenty years towards developing true AI, then the focus has to be about the direction that could be taken, the nuts and bolts of it all, and what the setbacks could be. High performance computers are like high performance people, in many ways, or at least they should be. Incentives must exist for a metrological system [wikipedia.org] to present itself into the true nature of self and this measure supercedes the facility of overexaggeration, to the point where no truly defined system can surpass the narrow view of purpose devoted by the creator, without being heralded as a foolish endeavour. The heavy processing of high performance computing works against the nature of AI.

    True AI means that mistakes will be made by the creator and the subject, and emotions will exist in the subject to counter-attack development stumbling blocks, and assist in development, or improve development of wisdom and ultimate self-awareness comes only from experiences of contrast, pain and pleasure (for example). These precepts have never come into cause with a system yet, because each system is built as an object and not a person; each system is built for a financial purpose and not a scientific purpose.

    Science and finance are enemies, strange bedfellows that hate each other but rely on each other, in a bad marriage, with nothing to lose and at times everything to lose. How can balance come to this nature, to enable true AI to come forward out of the ashes?

    How is it possible at all? I don't see it. I see just another generation of the same thing, so perhaps the term next generation is apt?
    • Re:Apt (Score:4, Informative)

      by Otter ( 3800 ) on Monday August 29, 2005 @11:24AM (#13427681) Journal
      Article text -- the last thing a Louisiana news site needs right now is a Slashdotting!

      When higher education officials lobbied for the "LONI" fiber-optic computer network, they called it the ultimate economic development tool that would attract top researchers and federal dollars to the state.

      Last September, Gov. Kathleen Blanco committed $40 million over 10 years to build and maintain LONI, which will link eight university campuses to a national network of supercomputers, called the National LambdaRail.

      LONI, which stands for Louisiana Optical Network Initiative, has landed a major trophy for the state.

      Dr. Thomas Sterling, who helped revolutionize the modern supercomputer, has accepted a position at LSU's Center for Computation and Technology.

      At LSU, he hopes to develop the next generation of high-performance computers that will give birth to true Artificial Intelligence.

      By making computer chips more efficient, Sterling believes he can change computing by "one to three orders of magnitude" that will transform how humans interact with technology.

      "We'll finally stop interfacing with a computer with a keypad," he said. "It's a truly science fiction dream of talking to computers and computers talking back to you."

      A senior scientist at NASA's Jet Propulsion Laboratory at the California Institute of Technology, Sterling holds six patents and co-created the modern "Beowulf" supercomputer, which combines multiple off-the-shelf CPUs into one operation.

      LSU offered him full professorship and tenure. He starts Aug. 22, he said.

      "We lured him away from Cal Tech. It was a real coup," said Dr. Kevin Carman, dean of the College of Basic Sciences at LSU

      Sterling, who holds a Ph.D. from MIT, said LSU offered the most exciting program and package, especially with LONI going live this fall.

      "I would not have come to CCT if not for LONI -- I can't be starved for bits," he said. "Louisiana has positioned itself to being absolutely top-tier when it comes to Internet access for data movement."

      Carman also pointed to CCT director Ed Seidel, who has organized the center to collaborate with other departments that use high-performance computing.

      Seidel joined LSU in 2003, moving from the Albert Einstein Institute in Germany.

      "Ed Seidel is internationally known in his own right. That's what initially attracted (Sterling). If it hadn't been for that, we would not be on the radar," Carman said. "He told me he never imagined moving to Louisiana."

      The appointment of former NASA Administrator Sean O'Keefe as LSU chancellor helped as well. "It put LSU on the map to many of us in the high-tech industry," Sterling said.

      O'Keefe has close ties to Washington, D.C., and "understands money, politics and running a very large organization driven by technology and science," Sterling said.

      Sterling will bring his research to LSU which involves developing a computer processor called "MIND," which stands for Memory, Intelligence and Network Device.

      The MIND architecture uses a new multi-core chip that stacks several processors on a single chip -- similar to those in the upcoming Sony PlayStation 3 game device -- but with greater efficiency, Sterling said.

      "Play Station 3 is putting lots more of these functional units on chips, but it's not clear we know how to make them work more effectively together," he said.

      Processors generally dedicate a single functioning body that's surrounded by "clever tricks" and mechanisms that keep it working, he said.

      "There are many sources of inefficiencies ... in the way we put technology on a chip, the way we organize the technology, the way we make the chips work with each other," he said. "We're using the same model we used 50 or 60 years ago developed in the vacuum tube era."

      Sterling said the work -- along with other CCT initiatives -- could "catalyze a new industry and bring new talent to Louisiana."

      He envisions building his prototype in
    • Science and finance are enemies, strange bedfellows that hate each other but rely on each other, in a bad marriage, with nothing to lose and at times everything to lose.

      Since when are science and finance enemies?

      They are not strange bedfellows, but allies who use each other to get what they want -- just like any other allies.

      You make an interesting point about how computers are not getting closer to being true AI, but I have to disagree with you.

      Already, we know that the amount of operations neede
    • You have a point...in a way. But could you rewrite your statement without superfluous language (i.e. "supercedes the facility of overexaggeration") or vague expressions (i.e. "improve development of wisdom")?
      I consider trying to sound intelligent via unclear prose to be the first indicator that you might not be. Read this. [resort.com]
      • Explanation (Score:2, Interesting)

        by mfh ( 56 )
        But could you rewrite your statement without superfluous language (i.e. "supercedes the facility of overexaggeration") or vague expressions (i.e. "improve development of wisdom")?

        Certainly, I will do so for the purpose of clarity.

        I am sick of the term next generation: it irks me.

        Next generation indicates that there is only progress extended from previous efforts.

        I think if you're talking about devoting the next twenty years towards developing true AI, then the focus has to be about the direction that could
    • by r2q2 ( 50527 )
      Possibly he means that a true AI needs massive parallelism across the internet. This would transform how we interact with computers and how an AI system could be built. Mistakes could be multiplexed through the internet and an idealized AI system could be hacked away at, one problem at a time.
  • "give birth to true artificial intelligence."

    Oh wait, all sorts of people have imagined that future, and it isn't pretty in any of them except Star Trek, with Data.

    Think of "I, Robot" for a recent movie example of an artificial intelligence operating in a massive collective. Oh wait, scratch Star Trek too; there's the Borg!

    It seems our only hope is to not imagine, or create, a cluster of AI robots or life forms.
  • Great job (Score:3, Funny)

    by Rosco P. Coltrane ( 209368 ) on Monday August 29, 2005 @11:14AM (#13427604)
    At LSU, he hopes to develop the next generation of high-performance computers that will give birth to true artificial intelligence.

    In short, he has been given a job for life to do research almost nobody expects anything from anymore.

    Wake me up when one of his high-performance computers passes the Turing test, if I haven't died of old age by then...
    • Re:Great job (Score:3, Informative)

      In short, he has been given a job for life to do research almost nobody expects anything from anymore.

      Really, that sums up the LSU computer science department. It's just a show pony to say "Look how cool we are!" because they're in the same city as the Legislature... Nevermind their supercomputer (SuperMike) hasn't even been successfully turned on yet. Nevermind the ULL Computer Science department is significantly older and respected the world over... Let's give the money and the press to LSU... :P

      Not that
    • I realize this isn't a discussion about tenure itself, but.. it's funny you mention mortality and tenure in the same post (OK, I'm reaching..) I sometimes think of tenure as less than ideal. At first the security of a tenured position may seem appetizing. However, I'm not sure I want to be in a position where termination is ... well.. *terminal.* Why does he have to be tenured at LSU to develop computer tech? The West Coast seems better suited anyway..
  • by Lellor ( 910974 ) on Monday August 29, 2005 @11:15AM (#13427612)

    You can throw as much hardware as you want at the "problem" of AI, but in my opinion, that isn't the easiest route to achieving a breakthrough in AI - it would be like throwing hardware at a dog's brain - the dog would still think like a dog, only 1000 times faster. Sure, you might see improvement in "mechanical reasoning", and chess playing programs and the like, where most of the necessary conclusions can be reached mechanically (mathematically), but that's about as far as it will go, I think. You won't get the dog to reach non-doggy (for example, human) conclusions by doing that.

    The real key to AI lies in software, and superior algorithms. So far in AI, most of the progress has been on the mechanical side - expert systems using algorithms to match and discard possibilities until they find the "correct" option. This is a good way of doing things for applications that expert systems are currently being utilized for, but to progress to the realm of true (self-aware) AI, scientists need to find out how it works in biological structures first. Once that has been established, computer scientists can try converting those (theoretical) signals into instructions, and plug those into new-generation algorithms.
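
    To make that "match and discard" style concrete, here is a toy forward-chaining loop (a generic caricature with made-up rules, not any particular expert-system shell):

        # Toy "match and discard" loop; the rules below are invented for illustration.
        rules = [
            ({"has_fever", "has_rash"}, "suspect measles"),
            ({"has_fever"},             "suspect infection"),
            ({"has_rash"},              "suspect allergy"),
        ]

        def diagnose(facts):
            # Walk the candidate rules, discarding any whose conditions aren't all satisfied.
            for conditions, conclusion in rules:
                if conditions <= facts:   # set inclusion: every condition is a known fact
                    return conclusion
            return "no conclusion"

        print(diagnose({"has_fever", "has_rash"}))   # -> suspect measles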

    • Butthead: Huh-uh-huh... he said doggy.
    • The real key to AI lies in software, and superior algorithms.

      Personally I think it'll require a huge paradigm shift in the way all digital computing is currently performed. Trying to force AI into a system run by a digital processor, whether it's an x86 or some other current-day architecture, results in pretty significant limitations. True intelligence isn't binary - there are an infinite number of shades of grey that come with it.

      I don't think we'll see real AI until the next major advancement in compu

      • Well, you could use long ints rather than bits to hold your information...

        When you say "there are an infinite number of ..." I always wonder whether you've counted them. I'm rather sure you really just meant "a rather large number" which is why I responded with the comment about long ints, but on the chance that you meant what you said literally, permit me to disagree. I see no evidence for an actual infinity anywhere in human thought. A computational infinity, perhaps, but that's handled with a lazy alg
      • I don't think today's method of computing is the future of AI. I remember watching a program on PBS about AI, and the key to it was analog computing. It seems that unless we radically change the way an AI processor works, and possibly the programming method, we will not easily reach the AI goal.
    • IMO, incredible amounts of computing power are a necessary, but not sufficient, condition for solving the AI "problem".

      The real question is how much will be needed - how far down do we have to dig when simulating a biological intelligence? Will stopping at the algorithmic or procedural level suffice? Do we have to simulate neurons, and if we do, do we only need to simulate frequency-domain behavior, or do we have to go with a full-blown Hodgkin-Huxley-esque model of neuronal activity?

      Or, perish the thought
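
      To give that question some shape: a leaky integrate-and-fire neuron is one rung on that ladder, far cheaper than a full Hodgkin-Huxley model. A minimal sketch (all constants here are illustrative, not fitted to biology):

          # Leaky integrate-and-fire neuron; parameters are illustrative placeholders.
          def simulate_lif(input_current=1.5, dt=0.1, steps=1000,
                           tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
              v = v_rest
              spikes = 0
              for _ in range(steps):
                  # Membrane potential leaks toward rest while integrating the input current.
                  v += dt * (-(v - v_rest) + input_current) / tau
                  if v >= v_threshold:   # threshold crossed: emit a spike and reset
                      spikes += 1
                      v = v_reset
              return spikes

          print(simulate_lif())   # spike count over the simulated window

      Even this cheap model has to be stepped billions of times for a brain-scale network; a conductance-based Hodgkin-Huxley model multiplies the cost by the extra ion-channel state variables.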
    • So far in AI, most of the progress has been on the mechanical side - expert systems using algorithms to match and discard possibilities until they find the "correct" option.

      That is what most humans do when given choices they have little or no past experience with. Trial and error until they give up, make a fatal choice, or pick one with a desired or acceptable outcome.

      When given enough information from others, or if they have past experiences with a choice, then that is what they have the hard time making AIs to
    • it would be like throwing hardware at a dog's brain - the dog would still think like a dog, only 1000 times faster.

      I think if you start throwing hardware at a dog's brain, pretty soon you will have a pile of gray mush which is incapable of thinking at all anymore...
  • PIM (Score:4, Informative)

    by convolvatron ( 176505 ) on Monday August 29, 2005 @11:17AM (#13427626)
    I know it's hopeless... but:

    His work these days centers on the efficiencies of access gained by putting the DRAM and processing elements on the same die, partially removing the serialization associated with the standard synchronous memory interface. The architecture also plans on using MTA-style threads to hide latency and increase concurrency.

    citeseer [psu.edu]
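
    A toy illustration of why those MTA-style threads help (the latency figure and scheduling policy below are made-up numbers for the sketch, not anything from the actual design): with one thread the core idles while a memory access is outstanding; with enough thread contexts to switch between, there is almost always someone ready to run.

        # Toy latency-hiding simulation; MEM_LATENCY and the scheduler are assumed, not real.
        MEM_LATENCY = 100   # cycles until a load returns (assumed figure)
        CYCLES = 10_000

        def utilization(num_threads):
            ready_at = [0] * num_threads   # cycle at which each thread's outstanding load completes
            busy = 0
            for cycle in range(CYCLES):
                for t in range(num_threads):
                    if ready_at[t] <= cycle:                   # this thread has its data back
                        busy += 1                              # one cycle of useful work
                        ready_at[t] = cycle + 1 + MEM_LATENCY  # then it issues its next load
                        break                                  # the core runs one thread per cycle
            return busy / CYCLES

        for n in (1, 8, 32, 128):
            print(f"{n:4d} threads -> core utilization {utilization(n):.0%}")

    With one thread the core is busy about 1% of the time in this model; once there are more thread contexts than cycles of memory latency, utilization approaches 100%.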
    • by jd ( 1658 )
      Processor-In-Memory is a fairly old technique. I remember covering it at University, as a student. I've also never seen it used anywhere - at least, not meaningfully. If someone is actually working on the problem, all I can say is it's about time!
      • Yeah, they're still working on it. I don't know a lot of the earlier work, so I don't know how much of this is novel.

        They are really fixated on the physical aspects of the memory arrays and building an effective CPU architecture around the context of DRAM rows (i.e., a thread context is a row, including registers, etc.).

        So it's a little more than just the pin-count and interface-electronics argument.
        • Oh yeah, I forgot to mention (I had to look to see if it had already been published)... the other cool part is the global architecture: there is a large switching fabric connecting all the PIMs together. Aside from the normal reads and writes, it also supports parcels, which are actually whole migratory thread states. A parcel just gets put in the run queue at the target.

          So if there is any spatial locality to be exploited, you can move the thread rather than the data. Because this is MTA style you would ex
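
          A minimal sketch of that parcel idea (a toy model with made-up names like Parcel and MemoryNode, not the actual MIND/PIM protocol): when the word a thread needs lives on a remote memory node, ship the thread's state there instead of pulling the data back.

            # Toy model of thread migration via parcels; names and layout are invented.
            from dataclasses import dataclass, field

            @dataclass
            class Parcel:
                pc: str          # which routine to resume at the destination
                registers: dict  # the thread's live state travels with the parcel

            @dataclass
            class MemoryNode:
                node_id: int
                data: dict = field(default_factory=dict)
                run_queue: list = field(default_factory=list)

            def access(nodes, local_id, thread_state, address):
                owner = nodes[address % len(nodes)]   # node that owns this address
                if owner.node_id == local_id:
                    return owner.data.get(address)    # ordinary local read
                # Remote: enqueue a parcel at the owner; the thread resumes over there.
                owner.run_queue.append(Parcel("resume_here", thread_state))
                return None

            nodes = [MemoryNode(i) for i in range(4)]
            access(nodes, local_id=0, thread_state={"r1": 42}, address=7)   # lands in nodes[3].run_queue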
          • by jd ( 1658 )
            That's great! The idea has been around for a while, and I even borrowed many aspects of PIM when I was developing my theoretical FLOOP processor, but this sounds like someone has moved beyond the abstract and actually built the damn thing.
  • by TechnoGrl ( 322690 ) on Monday August 29, 2005 @11:19AM (#13427635)
    Imagine what we could do with a cluster of these guys!
  • Geaux Tigers!!!
  • by DamienMcKenna ( 181101 ) <{moc.annek-cm} {ta} {neimad}> on Monday August 29, 2005 @11:23AM (#13427675)
    he hopes to develop the next generation of high-performance computers that will give birth to true artificial intelligence

    Let me get this straight. We're geeks. We read science fiction. Much of science fiction is spent talking about the dangers of pushing technology too far too quickly, especially artificial intelligence. We know that corporations like pushing too far too quickly as they can boost their stock prices. Here's a guy saying he wants to create "true" artificial intelligence and we're all of a sudden thinking it's a good thing?

    Damien
    • Here's a guy saying he wants to create "true" artificial intelligence and we're all of a sudden thinking it's a good thing?

      Yes. You see, most geeks (although not all, by the tone of your comment) can differentiate between fact and fiction. Science fiction is written to entertain people, so it tends to have "oh the machines just turned evil" as a plot device. That doesn't mean machines "just turn evil" in reality.

    • by dvdeug ( 5033 )
      Here's a guy saying he wants to create "true" artificial intelligence and we're all of a sudden thinking it's a good thing?

      (A) It's been planned for 40 years now. It's a little late to be worrying about it.

      (B) Those 40 years have got us OCR programs that can almost beat an 8-year-old for quality, and voice recognition programs that have to be trained on a particular voice. An AI that is two orders of magnitude better is still probably not going to be able to make breakfast.

      (C) There's six billion objects wit
      • There's six billion objects with natural intelligence that we let wander around with no supervision or real control.

        Have you even met any of these six billion objects? They are completely out of their so-called "minds"! They roam free and kill each other off, befoul their own nests, and then create more of their type of objects than their pathetic little planet can sustain!

        Oh, and if you are not with the invasion fleet, I didn't say anything. This is not the message you are looking for.

  • heh...
    (ironically, today's CAPTCHA image for me was 'horses')
  • ...on cloud-seeding computer models or some form of weather forecast?

    Or computer-controlled levee pumps or something useful :)

    Just seems as if moving to that area of the country _now_ isn't....safe.

    In other news, LSU was seen floating in the direction of Mexico......

  • Nice but (Score:3, Funny)

    by D3 ( 31029 ) <daviddhenningNO@SPAMgmail.com> on Monday August 29, 2005 @11:33AM (#13427750) Journal
    The LSU 9000 just doesn't have as nice a ring to it as calling it HAL.
  • by The Hobo ( 783784 ) on Monday August 29, 2005 @11:36AM (#13427767)
    Apu: I came here shortly after my graduation from CalTech: Calcutta Technical Institute, as the top student in my graduating class of 7 million.
  • I think their admission standards went up to a 3.0 HS GPA (exemptions obviously still made).
  • Oh, Great! (Score:4, Funny)

    by D3 ( 31029 ) <daviddhenningNO@SPAMgmail.com> on Monday August 29, 2005 @11:37AM (#13427783) Journal
    Skynet will be corrupted by Mardi Gras and thus decide to save all the hot chicks but kill the rest of us.
  • by Yonder Way ( 603108 ) on Monday August 29, 2005 @11:43AM (#13427839)
    ...will be one of those rare above-water ones.
  • The title says it all.

    I still think Beowulf was a writer. How did things get so out of hand? (No out-of-hand comments allowed.)

  • by aquabat ( 724032 ) on Monday August 29, 2005 @11:54AM (#13427976) Journal
    Sterling believes he can change computing by "one to three orders of magnitude"

    Hell, if I wanted to change the performance of my computer by one to three orders of magnitude, I would just run Vista.

    Oh, wait, maybe he meant one to three orders of magnitude faster. My bad.

  • By making computer chips more efficient, Sterling believes he can change computing by "one to three orders of magnitude"

    So his plan is to ride the Moore's Law wave for 18 to 54 months?

    (roughly 5 to 15 years if they meant decimal orders of magnitude, rather than binary)
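
    For reference, the arithmetic behind those figures, assuming the usual ~18-month doubling period:

        # Back-of-the-envelope check; 18 months per doubling is the classic rule of thumb.
        import math

        doubling_months = 18
        for factor in (2, 8, 10, 1000):          # binary vs decimal "orders of magnitude"
            months = math.log2(factor) * doubling_months
            print(f"{factor:5d}x -> {months:5.0f} months (~{months / 12:.1f} years)")

    That gives 18 to 54 months for factors of 2x to 8x, and about 5 to 15 years for 10x to 1000x.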
  • by obispo ( 898086 ) on Monday August 29, 2005 @01:35PM (#13428732)
    His is a predictable move. If after 9 years at Caltech he was still mired in an untenured, non-tenure-track position of "faculty associate", it's natural that he jumped at the chance of becoming a full professor at LSU.

    This comment is neither an endorsement nor an attempt to disparage the guy's technical merits, as I don't know the politics going on at Caltech. At least in computer science at Stanford, getting tenure has gotten ridiculously unlikely in the last several years.
  • He wasn't a faculty member at Caltech; he was a research associate. In most cases, infrastructure work, project management, and software development experience are not sufficient to get you tenure at a top university.
