Beowulf Pioneer Lured From Cal Tech to LSU 163
An anonymous reader writes "Thomas Sterling, a pioneer of clustered computing, including /.'s beloved Beowulf cluster, has accepted a fully tenured professorship at Louisiana State University's Center for Computation and Technology, ditching his old post at Cal Tech. From TFA: "At LSU, he hopes to develop the next generation of high-performance computers that will give birth to true artificial intelligence. By making computer chips more efficient, Sterling believes he can change computing by "one to three orders of magnitude," a shift that will transform how humans interact with technology.""
Waterpower (Score:5, Funny)
Re:Waterpower (Score:2)
Re:Waterpower (Score:2)
Score 1 for the Hicks (Score:4, Informative)
Allow me to clear up your thinking. Consider Proteus [lsu.edu]. It is a high-performance simulator written at MIT for MIPS. Some graduate student at LSU ported it to SPARC.
This work is stunningly brilliant and egalitarian.
In the late '80s and early '90s, the eggheads at MIT and Stanford felt that they needed only to develop simulators for their clique-ish processor: MIPS. Yet the rest of the world was using SPARC. In this way, the eggheads cornered multiprocessor research for themselves.
LSU actually opened up multiprocessor research to the rest of the world by building a simulator that actually runs on the SPARC machines.
To be fair, I should note that a small team at Stanford did the same thing with ABSS, another simulator that runs on SPARC machines.
However, -2 for abandoning it (Score:2)
Projects that fold often do so not because they're no good, but because the right people never heard about them
Re:Waterpower (Score:2)
I was about to say wind-powered supercomputers.
Imagine a beowulf cluster of Cat-5 hurricanes.
Re:Waterpower (Score:2)
What about a Cat-6 hurricane?
Re:Waterpower (Score:3, Funny)
Ha! (Score:2)
Re:Waterpower (Score:2)
Yeah, but it's the state capital so it's flooded with another kind of toxic sludge, politicians. :-)
Whoa. (Score:3, Funny)
Re:Whoa. (Score:1)
Re:Whoa. (Score:2)
Re:Whoa. (Score:2)
A beowulf cluster of LSU Profs? (Score:2)
LSU's Center for Computation and Technology... (Score:5, Funny)
Re:LSU's Center for Computation and Technology... (Score:2)
First, hurricanes don't cause that much flooding. Most of the building damage results from the 50-100 mph winds. You might see a couple of feet of water in the lower-lying areas of New Orleans (below sea level).
Second, LSU is in Baton Rouge, about an hour's drive from NO. BR is pretty much okay, except for the winds. I'm in Lafayette, another hour away, and you couldn't even tell there was a hurricane (except that the clouds a
In Other News... (Score:4, Funny)
Re:In Other News... (Score:2)
Re:In Other News... (Score:2)
Re:In Other News... (Score:2)
Perhaps the AC is sufficiently well read to be referring to Governor Earl Long and the stripper Blaze Starr.
Forget any tech achievements (Score:2, Funny)
Ummm (Score:3, Insightful)
Re:Ummm (Score:5, Insightful)
Re:Ummm (Score:2)
What's that? Huge price increases? What kind of free market economy is this?
Re:Ummm (Score:2)
It's a sad state of affairs on
Re:Ummm (Score:2)
Re:Ummm (Score:2)
(OK, that was really aimed at Florida more than Louisiana).
Well, God knows... (Score:2)
Re:Well, God knows... (Score:2)
That is no change from the past.
Re:Well, God knows... (Score:2)
Re:Well, God knows... (Score:2)
Re:Well, God knows... (Score:2)
Re:Ummm (Score:3, Interesting)
On a serious side, my father teaches at Lafayette U. (Petrochemical Engineering), and near one of his offices the school is building a state-of-the-art VR system. Very much cutting-edge, high-tech, and downright cool.
So, while LA has the illusion of being a backwater, they do some fairly high-tech stuff there. After all, isn't that where id got their start?
Re:Ummm (Score:5, Funny)
Re:Ummm (Score:2)
Funny.
My question is: why do God and UFOs both seem to favor Louisiana?
Re:Ummm (Score:2)
From death and destruction (Score:1)
interface? (Score:5, Funny)
Great, like I need my computer talking back to me -- I'll be getting enough sass from my teenage daughters by then.
Re:interface? (Score:2)
Re:interface? (Score:2)
Re:interface? (Score:3, Funny)
Thank God I just missed the S the first time I read that.
Re:interface? (Score:3, Funny)
Re:interface? (Score:4, Funny)
Shit!!
No! Wait!
Goddamnit!
Fuck!!
Aaaaaah!!!
Oooooh Nooooooo!!
You pieca shit!!
Hey! It looks like you're writing a letter!
Fuck off!!
Hey! It looks like you're writing a letter!
Fuck off!!
Hey! It looks like..
BLAM!! BLAM!! BLAM!!
Apt (Score:4, Interesting)
2theadvocate was down when I tried to read their story, so mirrors please?
I'll comment briefly (WRTFA):
I am sick of the term next generation: it irks me. I think if you're talking about devoting the next twenty years towards developing true AI, then the focus has to be about the direction that could be taken, the nuts and bolts of it all, and what the setbacks could be. High performance computers are like high performance people, in many ways, or at least they should be. Incentives must exist for a metrological system [wikipedia.org] to present itself into the true nature of self and this measure supersedes the facility of overexaggeration, to the point where no truly defined system can surpass the narrow view of purpose devoted by the creator, without being heralded as a foolish endeavour. The heavy processing of high performance computing works against the nature of AI.
True AI means that mistakes will be made by the creator and the subject, and emotions will exist in the subject to counteract development stumbling blocks, assist in development, or improve it; wisdom and ultimate self-awareness come only from experiences of contrast, pain and pleasure (for example). These precepts have never come into play with a system yet, because each system is built as an object and not a person; each system is built for a financial purpose and not a scientific purpose.
Science and finance are enemies, strange bedfellows that hate each other but rely on each other, in a bad marriage, with nothing to lose and at times everything to lose. How can balance come to this nature, to enable true AI to come forward out of the ashes?
How is it possible at all? I don't see it. I see just another generation of the same thing, so perhaps the term next generation is apt?
Re:Apt (Score:4, Informative)
When higher education officials lobbied for the "LONI" fiber-optic computer network, they called it the ultimate economic development tool that would attract top researchers and federal dollars to the state.
Last September, Gov. Kathleen Blanco committed $40 million over 10 years to build and maintain LONI, which will link eight university campuses to a national network of supercomputers, called the National LambdaRail.
LONI, which stands for Louisiana Optical Network Initiative, has landed a major trophy for the state.
Dr. Thomas Sterling, who helped revolutionize the modern supercomputer, has accepted a position at LSU's Center for Computation and Technology.
At LSU, he hopes to develop the next generation of high-performance computers that will give birth to true Artificial Intelligence.
By making computer chips more efficient, Sterling believes he can change computing by "one to three orders of magnitude," a shift that will transform how humans interact with technology.
"We'll finally stop interfacing with a computer with a keypad," he said. "It's a truly science fiction dream of talking to computers and computers talking back to you."
A senior scientist at NASA's Jet Propulsion Laboratory at the California Institute of Technology, Sterling holds six patents and co-created the modern "Beowulf" supercomputer, which combines multiple off-the-shelf CPUs into one operation.
LSU offered him full professorship and tenure. He starts Aug. 22, he said.
"We lured him away from Cal Tech. It was a real coup," said Dr. Kevin Carman, dean of the College of Basic Sciences at LSU
Sterling, who holds a Ph.D. from MIT, said LSU offered the most exciting program and package, especially with LONI going live this fall.
"I would not have come to CCT if not for LONI -- I can't be starved for bits," he said. "Louisiana has positioned itself to being absolutely top-tier when it comes to Internet access for data movement."
Carman also pointed to CCT director Ed Seidel, who has organized the center to collaborate with other departments that use high-performance computing.
Seidel joined LSU in 2003, moving from the Albert Einstein Institute in Germany.
"Ed Seidel is internationally known in his own right. That's what initially attracted (Sterling). If it hadn't been for that, we would not be on the radar," Carman said. "He told me he never imagined moving to Louisiana."
The appointment of former NASA Administrator Sean O'Keefe as LSU chancellor helped as well. "It put LSU on the map to many of us in the high-tech industry," Sterling said.
O'Keefe has close ties to Washington, D.C., and "understands money, politics and running a very large organization driven by technology and science," Sterling said.
Sterling will bring to LSU his research on a computer processor called "MIND," which stands for Memory, Intelligence and Network Device.
The MIND architecture uses a new multi-core design that stacks several processors on a single chip -- similar to those in the upcoming Sony PlayStation 3 game device -- but with greater efficiency, Sterling said.
"Play Station 3 is putting lots more of these functional units on chips, but it's not clear we know how to make them work more effectively together," he said.
Processors generally dedicate a single functioning body that's surrounded by "clever tricks" and mechanisms that keep it working, he said.
"There are many sources of inefficiencies
Sterling said the work -- along with other CCT initiatives -- could "catalyze a new industry and bring new talent to Louisiana."
He envisions building his prototype in
Re:Apt (Score:2)
Since when are science and finance enemies?
They are not strange bedfellows, but allies who use each other to get what they want -- just like any other allies.
You make an interesting point about how computers are not getting closer to being true AI, but I have to disagree with you.
Already, we know that the amount of operations neede
Re:Apt (Score:2)
I consider that trying to sound intelligent via unclear prose is the first indicator that you might not be. Read this. [resort.com]
Explanation (Score:2, Interesting)
Certainly, I will do so for the purpose of clarity.
I am sick of the term next generation: it irks me.
Next generation indicates that there is only progress extended from previous efforts.
I think if you're talking about devoting the next twenty years towards developing true AI, then the focus has to be about the direction that could
Re:Apt (Score:2)
Imagine a Beowulf cluster of AI robots (Score:1)
Oh wait, all sorts of people have imagined that future, and it isn't pretty in any of them except Star Trek with Data.
Think of "I, Robot" for a recent movie example of an artificial intelligence operating in a massive collective. Oh wait, scratch Star Trek too, there's the Borg!
It seems our only hope is to not imagine or create a cluster of AI robots or life forms.
Re:Imagine a Beowulf cluster of AI robots (Score:2)
I think most of us would rather not. That 120 Hz hum you hear is not the power transformer. It is Isaac Asimov spinning in his grave.
Re:Imagine a Beowulf cluster of AI robots (Score:2)
But the "Three Laws of Robotics" is a completely flawed concept - kinda like a whale with a built-in anchor - it would be the first thing (biological feature) to go.
Great job (Score:3, Funny)
In short, he has been given a job for life to do research almost nobody expects anything from anymore.
Wake me up when one of his high-performance computers passes the Turing test, if I don't die of old age first...
Re:Great job (Score:3, Informative)
Really, that sums up the LSU computer science department. It's just a show pony to say "Look how cool we are!" because they're in the same city as the Legislature... Never mind that their supercomputer (SuperMike) hasn't even been successfully turned on yet. Never mind that the ULL Computer Science department is significantly older and respected the world over... Let's give the money and the press to LSU...
Not that
Re:Great job (Score:2)
Will throwing hardware at AI suffice? (Score:5, Insightful)
You can throw as much hardware as you want at the "problem" of AI, but in my opinion, that isn't the easiest route to achieving a breakthrough in AI - it would be like throwing hardware at a dog's brain - the dog would still think like a dog, only 1000 times faster. Sure, you might see improvement in "mechanical reasoning", and chess playing programs and the like, where most of the necessary conclusions can be reached mechanically (mathematically), but that's about as far as it will go, I think. You won't get the dog to reach non-doggy (for example, human) conclusions by doing that.
The real key to AI lies in software, and superior algorithms. So far in AI, most of the progress has been on the mechanical side - expert systems using algorithms to match and discard possibilities until they find the "correct" option. This is a good way of doing things for the applications expert systems are currently used for, but to progress to the realm of true (self-aware) AI, scientists need to find out how it works in biological structures first. Once that has been established, computer scientists can try converting those (theoretical) signals into instructions, and plug those into new-generation algorithms.
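To make that "match and discard" style concrete, here is a toy sketch (my own example, nothing from TFA or Sterling's work): a rule-based diagnoser that starts with every hypothesis and throws away the ones inconsistent with the observed facts.

```python
# Toy "match and discard" expert system (illustrative rules, invented for this example).
RULES = {
    # hypothesis -> facts it requires
    "dead battery":   {"engine won't crank", "lights dim"},
    "empty tank":     {"engine cranks", "engine won't start"},
    "flooded engine": {"engine cranks", "smell of fuel"},
}

def diagnose(observations):
    """Start with every hypothesis; discard any whose required facts are missing."""
    candidates = set(RULES)
    for hypothesis, required in RULES.items():
        if not required <= observations:      # a required fact is absent: discard
            candidates.discard(hypothesis)
    return candidates

print(diagnose({"engine cranks", "engine won't start"}))   # {'empty tank'}
```

It reaches its "correct" option purely by mechanical elimination, which is exactly the kind of reasoning that scales with more hardware but never becomes self-aware.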
Re:Will throwing hardware at AI suffice? (Score:1)
Re:Will throwing hardware at AI suffice? (Score:3, Insightful)
Personally I think it'll require a huge paradigm shift in the way all digital computing is currently performed. Trying to force AI into a system run by a digital processor, whether it's an x86 or some other current-day architecture, results in pretty significant limitations. True intelligence isn't binary - there are an infinite number of shades of grey that come with it.
I don't think we'll see real AI until the next major advancement in compu
Re:Will throwing hardware at AI suffice? (Score:3, Interesting)
When you say "there are an infinite number of
Re:Will throwing hardware at AI suffice? (Score:2)
Re:Will throwing hardware at AI suffice? (Score:3, Insightful)
The real question is how much will be needed - how far down do we have to dig when simulating a biological intelligence? Will stopping at the algorithmic or procedural level suffice? Do we have to simulate neurons, and if we do, do we only need to simulate frequency-domain behavior, or do we have to go with a full-blown Hodgkin-Huxley-esque model of neuronal activity?
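For anyone who hasn't seen it, the "full-blown Hodgkin-Huxley-esque" option means integrating something like the standard membrane equation below (textbook form; nothing here is specific to Sterling's project), with each gating variable obeying its own voltage-dependent ODE:

```latex
C_m \frac{dV}{dt} = I_{\mathrm{ext}}
  - \bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
  - \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
  - \bar{g}_{L}\,(V - E_{L}),
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \quad x \in \{m, h, n\}
```

A frequency-domain (firing-rate) model, by contrast, collapses all of that into a single scalar rate per neuron, which is orders of magnitude cheaper to simulate.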
Or, perish the thought
Re:Will throwing hardware at AI suffice? (Score:2)
Re:Will throwing hardware at AI suffice? (Score:2)
If you want that to be taken seriously rather than as a joke, then you need an operational definition of the term "soul". How can an observer know whether or not such an item is present? How could you prove to me that you had one?
In some sense this is a legitimate suggestion, for some definitions of the term "soul". For other definitions, no such feature is either necessary or desirable for a manufactured intelligence. For other
Re:Will throwing hardware at AI suffice? (Score:2)
Gotcha!
Re:Will throwing hardware at AI suffice? (Score:2)
Re:Will throwing hardware at AI suffice? (Score:2)
Re:Will throwing hardware at AI suffice? (Score:2)
That is what most humans do when given choices they have little or no past experience with: trial and error until they give up, choose a fatal option, or pick one with a desired or acceptable outcome.
When given enough information from others, or if they have past experience with a choice, then that is what they have a hard time making AIs to
Re:Will throwing hardware at AI suffice? (Score:2)
I think if you start throwing hardware at a dog's brain, pretty soon you will have a pile of gray mush which is incapable of thinking at all anymore...
PIM (Score:4, Informative)
His work these days centers on the efficiencies of access gained by putting the DRAM and processing elements on the same die, partially removing the serialization associated with the standard synchronous memory interface. The architecture also plans on using MTA-style threads to hide latency and increase concurrency.
citeseer [psu.edu]
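For the curious, here is a minimal back-of-envelope simulation (my own illustration with made-up parameters, not Sterling's actual MIND architecture) of why MTA-style multithreading hides memory latency: a single in-order core that can switch to any ready thread each cycle stays busy once there are enough thread contexts to cover the round trip to memory.

```python
# Sketch of MTA-style latency hiding (illustrative numbers only).
# One in-order core issues at most one instruction per cycle, picking any thread
# that is not stalled on an outstanding memory reference.

def utilization(num_threads, mem_latency=100, compute_per_access=1, cycles=100_000):
    """Return the fraction of cycles on which the core issued work."""
    ready_at = [0] * num_threads                  # cycle at which each thread may issue again
    work = [compute_per_access] * num_threads     # compute cycles left before the next load
    busy = 0
    for cycle in range(cycles):
        for t in range(num_threads):
            if ready_at[t] <= cycle:              # found a ready thread
                busy += 1
                if work[t] > 0:
                    work[t] -= 1                  # one cycle of compute
                else:
                    ready_at[t] = cycle + mem_latency  # issue a load; this thread stalls
                    work[t] = compute_per_access
                break                             # only one issue slot per cycle
    return busy / cycles

if __name__ == "__main__":
    for n in (1, 8, 32, 128):
        print(f"{n:4d} threads -> core utilization {utilization(n):.2f}")
```

With a 100-cycle memory latency and almost no compute between loads, one thread keeps the core only a couple of percent busy, while enough thread contexts push it toward full utilization -- the core just keeps swapping which thread it runs.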
Re:PIM (Score:2)
Re:PIM (Score:2)
They are really fixated on the physical aspects of the memory arrays and building an effective CPU architecture around the context of DRAM rows (i.e., a thread context is a row, including registers, etc.).
So it's a little more than just the pin count and interface electronics argument.
Re:PIM (Score:2)
So if there is any spatial locality to be exploited, you can move the thread rather than the data. Because this is MTA-style you would ex
Re:PIM (Score:2)
If we could only clone him .... (Score:4, Funny)
Geaux Tigers!!! (Score:2)
This is a good thing? (Score:3, Interesting)
Let me get this straight. We're geeks. We read science fiction. Much of science fiction is spent talking about the dangers of pushing technology too far too quickly, especially artificial intelligence. We know that corporations like pushing too far too quickly because it can boost their stock prices. Here's a guy saying he wants to create "true" artificial intelligence and we're all of a sudden thinking it's a good thing?
Damien
Re:This is a good thing? (Score:2)
Yes. You see, most geeks (although not all, by the tone of your comment) can differentiate between fact and fiction. Science fiction is written to entertain people, so it tends to have "oh the machines just turned evil" as a plot device. That doesn't mean machines "just turn evil" in reality.
Re:This is a good thing? (Score:3, Insightful)
(A) It's been planned for 40 years now. It's a little late to be worrying about it.
(B) Those 40 years have got us OCR programs that can almost beat an 8-year-old for quality, and voice recognition programs that have to be trained on a particular voice. An AI that is two orders of magnitude better is still probably not going to be able to make breakfast.
(C) There's six billion objects wit
Re:This is a good thing? (Score:2)
Have you even met any of these six billion objects? They are completely out of their so-called "minds"! They roam free and kill each other off, befoul their own nests, and then create more of their type of objects than their pathetic little planet can sustain!
Oh, and if you are not with the invasion fleet, I didn't say anything. This is not the message you are looking for.
Lightspeed University (Score:1)
(ironically, today's CAPTCHA image for me was 'horses')
Re:Lightspeed University (Score:2)
Perhaps he should focus.... (Score:1)
Or computer-controlled levee pumps or something useful
Just seems as if moving to that area of the country _now_ isn't....safe.
In other news, LSU was seen floating in the direction of Mexico......
Re:Perhaps he should focus.... (Score:2)
Nice but (Score:3, Funny)
Obligatory Simpsons Quote (Score:5, Funny)
Re:Obligatory Simpsons Quote (Score:2)
Now you know (and knowing is half the battle).
Actually, LSU is getting better, so I hear (Score:2)
Oh, Great! (Score:4, Funny)
hopefully his office... (Score:4, Funny)
Beowulf of Raincoats (Score:2)
I still think Beowulf was a writer. How did things get so out of hand? (no out-of-hand comments allowed.)
Re:Beowulf of Raincoats (Score:2)
Orders of Magnitude (Score:4, Funny)
Hell, if I wanted to change the performance of my computer by one to three orders of magnitude, I would just run Vista.
Oh, wait, maybe he meant one to three orders of magnitude faster. My bad.
Perhaps "riding the wave" is a poor word choice (Score:2)
So his plan is to ride the Moore's Law wave for 18 to 54 months?
(roughly 5 to 15 years if they meant decimal orders of magnitude, rather than binary)
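For anyone checking the arithmetic (assuming the usual 18-month doubling period; TFA doesn't say which kind of order of magnitude Sterling meant):

```python
import math

# Back-of-envelope check: time to gain a factor F from riding the curve is
# log2(F) doubling periods, assuming performance doubles every 18 months.
DOUBLING_MONTHS = 18

def months_for_factor(factor):
    return math.log2(factor) * DOUBLING_MONTHS

for label, (lo, hi) in [("binary (2x..8x)", (2, 8)), ("decimal (10x..1000x)", (10, 1000))]:
    a, b = months_for_factor(lo), months_for_factor(hi)
    print(f"{label:22s}: {a:5.0f} to {b:5.0f} months  (~{a/12:.1f} to {b/12:.1f} years)")
```

That gives 18-54 months for binary orders of magnitude and roughly 5-15 years for decimal ones.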
Predictable career move (Score:3, Interesting)
This comment is neither an endorsement nor an attempt to disparage the guy's technical merits, as I don't know the politics going on at Caltech. At least in computer science at Stanford, getting tenure has become ridiculously unlikely in the last several years.
not the same job (Score:2)
with that type of attitude... (Score:2)
Hopefully Caltech
As an alum I'm a bit disappointed, although I'm not exactly surprised (at Caltech, computers have always seemed more like applied science than a science in themselves)...
Re:with that type of attitude... (Score:2)
Re:Lousiana? (Score:2)
2 years after SuperMike was built, Caltech finally built one faster and caught up with LSU.
<sarcasm>Thank you for displaying your lack of prejudice </sarcasm>
Re:Baton Rouge is an armpit (Score:2)
Unlike you, however, I don't make sweeping generalizations about a population of millions after meeting a few dozen of them.
Pssst...Careful