A $1000 Supercomputer?
Sean Mooney writes "CNN is reporting that a $1000 PC that is 60,000 times faster than a PII 350 may be on the market within 18 months. Star Bridge Systems is making the field programmable gate array (FPGA) computer. These are the same guys who are making HAL, reported earlier." I'll believe that when I see it. Although I can't think of a better way to break Moore's Law.
These People are shysters!!! (Score:1)
They spent some time in Sausalito, CA as Metalithic promoting the same sea of FPGAs, but then for a more specific music mixdown system. They were never able to get the system to work properly, but did succeed in bilking some investors out of lots of money.
The internet will follow these jokers forever; if I were them, I'd learn how to sort vegetables.
FPGAs in nice sea-of-FPGA configurations are available today from companies like IKOS http://www.ikos.com/ and are programmed using VHDL... often used to sim BIG chips. And yes, you could do RC5 _really_ fast if you wanted to.
Massive parallelism - still a long way to go (Score:4)
Your average C program has very little implicit parallelism (i.e., parallelism not explicitly introduced by using some library of parallel operations). Even the best compilers on this planet won't make these programs run much faster on a massively parallel computer than on a single processor (on the contrary, the additional communication overhead can easily make execution slower with each processor that you add).
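To make that concrete, here's a minimal C sketch (my own illustrative example, nothing from the article). The first loop has a loop-carried dependency, so an ordinary compiler will run it serially no matter how many processing elements you throw at it; the second is the embarrassingly parallel kind that does spread out nicely:

    /* Illustrative only: an IIR-style recurrence where every iteration
       needs the previous result, so an ordinary compiler runs it serially
       no matter how many processing elements you give it. */
    void smooth(const double *in, double *out, int n, double a)
    {
        double prev = 0.0;
        for (int i = 0; i < n; i++) {
            prev = a * prev + (1.0 - a) * in[i];  /* depends on iteration i-1 */
            out[i] = prev;
        }
    }

    /* By contrast, every iteration here is independent ("embarrassingly
       parallel") and could be spread over thousands of units. */
    void scale(const double *in, double *out, int n, double k)
    {
        for (int i = 0; i < n; i++)
            out[i] = k * in[i];
    }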
Remember what a fuss it has been to make the Linux kernel perform well on SMPs with more than two or three processors; how do you want to make this scale to thousands or millions of parallel processing units? BTW, the last company that went for many small (and slow) processing units instead of a few very fast ones was Thinking Machines (the machine was called the CM-2). Do a search on the Web to see where they are now...
Chilli
PS: Such a machine can be useful for some things, called embarrassingly parallel problems/algorithms in the parallel computing community.
Lies, damned lies and statistics... (Score:5)
Their computer is based around FPGAs (Field Programmable Gate Arrays); in particular they are using the Xilinx family of FPGAs. These are devices that are composed of thousands of small logic blocks wired together through a switching network. The functionality of these small logic devices is user definable by setting bits in an SRAM. The connectivity between pins and the logic blocks and other logic blocks is also user definable by setting bits in static RAM.
So what they're doing is setting each of these programmable blocks to implement a 4-bit adder and wiring them together such that they're all operating at once. It isn't actually doing any useful calculation. Their performance claim is based on wiring together a bunch of useless logic and running it all in parallel. Once you start doing useful things the amount of parallelism will reduce. It'll reduce a lot. FPGAs aren't very fast devices; they'll only get a few percentage points (if that) of their performance claim for real applications.
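To see how a headline number like that gets manufactured, here's some back-of-envelope C with made-up figures (my own assumptions, not Star Bridge's published specs): count every 4-bit add as an "operation" and the "speedup" over a PII-350 looks enormous, even though none of those adds is doing useful work.

    #include <stdio.h>

    /* Back-of-envelope peak "ops" for a sea of small adders.
       All three figures below are illustrative assumptions,
       not Star Bridge's published specs. */
    int main(void)
    {
        double adders_per_fpga = 1600.0;  /* assumed 4-bit adders per device */
        double num_fpgas       = 40.0;    /* assumed devices in the box      */
        double clock_hz        = 100e6;   /* assumed 100 MHz toggle rate     */

        double peak_ops = adders_per_fpga * num_fpgas * clock_hz;
        double pii_ops  = 350e6;          /* PII-350: ~one simple op/cycle   */

        printf("peak 4-bit adds/sec: %.3g\n", peak_ops);
        printf("\"speedup\" vs PII:    %.0fx\n", peak_ops / pii_ops);
        return 0;
    }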
Porting code to this machine would be non-trivial as well. Rather than the normal programming languages computer scientists and programmers are familiar with, you're actually controlling the flow of electrical signals. They've probably got synthesis tools that will take some variant of a programming language and translate it into the native data needed to program the device. The synthesis tools are most likely very crude, and to get real performance you'd probably have to hack bits. Not fun. I say this because of my experience with synthesis tools used for ASIC design. They're fine if you're doing a boring design at maybe 50 or 100 MHz. Beyond that you're pushing their technology and it will probably break. These synthesis tools are designed by billion dollar companies. It would take massive amounts of man hours and money to create a well designed synthesis package for something of this magnitude.
Re:As a logic designer I laugh! (Score:1)
As to your tone: thank you for proving my point that ACs should be disallowed from posting.
"There is no spoon" - Neo, The Matrix
"SPOOOOOOOOON!" - The Tick, The Tick
Re:This isn't right, is it? (Score:1)
On a different note, their website lists the possible tasks of this hypercomputer as "ultra-fast scalar processing, digital, broadband signal processing and high-speed, low-latency switching and routing." Funny, no mention of vector processing. Without that it will never kill the modern supercomputer. The web site uses too many buzzwords for my liking as well:
"massively-parallel, reconfigurable, third-order programmable, ultra-tightly-coupled, fully linearly-scaleable, evolvable, asymmetrical multi-processors. They are plug-compatible"
If it works, this is a huge step forward; if not, it is a lot of hype.
Sigh... (Score:1)
Re:Whoah! I'm salivating (Score:1)
BTW, just what assembler code would this machine use? Would it have to be written in "ViVa", or could it be written in x86 assembler? I'm all confused now =op
Before criticizing a man, walk a mile in his shoes. That way, when you do criticize him, you'll be a mile away, *and* you'll have his shoes!
In case this company has any credibility left (Score:1)
If their "hypercomputer" was as good as they're saying, it's likely that they'd have somebody who is both famous and technically competent speaking for them. Not Eric Estrada's old cohort.
Food for thought.
Re:Supercomputing ....Yes? Linux ?????? (Score:1)
Re:I think I see what they are trying to do. (Score:1)
Viva Active Experts can become software entrepreneurs by organizing groups of Viva Developers to write libraries and application software for Viva and be paid, either in compute cycles or money.
Re:FPGA supercomputing? (Score:1)
> fit all of Quake 3's rendering pipeline into
> the hardware? If I can, it should
> cream a dedicated processor. If I can't, I
> lose major amounts of speed switching the
> gate array, or to using a less-efficient
> general layout on one part of the array.
> To my understanding, FPGAs are slower and
> larger than dedicated circuitry, which limits
> the transistor count if you're looking at a
> reasonable die size.
Bearing this in mind, I fail to see how useful these devices would be for something like a 3D application. By putting a 3D pipeline on an FPGA you're just using it as a dedicated 3D chip like your typical nVidia TNT, 3dfx VoodooX, etc., except that your FPGA is built on slower, bigger technology than the dedicated ASIC competition (TNT, Voodoo, etc.). Which do you think is going to perform better?
But thinking again, perhaps FPGAs could be produced more cheaply than normal chips. It should be possible, as you only have to produce one kind of chip, instead of a different chip for CPU, FPU, DSP, 3D, etc. Then instead of buying a computer with a CPU and a DSP (sound) *and* a gfx 3D chip etc., you just get a box that's packed full of these cheap FPGAs and configure them for what you need. Since the FPGAs are so much cheaper, you just buy a lot more of them and beat 'standard' computing using sheer numbers (and parallelism). (And then all the 3D chip companies transform into software companies and live happily ever after.)
I hope I have made some sense.
--Simon
Re:But does it run Linux? (Score:1)
Nitpickin' (Score:1)
If you read it correctly, you will notice that this didn't come from the writer of the article; it is actually a quote from Kent Gilson, Star Bridge Systems' CTO. Well, with a CTO like that, I can just imagine what kind of product they will come up with.
Re:Uhh... (Score:1)
Can we say MLM? (Score:2)
"become software entrepreneurs by organizing groups of Viva Developers"
Wow... this sounds like Amway to me.... multi-level marketing crap.
Old news (Score:1)
It was a whole paradigm shift, with on-the-fly FPGA re-programming and all that...
MoNsTeR
Re:Old news (Score:1)
But this would make a great stand-alone rendering cluster.
link to old /. story (Score:1)
so it was Feb, not quite 6 months. blah.
MoNsTeR
60,000 times faster? (Score:1)
if all you want to do is 60,000 4-bit additions.
It might be reasonable at DES cracking too. But for running Quake, you're still better off with a P-II.
Wanted: FPGA PCI card and Linux drivers for... (Score:1)
RC5 cracking, MPEG2/3 compression, glquake3 playing, 3D graphics
It will not be that useful in the short-term (Score:2)
Don't start short-selling Intel and AMD yet...
Re:link to old /. story (Score:2)
But does it run Linux? (Score:2)
Their specs say it has a 1600W power supply. Does that come with a wall plug or a set of jumper cables?
Re:FPGA supercomputing? (Score:1)
AFAIK, FPGAs are not cheaper than dedicated ASICs, although this company might change that...
-_Quinn
Re:Old news... Then why am I so skeptical? (Score:2)
Hmmm... that sounds like a hard program to write -- the part that re-optimizes the hardware. How many different virtual hardware processor "personalities" will it need to achieve 60K x PII speeds? Of course, in order to get full advantage from it, it will have to be done frequently. How fast will *that* be?
I can't wait to buy the equivalent of a 21 THz PII (60,000 x 350 MHz) in 1.5 years. I assume the "hardware compiler" will be ready as well and included in the $1K.
open for visitors? (Score:1)
Their web site says that they are open to visitors. Maybe some slashdot readers in the SLC area should check them out.
ohh...i see....really?? (Score:1)
Uh-Oh... (Score:3)
And what about the human rights, personnel, and vacation time issues concerned with the resulting employee, should the box be owned by a corporation?
If the system had been owned by an individual, should they file manumission papers or would the former owner now be considered a parent responsible for their new cyberchild for the first eighteen years?
And would you want one to marry your sister?
Re:Move along, nothing to see here... (Score:1)
That said, it doesn't mean they have anything, either; I daresay that if the report is correct, they'll be going for the supercomputing, heavy number crunching market, where they can attempt to recover their investment before going for the low-margin, mass commodity PC market. There are no doubt a few applied mathematics or physics researchers raising a pint in anticipation right now.
FPGA is great for DSP, AI and more. (Score:2)
Now, I recall some news from when reconfiguration time was reduced by a significant proportion. I also remember that some guys at Amiga were very keen on it. Hopefully, the FPGA is more than plain old parallel stuff. Let's see if it can stand up to a hacker's regular hourly thought exercise.
I think reconfiguration is particularly useful if your system is a bit wiser than a traditional number-crunching procedural system. I'm not suggesting that you can get some NN to let the hardware converge to the ideal (that's too difficult a problem in itself). Sure I won't. But the thing is, if you let your software know how the FPGA can be utilized, it can make a difference.
Especially, it occurs to any demo-coder that those tiny cute loops that do the tricks would fit nicely in a hardware design. So, I think you could make your DSP (audio, video, compression, etc.) & 3D stuff really fast. However, I suppose there are other ways in which you could actually improve the existing implementations. A key point is making your algorithms adaptive. Then they are not the usual kind of "perfect tool" instruments, but ones that use some heuristics to try to find the best hardware design for the job.
I suspect that the simplistic kind of translation [say, a 3D algorithm to an FPGA spec, then reconfiguration when the algorithm is needed (probably over one of the custom processors alloc'd for this task), and using it as a subroutine] might be generalized to implementing a programming paradigm as hardware. It seems that OS and compilation systems would need to be revised to get it done effectively, but still it is very interesting in its own right. The array of possibilities might be larger than the excitement around implementing cryptography and NN apps, or fast Java VMs. When I imagine that the crucial parts of an expert system, or an inference engine, or just about any complex application out there could be done that way, I'm awed.
Nevertheless, I don't know the theoretical "sphere" of the work precisely. It would be very satisfying, for instance, to see some work on the computational complexity induced by such devices. Stuff that says "in domain X, FPGAs are useful" preferred, not the kind of stuff that says "generally, it's NP-complete" or "oh no, it's undecidable"...
Fraud (Score:1)
and have "sold one" (!?!), although probably not for $26 million. The incredible claims constitute felony fraud in any state if they should prove false. I think we can see intent in the claims for applications. (Holography no less!)
Where's the state attorney general?
Re:Read The Resumes! (Score:1)
Is it just me, or does it seem they have several 'sources' underlined to make it seem they link to other resources on the web? Has anyone checked out the other accomplishments to see if they are correct? If one of them supposedly created the world's fastest plotter with this technology, who is using it?
It all sounds too fishy. Notice that they have a partner in the internet search engine market. The partner, iCaveo [icaveo.com], has nothing but some intro animations and a comments page. Sounds like they are trying to get the investor who will put money into anything related to the internet, regardless of whether the company can make any money. Even the president of the company doesn't look like someone I would trust.
CPUs are NOT the problem, Memory bandwidth is! (Score:1)
Re:Old news... Then why am I so skeptical? (Score:1)
Re:What you COULD do with it... (Score:1)
But if that's all you're going to do with it, it would be considerably cheaper and faster to just put dozens/hundreds of real microprocessors in it.
No Pull Behind It (Score:1)
Okay, I'm not currently in industry doing stuff like this, however I have made enough machines with FPGAs and what not and even a reconfigurable machine, so I know what it involves.
Here is the first thing that makes me skeptical.
"Eventually, reconfigurable computing [a term coined by Gilson, referring to the underlying technology behind the hypercomputer] will permeate all information systems, just because it's faster, cheaper, and better," Gilson predicts.
Does it bug anyone else that this guy supposedly coined the term "reconfigurable computing"? I read an article in EETimes (I believe) from 1996 that used this term. Hrmpf.
In addition, it surprises me that he thinks his company can sell hundreds of the $26 million boxes. I'm not entirely sure how many StarFires Sun is able to sell each year, but I doubt it's much more than that. I'm pretty sure it's less. Sounds like just another startup trying to get noise about themselves.
While I do believe that reconfigurable computing is going to be one of the future trends, I don't think these guys can do it. People are skeptical to pick up on new technology, especially like this. Maybe if Sun or IBM was putting its weight behind it people would do it. But Star Bridge systems? It may work, but I doubt it.
Another 1000:1 compressor (Score:1)
What about the other kid who developed the video compressor that compressed hour-long TV shows onto a floppy, as long as the screen was black?
The hypercomputer can process all the hundreds of billions of instructions they claim, and the whole thing is for real. Except that, beyond the one or two highly redundant, staged instructions it runs at hypercomputer speeds, don't expect anything else to run faster than a Pentium.
Re:Lies, damned lies and statistics... (Score:1)
As a logic designer I laugh! (Score:1)
My friend found that an FPGA makes a good addition to a processor for things like rendering and photoshop/gimp filters. He found that on dedicated repetitive tasks an FPGA is pretty good.
"There is no spoon" - Neo, The Matrix
"SPOOOOOOOOON!" - The Tick, The Tick
Re:What's with the whole "flat earth" thing? (Score:1)
Re:I think I see what they are trying to do. (Score:1)
What do you think Solaris, AIX, IRIX, Digital Unix, HP/UX and all of them are? They are Unix OSes and they are also proprietary products. WinNT is a proprietary product as well. I can't see what doesn't make sense there; it sounds as if you were implying something is either Unix or proprietary, not both.
Alejo.
Re:As a logic designer I laugh! (Score:1)
First of all, as several other people have mentioned, some FPGAs can be reconfigured 1000 times a second today.
Second, yes, it is stupid to emulate an existing CPU design instruction by instruction. But in any typical working set consisting of an OS and any number of applications, there will be code "hotspots".
That is, tight loops that are executed a lot more often than anything else. There will also be even more cases of instruction sequences occurring in somewhat less often executed loops, all over the place. All in all, there's always some operations and sequences of operations that are more common than others.
So instead of just emulating a generic CPU, you reconfigure the FPGA to handle the instruction sequences that take up most of the execution time at the moment directly in hardware.
I've had programs where 80% of the processing was string compares. And you've mentioned the other obvious examples: rendering, audio processing.
The point in this case is: Yes, a specially configured FPGA will always be more efficient FOR THAT PARTICULAR TASK. But how many people create FPGA configurations for their applications?
However, this concept (reconfiguring to handle commonly executed sequences) will AUTOMATICALLY optimize for the rendering cases etc. It probably won't do it as well as a hand-coded algorithm would. However, when you hand code an algorithm for an FPGA, you'll stick to only what is needed to speed up that particular task, while reconfiguring on the fly will optimize for whatever task you are currently running.
Just like Sun's HotSpot technology does special optimizations and JIT compilation on the Java bytecode executed most often. Only in this case it isn't assembly that is generated, but microcode for the FPGA.
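As a toy example of the kind of hotspot this would target (my own code, not theirs): if the profiler sees that a tight loop like the following accounts for most of the cycles, that compare sequence is exactly what you'd push into the gate array rather than re-decoding the same few instructions millions of times.

    #include <stddef.h>

    /* A typical "80% of the time is string compares" hotspot: a tight
       load/compare/branch loop that a reconfigurable machine could turn
       into one wide hardware operation instead of re-decoding it. */
    int str_eq(const char *a, const char *b)
    {
        size_t i = 0;
        while (a[i] == b[i]) {      /* the hot comparison */
            if (a[i] == '\0')
                return 1;           /* matched through the terminator */
            i++;
        }
        return 0;
    }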
Re:Old news... Then why am I so skeptical? (Score:2)
The REALLY bad thing is that if your problem changed even a tiny bit, the optimization program would have to start over (probably not from scratch, but still a HUGE amount of work).
Re:Leased Computers (Score:1)
but I really, really doubt that they could have a tenfold increase just from a download of software. In essence they're saying that in 5 years they can 'optimize' their software 10x.
While CPU densities may double every 18 months, I don't think software follows the same route.
---------------
Chad Okere
Build a 1 TeraOp machine for $100! (Score:1)
If you don't include interconnect delay, you can build your own 1 TeraOp supercomputer for about $100.
(1 / 1.6e-9 s per add x 1600 adders = 1e12 ops/sec)
Xilinx has come a long way since 1997. They now claim to have 1 million gate FPGAs that run quite a bit faster than the old XC4085XL-09.
But if you really want to go for the TeraOps record, I'd suggest Xilinx's latest Virtex parts, and a benchmark doing 2-bit binary NAND operations.
It may take some additional work to get such a chip to emulate WinNT, but think of the press coverage your benchmark will get.
It's not general purpose (Score:1)
But it is a very special design. Reprogramming the FPGAs may be fast, but it is hard to program them to do a sequence of very different operations.
This is not unlike the Connection Machine (from Thinking Machines Corp.). A full CM has 64 thousand processors, but they can only do very specific tasks. If you program a CM to do matrix multiplication, it's lightning fast (or at least it was in the days of the CM). But if you run a Perl interpreter, or any other not completely trivial or simple piece of code on it (matrix multiplication _is_ trivial and simple), you will be _very_ disappointed.
Of course these things are justified. Simple operations are done a lot in mathematical modelling. It will be very interesting to see what the supercomputer vendors can make of a bunch of these FPGA boxes, wired to some standard processor boxes (to do the non-trivial stuff).
But don't think for a second that we will be putting these things on the desktop and have them running ``normal'' applications at a speed that is even comparable to a PII.
Re:Uhh... (Score:1)
Complete waste of money IFF your intention is to play Quake 3. NOT a complete waste of money if you want to do many other tasks that computers are good at.
Re:NT is native on the alpha (Score:1)
Valuable computing cycles (Score:1)
Re:Lies, damned lies and statistics... (Score:2)
What these guys are doing is fairly banal compared to the more interesting research being proposed.
The speed claims that they make are based on large arrays of simple adder circuits, doing no real useful work.
I wouldn't say it was a con, but it is a lot of marketing hype and misinformation from what I can see.
The only really interesting thing about their system is that they took massive amounts of knackered FPGAs and found a way to make a useful system from them. This is important if they can use it to hugely increase the usable yield of such devices. It also means the systems can be very cheap.
The FPGAs they use aren't really suitable for Genetic Algorithm type exploration of configurations, as they aren't tolerant of incorrect configurations. The XC6200 from Xilinx is one of the few devices that can take erroneous bitstreams without shorting out. For this device, interesting stuff is being done evolving the basic logic structures.
However, the research into that depends on parasitics and temperature effects, which are all the things that digital design has classically been trying to suppress and remove from the design process. That makes it more of a niche market, especially if you can't just re-use the bitstream you've developed on another chip, as it'll have different characteristics, even across the same process batch.
But reconfigurable computing is a technology whose time has come. It isn't even a matter of when; it is a case of 'how much' will be in the next generation of systems. You'll be seeing a lot more systems with embedded FPGAs in the future, providing application-specific logic when and where it is needed.
Starbridge vs. Transmeta (Score:1)
Did you read the bios of the Star Bridge guys? The president was a car salesman (the bio spends much space bragging about his ability to build cars from a young age). The CTO, who is supposedly doing all of the technical work, doesn't have any references other than the typical "whiz kid who has been programming computers with one hand tied behind his back since he was 6 months old" type stuff.
Transmeta, on the other hand, is run by a former Sun executive, backed by a Microsoft cofounder, and employs a gaggle of engineers with awesome track records (a la Linus).
Starbridge may very well be the next greatest thing, but there is considerable reason to doubt that they will amount to anything. Transmeta may not be the next best thing, but they've got as good a chance as anyone to do something interesting.
Re:Massive parallelism - still a long way to go (Score:2)
A good language makes it easier for the programmer to specify parallelism and easier for the compiler to exploit the parallelism, but in the end, it is a matter of program design (and I wouldn't hope for a significant change of this situation in the near future).
Chilli
PS: I happen to know, as I have written a PhD thesis and a number of research papers in this area. (You can get the stuff from my Web page, if you are interested. There is also a compiler project [tsukuba.ac.jp] targeting massively parallel systems.)
The article is misleading... (Score:1)
The computer that's 62,000 times faster (in theory, as has been pointed out already) is several million dollars. It doesn't give specs for the PC-like computer; it just says that it's "like today's supercomputers". Disappointing. But I suppose it will still be interesting to see if their PC is any good...
Re:Libel (Score:1)
Given that I live in Europe, if I had made the above claim and the company decided to sue, is there anything those guys could do to me?
My opinion? FRAUD! SCAM!
Re:Give me a year and a half... (Score:1)
1) Find VC (if your ideas are good)
2) Find a location (people can be convinced that this will be Something Big(tm))
3) Build fabrication facilities (considering Xilinx makes the chips, this won't be difficult either)
Getting into the swing of production is the important part.
Move along, nothing to see here... (Score:1)
I live not too far away from Star Bridge Systems. If there really were major developments, I would read about them in local newspapers more than once a year.
Whoah! I'm salivating (Score:2)
BTW... the point of the earlier article was an announcement of the company's new HAL systems. This one is reporting the news that they are building PCs with this technology too. And they run Windows NT under emulation mode. Wonder if that means they run Linux. Probably does, since it would have to be Intel emulation rather than Windows emulation. So they would probably be quite useful, and easily integrated into current applications. Can't see how switches and routers could possibly have a problem integrating. They seldom closely resemble the systems that they are communicating with anyway.
Transmeta anyone? (Score:1)
Deepak Saxena
Project Director, Linux Demo Day '99
ummm.....hmmmm... (Score:1)
Sig? Who needs a fucking sig with a name like this!
Not the Time to Buy (Score:2)
----
Re:It will not be that useful in the short-term (Score:1)
3 orders of magnitude? (Score:1)
Additionally, it's the HAL system that's supposed to be up to 60,000 times faster, not this one.
I don't know if I believe this, as they say they're going to focus on the supercomputers because they somehow couldn't make money on the home computers. If they sell 100 supercomputers a year for a maximum of $26 million each, that's only gross revenue of $2.6 billion. Couldn't they give themselves a profit margin of 50% (not great considering the supposed 1000x improvement in price/performance), and sell a lot of PCs and make more than this in a year? At the very least, they should have investors galore trying to give them enough money to do this, or to hire enough people/places to focus on both the HAL and home use.
"Reconfigurable Computing?" (Score:1)
Second, if programs running on the amorphous processor themselves need to morph, what's changing the processor's configuration?
Now, I believe that FPGA actually has an application, although dynamic processor configuration is not truly its niche. However, suppose there is Flash ROM on a supporting BIOS that will configure the processor prior to bootup?
This would provide us with a definitely novel idea: a processor that can be hacked as easily as a kernel.
The supporting BIOS itself would be accessible from the operating system, so that redesigning the default configuration could be done from inside the operating system. (And processor upgrades would be performed via software upgrades.)
Another possibility would be to allow a processor capable of running virtual machines directly, as opposed to software emulation. This is possibly what they were hinting at when they mentioned x86 compatibility similar to the Alpha.
I believe this is actually a possibility about what Transmeta is up to. In fact, the two patents that Transmeta took out might actually involve the error-correction and programming methods of this type of processor.
But one thing's for certain, I believe this WILL have immense impact in the next three years.
******* DISCLAIMER *******
I am a software type, and a user. Men run screaming if I ever wrap my fingers around a soldering iron's handle. I am not qualified to actually understand this any more than I can tell what a circuit board will do by looking at it (without the printed info on the chips). I am not guaranteed to know exactly what I'm talking about.
Okay, anyone want to... correct my ignorance?
--
Re:Old news... Then why am I so skeptical? (Score:1)
If you for instance have an application that does string compares all over the place, you'd need to be able to recognize its inner loop, and configure part of the hardware to do the same operation without decoding the same few instructions over and over (that is, you decode them once, find out that this should be handled by the special string compare hardware, and off it goes).
You'd need a good profiler in hardware that finds code hotspots and that tells the optimizer which code would be most beneficial to "compile" into microcode for the FPGAs.
Let's just say that the software isn't really the problem here. I'm more reserved about their ability to deliver on the actual hardware side (especially with regards to speed; I don't doubt their concept works in theory, but will it really be as fast as they claim?).
Re:Lies, damned lies and statistics... (Score:2)
Right now, because of all the erroneous information they've released, my guess is they're high tech snake oil salesmen. I doubt very much that they coined the term Reconfigurable Computing. It's been in fairly common usage for a while. Their claim that it outperforms IBM's Blue Pacific, with the caveat 'oh, we ran a different performance measure so direct comparisons are difficult', is a huge understatement. IBM tested their machine doing real work, real code, albeit on their site rather than the customer site. Star Bridge tested theirs running useless code perfectly chosen to make their machine look best.
The question isn't whether this machine will work, the question is if it even exists.
These are the same guys.... (Score:1)
Also remember there will be a huge penalty associated with reprogramming the FPGA. Based on the specs of the devices they are using, I would say several hours.
Wait a minute! (Score:1)
If I'm completely out of my mind or am making a fool of myself (because I haven't read the article), please bear with me.
Re:FPGA supercomputing? (Score:1)
I doubt the system would even know about context switches, or about the OS at all.
My understanding of the idea is as follows:
For the hardware, at any given time, you have a working set. The working set is all the code that belongs to programs that are currently running.
So the CPU profiles the code that is executed. It won't know about context switches or OSes or application boundaries at all. All it will know is that at positions X, Y, Z in memory there is code that accounted for, say, 70% of the total execution time in the last minute.
The optimizer assumes that this code will keep running for a while longer. It then examines the code at those locations and generates specific microcode for the FPGAs to handle those cases.
Thus, the longer a CPU intensive process runs, the more time the optimizer would spend on it.
The more diversity in what you process, the more generic the optimizations will have to be to get a net advantage. If you switch programs every second, then no specific parts of any program will influence the execution time spent in any set of instructions that much, and time will be spent on optimizing simple, common sets of instructions.
Needless to say, the more specialized your applications are, the more it will be able to optimize for speed.
And programs that are long running will have more of an impact on optimizations than applications that quit after a second.
Thus for instance the OS and system libraries will likely be heavily optimized.
And if the system is good, it will optimize short generic instruction sequences first, not highly specific code paths.
Oh, and the point is that no special compiler would be needed. You just compile into any instruction set that the system is configured for, and the system itself would then optimize the microcode for that instruction set.
Actually, to benefit more from this system, creating a simpler, higher level bytecode would probably be a great benefit (and simplify compilers...), since it would be a lot easier for the optimizer to generate good microcode for a small set of high level constructs than for some low level machine code (in the same way as it is a lot more difficult to efficiently translate from assembly for one CPU to assembly for another than it is to translate from a high level language to assembly for any of the CPUs).
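If it helps, here's a very rough C sketch of the profiling half of that idea (entirely my own guess at the mechanism, not Star Bridge's design): count hits per code block, periodically pick the hottest one, and hand it to whatever generates the FPGA configuration.

    #include <stdint.h>
    #include <string.h>

    #define NBLOCKS 4096

    static uint32_t hit_count[NBLOCKS];

    /* Conceptually called by the hardware on every basic-block entry. */
    void profile_hit(uintptr_t block_addr)
    {
        hit_count[(block_addr >> 4) % NBLOCKS]++;
    }

    /* Periodically pick the hottest block and hand it to whatever turns
       code into an FPGA configuration; a real system would also track the
       actual addresses, not just a hashed bucket. */
    int hottest_block(void)
    {
        int best = 0;
        for (int i = 1; i < NBLOCKS; i++)
            if (hit_count[i] > hit_count[best])
                best = i;
        memset(hit_count, 0, sizeof hit_count);  /* start a new window */
        return best;
    }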
Re:Lies, damned lies and statistics... (Score:1)
However, if I were to replace it with my workstation, where I multitask programs AND start new tasks and stop old tasks continuously, then the time it would take to reprogram the FPGAs would be substantial (in comparison to the number of operations it could do in that amount of time).
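A toy model of that overhead, with numbers I made up (not vendor specs), shows how quickly the reprogramming time eats the advantage on a desktop-style workload:

    #include <stdio.h>

    /* Toy model: if every task switch forces an FPGA reprogram, how much
       of the wall clock is left for useful work? Both numbers are my own
       illustrative assumptions, not vendor specs. */
    int main(void)
    {
        double reconfig_s     = 0.050;  /* assume 50 ms per reconfiguration */
        double switches_per_s = 10.0;   /* assume 10 task switches a second */

        double overhead = reconfig_s * switches_per_s;
        double useful   = overhead < 1.0 ? 1.0 - overhead : 0.0;

        printf("time lost to reprogramming: %.0f%%\n", overhead * 100);
        printf("useful fraction of peak:    %.0f%%\n", useful * 100);
        return 0;
    }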
This technology isn't for the benefit of the average person (yet); unfortunately they wish it was, and mis-advertise it occasionally. One of their biggest partners is a cable company which wants to use their computers for cable encoding and such: a task that just needs a ton of FPGAs.
Enjoy
Umbro2
Already on slashdot once (Score:1)
Re:Old news (Score:1)
This is a new machine that costs $1000 from the same company.
That would make this a completely new story.
Re:Transmeta anyone? (Score:1)
FPGA supercomputing? (Score:1)
Finally, as I mentioned above, I/O is usually the bottleneck with high-speed computing. The FPGA design doesn't offer any compelling advantage there; it doesn't matter how much of the rendering pipeline it can do in hardware if the geometry data can't get there on time.
-_Quinn
Re:Calculate yourself, you don't need XXX ops/sec (Score:1)
think about the overclocking possibiliies.. (Score:1)
Skeptical (Score:1)
Re:link to old /. story (Score:1)
engineers typically do not produce press releases. and when marketing people do, they sometimes try to translate statements of a technical nature into something they believe will be easier to understand. in doing so, they screw things up. i have seen this first hand.
so i don't believe it's fair to judge any company harshly based on initial press releases. if anything, judge the morons that write the final copy.
- pal
No magic -- sorry (Score:3)
Think about it: both Intel and AMD (and everybody else) use FPGAs for prototyping their chips. If it was so much more efficient, why do they not release chips with this technology already?
As for the reprogramming component of this design: translating from low-level code to actual chip surface (which this is still very much about) is largely a manual process even for very simple circuits, largely because the available chip-compiler technologies simply aren't up to the job.
Besides, have any of you thought about the context-switch penalty of a computer that will have to reprogram its logic for every process?
I think I see what they are trying to do. (Score:2)
It's pretty obvious that these guys are a fraud. If they had a real product, they would have every major hardware company in the world lined up to buy them out for a billion dollars. Then they would have the resources to build more than the hundred machines a year they claim to be limited to.
Also, if they had a real product they would have some kind of proof. Like cracking RC5 keys. That would be a great proof! Build a supercomputer, design a distributed.net client for it, and then start breaking records with your demo machines.
So the real question is what these weasels are up to. I'm sure that they know that no one is dumb enough to hand over $26 million for a box full of vacuum tubes. They would have found out a long time ago that no one can award a $26 million contract without ironclad proof of technology. Besides, their web page doesn't even make sense. They say that they have a proprietary operating system, but then on their hardware page it says that it will run either UNIX (I guess any flavor!) or Windows NT.
I suspect that they may be trying to find suckers willing to get certified in their development language, "Viva". They list a training course [starbridgesystems.com] as being available. To participate, all you have to do is sign an NDA [starbridgesystems.com] and send it right in. Of course, all training will happen over the web. So you won't be able to tell what kind of machine you are taking your training on. Or complain to someone if you figure out the scam. So even if there are no suckers willing to hand over $26mm, they're probably hoping to find a thousand frustrated postal workers willing to spend $5,000 to be the first to be trained in this technology that will enable them to "ride a great tide of change as one paradigm of computing technology gives way to another". And once they are trained, they get to work for Star Bridge Systems! And they get paid in "valuable computing cycles". I'm not making this up, folks!
Re:60,000 times faster? (Score:1)
Over time some bit-slice technology has entered mainstream processor technology, to make them what they are today.
I think it is reasonable to assume that in 5-10 years we will see FPGA technology in mainstream computers.
Lies AND truth (Score:1)
Yes, they are stretching the truth a lot when they say 60,000 times a P-II 350. Yes, they are looking at only 4-bit operations. In general, they are talking about kicking serious butt when all you want to do is run massively parallel applications.
But more and more, the reason we are begging for more speed in our CPUs is massively parallel applications. Game rendering, voice recognition, audio mixing, etc. are all parallel applications.
What this thing is talking about doing is adapting _on the fly_ to whatever application you are running and reprogramming itself to maximize your use of the silicon. Today's chips are mostly superscalar; there are parts on the chip dedicated to certain operations: an integer add module, an integer mult module, a mem load module, a floating point add module, etc. When you play Quake, you stress out the floating point modules and leave the integer module twiddling its thumbs. All that silicon goes to waste, possibly only for a fraction of a second, but it could have performed a few MFLOPs if it had been reprogrammed to do FP.
Intel and AMD already recognize the need to handle massively parallel applications. This is where MMX and 3Dnow! are supposed to help.
That being said, we are looking at a whole new paradigm when we start using FPGAs. Today's languages are based on our current architecture paradigm (general purpose CPUs) and our applications are based on today's languages. To make a change to this will be a hell of a jump. To me, that is the best reason to start this stuff out in the supercomputer world where they have the money to rewrite software.
I for one am ready to buy some Xilinx stock. Worst case for them is that they sell only a few thousand more FPGAs and get their name in the paper. Best case is they sell millions and become the foundation for the next generation of computers.
26 million dollars?? who will pay that (Score:1)
If the hyper-machine costs me 26 million dollars to be 60,000 times faster than a PII, I'm not sure it will be worth it for the time being (18 months... so long).
Re:Give me a year and a half... (Score:1)
So these auto mechanics somehow figured out something that IBM's 291,000 employees were just overlooking? Not likely.
---
Can we say, 'Apple'?
Sure, Woz wasn't a mechanic, but he did rev things up in a garage in Cupertino.
- Darchmare
- Axis Mutatis, http://www.axismutatis.net
Re:FPGA supercomputing? (Score:1)
The problem, like I said earlier, is scale. Can this company make an FPGA complex enough that it gains more by doing hardware acceleration of certain chunks of the algorithm than it loses by switching between those accelerations? (Alternatively, is there enough complexity in the FPGA to have a large chunk of the rendering pipeline in hardware AND a general processor core to handle the rest of the code?)
We'll just have to wait and see.
-_Quinn
Re:FPGA supercomputing? (Score:1)
Regarding the idea that the processor itself will profile its working set: while it's possible, it won't work that well, and special compilers will be necessary for performance. (I'm compiling Quake3, and no matter what else happens, I need to keep this set of gates the same because we'll be returning to the rendering loop very shortly. I also need a generic set of gates to handle the game logic, over here, and I don't want anyone to try and optimize the game logic because it's not worth the effort.) How do I know it won't work well to have the processor itself handle the optimizations? Look at Intel: they've given up on hardware doing the optimizations because it doesn't work well enough to keep their processors busy. If you optimize in the compiler, you can present the FPGA with an area in RAM that contains the proper gate configuration for your program and you get the speedup immediately, without waiting for the optimizer to kick in (which it might never). Even doing on-the-fly optimization in software, where you've got resources to spare, is insanely difficult: look at how late Sun was with its HotSpot tech.
-_Quinn
Re:26 million dollars?? who will pay that (Score:1)
Governments will happily pay that much. Especially foreign governments. They can do their nuclear weapons research without ever being detected. If it's possible to cluster a few of those babies, a country like Afghanistan, Iraq, or Pakistan could play catch-up on 50 years of nuclear weapons research in a decade.
104 million dollars is a pittance to spend for that much knowledge.
Granted these won't be allowed to be exported for that reason, but who's to say one of those foreign powers won't send someone with enough cash to set up a dummy operation here in North America?
LK
Let's impress some VCs (Score:1)
And sure, sell the first system for $26 million, and the following systems for $1000 each? Supposedly they sold one system so far - I wonder who bought it. Either the company itself, or one of their VCs, probably. That's a neat way to raise money...
Security, the good ol' days (Score:1)
Now we will have to grapple with crackers who are trying to squooge the amorphous geometry of the processor to their advantage. Maybe you can wipe out your competitor not by stealing his data but by squooging his processor form into an inefficient shape, slowly gagging him out of business. It's a whole new range of opportunities for Bill.
Maybe the OS will come with some kind of FPGA Squooge Alert (you heard it here first) - which will dump a small error file (50MB will probably describe the amorphous shape) when a momentary configuration change, called a squooge, opens a whole new dimension of security problems.
Ugghhhh
I don't see it (Score:2)
Am I missing something?
This isn't right, is it? (Score:1)
What they said was that
- they will have a HAL Jr, that will fit in a suitcase, and will do 640 billion ops/sec
- they've "mapped out a series of hypercomputer systems, ranging in performance from the HAL-10GrW1, capable of conducting 10 billion floating-point operations per second, to a HAL-100TrW1, which conducts 100 trillion floating point operations per second"
Meaning: the supercomputer will eventually go up to 100 trillion ops/sec, but the PC is only 100 billion.
Now, as I said, could someone tell me how much faster 100 billion ops a second is than a computer today? But I'm going to try to figure it out without really knowing.
If I assume the 100 trillion one is the one that's 60,000 times faster than today's computer, then would the 100 billion one be 60 times faster? If they can release something 60 times faster than a P2 450 in 18 months that would still be damn good, IMHO.
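For what it's worth, here's the rough arithmetic, assuming a PII-450 retires very roughly one simple op per cycle (my simplification, not a figure from the article):

    #include <stdio.h>

    /* Back-of-envelope comparison of the quoted 100 billion ops/sec figure
       against a PII-450, assuming ~1 simple op per clock (a big
       simplification on my part). */
    int main(void)
    {
        double pc_box_ops = 100e9;  /* quoted figure for the PC-class box    */
        double pii450_ops = 450e6;  /* assumed: ~one op per cycle at 450 MHz */

        printf("rough speedup: %.0fx\n", pc_box_ops / pii450_ops);  /* ~222x */
        return 0;
    }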