ATI's Stream Computing on the Way 129
SQLGuru writes to tell us that ATI has announced plans to release a new graphics product that could provide a shake-up for high performance computing. From the article: "ATI has invited reporters to a Sept. 29 event in San Francisco at which it will reveal 'a new class of processing known as Stream Computing.' The company has refused to divulge much more about the event other than the vague 'stream computing' reference. The Register, however, has learned that a product called FireStream will likely be the star of the show. The FireStream product marks ATI's most concerted effort to date in the world of GPGPUs, or general-purpose graphics processing units. Ignore the acronym hell for a moment because this gear is simple to understand. GPGPU backers just want to take graphics chips from the likes of ATI and Nvidia and tweak them to handle software that normally runs on mainstream server and desktop processors."
World beyond x86 (Score:2, Insightful)
Re:World beyond x86 (Score:4, Funny)
If you're gonna beef up a GPU and make it more general you might as well call it a Cell
NEXT!
Tom
Re: (Score:2)
Graphics card manufacturers have yet to cease pumping the 8hit. It is, however, peculiar that ATI would announce a technology that could potentially devalue its parent company's (AMD's) products. I suppose next AMD will announce more graphics processing capabilities on their CPUs. What's good for the goose, right?
Re: (Score:3)
And all based on, and connected through, HyperTransport.
Add-ons aren't enough. (Score:1)
Re: (Score:3, Insightful)
Oh really? Then perhaps you'd care to clue the rest of us in? I see very little impact from the x86's VLE instruction set. Only if you make assumptions about the underlying core based on the instruction set (which would not be a wise thing to do) could I see that VLE as an issue.
Re: (Score:1)
Even IA64? (Score:1)
Re:World beyond x86 (Score:5, Informative)
NVidia's current market cap [yahoo.com]: US$ 10.83B
And that's assuming Intel won't have to write down a ton of their current inventory (all their old Netburst crap). They'd have to issue a ton of new stock to pay for the purchase - I don't think their shareholders would go for it.
Re: (Score:3, Insightful)
To control it they only need $5.42B.
Re: (Score:2)
Re: (Score:2)
it could happen.. not that i think it ever will
Re: (Score:2)
This makes no sense. Your logic is:
1) x86 sucks. Intel makes x86.
2) Graphics card makers are doing great stuff. NVidia is a graphics card maker.
3) Intel designed the x86, so therefore Intel's product designs suck.
4) NVidia is making cool stuff, so NVidia's designs are good.
Your conclusion: Intel should buy NVidia so innovation can start.
Your conclusion is in dire
Yeah, Intel did that. (Score:1)
Doesn't it matter that Intel's graphics are lame?
Re:Yeah, Intel did that. (Score:4, Interesting)
For most uses you don't need fast 3D graphics anyway. You just need the features. Or want them. Intel graphics will be enough to give Linux users their cutesy Xgl desktop with shadows and warping and blah blah blah, and that will be enough to sell a bunch of Intel cards solely because they have open source drivers. In fact, my goal in future servers will be to get Intel integrated graphics so that I can have the open source drivers.
On a desktop I don't care so much about whether drivers are open source or not. On a server, I care very much. I can use another desktop or desktop OS and get the same functionality, but I might not be able to conveniently jump over to another server.
Well go Intel then. (Score:1)
Re: (Score:2)
Some newer models even have a direct DVI output. And for the lower-end machines you could always get a card for 30 bucks which enables DVI output through the ADD slot.
Dual link? Dual monitor? (Score:1)
Re: (Score:2)
I also don't think they offer dual-link DVI, but honestly, I didn't find any facts regarding this (thank you for those wonderfully detailed product descriptions).
However, these products are intended for the corporate market. I've never seen a demand for dual DVI or dual screen without a demand for a discrete graphics card (think CAD, etc.).
Thanks (Score:1)
SLI (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Intel graphics are perhaps lame for gamers who always need the latest 3D performance. But for everybody else - which is the majority of users, including PCs used at work by most - it's more than sufficient. It'll even play some games just fine (just not the very latest with high details).
I got Intel GMA video on one of my motherboards, and I must say I'm VERY pleased with it. Yes, it uses system memory - all of 8MB, leaving "only" 2040MB to the system. It has some of th
Hear of IP's? (Score:1)
Did a double take on that title... (Score:3, Funny)
Re: (Score:2)
http://www.youtube.com/watch?v=HIgsqYJmtJs&eurl= [youtube.com]
Re: (Score:1)
I'm cuckoo for cuckoo clocks, cuckoo for cuckoo clocks, cuckoo. . .
Oh, sorry. Anyway, it's true you have to do some work to reset them, but I'm not actually averse to work. You can lift barbells to generate waste heat, or you can lift cement blocks and sand to generate electricity.
KFG
Re: (Score:2)
Wow, that was disappointing. I was hoping to see something that somehow used the steam for computation (using "valvsistors" or something). : (
Re: (Score:2)
Re: (Score:1)
Stream eh? (Score:3, Funny)
Re: (Score:1)
AMD's 4X4 (Score:1)
Re: (Score:2)
Great... (Score:2)
Re: (Score:2)
RTFA (Score:1, Informative)
Does still leave room for doubt though.
Re: (Score:2)
Re: (Score:2)
The amount of time ATi devotes to their Linux drivers is beside the point. All we want are specs so that we can write our own drivers!
Besides, what about all the people using BSD (and the 3 people using HURD) that are completely unsupported by ATi?
GPGPUs... (Score:2, Funny)
Re:GPGPUs... (Score:5, Insightful)
Data projections (Score:2)
I've always thought of data projection queries in terms of Z-buffer processing. It would be interesting to see what a GPGPU could do for such queries.
For example, pricing, products, and services often have start and end date-times. Given a particular date, the effective pricing is the most recently started data set that hasn't expired yet. It sounds easy in English, but it tends to be a rather brutal set of unions and hierarchical-join queries to implement.
But given the I/O intensive nature of such pro
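As a rough illustration of how that effective-date lookup could be pushed onto a GPU, here is a minimal CUDA-style kernel sketch. CUDA, the data layout, and every name below are assumptions for illustration only, not anything from the article: one thread per product scans that product's price rows and keeps the most recently started row that hasn't expired yet.

#include <cuda_runtime.h>

// Hypothetical layout (an assumption): price rows are grouped by product,
// with row_offset[p] .. row_offset[p+1] marking product p's slice.
// Timestamps are non-negative epoch seconds.
__global__ void effective_price(const int *row_offset,
                                const long long *start_time,
                                const long long *end_time,
                                const float *price,
                                long long query_time,
                                int num_products,
                                float *out_price)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= num_products) return;

    long long best_start = -1;   // no candidate seen yet
    float best_price = -1.0f;    // sentinel: no effective price found

    // "Most recently started data set that hasn't expired yet."
    for (int r = row_offset[p]; r < row_offset[p + 1]; ++r) {
        if (start_time[r] <= query_time &&
            end_time[r]   >  query_time &&
            start_time[r] >  best_start) {
            best_start = start_time[r];
            best_price = price[r];
        }
    }
    out_price[p] = best_price;
}

// Launch example (host side):
//   effective_price<<<(num_products + 255) / 256, 256>>>(d_offsets, d_start,
//       d_end, d_price, query_time, num_products, d_out);

Every product is evaluated in parallel, which is exactly the kind of wide, uniform work a GPU is built for.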
Re: (Score:2, Insightful)
Re: (Score:3, Insightful)
So there are, for example, some specific common database operations that could be significantly more efficient with some optimized hardware. It's just that there's not necessarily a big enough market to design, test, produce, and sell cards designed just for that and make a profit.
Database Benefits (Score:2)
The benefit comes in that the GPU is tightly connected to a bunch of fast RAM that isn't being competed over by the general purpose CPU(s).
So, you throw 128MB or such of data onto the GPU, and you can get it sorted several times faster than a regular CPU could do it. Presumab
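A concrete way this kind of GPU sorting is usually done is a bitonic sorting network, which keeps every comparison within a stage independent. A minimal sketch in CUDA terms (an assumption; GPGPU sorts of this era were actually written as pixel shaders), for a power-of-two-sized array already resident in GPU memory:

#include <cuda_runtime.h>

// One compare-and-exchange stage of a bitonic sorting network.
// Every thread handles one element; all comparisons in a stage run in parallel.
__global__ void bitonic_step(float *data, int j, int k)
{
    unsigned int i   = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int ixj = i ^ j;                  // partner element for this thread
    if (ixj > i) {
        bool ascending    = ((i & k) == 0);    // sort direction of this subsequence
        bool out_of_order = ascending ? (data[i] > data[ixj])
                                      : (data[i] < data[ixj]);
        if (out_of_order) {
            float tmp = data[i];
            data[i]   = data[ixj];
            data[ixj] = tmp;
        }
    }
}

// Host driver: n must be a power of two and a multiple of the block size here.
void bitonic_sort(float *d_data, int n)
{
    const int threads = 256;
    for (int k = 2; k <= n; k <<= 1)
        for (int j = k >> 1; j > 0; j >>= 1)
            bitonic_step<<<n / threads, threads>>>(d_data, j, k);
    cudaDeviceSynchronize();
}

The network does O(n log^2 n) comparisons, more than a CPU quicksort, but the parallelism and the on-card memory bandwidth are what let it come out ahead.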
Re: (Score:1)
Maxim
Re: (Score:2)
Re: (Score:1)
Re: (Score:2, Funny)
They lack gravitational potential energy. Yeah, you can try to play around with extracting energy from the temperature gradients of a lake or ocean (ponds don't have any worth worrying about), but it's just easier to stick a turbine in a stream to make the computer go; and unlike my heavy-piston-on-a-rope-floating-in-a-leaky-sand-filled-cylinder engines, the Sun carries the water back up to the upper reservoir for you.
KFG
Why would something like this be useful? (Score:1, Funny)
Re: (Score:2)
Re: (Score:2)
Then you have to figure out how to stick them all on one die... and have them talk to each other...
And while you can make some things smaller,
a video card doesn't have to fit in a 1-2 in^2 area... it can be a full-length ATX card with GBs of fast RAM and multi-core specialized processors...
a CPU
Re: (Score:2)
I'm not sure why some variant of this argument wouldn't hold true for GPUs, but there is a practical limit to how many cores a CPU can have on-die.
With the first set of dual-core CPUs, each CPU had its own L1 and L2 cache. However, this isn't optimal -- basically, it heavily favors scheduling processes on the last CPU they ran on, to increase the likelihood that needed memory is already in the L1/L2 cache. So, most current dual-core architectures use individual L1 caches per CPU, but a shared L2 cach
Going the way of the FPU? (Score:2)
I predict that they will eventually go the way of the FPU.
Re: (Score:1)
You mean back into the main CPU core where they (GPUs) came from in the first place?
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
All my life's a circle, sunrise and sundown. . .
And the wheel of reincarnation turns again.
KFG
Re: (Score:2)
GPGPU primer (Score:5, Informative)
There are lots of good sites that talk about GPGPU [gpgpu.org]. Wikipedia [wikipedia.org] has an okay article as well, and NVIDIA has a primer (PDF) [nvidia.com] on the subject. But the summary of this article is a bit overly broad.
GPGPU isn't about moving arbitrary processing to the GPU, rather it's about moving specific, computationally expensive computing to the massively parallel GPU.
Effectively, the core idea of GPGPU solutions is that you compute a 256x256 grid (or another granularity) of solutions entirely in one pass.
NVIDIA has several examples [nvidia.com] on their website, specifically the GPGPU Disease and GPGPU Fluid samples. The Mandelbrot computation they have there could also be considered an example. (More samples here [nvidia.com]).
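To make the "whole grid in one pass" idea concrete, here is a minimal Mandelbrot kernel sketched in CUDA (an assumption for readability; the NVIDIA samples linked above are written against the graphics pipeline, not this exact API). Each thread owns one pixel and iterates independently:

#include <cuda_runtime.h>

// One thread per pixel: iterate z = z^2 + c and record the escape count.
__global__ void mandelbrot(unsigned char *out, int width, int height, int max_iter)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float cr = -2.0f + 3.0f * x / width;     // map pixel to real part in [-2, 1]
    float ci = -1.5f + 3.0f * y / height;    // map pixel to imaginary part in [-1.5, 1.5]
    float zr = 0.0f, zi = 0.0f;
    int iter = 0;
    while (zr * zr + zi * zi < 4.0f && iter < max_iter) {
        float t = zr * zr - zi * zi + cr;
        zi = 2.0f * zr * zi + ci;
        zr = t;
        ++iter;
    }
    out[y * width + x] = (unsigned char)(255 * iter / max_iter);
}

// Launch example:
//   mandelbrot<<<dim3(width / 16, height / 16), dim3(16, 16)>>>(d_img, width, height, 256);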
GPGPU has already been utilized to perform very fast (comparable to the CPU) FFTs. In an article in GPU Gems 2 (a very good book if you're interested in doing GPGPU work), they indicate that a 1.8x speedup can be had over performing FFTs on the CPU. I've heard that there are now significantly faster implementations as well.
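For reference, on later NVIDIA hardware the same idea is packaged in the cuFFT library; a minimal sketch (assuming cuFFT, which is a later, vendor-specific route rather than the shader-based implementation GPU Gems 2 describes):

#include <cuda_runtime.h>
#include <cufft.h>
// compile with: nvcc fft.cu -lcufft

int main()
{
    const int N = 1 << 20;                       // 1M-point complex transform
    cufftComplex *d_signal;
    cudaMalloc(&d_signal, sizeof(cufftComplex) * N);
    // ... copy the input signal into d_signal with cudaMemcpy ...

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);         // single 1-D complex-to-complex FFT
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);   // in-place forward transform
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(d_signal);
    return 0;
}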
Re: (Score:2)
The amount of processing and bandwidth needed for these new telescopes coming online is staggering. For LOFAR we're using an IBM BlueGene with 12,000 cores (Stella, no. 12 in the Top500), using a 144Gb/s connection, and for SKA the numbers are going to be orders of magnitude larger. The pos
Hooray for vague marketing terms! (Score:2)
SEGA!!!
Well, here's my issue (Score:1)
Or are there not as many of those tasks as we were led to believe?
Re: (Score:2)
There's plenty of those tasks. There's just not a heck of a lot to be done about it. The apparent recent focus on parallel tasks is partly because chip makers are running out of easy ways to make non-parallel tasks any faster. But it's relatively easy to do the same task more times in parallel at the same speed. Which probably doesn't help until the software gets re-written to take advantage of that, assuming it can be.
On the other hand, my impression is that a lot of tasks that seem like they can't bene
I blame AGP for the delay (Score:2)
We could have had this long ago if not for the fact that the AGP bus is slow as hell when moving data from graphics card memory back to main memory/CPU.
Sure, you could do computations in graphics memory, if you didn't mind waiting forever to read the results back.
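The round trip the parent is complaining about looks like this in later terms (a CUDA-flavored sketch, which is an assumption; the 2006-era path was a graphics API over AGP, where the device-to-host copy was the painful direction):

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Toy kernel: square each element in place on the card.
__global__ void square(float *v, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= v[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = (float)i;

    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);      // upload: historically the fast direction
    square<<<(n + 255) / 256, 256>>>(d, n);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);      // readback: the direction AGP crippled
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("read back %zu bytes in %.3f ms\n", bytes, ms);

    cudaFree(d);
    free(h);
    return 0;
}

Over PCI Express the readback is symmetric enough that keeping intermediate results on the card and only pulling back the final answer makes GPGPU practical.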
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
Good for them (Score:4, Insightful)
In the original PC, the VGA interface gave the CPU a direct window into the video memory. Your CPU was your GPU as well - the only thing the graphics card did was convert the raster of bytes in a certain location to a signal recognizable by the monitor. As such, the hardware wasn't optimized for the kinds of operations that would become typical in the games that followed. So video card manufacturers began a mitigation strategy which involved moving the computationally complex parts of rendering off to the video card, where the onboard processor could render much more quickly and more efficiently than the CPU itself. The drawback of this approach was that to take full advantage of your video hardware you had to run a certain buggy, unstable, and rather insecure operating system. Typically, the drivers were written only for Windows. Reinstalling Windows became a semi-annual ritual for serious gamers.
But, if ATI is successful in standardizing the GPGPU architecture, we may be able to take advantage of the video hardware on platforms other than Windows. While Linux has typically suffered a dearth of FPS games because of the lack of good hardware rendering support in the past, this has the potential to make Linux the next serious gamer's platform.
Which is a good thing, IMHO.
Re: (Score:2)
It's totally Microsoft's fault that Linux is not interesting to most gamers, or that GPL doesn't allow proprietary drivers to be used on Linux. Yea. Totally Microsoft's fault.
As for gamers, they have lots of stupid rituals
Re: (Score:1)
It depends on what you mean by "successful." If you mean market penetration, then not much. But I look at "successful" as whether it does what I need it to do and does it better.
For starters, GNU Lilypond [lilypond.org], which is light-years ahead of software like Finale in its flexibility. True, it doesn't (yet) have all of
Re: (Score:2)
Re: (Score:1, Insightful)
So video card manufacturers began a mitigation strategy which involved moving the computationally complex parts of rendering off to the video card,
Oh, you mean those massive VME bus cards that SGI created, and the IRIS Graphics Library they also created to access the power of their graphics cards.
The drawback of this approach was that to take full advantage of your video hardware you had to run a certain buggy, unstable, and rather insecure operating system.
Oh, you mean after SGI created an open vers
Re: (Score:2)
You're kidding, right?
I hate to feed the trolls, but the counterargument is so irresistible: How many gamers would have seriously considered buying the systems you mention?
I know, I know. I must have had it hard because I had to settle for a $2000 PC back when we had to walk to school, both ways, in the snow...
Good graphics hardware has always been available to those who could afford to pay an engineer's salary. But even the average engineer would think twice about buying the systems you mention
Stream computing = Citrix on steroids? (Score:2)
I'm sold, if they can find a way... (Score:1, Offtopic)
And Who Really Needs This? (Score:2)
Re: (Score:2)
Core 2 Duo? Look at those of us who use VMware, any sort of video rendering, or multi-tasking on a general level and we kill those processors in a heartbeat.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
I traded nerd for Geek a long time ago. Gets more girls this way.
Re: (Score:1)
(And your original comment DEFINITELY means you are not a geek
Misplaced significance. (Score:2)
Does this GPU require water cooling? (Score:2)
Stream processing is NOT new (Score:3, Interesting)
Core Graphics? (Score:2)
This sounds like the GPU-based programs that OS X uses to perform Core Graphics and Core Video operations.
New Generation (Score:2, Interesting)
As far as I can tell, it's a copy of Accelerator (Score:1)
ftp://ftp.research.microsoft.com/pub/tr/TR-2005-184.pdf [microsoft.com]
published last year.