AMD Betting Future On the GPGPU
arcticstoat writes with an interview in bit-tech: "AMD's manager of Fusion software marketing, Terry Makedon, revealed that 'AMD as a company made a very, very big bet when it purchased ATI: that things like OpenCL will succeed. I mean, we're betting everything on it.' He also added: 'I'll give you the fact that we don't have any major applications at this point that are going to revolutionize the industry and make people think "oh, I must have this," granted, but we're working very hard on it. Like I said, it's a big bet for us, and it's a bet that we're certain about.'"
Given how specialized the use case scenarios are (Score:4, Insightful)
for OpenCL, this sounds very dangerous. Dangerous as in "Remember this really cool company named SGI that made uber powerful and specialized computing platforms?"
Personally, I actually use things like OpenCL to do real time image processing (video motion analysis), but I don't know too many others in the industry that do, so I can't imagine their market is particularly large.
There must be some huge potential markets that just don't seem to come to mind for me...
Re: (Score:2)
The point of it is that most, if not all, computers made today have the potential for including OpenCL goodness. It doesn't mean that a particular user will need it, but it will be available for developers to tap. It might be something that the user only uses from time to time, like for video decoding/encoding, but if the hardware is already capable of handling a bit of that with minor design adjustments, there's little reason not to offer it.
Plus, 3D accelerators used to be just for games, and since pretty
Re:Given how specialized the use case scenarios ar (Score:5, Informative)
Mathematica 8 can use OpenCL (and CUDA). I think the new MATLAB can, too.
Re: (Score:2)
Maybe they're betting on governments buying video surveillance processing equipment...
Re: (Score:3)
Most of the corporate/government money for that is going into FPGA boxes that sit between the camera (or in the camera) and the network connection.
Re:Given how specialized the use case scenarios ar (Score:5, Informative)
I don't think he means OpenCL specifically. OpenCL is a tool that connects you to GPU hardware. GPU hardware is designed for a different problem than the CPU, so the two have different performance characteristics. In the not too distant future, heterogeneous multi-core chips that do both the CPU and GPU calculations of today will be mainstream, and there will be general-purpose computing tools (of which OpenCL, along with CUDA, is a relatively early generation) to access that hardware.
While I don't agree with the idea that this is the entire future, it's certainly part of it. Right now you can have 1200 mm^2 of top-tier parts in a computer, roughly split half and half CPU/GPU - but not every machine needs that, and it's hard to cool much more than that. So long as there's a market which maximizes performance and uses all of that, CPU/GPU integration will not be total. But there will be 'enough' performance, especially in mobile and non-top-end machines, in 600-800 mm^2 of space, and that can be a single IC package combining CPU and GPU.
It is, I suppose, a bit like the integration of the math co-processor into the CPU two decades ago. GPUs are basically just really big co-processors, and eventually all that streaming, floating-point mathy stuff will belong in the CPU. That transition doesn't even have to be painful: a 'cheap' Fusion product could be 4 CPU cores and 4 GPU cores, whereas an expensive product might be an 8-core CPU in one package and 8 cores of GPU power on a separate card, but otherwise the same parts (with the same programming API). Unified memory will probably eventually obsolete the dedicated GPU, but GPU RAM is designed for streaming, in-order operations, whereas CPU RAM is for out-of-order, random memory block grabs. RAM that does either in-order or out-of-order access equally well (or at least well enough) would solve that problem, but architecturally I would have GPU RAM as a *copy* of the piece of memory that the GPU portion of a Fusion part will talk to.
As to what the huge market is: OpenCL gives you easier access to the whole rendering subsystem for non-rendering purposes. So your 'killer' apps are laptops, tablets, mobile phones, low-powered desktops - really, anything anyone does any sort of 3D on (games, Windows 7, that sort of thing), so basically everything, all your drawing tools.
The strategy is poorly articulated with OpenCL, but I see where they're going. I'm not sure what Intel is doing in this direction though, which will probably be the deciding factor, and nVIDIA, rather than positioning for a buyout (by Intel), seems to be ready to jump ship to SoC/ARM-type products. Intel doesn't seem to have the GPU know-how to make a good combined product, but they can certainly try and fix that.
Re: (Score:2)
I was also thinking about the analogy of the GPU to the math co-processor. But I think the future is kind of the reverse, where processor packages are different specialized and generic architectures mixed and matched both on a single chip and on motherboards that evolve into backplanes. Expansion slots are more or less becoming processing slots. Sure you can plug peripherals into them, but by and large peripherals have all gone external. A desktop motherboard is becoming little more than a backplane with an integ
Re: (Score:3)
Right now those 'fusion' type products are pretty terrible, I'm not sure decent RAM would be enough to save them, at least if you want decent gaming performance. Also, once you stick the CPU and GPU together you can pipe data directly from one to the other, so the RAM side of things probably changes a bit, but I'm not sure how. I've migrated to AI temporarily from the world of GPGPU so I'm not fully up on how, in detail, they need to talk to each other.
AMD may be betting the farm on a strategy that involv
Re: (Score:2)
Just like with 3DNow!, SSE, x64 and all the other modifications that CPUs have had, most programmers won't do anything special to take advantage of the GPU. Libraries, either compiled in or shared at runtime, will be where GPUs get used. For example, we have crypto APIs that automatically use hardware AES support on CPUs that have it, and even dedicated FPGAs.
At the moment one of the biggest issues is the amount of time it takes to set up tasks for the GPU to perform. The data has to get from main RAM to vide
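On the earlier point about crypto APIs: here is a minimal, hedged sketch (not from any particular project) of what "the library picks the hardware for you" looks like in practice. The same OpenSSL EVP calls run on any x86 CPU; on a build of OpenSSL with AES-NI support, the library dispatches to the hardware instructions without the application changing a line. A GPU-backed math or codec library could hide OpenCL behind an interface in exactly the same way.

    /* Hedged illustration: the application asks for AES-128-CBC through the
     * generic EVP interface; whether that runs in software or on AES-NI is
     * the library's decision, not the caller's. */
    #include <openssl/evp.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned char key[16] = "0123456789abcde";  /* 15 chars + NUL = 16 bytes, demo only */
        unsigned char iv[16]  = {0};
        unsigned char msg[]   = "the same EVP call runs on any CPU";
        unsigned char out[sizeof(msg) + 16];        /* room for CBC padding */
        int len = 0, total = 0;

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &len, msg, sizeof(msg));
        total = len;
        EVP_EncryptFinal_ex(ctx, out + len, &len);
        total += len;
        EVP_CIPHER_CTX_free(ctx);

        printf("ciphertext bytes: %d\n", total);
        return 0;
    }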
Re: (Score:2)
That problem won't go away that easily. There is a fundamental reason why RAM nowadays is faster for sequential operations, and that reason isn't going away without a major technological breakthrough.
Re: (Score:2)
I don't believe you understand OpenCL's application domain, as you insinuate that it only applies to specialized use case scenarios. Even if you choose to ignore how widespread OpenCL is in domains such as games, you always have multimedia and graphics processing. Adding to this, there are countless people all around the world who desperately seek a "supercomputer in a box", which is what you get if you are suddenly able to use graphics cards for this. I happen to be one of these people who desperately
Re: (Score:2)
What they had going for them was NUMAlink-induced scale-up capacity, which they still have in their Itanium and x64 systems.
Re: (Score:2)
SGI had and maintained its niche market for workstation graphics processing, especially in FX houses. They never were competitors in the data centers, and for a long time, nothing really stood up against IRIX.
What killed SGI was an explosion in capabilities from the x86 market. Suddenly a maxed-out $3-4K machine from Dell with a decent NVidia or AMD card could beat the pants off the $10K+ machines SGI was making. In-house IRIX apps were ported to Linux or BSD, Windows apps skyrocketed in popularity, and
AMD lost that bet (Score:5, Informative)
AMD famously overpaid by 250% for ATI, then delayed any fusion products for 2 years, then wrote off all of the overpayment (which they had been required to carry as a "goodwill" asset). At that point, they lost the money.
Luckily for them, ATI was still good at its job, and kept up with nVidia in video HW, so AMD owned what ATI was, and no more. But their gamble on the synergy was a total bust. It cracked their financial structure and forced them to sell off their manufacturing plants, which drastically reduced their possible profitability.
What they have finally produced, years later, is a synergy, but of subunits that are substandard. This is not AMD's best CPU and ATI's best GPU melded into one delicious silicon-peanut-butter cup of awesomeness. It's still very, very easy to beat the performance of the combination with discrete parts.
And instead of leading the industry into this brave new sector, AMD gave its competition a massive head-start. So it's behind on GPGPU, and will probably never get the lead back. Not that its marketing department has a right to admit that.
Re:AMD lost that bet (Score:5, Insightful)
Of course it isn't the best GPU with the best CPU. It is a good CPU with a good GPU in a small, low-power package. It will be a long time before the top GPU is going to be fused with the top CPU. That price point is in an area where there are few buyers.
Fusion's first target is going to be small notebooks and nettops - the machines that many people buy for everyday use.
GPGPU's mainstream uses are going to be things like video transcoding and other applications that are going to be more and more useful to the average user.
For the big GPGPU powerhouse, just look to high-end discrete cards, just as high-end audio users still want discrete DSP-based audio cards. I am waiting to see AMD use HyperTransport as the CPU-GPU connection in the future for really high-end GPGPU systems like supercomputing clusters.
Re: (Score:2)
"It will be a long time before the top GPU is going to be fused with the top CPU. That price point is in an area where their are few buyers."
If AMD wants to stay in business instead of rummaging in dumpsters for customers, it will do exactly that, and discover that they can take major desktop and notebook market share by selling lower power and lower unit cost at higher combined performance.
But that's only if Intel and nVidia don't smack them down in that segment, because AMD's best stuff is not nearly as g
Re: (Score:2)
They are selling lower unit costs and higher power just in that segment. The i3 is the first target and then the i5. Most customers are using integrated graphics and want good-enough speed with long battery life. The fact is that most people want the best bang for the buck. The top end will always be served best by discrete GPUs. AMD's best stuff is every bit as good as nVidia in the graphics space and Intel doesn't even play in that space. AMD does well in the CPU space where they compete but Intel does ha
Re: (Score:2)
"The I3 is the first target and then the I5."
If they're targeting second-banana segments with their best-possible offer, they're aiming for continued marginalization.
This strategy will trade one AMD CPU out for one AMD GPU in. The number of additional sales they make on the integrated GPU will be a smaller fraction on top of that.
The money, btw, is still in the high end. Everything else is lower margin. Especially if it has an AMD sticker on it.
Re: (Score:2)
You speak as if Intel and nVidia are teaming against ATI when Intel would love nothing more than to eliminate nVidia so customers have to put up with their shitty GMA bullshit...
Re: (Score:2)
You speak as if Intel and nVidia are teaming against ATI when Intel would love nothing more than to eliminate nVidia so customers have to put up with their shitty GMA bullshit...
There's only one thing that they'd love more, and that would be to eliminate AMD. Intel competes with ATI for the low end integrated graphics market. nVidia makes nicer but more expensive stuff.
Re: (Score:3)
It will be a long time before the top GPU is going to be fused with the top CPU.
Yeah, a long time like never.
If you 'fused' the top GPU and the top CPU in the same package, you'd end up with a fused lump of silicon because you couldn't get the 500+W of heat out of the thing without exotic cooling.
And who, out of the performance fanatics who buy the top CPU or top GPU, would want to have to replace both of them at the same time when they could be two different devices?
Re: (Score:2)
If they can put the best in one core it would make a dandy basis for a game console. Probably it will have to go in two packages and connect via HT, though. That will increase cost but, as you say, make cooling possible.
Re: (Score:2)
It's begging from those from whom it differs. How does that usually work out for the beggar?
Re: (Score:3)
*WHOOSH*
Re: (Score:2)
Did anybody really think they'd meld the *best* GPU with the *best* CPU? This is beyond naive; it has never happened and will never happen.
What it *is* is the best integrated graphics/video, with an OK CPU. That combo is OK for 95% of users. Brazos is completely sold out due to much better sales than expected. The new APUs can have the same success, especially given Intel's over-emphasis on the CPU and sucky integrated GPUs.
Re: (Score:2)
Why would anyone want a 500W+ chip? In a laptop? That's just stupid. Even in a desktop, I wouldn't necessarily want a top end CPU/GPU chip on one die. That's a lot of heat and power concentrated in one spot. Ever wonder why there isn't a 6GHz CPU? That's why they didn't do this with top end parts. These integrated chips (no matter who is making them) are not intended to be top end products. It is simply not feasible.
AMD set their sights on the netbook/low end laptop market with their first release b
Re: (Score:2)
So, basically you are saying you are an Intel shill?
Actually the ATI revenue is what kept AMD in business the past 3 years.
Not to mention the 1.6 billion in settlement from Intel for copyright infringements.
Re: (Score:2)
>So, basically you are saying you are an Intel shill?
No, I'm saying I'm realistic about the disaster that is AMD.
>Actually the ATI revenue is what kept AMD in business the past 3 years.
That's what I said.
>Not to mention the 1.6 billion in settlement from Intel for copyright infringements.
For what? Dude. Seriously?
Re: (Score:2)
Luckily for them, ATI was still good at its job, and kept up with nVidia in video HW, so AMD owned what ATI was, and no more. But their gamble on the synergy was a total bust. It cracked their financial structure and forced them to sell off their manufacturing plants, which drastically reduced their possible profitability.
And how do you think ATI was able to be so good at its job? With help from AMD's engineers, patents, and processes. ATI's cards only started getting really good after the buyout; for instance, their idle power dropped by huge amounts after integrating AMD power-saving tech. It was years before nVidia had any decent cards with sub-50-watt idle power (let alone less than 10 watts), and it cost them market share. Avoiding a process disaster like nVidia's recall was no doubt also influenced by being part of
Re: (Score:2)
"Of course it's very easy to beat Fusion with discrete parts, if it weren't the morons designing the discrete parts would be fired."
You're the idiot. Of course they could have put AMD's best designs into a Fusion part at the same time they were developing them for discrete use. But they didn't, and thus produced a toy-computer chip instead of a world-beater.
"you forgot that part where Intel was caught using it's dominant market position to keep integrators from using AMD products."
I didn't forget it, I le
Re: (Score:2)
... if you know anything about manufacturing, the money from the plant comes from volume, not sales ...
All the time, our customers ask us, "How do you make money doing this?" The answer is simple: Volume. That's what we do.
Re: (Score:2)
do you have change for a rubber ningi?
What is the point? (Score:3, Insightful)
So will this make people's web apps and office programs run noticeably better?
Because that is what the vast majority of computers are being used for, even in the commercial sector. Computer hardware peaked for the average user around 2000. Now, as the article points out, we are sitting around waiting for better software*. AMD would be better off developing that software than pushing hardware for a need that mostly doesn't exist.
* Why is it that stuff like user agents and other forms of AI mostly disappeared from the scene in the 90's? We have the power now to run the things that everyone seemed to be working on back then.
Re: (Score:2)
My guess would be that the tasks people were envisioning for them got taken up by something else. Like google maybe.
I just don't think that things like your own private thing to crawl the web are what people want/need any more. It wouldn't be the first time someone has postulated some "ground breaking" technology
Re: (Score:2)
I would say part Google and part Twitter. Google for directed knowledge and Twitter for breaking news. Twitter uses a massive distributed organic supercomputer as its user agent.
Re: (Score:2)
* Why is it that stuff like user agents and other forms of AI mostly disappeared from the scene in the 90's? We have the power now to run the things that everyone seemed to be working on back then.
Because user agents are useless once you know how to use your own computer.
"Useragent search 'AI', display the 4 most interesting results in separate tabs."
- vs -
[ctrl+J] AI [enter] [click] [click] [click] [click]
Hint: middle button click == open in new tab [on FF].
P.S. "Google" <-- search engines ARE user agents! They spider the web, and determine the most relevant results. All you have to do is type your interest at the moment (or say it if you have voice activation software ala Android), a
Re: (Score:2)
Um... [ctrl+J] opens up the download window on Firefox.
Yeah, and the Germans didn't bomb Pearl Harbor either. Doesn't really change the point of the rant.
Re: (Score:2)
What it means is that you can have a gaming machine where the GPU is completely shut off when you're not actually gaming. There are definitely cards out there that will consume nearly as much power as the rest of the system. While they're somewhat unusual for most people, there are definitely cards out there that will use over 100 watts themselves, and that's without going the SLI route. And for a machine using that much power, you can still be talking about 50 watts being used on just normal tasks. Granted t
Re: (Score:2)
Yes it will. Most web browsers now use hardware acceleration.
Re: (Score:2)
Not on XP, not on Linux (at least not a properly working one), and I'm not sure about macOS.
Gonna be a small market in this realm for a while.
Re: (Score:2)
These are new computers so XP isn't an issue. MacOS does but they only run Intel for now. Linux we will see.
Re: (Score:2)
See IE9 and future versions of Firefox/Chrome that will include GPU accelerated graphics. There is definitely a benefit for the average user of web apps, or at least there is if they like graphics intensive sites. Flash can play back 1080p video on fairly low end machines as long as they have GPU video decoding now, and there is of course WebGL if that ever takes off.
I think people underestimate the hardware requirements of modern systems. Yeah, you can run that stuff on a Pentium 3 with 128MB RAM but why b
First Step... (Score:3)
Make it abundantly clear what you need to start development on this platform. Will it work on all new computers, or just a rare AMD chipset, leaving my code worthless on all other machines?
Re: (Score:3)
To use OpenCL, you need any device that has an OpenCL driver written for it, and the driver. These devices include, but are not limited to:
AMD graphics cards
NVIDIA graphics cards
AMD CPUs (not just the new Fusion ones)
Intel CPUs
Multiple IBM products, including the Cell processor
Some chipsets / embedded systems.
To get started with an x86 processor, just download the AMD APP (accelerated parallel processing) SDK and follow the tutorials.
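For anyone who wants to see what "follow the tutorials" actually boils down to, here is a minimal, hedged sketch of an OpenCL host program (not taken from the SDK; error handling and cleanup are omitted, and the build line will vary with where your SDK installs its headers and libraries). It adds two float arrays on whatever device the first installed platform exposes, which could be any of the devices listed above.

    /* Minimal OpenCL "getting started" sketch.  Every cl* call below should
     * really have its cl_int return code checked.
     * Build (paths are assumptions depending on your SDK install):
     *   gcc vadd.c -lOpenCL -o vadd
     */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *kSrc =
        "__kernel void vadd(__global const float *a,\n"
        "                   __global const float *b,\n"
        "                   __global float *c) {\n"
        "    size_t i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void)
    {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

        /* Pick the first platform and whatever device it offers (GPU or CPU). */
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        /* Device buffers, the input ones initialised from host memory. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof(a), a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof(b), b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

        /* The kernel is compiled at run time by the installed driver. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        clSetKernelArg(k, 0, sizeof(da), &da);
        clSetKernelArg(k, 1, sizeof(db), &db);
        clSetKernelArg(k, 2, sizeof(dc), &dc);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

        printf("c[42] = %f (expected %f)\n", c[42], 3.0f * 42);
        /* clRelease* cleanup omitted for brevity. */
        return 0;
    }

The important point for the parent's question is that the kernel source is compiled at run time by whichever driver is installed, so the same program is not tied to one AMD chipset.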
OpenCL is an immature standard (Score:2)
Last time I checked, you also needed to link against vendor-specific libraries, meaning one library for AMD, one for NVIDIA, and one for Intel. This makes cross-platform OpenCL deployment a bitch. Unless and until these vendors get together and settle on an ICD standard, I don't see OpenCL going mainstream.
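For what it's worth, the Khronos ICD (installable client driver) mechanism is meant to address exactly this: you link once against the common OpenCL library and each vendor's runtime registers itself with the loader. A small sketch of what that looks like from the application side, assuming an ICD loader (libOpenCL.so / OpenCL.dll) is installed:

    /* Enumerate whichever vendor OpenCL platforms are registered with the
     * ICD loader; one binary can then pick AMD, NVIDIA, or Intel at run time. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_uint n = 0;
        clGetPlatformIDs(0, NULL, &n);      /* how many platforms are installed? */

        cl_platform_id plats[16];
        if (n > 16) n = 16;
        clGetPlatformIDs(n, plats, NULL);

        for (cl_uint i = 0; i < n; i++) {
            char name[256], vendor[256];
            clGetPlatformInfo(plats[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
            clGetPlatformInfo(plats[i], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);
            printf("platform %u: %s (%s)\n", i, name, vendor);
        }
        return 0;
    }

How reliably each vendor ships and registers its ICD is, of course, the part the application can't control.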
I will never buy ati again (Score:5, Insightful)
Re: (Score:2)
... heh, ever since I took the plunge and bought an ATi Radeon 7500 All-in-Wonder, purely on the strength of their promise to work with open source driver team. While that card did get decent support from the GATOS team at the time, my card was kinda the cutoff point for their future closed and open source driver efforts.
Anyway, nowadays I mostly just pine for that alternate universe where Intel bought ATi instead, in which we'd be rid of crappy Intel IGP hardware, but finally have had good open source dri
Re: (Score:2)
Anyway, nowadays I mostly just pine for that alternate universe where Intel bought ATi instead
I don't think anyone expected AMD to buy ATI. When I first read about it I checked the date to make sure it wasn't an April Fools' joke.
Re: (Score:2)
This is very much a matter of taste; IMHO the nvidia ForceWare package is an abomination nowadays. I'm using an HD 4870 in my current machine with Catalyst 10.5 and it is much better.
It's a long way from the days when ATI was an abomination when it came to drivers and nvidia had very good driver packages.
Re: (Score:2)
solely based on their mediocre driver support.
esp. on GLinux -- At least they are releasing open source drivers, but I haven't used them for a long time (don't they still require a binary blob with those "open source" drivers?). When will they learn, we buy your shit for the hardware, your drivers mean jack shit, they are not "uber bad ass top secret", let the hardware stand on its own and give us the full open source code!
Both the open and closed Nvida drivers I use are a little flaky on GLinux too... Honestly, if you want to make it to the futur
Re: (Score:2)
Almost 100% linux based pipelines. And now almost 100% Nvidia Quadro. Because of driver support, not the cards themselves. But it's a problem ATI can remedy for themselves. I think re-establishing, and putting meaningful funds into their open-source driver project is the first step.
Re: (Score:2)
I sure hope they do it sometime soon. I just put Vista back on to a machine here because it won't run anything else properly... and it's old :(
somebody tell AMD that the PC is dead (Score:2)
A computer is becoming something you carry everywhere and use almost everywhere. The PC/Mac is something most people will keep off 95% of the time and use a few times a week.
Re: (Score:3)
That's just utter bullshit and marketing hype.
Re: (Score:2)
How long before your phone has four cores and a crapload of RAM? If IBM could ever get the cost down on MRAM we could maybe get 'em with stacks of it :)
Re: (Score:3)
You're almost right.
Except at work, where desktops will never die. Editing a spreadsheet or writing code on a portable is retarded. Even if we go to a dockable solution, it's still a PC.
P.S. The "smart" in smartphone == PC == Personal Computer.
Someone tell alen to get a clue (Score:2)
Seriously, on what do you base this? The fact that you personally are a cellphone junkie?
Sorry man, I've seen no evidence PCs are dying at all. They seem to be doing quite well.
In fact, looking at history, I am going to predict they will continue to sell at or above current levels indefinitely. Why? Because that is what's happened with other computers.
Mainframes are not dead. In fact, there are more mainframes in use now than back when the only computers were mainframes. They are completely dwarfed in number
Tip for Terry (Score:5, Interesting)
Invest a little money in the Blender Foundation: http://blender.org .
They are working on a new renderer based on CUDA.
The existence of a free and open source OpenCL renderer of professional quality would force closed source developers to develop GPU-based renderers as well or lose customers.
You can even invest in secret; there are other substantial supporters of the Blender Foundation whose identity is not given.
The Cycles renderer F.A.Q.
http://dingto.org/?p=157
Re: (Score:2)
Most professional rendering packages that are on renderfarms already have GPU rendering features.
Also, it's a lot easier to pack 2 Intel processors with 6 cores each into a 1U box than it is to create an equivalency with GPUs.
Re: (Score:3)
Existence of a free and open source openCL renderer of professional quality would force closed source developers to develop GPU based renderers as well or lose customers.
Like this? http://www.nvidia.co.uk/page/gelato.html [nvidia.co.uk]
Don't kid yourself, true professionals use the professional/closed source solution.
Don't get me wrong, Blender kicks ass (while it has its warts) but it's far (albeit not very) from 3DS/Maya/Cinema 4D.
Re: (Score:2)
No, GPU rendering should achieve at least 10x and perhaps much more speedup. The Fusion architecture will allow using main RAM for the GPU with little penalty. Allowing a major project like Blender to get locked into your arch-nemesis' language standard is also a huge opportunity cost for ATI.
They Are Not the Only Ones (Score:2)
AMD are far from the only company to make this bet. For one, the bet is backed by Apple, who are the creators of OpenCL. Nvidia have a GPU computing SDK with full support for OpenCL on all major platforms. Even Intel has just recently provided Linux drivers for OpenCL, and has supported Windows for a while. ARM will have an implementation soon for their Mali GPU architecture.
I use OpenCL for nonlinear wave equations. There may only be a few OpenCL developers at the moment, but with articles like this, the
GPUs coming of age (Score:2)
Until relatively recently, it's always bugged me that there's been these incredible number crunching processors, but that they've been mostly locked away due to the focus on one subset of graphics (rasterization), rather than an all-encompassing generic style which would allow ray-tracing, particle interactions, and many other unique, weird and wonderful styles, as well as many amazing applications which couldn't otherwise exist.
Finally, that's beginning to change with projects like OpenCL, CUDA, even GPU c
Doubt I'll be buying any AMD stock soon (Score:2)
I'll give you the fact that we don't have any major applications at this point that are going to revolutionize the industry and make people think "oh, I must have this"
Translation: We don't really understand how to market this, or the size of the market for this.
it's a big bet for us, and it's a bet that we're certain about.
Translation: We don't have any other promising R&D in the pipeline at the moment, so if this fails to play out well for us we will still be number 2, but no longer a top-line mark; it will be back to the K6 days for us.
why graphics? (Score:2)
Why do we still need to buy "graphics" hardware to use GPGPU-like acceleration? Why not extend our general notion of the CPU?
It makes me feel rather silly to be buying a graphics card just to improve the performance of some non-graphics-related computation.
Re: (Score:2)
My thoughts exactly. This should be part of the CPU. Like floating point.
OpenCL: Too slow, too late (Score:3)
I've developed applications (for PrimeGrid.com) for both nVIDIA CUDA and AMD OpenCL. The thing about GPGPU is that you have to write very close to the hardware to get any reasonable speed increases. CUDA lets you do that. But OpenCL is practically CUDA running on AMD; not close to the hardware at all.
By all rights, my application should be faster on AMD cards. It's embarrassingly parallel and has a fairly simple inner loop - albeit doing 64-bit math. If I could write it in AMD's close-to-the-metal language, Stream, I'm sure it would be. But while nVIDIA offered nice documentation and an emulator for CUDA, AMD didn't for Stream, and only recently did for OpenCL. nVIDIA has since removed their emulator, and since they (shrewdly) don't let one machine run both brands of cards together, I'm mainly aiming at CUDA now.
If AMD had come up with a C-like, close-to-metal language, with good documentation and preferably an emulator, they could have run circles around CUDA. Maybe they still can; but I doubt it.
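For readers wondering what that kind of inner loop looks like, here is a hedged sketch of an embarrassingly parallel 64-bit kernel in portable OpenCL C. It is a made-up trial-division check, not the PrimeGrid application: each work-item grinds through its own candidate independently, which is exactly the workload shape GPGPU rewards.

    /* Hypothetical kernel for illustration only: flags candidates that have a
     * small odd factor.  ulong (64-bit integer) is core OpenCL, no extension
     * needed, though 64-bit division/modulo is emulated on most GPUs. */
    __kernel void has_small_factor(__global const ulong *candidates,
                                   __global uchar *composite,
                                   ulong divisor_limit)
    {
        size_t gid = get_global_id(0);
        ulong n = candidates[gid];
        uchar found = 0;
        for (ulong d = 3; d <= divisor_limit && !found; d += 2) {
            if (n % d == 0)
                found = 1;
        }
        composite[gid] = found;
    }

Whether code like this runs anywhere near the hardware's potential is where the vendor toolchains diverge, which is the parent's complaint.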
Note to AMD management... (Score:2)
Re: (Score:2)
I bet you drive a big car too, huh?
Re:Gnu Privacy Guard Pickup Unit? (Score:5, Interesting)
The odds of AMD inventing unicorns and saving the company with their sale are better than the odds I'd give the idea that OpenCL will become popular.
I work with databases all day, and we regularly get people who say "why can't you accelerate sorting using a GPU?" The reason why you can't is that by the time you transmit the whole problem set over to the GPU, wait for it to compute, and then transfer the results back to the CPU again, you could have just sorted it on the CPU. This problem, that you have to load/process/return everything from the GPU, only goes away if they are capable of running much more general software. I'd have to move the entire database query executor onto the GPU, and it would need enough memory to do significant tasks, before it made sense here.
I see OpenCL continuing to be more popular in scientific computing applications, where it means far fewer nodes are required in the computing clusters they tend to run on. It's hard to imagine another area where they have any real potential to be popular.
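The parent's transfer-cost argument is easy to put rough numbers on. The sketch below is a back-of-envelope model, not a benchmark; the PCIe bandwidth, per-launch overhead, and CPU sort throughput are assumed placeholder figures that you should replace with measurements from your own hardware. It compares the bare cost of shipping keys to a discrete card and back against an assumed CPU sort time: if the round trip alone is in the same ballpark as the CPU sort, offloading can't win.

    /* Rough offload break-even model (all constants are assumptions). */
    #include <stdio.h>

    int main(void)
    {
        const double pcie_bytes_per_s  = 5.0e9;   /* assumed effective PCIe bandwidth      */
        const double launch_overhead_s = 100e-6;  /* assumed per-offload fixed latency     */
        const double cpu_keys_per_s    = 20.0e6;  /* assumed CPU sort throughput, keys/s,
                                                     with the n log n factor folded in     */
        const double bytes_per_key     = 16.0;    /* key plus row pointer                  */

        for (double n = 1e3; n <= 1e8; n *= 10) {
            double round_trip = launch_overhead_s
                              + 2.0 * n * bytes_per_key / pcie_bytes_per_s;
            double cpu_sort   = n / cpu_keys_per_s;
            printf("n=%9.0f  gpu round-trip >= %.6f s   cpu sort ~ %.6f s\n",
                   n, round_trip, cpu_sort);
        }
        return 0;
    }

For the small, cache-resident sorts a query executor mostly does, the fixed overhead term dominates, which is the parent's point; for huge data sets the transfer amortizes, which is why the scientific-computing clusters mentioned above are a better fit.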
Re: (Score:2)
They've improved, but not eliminated, the problem I was commenting on. The data you want to operate on often starts in L1 cache of the CPU at the point where it might make sense to off-load to the GPU. The speed drop involved in the context switch to the GPU environment, which then has to fetch the data from main memory, is still large. Fusion drops the point at which the switch can make sense. But the interconnect to the GPU was already quite fast before, and it still got in the way. The speed advanta
Re: (Score:2)
What about merging the instruction sets of the GPU and CPU?
I understand that the GPU part will still need its own registers but I see no reason that the instruction sets can't be merged. Meaning the GPU and CPU share cache and doing a graphics type operation on some piece of data is just a CPU instruction away (as opposed to an interrupt+bus transaction).
Re: (Score:2)
That's a good point although they are currently doing fusion as a shared memory architecture anyway. So they aren't really targeting the high end which is what needs more than your typical dual-channel DDR3 bandwidth.
Re: (Score:2)
I think with true CPU integration you wouldn't even have the kernels. You'd be branching and looping on the CPU and then calling the individual GPU instructions right then and there. I mean sure, the kernels cut down on CPU to GPU traffic but the kernels themselves wouldn't need to exist if it weren't for the need to cut down on CPU to GPU traffic.
As for synergy, totally agreed. I see the Cell as a decent attempt to get there. A few more SPEs and it might have been able to match the throughput of a GPU.
On
Re: (Score:2)
Doing larger sorting operations usually involves breaking the problem into multiple data sets, then merging the sorted subsets back together again. If done right, you can get each of the GPU units working on sorting its own piece most of the time. The UNC [unc.edu] results are typical, and note that data sizes to be sorted now are much, much larger than the right side of their graph, so the slower-growing runtime is even more important.
Also, sorting time can take up a lot of the CPU resources on a busy database se
Re: (Score:2)
I work with databases all day,
Well there's yer problem!
But seriously, it's well-known that GPU acceleration isn't very useful for database applications. However, compared to desktop computers your field is a bit of a niche.
People who are very happy with existing GPU acceleration are 3D artists. Most implementations right now are in CUDA, but OpenCL is getting more common. Witness its power here: http://www.youtube.com/watch?v=8bDaRXvXG0E [youtube.com]
Another niche, it is true, but that rendering engine could soon be powering games in realtime. Then th
Re: (Score:2)
Really, people ask you to speed up the databases using the GPU?! Regularly.
Re: (Score:2)
PhysX is proprietary; it was developed by nVidia precisely for this reason, to make it difficult for the competition to compete. It's been integrated into CUDA, which is an nVidia-only technology, meaning that even if AMD wanted to integrate it they couldn't without paying a lot of money in licensing and making architectural changes to their product line.
Re: (Score:2)
This is offset by the fact that ATI generally costs about 10-15% less for the same speed.
Re: (Score:2)
PhysX is proprietary, it was developed by nVidia precisely for this reason
Nope. PhysX was developed by Ageia.
Re: (Score:2)
Not really. It's analogous to a drawing. You can put your name in the hat as many times as you want, but the person who put their name in once could get lucky and have their name drawn over the person who has their name in 1000 times.
Re: (Score:2)
Sadly not. Everyone and their dog seems to be BitCoin mining these days, mostly with AMD GPUs (though there are a few miners out there using FPGAs).
Re: (Score:2)
“Given a choice between doing it with CUDA or not doing it for a while [while waiting for] OpenCL, we chose the former.”
Hopefully as OpenCL matures, Adobe will see the advantages.