One example of using the GPU's computational power to make new types of photography possible is the new app Thalia Lapse HD/R (http://www.thaliacam.com/), which records HDR timelapse videos by taking several exposures for each movie frame, merging them, and tonemapping the result. (Disclosure: I am the author of the app, and my company is the publisher.) In this way, the iPhone sensor's dynamic range of not quite 8 bits can be extended by another 5 bits or so, just by doing some clever math on the powerful GPU that's already there for eye candy and gamers. That in turn makes it possible to shoot stunning movies of high-dynamic-range scenes that would be rather difficult to produce otherwise.
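To give a feel for the "clever math" involved (this is a minimal NumPy sketch of the general exposure-merging idea, not the app's actual GPU code): each bracketed frame is divided by its exposure time to estimate scene radiance, the estimates are blended with a hat-shaped weight that trusts mid-tones and ignores clipped pixels, and the merged result is compressed back to displayable range with a simple global Reinhard-style tonemap.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge bracketed exposures into one linear HDR radiance map.

    frames: float arrays in [0, 1], assumed already linearized.
    Each pixel's radiance estimate is its value divided by the
    exposure time, weighted by a hat function that is 1 at mid-gray
    and 0 at the clipped extremes.
    """
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 1 at 0.5, 0 at 0 and 1
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-6)

def tonemap_reinhard(hdr):
    """Simple global Reinhard operator: L / (1 + L), mapping [0, inf) to [0, 1)."""
    return hdr / (1.0 + hdr)

# Two synthetic exposures of the same three-pixel scene (shadow, mid, highlight).
radiance = np.array([0.05, 0.5, 5.0])
short = np.clip(radiance * 0.1, 0.0, 1.0)   # t = 0.1 s: keeps the highlight
long_ = np.clip(radiance * 1.0, 0.0, 1.0)   # t = 1.0 s: keeps the shadow
hdr = merge_exposures([short, long_], [0.1, 1.0])
ldr = tonemap_reinhard(hdr)
```

On the synthetic data above the merge recovers the full radiance range, including the highlight that is clipped in the long exposure; on an actual phone the same arithmetic runs per-pixel in a fragment shader, which is why the GPU handles it so cheaply.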
Interestingly enough, the main obstacles to this sort of application turned out to lie not in the capabilities of the hardware, but in the restrictions imposed by the APIs of popular platforms such as the iPhone and Android. This is even more true on high-end camera platforms such as Canon's and Nikon's DSLRs, which have excellent optics and sensors and decent GPUs, but no official support for running apps at all, not even something as basic as an intervalometer. There is, of course, Magic Lantern (http://www.magiclantern.fm/) for Canon, which is extremely powerful and cool, but it is entirely unsupported by Canon and not exactly easy to code for.
What will be the future of in-camera computational power? It appears almost certain that power per price will continue to increase much faster for GPUs than for optics or image sensors. To harness this power, it would be reasonably easy for Apple and Google to extend their APIs to give more nuanced control of the camera modules in iOS and Android devices. In fact, Apple announced something of the kind for iOS 6, but then didn't actually follow through, perhaps out of concern over compatibility with the new camera modules in future devices. For professional platforms such as Canon's or Nikon's, an API would be a cultural revolution, and would most likely come if and when these manufacturers adopt Android as their cameras' operating system for other reasons.
What do Slashdotters think? When and how will camera platforms allow us to control the camera, access the raw sensor data as it comes in, and process it on the GPU? My best-case scenario: with the slate of new Android cameras about to be released, either Google or Samsung creates an advanced camera API for Android. If that attracts enough cool apps that make a $500 camera do things previously reserved for $20,000 cameras, the DSLR makers might well adopt Android for that reason, and Apple would in turn add more advanced camera control to iOS.