Is computational photography going mainstream?

Submitted by Promanthanic (2982765), who writes: "Low-end compact cameras are quickly going the way of the film camera, replaced by smartphone cameras. Higher-end (but not truly high-end) cameras are starting to look like smartphones, with touchscreen user interfaces replacing the traditional buttons, although it's not always obvious that this makes the camera easier to use. In some sense, cameras and smartphones are clearly converging. Yet, curiously enough, while we're seeing an endless number of more or less fancy image-filter apps for smartphones, there has been remarkably little progress in using the power of modern mobile GPUs to do something with the image data that actually opens up new photographic possibilities.

One example of using the GPU's computational power to make new types of photography possible is the new app Thalia Lapse HD/R (http://www.thaliacam.com/), which records HDR timelapse videos by taking several exposures for each movie frame, merging them, and tonemapping them. (Disclosure: I am the author of the app, and my company is the publisher.) In this way, the dynamic range of the iPhone's sensor, which is not quite 8 bits wide, can be extended by another 5 bits or so, just by doing some clever math on the powerful GPU that's already there for eye candy and gamers. This in turn makes it possible to produce stunning movies of scenes with a large dynamic range that would otherwise be rather difficult to capture.
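
To make the merge-and-tonemap step concrete, here is a minimal NumPy sketch of the general technique: weighted averaging of linear radiance estimates from bracketed exposures, followed by a simple global Reinhard tonemap. This is only an illustration of the math, not the app's actual GPU pipeline; the bracket spacing, weighting function, and stand-in frame data are all assumptions.

    import numpy as np

    def merge_hdr(frames, exposure_times):
        # Weighted average of per-frame radiance estimates (Debevec-style),
        # assuming linear sensor values in [0, 1].
        num = np.zeros_like(frames[0], dtype=np.float64)
        den = np.zeros_like(frames[0], dtype=np.float64)
        for img, t in zip(frames, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-tones, not clipped pixels
            num += w * (img / t)                # radiance estimate from this frame
            den += w
        return num / np.maximum(den, 1e-6)

    def tonemap_reinhard(radiance, key=0.18):
        # Simple global Reinhard operator: compress HDR radiance back into [0, 1].
        log_avg = np.exp(np.mean(np.log(radiance + 1e-6)))
        scaled = key * radiance / log_avg
        return scaled / (1.0 + scaled)

    # Bracket of -2.5 EV / 0 EV / +2.5 EV around a 1/60 s base exposure -- roughly
    # where the "extra 5 bits or so" of dynamic range comes from (2^5 = 32x).
    base = 1.0 / 60.0
    times = [base / 2**2.5, base, base * 2**2.5]
    frames = [np.clip(np.random.rand(48, 64) * (t / base), 0.0, 1.0) for t in times]  # stand-in data
    ldr = tonemap_reinhard(merge_hdr(frames, times))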

Interestingly enough, the main obstacles to this sort of application turned out to lie not in the capabilities of the hardware, but in the restrictions imposed by the APIs of popular platforms, both iPhone and Android. This is even more true of high-end camera platforms such as Canon's and Nikon's DSLRs, which have excellent optics and sensors and decent GPUs, but no official support for running apps at all, not even something as basic as an intervalometer. There is, of course, Magic Lantern (http://www.magiclantern.fm/) for Canon, which is extremely powerful and cool, but it is entirely unsupported by Canon and not exactly easy to code for.
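
To underline just how basic an intervalometer is: the entire feature is a timed loop around a single capture call. In the sketch below, everything except that one call is plain Python, and camera.capture() is a hypothetical stand-in for the official capture API that the camera makers don't provide.

    import time

    def intervalometer(camera, interval_s=10.0, shots=360):
        # Fire `shots` captures, one every `interval_s` seconds, compensating
        # for however long each capture itself takes.
        next_shot = time.monotonic()
        for _ in range(shots):
            camera.capture()   # hypothetical call -- no such official API exists
            next_shot += interval_s
            time.sleep(max(0.0, next_shot - time.monotonic()))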

What will be the future of in-camera computational power? It appears almost certain that power per price will continue to increase much faster for GPUs than for optics or image sensors. To harness this power, it would be reasonably easy for Apple or Google to extend their APIs to give more nuanced control of the camera modules in iOS and Android devices. In fact, Apple announced something of that kind for iOS 6, but then didn't actually follow through, perhaps out of concern about compatibility with new camera modules in future devices. On professional platforms such as Canon's or Nikon's, having an API would be a cultural revolution, and would most likely arrive if and when those manufacturers adopt Android as the operating system for their cameras for other reasons.

What do Slashdotters think? When and how will camera platforms allow us to control the camera, access the raw sensor data as it comes in, and process it on the GPU? My best-case scenario would be that, with the slate of new Android cameras about to be released, either Google or Samsung creates an advanced camera API for Android. If that attracts enough cool apps that make a $500 camera do things previously reserved for $20,000 cameras, the DSLR makers might well adopt Android for that reason, and Apple would in turn add more advanced camera control to iOS."