Comment Author's Comments on Camera Arrays (Score 5, Insightful)
First, this work is part of a larger research effort. In the past several years, cameras have become cheap, commodity devices, and you still get more processing power for the buck every year. I designed the Stanford Multiple Camera Array (http://graphics.stanford.edu/projects/array) not to be a high-speed camera, but to be a research tool for exploring the potential of large numbers of cheap image sensors and plentiful processing. High-speed video is one example of high-performance imaging using an array of cameras. We have also used our array for synthetic aperture photography, using many cameras to simulate a camera with a very large aperture. Such a camera has a very narrow depth of field, a property we exploit to look through partially occluding foreground objects like foliage. We are interested in view interpolation (Matrix-like effects, but with user control over the virtual camera viewpoint), too. If you want to learn more about the array and these applications, check out the links to our papers and my dissertation on the camera array website.
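The see-through-foliage trick boils down to shift-and-add refocusing: align all the views on a chosen focal plane and average, so anything off that plane smears away. A minimal sketch, assuming pre-calibrated cameras on a plane and a purely translational parallax model (the function name and disparity parameterization here are mine, for illustration only):

```python
import numpy as np

def synthetic_aperture(images, offsets, disparity_per_unit):
    """Shift-and-add refocusing across a calibrated camera array.

    images: list of HxW float arrays, one per camera.
    offsets: list of (dx, dy) camera positions relative to a reference camera.
    disparity_per_unit: pixels of parallax per unit of camera offset at the
        chosen focal plane (larger values focus closer to the array).
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, offsets):
        # Shift each view so the chosen focal plane aligns across cameras;
        # objects off that plane (e.g. occluding foliage) land in different
        # places in each view and average away.
        sx = int(round(dx * disparity_per_unit))
        sy = int(round(dy * disparity_per_unit))
        acc += np.roll(img, shift=(sy, sx), axis=(0, 1))
    return acc / len(images)
```

Note that `np.roll` wraps around at the image borders; a real implementation would pad or crop instead, and would resample for sub-pixel shifts.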
About the high-speed video work in particular, there are plenty of commercial high-speed cameras that run at higher frame rates than our camera array. If you want a high-speed video camera, I recommend buying one of them. Using an array of cheap cameras has its disadvantages: you have to geometrically and radiometrically calibrate the data from all the different sensors, and in our case, we had to deal with the electronic rolling shutter. One benefit of this work for us was developing accurate and automatic (very important for 100 cameras) calibration methods for our array. An interesting property of the camera array approach is that each camera compresses its own video in parallel, reducing the aggregate bandwidth enough that we can stream continuously. By contrast, as frame rates increase, most high-speed cameras are limited to recording durations that fit in on-camera memory, usually well under one minute. That said, one could certainly design architectures to compress high-speed video in real time.
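To make the streaming point concrete, here's some back-of-the-envelope arithmetic (the resolution, frame rate, pixel format, and compression ratio below are illustrative placeholders, not our array's actual specs):

```python
# Illustrative bandwidth arithmetic for continuous streaming from an array
# of cameras, each compressing its own video in parallel.
# All numbers here are hypothetical, chosen only to show the scale.

cameras = 100
width, height, fps = 640, 480, 30       # per-camera video
bytes_per_pixel = 1.5                   # e.g. YUV 4:2:0 subsampling

raw_per_cam = width * height * bytes_per_pixel * fps       # bytes/s
raw_total_gbps = raw_per_cam * cameras * 8 / 1e9           # gigabits/s

compression_ratio = 30                  # rough MPEG-style ratio
compressed_total_mbps = raw_total_gbps * 1000 / compression_ratio

print(f"raw:        {raw_total_gbps:.1f} Gb/s")
print(f"compressed: {compressed_total_mbps:.0f} Mb/s")
```

Uncompressed, the aggregate stream is far beyond what a disk subsystem can sustain; compressed at each camera, it becomes a continuous stream that commodity hardware can record indefinitely, which is exactly what a memory-limited high-speed camera cannot do.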
What's most interesting to me about the high-speed work is combining it with other multiple-camera methods. One example is spatiotemporal view interpolation: capturing a bunch of images of a scene from different positions and times, then generating new views from positions and times not in the captured data. Think Matrix again, but with user control over both the virtual camera's position and its time. While the BulletTime setup from Manex captured one specific space-time camera trajectory, my goal is to capture images in a way that lets us create many different virtual camera paths later on. Traditional view interpolation methods use arrays of cameras synchronized to trigger simultaneously, so they can reason about the shape of the "frozen" scene and then infer how the scene is moving. In my thesis, I discuss how using the high-speed approach of staggered trigger times increases our temporal sampling resolution (effective frame rate) and can enable simpler interpolation methods. The interpolation algorithm I describe is also exactly the correction needed to eliminate the jitter due to parallax in the high-speed video sequences.
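The staggered-trigger idea can be sketched in a few lines, assuming idealized global-shutter cameras and ignoring the rolling-shutter and parallax corrections discussed above (the function names are mine, for illustration):

```python
def staggered_schedule(n_cameras, fps):
    """Trigger offsets (seconds) that stagger n cameras evenly within one
    frame period, raising effective temporal sampling to n * fps."""
    period = 1.0 / fps
    return [i * period / n_cameras for i in range(n_cameras)]

def interleave(frames_per_camera, offsets, fps):
    """Merge per-camera frame streams into one sequence ordered by global
    capture time. frames_per_camera[c][k] is frame k from camera c."""
    period = 1.0 / fps
    timeline = []
    for c, frames in enumerate(frames_per_camera):
        for k, frame in enumerate(frames):
            timeline.append((k * period + offsets[c], c, frame))
    timeline.sort(key=lambda entry: (entry[0], entry[1]))
    return timeline
```

With 52 cameras at 30 fps, for instance, this schedule samples the scene 1560 times per second. Adjacent frames in the interleaved sequence come from different viewpoints, and the resulting parallax jitter is what the interpolation algorithm corrects.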
I've described just a few of the applications we've investigated using our camera array, but we hope this is just the tip of the iceberg. We're hard at work on new uses for the cameras, so stay tuned.