Comment Re:I don't buy it (Score 2) 176
Just a nitpick, but...
- Increasing the number of megapixels while keeping everything else the same does not change the amount of light the sensor collects, although each individual pixel gets less light.
- Increasing or decreasing the fill-factor or changing the total sensor size does.
- In low-light situations, photon arrival statistics are Poisson, so a pixel collecting N photons has SNR = N/sqrt(N) = sqrt(N). That means 4 pixels at 1/4 the area, when summed (or averaged) together, should give the same SNR as one large pixel, assuming read noise is small relative to the Poisson shot noise, which should dominate in this situation.
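A quick sanity check of that claim, as a sketch: simulate one large pixel versus the sum of 4 quarter-area pixels under pure Poisson shot noise (no read noise). The photon counts and frame count here are made-up illustration values.

```python
import numpy as np

# Shot-noise-only model: compare SNR of one large pixel vs. the sum of
# 4 pixels with 1/4 the area. Poisson SNR = mean / std = sqrt(mean).
rng = np.random.default_rng(0)
n_frames = 200_000
mean_photons = 100.0  # photons hitting the large pixel per exposure

# One large pixel: Poisson(mean_photons)
big = rng.poisson(mean_photons, n_frames)

# Four quarter-area pixels, each Poisson(mean_photons / 4), summed
small = rng.poisson(mean_photons / 4, (n_frames, 4)).sum(axis=1)

snr_big = big.mean() / big.std()
snr_small = small.mean() / small.std()
print(f"SNR, one large pixel    : {snr_big:.2f}")
print(f"SNR, 4 small pixels sum : {snr_small:.2f}")
print(f"sqrt(mean photons)      : {mean_photons ** 0.5:.2f}")
```

Both SNRs land at sqrt(100) = 10, because the sum of independent Poisson variables is itself Poisson with the summed mean. Add a nontrivial per-pixel read noise term and the 4-small-pixel case starts losing, since it pays that noise 4 times.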
- Thus, the main downside of just having more pixels on a sensor, everything else being equal (note that fill factor would probably differ between sensors in practice), is that there's now a lot more bandwidth coming off the sensor, which could hurt power efficiency. This could possibly be mitigated by reducing the bit depth of each individual pixel, since the higher per-pixel noise means the extra bits carry little information anyway.
- One upside is that if the pixel size is smaller than the diffraction spot size, then the relative size of your Bayer mosaic should make demosaicing easier.
- To summarize... if you have more pixels in the same-sized sensor and can deal with the extra bandwidth, then in the worst case you should be able to downsample and get similar or better performance than you had before.
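The worst-case fallback can be sketched the same way: 2x2 box-downsampling a noisy high-resolution frame. Averaging 4 independent pixels cuts the noise standard deviation by sqrt(4) = 2, recovering the SNR a sensor with 4x-larger pixels would have given. The scene level and frame size are made-up illustration values.

```python
import numpy as np

# Flat scene imaged with small pixels under Poisson shot noise,
# then binned 2x2 by averaging.
rng = np.random.default_rng(1)
h, w = 512, 512
signal = 50.0  # photons per small pixel
frame = rng.poisson(signal, (h, w)).astype(float)

# 2x2 binning: reshape into 2x2 blocks and average each block
binned = frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(f"noise before binning: {frame.std():.2f}")   # ~sqrt(50) ≈ 7.07
print(f"noise after binning : {binned.std():.2f}")  # ~sqrt(50)/2 ≈ 3.54
```

The mean signal is unchanged by averaging, so halving the noise doubles the SNR per output pixel, matching the big-pixel case (again, neglecting read noise).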