Depth and Deblurring from a Spectrally-varying Depth-of-Field

Authors

Ayan Chakrabarti; Todd Zickler



Abstract

We propose modifying the aperture of a conventional color camera so that the effective aperture size for one color channel is smaller than that for the other two. This produces an image in which different color channels have different depths of field, and from this we can computationally recover scene depth, reconstruct an all-focus image, and achieve synthetic re-focusing, all from a single shot. These capabilities are enabled by a spatio-spectral image model that encodes the statistical relationship between gradient profiles across color channels. This approach substantially improves depth accuracy over alternative single-shot coded-aperture designs, and because it avoids introducing additional spatial distortions and is light efficient, it allows high-quality deblurring and shorter exposure times. We demonstrate these benefits with comparisons on synthetic data, as well as results on images captured with a prototype lens.
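The core cue can be illustrated with a toy sketch: if one channel has a much larger depth of field (and so stays sharp), the depth-dependent blur of another channel can be estimated by finding the blur width that best matches its gradient profile to a blurred copy of the sharp channel. This is only a minimal 1-D illustration of the defocus cue, not the paper's actual spatio-spectral estimator; the box kernel, the candidate widths, and the gradient-matching objective are all simplifying assumptions.

```python
import numpy as np

def box_blur_1d(x, k):
    """Blur a 1-D signal with a box kernel of odd width k (reflect padding)."""
    if k == 1:
        return x.copy()
    pad = k // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def estimate_blur_width(sharp, blurred, widths=(1, 3, 5, 7, 9)):
    """Pick the blur width that best explains `blurred` as a blurred copy
    of `sharp`, by comparing gradient-magnitude profiles (hypothetical
    matching criterion, stand-in for the paper's statistical model)."""
    g_obs = np.abs(np.diff(blurred))
    best_w, best_err = None, np.inf
    for w in widths:
        g_pred = np.abs(np.diff(box_blur_1d(sharp, w)))
        err = np.mean((g_pred - g_obs) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Toy usage: a step edge seen sharp in one channel, defocused in another.
sharp = np.concatenate([np.zeros(50), np.ones(50)])
blurred = box_blur_1d(sharp, 5)
print(estimate_blur_width(sharp, blurred))  # recovers width 5
```

In the full method this per-location blur estimate would map to scene depth through the lens calibration, and the sharp channel would guide deblurring of the defocused channels.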
Project Page: http://vision.seas.harvard.edu/ccap/

Paper

CZ_ccap.pdf

BibTex entry

@conference{329,
  title   = {Depth and Deblurring from a Spectrally-varying Depth-of-Field},
  year    = {2012},
  month   = {07/10/2012},
  address = {Firenze, Italy},
  author  = {Ayan Chakrabarti and Todd Zickler}
}