Light-field cameras are quickly becoming commodity items, with consumer and industrial applications. They capture many
nearby views simultaneously using a single image with a micro-lens array, thereby providing a wealth of cues for depth recovery:
defocus, correspondence, and shading. In particular, apart from conventional image shading, one can refocus images after acquisition,
and shift one’s viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. We present a principled
algorithm for dense depth estimation that combines defocus and correspondence metrics. We then extend our analysis to the
additional cue of shading, using it to refine fine details in the shape. By exploiting an all-in-focus image, in which pixels are expected to exhibit angular coherence, we define an optimization framework that integrates photo consistency, depth consistency, and shading
consistency. We show that combining all three sources of information (defocus, correspondence, and shading) outperforms
state-of-the-art light-field depth estimation algorithms in multiple scenarios.
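The abstract describes fusing defocus and correspondence cost volumes into a single depth estimate. The sketch below is illustrative only, not the authors' algorithm: it assumes two per-pixel cost volumes over D depth hypotheses (lower cost is better) and fuses them with a simple peak-ratio confidence weight; the function names and the confidence measure are our own simplifications.

```python
# Hedged sketch of cue fusion: weight each cue by how distinct its best
# depth hypothesis is, then take the argmin of the fused cost volume.
import numpy as np

def peak_ratio_confidence(cost):
    """Per-pixel confidence in [0, 1): how distinct the best hypothesis is.

    cost: (H, W, D) cost volume; lower cost = better hypothesis."""
    sorted_cost = np.sort(cost, axis=-1)
    best, second = sorted_cost[..., 0], sorted_cost[..., 1]
    return 1.0 - best / (second + 1e-8)

def fuse_depth(defocus_cost, corresp_cost, depths):
    """Confidence-weighted fusion of two cost volumes, then per-pixel argmin."""
    w_d = peak_ratio_confidence(defocus_cost)[..., None]
    w_c = peak_ratio_confidence(corresp_cost)[..., None]
    fused = w_d * defocus_cost + w_c * corresp_cost
    return depths[np.argmin(fused, axis=-1)]

# Toy usage: a 2x2 image with 4 depth hypotheses and random costs.
rng = np.random.default_rng(0)
depths = np.linspace(0.5, 2.0, 4)
defocus = rng.random((2, 2, 4))
corresp = rng.random((2, 2, 4))
depth_map = fuse_depth(defocus, corresp, depths)
print(depth_map.shape)  # (2, 2)
```

The paper's actual framework goes further, adding shading and the angular-coherence constraints in a global optimization; this snippet only conveys the basic idea of combining two complementary depth cues by confidence.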
Michael Tao, Pratul Srinivasan, Sunil Hadap, Szymon Rusinkiewicz, Jitendra Malik, and Ravi Ramamoorthi.
"Shape Estimation from Shading, Defocus, and Correspondence Using Light-Field Angular Coherence."
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 39(3):546-560, March 2017.
@article{tao2017shape,
  author  = "Michael Tao and Pratul Srinivasan and Sunil Hadap and Szymon
             Rusinkiewicz and Jitendra Malik and Ravi Ramamoorthi",
  title   = "Shape Estimation from Shading, Defocus, and Correspondence Using
             Light-Field Angular Coherence",
  journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)",
  year    = "2017",
  month   = mar,
  volume  = "39",
  number  = "3",
  pages   = "546--560"
}