Learning Object Color Models from Multi-view Constraints


Trevor Owens; Kate Saenko; Ayan Chakrabarti; Ying Xiong; Todd Zickler; Trevor Darrell


Color is known to be highly discriminative for many object recognition tasks, but is difficult to infer from uncontrolled images in which the illuminant is not known. Traditional methods for color constancy can improve surface reflectance estimates from such uncalibrated images, but their output depends significantly on the background scene. In many recognition and retrieval applications, we have access to image sets that contain multiple views of the same object in different environments; we show in this paper that correspondences between these images provide important constraints that can improve color constancy. We introduce the multi-view color constancy problem, and present a method to recover estimates of underlying surface reflectance based on joint estimation of these surface properties and the illuminants present in multiple images. The method can exploit image correspondences obtained by various alignment techniques, and we show examples based on matching local region features. Our results show that multi-view constraints can significantly improve estimates of both scene illuminants and object color (surface reflectance) when compared to a baseline single-view method.
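The joint estimation the abstract describes can be illustrated with a minimal sketch. Assuming a diagonal (von Kries) illuminant model, an observation of surface j in image i factors per channel as C[i, j] = L[i] * R[j], which becomes a linear system in log space with a closed-form least-squares solution once a gauge is fixed. This is a simplified illustration of the multi-view idea, not the authors' actual algorithm (the function name and the noise-free setup are assumptions):

```python
import numpy as np

def multiview_reflectance(C):
    """Jointly estimate per-image illuminants and a shared set of
    surface reflectances from corresponding color observations.

    C : array of shape (n_images, n_surfaces, 3); linear RGB values of
        the same n_surfaces corresponding patches seen in each image
        under that image's unknown illuminant.

    Assumes a diagonal (von Kries) model, C[i, j] = L[i] * R[j] per
    channel, so log C[i, j] = log L[i] + log R[j] is linear.  Returns
    (L, R); the solution is only defined up to a per-channel scale, so
    the gauge is fixed by forcing the illuminants to have zero mean in
    log space.
    """
    logC = np.log(C)
    # Least-squares solution of log C[i, j] = log L[i] + log R[j]
    # under the constraint mean_i log L[i] = 0:
    logR = logC.mean(axis=0)           # shared surface reflectances
    logL = (logC - logR).mean(axis=1)  # per-image illuminants
    return np.exp(logL), np.exp(logR)
```

With noisy observations the same log-linear system can be solved by ordinary least squares over all correspondences; the closed form above is the exact solution in the noise-free case.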



BibTeX entry

@conference{296,
  title     = {Learning Object Color Models from Multi-view Constraints},
  year      = {2011},
  month     = {21/06/2011},
  publisher = {IEEE},
  author    = {Trevor Owens and Kate Saenko and Ayan Chakrabarti and Ying Xiong and Todd Zickler and Trevor Darrell}
}