We see the world because it is lit by illumination from all around us. Estimating this illumination is critical for many vision and graphics tasks, such as scene understanding, image editing, and augmented reality. However, inferring illumination from a single image is extremely challenging: we rarely observe lighting directly in images; we see it only indirectly, through its interactions with the unknown shapes and materials in the scene. In this talk, I will present two approaches to this problem, one for outdoor scenes and one for indoor scenes. In both cases, we have developed a way to create training data from a large-scale LDR panorama dataset. Using this data in conjunction with a physics-motivated training process, we train deep neural networks to predict HDR illumination from a single LDR image. These networks produce state-of-the-art illumination estimates, and some of this work now ships in Adobe's 3D compositing tool, Adobe Dimension CC.
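To make the data-creation idea concrete, one common way to turn a panorama dataset into supervision for illumination estimation is to use a limited-field-of-view crop of each panorama as the network input, with the full panorama (approximately linearized) as the illumination target. The sketch below is purely illustrative: the function name, the center-crop strategy, the gamma-based linearization, and all parameters are assumptions for exposition, not the pipeline described in the talk.

```python
import numpy as np

def make_training_pair(panorama, fov_frac=0.25, gamma=2.2):
    """Illustrative sketch (not the actual pipeline): derive an
    (input crop, illumination target) pair from one equirectangular
    LDR panorama.

    panorama: H x W x 3 float array in [0, 1] (gamma-encoded LDR).
    Returns a narrow crop (simulating an ordinary photo taken inside
    the scene) and an approximate linear-radiance panorama target.
    """
    h, w, _ = panorama.shape
    # Input: a central crop covering only a fraction of the full view.
    cw = int(w * fov_frac)
    x0 = (w - cw) // 2
    crop = panorama[h // 4 : 3 * h // 4, x0 : x0 + cw]
    # Target: undo display gamma as a crude LDR-to-linear step; real
    # systems must also recover the clipped HDR dynamic range (e.g.,
    # bright light sources), which is part of what the network learns.
    target = np.power(panorama, gamma)
    return crop, target
```

The point of the sketch is the supervision structure: the network only ever sees the crop, yet is trained to predict lighting for the whole scene around it.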
Kalyan Sunkavalli is a Senior Research Scientist at Adobe Research. He received his Ph.D. from Harvard University in 2012 under the supervision of Hanspeter Pfister. His research interests lie at the intersection of Computer Vision and Graphics; in particular, his work focuses on understanding different aspects of visual appearance in images and videos -- especially illumination, geometry, and reflectance properties -- and building tools that enable users to easily edit and enhance them.