
Using Shadows to See Behind Objects

Connor Henley, Camera Culture, MIT Media Lab

The full text of our paper is available at the link at the bottom of this page.

Sometimes the thing that we want to see is hidden behind something else. A neighboring vehicle on the road might block a motorist's view of a pedestrian who is about to cross the street. Trees might severely hamper a drone's ability to navigate through a forest by occluding its field of view in almost every direction. Even the front-facing surface of an opaque object blocks our view of that object's back-facing surface.

While we can't view hidden objects directly, sometimes we can learn about them from more indirect cues. In this paper we exploit the fact that hidden objects can still cast shadows onto surfaces that we can see. Shadows are very informative about object shape, particularly if the position of the light source is also known.

We use a laser pointer to project laser spots onto visible surfaces lying to one side of the hidden space. These laser spots act as scannable, "virtual" point sources of light. We use an RGB or RGB-D camera to observe the shadows that hidden objects cast onto surfaces on the opposite side of the hidden space. After scanning the laser over multiple spot positions and observing the resulting sets of shadows, we use a space-carving algorithm to recover the detailed 3D shapes of objects lying within the hidden space.
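
To make the space-carving step concrete, the sketch below shows one way it could be implemented with NumPy. It assumes the relay-surface geometry and the 3D positions of the laser spots have already been estimated (for example, from the RGB-D camera), and that each observed relay-surface point has already been classified as lit or shadowed for each laser spot. The function and variable names are illustrative and are not taken from the paper's implementation.

```python
import numpy as np

def carve(voxel_centers, laser_spots, relay_points, lit_masks, voxel_size):
    """Conservative space carving from two-bounce shadow observations.

    voxel_centers : (V, 3) candidate voxel centers spanning the hidden space.
    laser_spots   : (S, 3) positions of the scanned virtual point sources.
    relay_points  : (P, 3) 3D points on the opposite (observed) relay surface.
    lit_masks     : (S, P) booleans; True where relay point p is illuminated
                    (i.e., not in shadow) when laser spot s is active.
    Returns a (V,) boolean occupancy estimate of the hidden space.
    """
    occupied = np.ones(len(voxel_centers), dtype=bool)
    for s, spot in enumerate(laser_spots):
        for p, point in enumerate(relay_points):
            if not lit_masks[s, p]:
                continue  # shadowed: the ray may be blocked, so carve nothing
            # Lit: the segment from the laser spot to this relay point passed
            # through empty space, so every voxel it intersects can be removed.
            direction = point - spot
            length = np.linalg.norm(direction)
            direction = direction / length
            ts = np.arange(0.0, length, 0.5 * voxel_size)  # ~half-voxel steps
            samples = spot + ts[:, None] * direction
            for q in samples:
                d2 = np.sum((voxel_centers - q) ** 2, axis=1)
                occupied &= d2 > (0.5 * voxel_size) ** 2
    return occupied
```

Only lit rays carve; shadowed rays leave voxels in place, so the result is a conservative superset of the hidden objects that shrinks toward their true shape as more laser spots are scanned.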

Our paper was published in the proceedings of the European Conference on Computer Vision in 2020.

Title:  Imaging Behind Occluders Using Two-Bounce Light 

Abstract:  We introduce the new non-line-of-sight imaging problem of imaging behind an occluder. The behind-an-occluder problem can be solved if the hidden space is flanked by opposing visible surfaces. We illuminate one surface and observe light that scatters off of the opposing surface after traveling through the hidden space. Hidden objects attenuate light that passes through the hidden space, leaving an observable signature that can be used to reconstruct their shape. Our method uses a simple capture setup—we use an eye-safe laser pointer as a light source and off-the-shelf RGB or RGB-D cameras to estimate the geometry of relay surfaces and observe two-bounce light. We analyze the photometric and geometric challenges of this new imaging problem, and develop a robust method that produces high-quality 3D reconstructions in uncontrolled settings where relay surfaces may be non-planar.
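
As a complement to the carving sketch above, here is one simple way the lit/shadowed classification might be obtained: difference a frame captured with a laser spot active against an ambient-only frame to isolate the two-bounce signal, then threshold. The threshold and normalization below are placeholder choices, not values from the paper, and a per-pixel mask like this would still need to be sampled at the relay-surface points whose 3D positions come from the depth camera.

```python
import numpy as np

def estimate_lit_mask(frame_on, frame_off, threshold=0.05):
    """Classify relay-surface pixels as lit (True) or shadowed (False).

    frame_on  : (H, W) grayscale frame with the laser spot illuminated.
    frame_off : (H, W) grayscale frame with the laser off (ambient light only).
    """
    # Differencing isolates the laser's two-bounce contribution from ambient light.
    two_bounce = np.clip(frame_on.astype(float) - frame_off.astype(float), 0.0, None)
    two_bounce /= two_bounce.max() + 1e-8  # crude normalization to [0, 1]
    return two_bounce > threshold          # placeholder threshold; tune for real captures
```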

Full text is accessible at:  https://www.ecva.net/papers/eccv_2020/papers_ECCV/html/6833_ECCV_2020_paper.php