Camera Culture
Making the invisible visible–inside our bodies, around us, and beyond–for health, work, and connection.
The Camera Culture group is building new tools to better capture and share visual information. What will a camera look like in ten years? How should we change the camera to improve mobile photography? How will a billion networked and portable cameras change the social culture? We exploit unusual optics, novel illumination, and emerging sensors to build new capture devices and develop associated algorithms.

Research Projects

  • 6D Display

    Ramesh Raskar, Martin Fuchs, Hans-Peter Seidel, and Hendrik P. A. Lensch

    Is it possible to create passive displays that respond to changes in viewpoint and incident light conditions? Holograms and 4D displays respond to changes in viewpoint. 6D displays respond to changes in viewpoint as well as surrounding light. We encode the 6D reflectance field into an ordinary 2D film. These displays are completely passive and do not require any power. Applications include novel instruction manuals and mood lights.

  • A Switchable Light Field Camera

    Matthew Hirsch, Sriram Sivaramakrishnan, Suren Jayasuriya, Albert Wang, Alyosha Molnar, Ramesh Raskar, and Gordon Wetzstein

    We propose a flexible light field camera architecture that represents a convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that—contrary to light field cameras today—our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
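The sparsity-constrained recovery step can be illustrated on a toy problem. This is a minimal sketch, not the authors' solver: it assumes the light field is sparse in some dictionary and recovers the coefficients from fewer measurements with ISTA (iterative soft-thresholding); the random matrix `A` is a stand-in for the Angle Sensitive Pixel projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: b = A @ x_true with sparse x_true (a stand-in for light
# field coefficients in an overcomplete dictionary).
m, n, k = 60, 120, 5            # measurements, unknowns, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# ISTA: iterative soft-thresholding for min ||Ax - b||^2 / 2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(2000):
    x = x - step * (A.T @ (A @ x - b))                     # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small: the sparse signal is recovered from m < n samples
```

With many fewer measurements than unknowns, recovery succeeds only because the signal is sparse; this is the trade the paper exploits to avoid the usual resolution loss.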

  • Bokode: Imperceptible Visual Tags for Camera-Based Interaction from a Distance

    Ramesh Raskar, Ankit Mohan, Grace Woo, Shinsaku Hiura and Quinn Smithwick
    With over a billion people carrying camera-phones worldwide, we have a new opportunity to upgrade the classic bar code to enable a flexible interface between the machine world and the human world. Current bar codes must be read within a short range, and the codes occupy valuable space on products. We present a new, low-cost, passive optical design so that bar codes can be shrunk to smaller than 3 mm and can be read by unmodified ordinary cameras several meters away.
  • CATRA: Mapping of Cataract Opacities Through an Interactive Approach

    Ramesh Raskar, Vitor Pamplona, Erick Passos, Jan Zizka, Jason Boggess, David Schafran, Manuel M. Oliveira, Everett Lawson, and Esteban Clua

    We introduce a novel interactive method to assess cataracts in the human eye by crafting an optical solution that measures the perceptual impact of forward scattering on the foveal region. Current solutions rely on highly trained clinicians to check the back scattering in the crystalline lens and test their predictions on visual acuity tests. Close-range parallax barriers create collimated beams of light to scan through sub-apertures, scattering light as it strikes a cataract. User feedback generates maps for opacity, attenuation, contrast, and local point-spread functions. The goal is to allow a general audience to operate a portable, high-contrast, light-field display to gain a meaningful understanding of their own visual conditions. The compiled maps are used to reconstruct the cataract-affected view of an individual, offering a unique approach for capturing information for screening, diagnostic, and clinical analysis.

  • Coded Computational Photography

    Jaewon Kim, Ahmed Kirmani, Ankit Mohan and Ramesh Raskar
    Computational photography is an emerging multi-disciplinary field at the intersection of optics, signal processing, computer graphics and vision, electronics, art, and online sharing in social networks. The first phase of computational photography was about building a super-camera that has enhanced performance in terms of the traditional parameters, such as dynamic range, field of view, or depth of field. We call this Epsilon Photography. The next phase of computational photography is building tools that go beyond the capabilities of this super-camera. We call this Coded Photography. We can code exposure, aperture, motion, wavelength, and illumination. By blocking light over time or space, we can preserve more details about the scene in a single recorded photograph.
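Why coding preserves detail is easiest to see in the frequency domain. The sketch below is my own illustration, using a random binary code rather than the optimized sequence from the flutter-shutter work: an always-open shutter blurs motion with a box filter whose spectrum has exact nulls (those scene frequencies are unrecoverable), while a broadband code keeps every frequency invertible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Motion blur is convolution with the shutter's temporal pattern.
# Open shutter -> box filter -> spectral nulls -> information lost.
# Broadband binary code (random here; the published work optimizes the
# sequence) -> no nulls -> deblurring is well-posed.
n, N = 32, 256                          # code length, spectrum samples
box = np.ones(n)
code = rng.integers(0, 2, n).astype(float)

H_box = np.abs(np.fft.fft(box, N))
H_code = np.abs(np.fft.fft(code, N))

print(H_box.min())    # essentially zero at the nulls
print(H_code.min())   # bounded away from zero
```

Deconvolving with the box filter would require dividing by those near-zero values, amplifying noise without bound; the coded shutter avoids that.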
  • Coded Focal Stack Photography

    Ramesh Raskar, Gordon Wetzstein, and Xing Lin (Tsinghua University)

    We present coded focal stack photography as a computational photography paradigm that combines a focal sweep and a coded sensor readout with novel computational algorithms. We demonstrate various applications of coded focal stacks, including photography with programmable non-planar focal surfaces and multiplexed focal stack acquisition. By leveraging sparse coding techniques, coded focal stacks can also be used to recover a full-resolution depth and all-in-focus (AIF) image from a single photograph. Coded focal stack photography is a significant step towards a computational camera architecture that facilitates high-resolution post-capture refocusing, flexible depth of field, and 3D imaging.

  • Compressive Light Field Camera: Next Generation in 3D Photography

    Kshitij Marwah, Gordon Wetzstein, Yosuke Bando and Ramesh Raskar

    Consumer photography is undergoing a paradigm shift with the development of light field cameras. Commercial products such as those by Lytro and Raytrix have begun to appear in the marketplace with features such as post-capture refocus, 3D capture, and viewpoint changes. These cameras suffer from two major drawbacks: a significant drop in resolution (a 20 MP sensor yields a 1 MP image) and a large form factor. We have developed a new light field camera that circumvents traditional resolution losses (a 20 MP sensor yields a refocused image at full sensor resolution) in a thin form factor that can fit into traditional DSLRs and mobile phones.

  • Eyeglasses-Free Displays

    Ramesh Raskar and Gordon Wetzstein

    Millions of people worldwide need glasses or contact lenses to see or read properly. We introduce a computational display technology that predistorts the presented content for an observer, so that the target image is perceived without the need for eyewear. We demonstrate a low-cost prototype that can correct myopia, hyperopia, astigmatism, and even higher-order aberrations that are difficult to correct with glasses.
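The predistortion idea can be sketched in one dimension. This is my toy illustration, not the paper's light field algorithm: a plain Wiener inverse filter on a 1D signal, with a Gaussian standing in for the eye's defocus PSF. (The actual work uses light field displays in part because pure inverse filtering costs contrast and cannot emit negative light.)

```python
import numpy as np

# Toy 1D "image" and a Gaussian blur standing in for the eye's PSF.
n = 256
target = (np.sin(np.linspace(0, 8 * np.pi, n)) > 0).astype(float)
x = np.arange(n) - n // 2
psf = np.exp(-(x / 4.0) ** 2)
psf /= psf.sum()

# Predistort with a Wiener inverse filter so that, after the eye blurs
# the displayed content, the perceived image approximates the target.
H = np.fft.fft(np.fft.ifftshift(psf))
eps = 1e-3                                  # regularization
W = np.conj(H) / (np.abs(H) ** 2 + eps)
pre = np.real(np.fft.ifft(np.fft.fft(target) * W))

perceived = np.real(np.fft.ifft(np.fft.fft(pre) * H))      # eye blurs predistorted
blurred = np.real(np.fft.ifft(np.fft.fft(target) * H))     # eye blurs original
print(np.abs(perceived - target).mean() < np.abs(blurred - target).mean())
```

The predistorted content looks strange on its own, but after the viewer's own aberration it is closer to the target than the unprocessed image would be.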

  • Imaging through Scattering Media Using Femtophotography

    Ramesh Raskar, Christopher Barsi and Nikhil Naik

    We use time-resolved information in an iterative optimization algorithm to recover reflectance of a three-dimensional scene hidden behind a diffuser. We demonstrate reconstruction of large images without relying on knowledge of diffuser properties.

  • Inverse Problems in Time-of-Flight Imaging

    Ayush Bhandari and Ramesh Raskar

    We are exploring mathematical modeling of Time-of-Flight imaging problems and solutions.

  • Layered 3D: Glasses-Free 3D Printing

    Gordon Wetzstein, Douglas Lanman, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar

    We are developing tomographic techniques for image synthesis on displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field or high-contrast 2D image when illuminated by a uniform backlight. Since arbitrary views may be inconsistent with any single attenuator, iterative tomographic reconstruction minimizes the difference between the emitted and target light fields, subject to physical constraints on attenuation. For 3D displays, spatial resolution, depth of field, and brightness are increased, compared to parallax barriers. We conclude by demonstrating the benefits and limitations of attenuation-based light field displays using an inexpensive fabrication method: separating multiple printed transparencies with acrylic sheets.
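A minimal sketch of why attenuation layers yield a tomographic problem (my toy with two 1D layers and a separable target, rather than the paper's iterative solve over arbitrary light fields): a ray through both layers is attenuated by the product of the layer values, which in the log domain becomes a sum, exactly the linear model tomography assumes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two stacked attenuating layers: a ray through pixel i of layer 1 and
# pixel j of layer 2 has transmittance a1[i] * a2[j].
n = 32
a1_true = rng.uniform(0.2, 1.0, n)
a2_true = rng.uniform(0.2, 1.0, n)
target = np.outer(a1_true, a2_true)       # target "light field" of this toy

# In the log domain the model is linear, so iterative reconstruction
# applies; here alternating least squares suffices for the separable toy.
l = np.log(target)
l1 = np.zeros(n)
l2 = np.zeros(n)
for _ in range(50):
    l1 = (l - l2[None, :]).mean(axis=1)
    l2 = (l - l1[:, None]).mean(axis=0)

# A constant shift between l1 and l2 is unobservable in the product and
# is absorbed automatically; the reconstructed light field matches.
err = np.abs(np.exp(l1[:, None] + l2[None, :]) - target).max()
print(err)  # near machine precision for this separable target
```

Real targets are not separable across layers, which is why the paper minimizes the emitted-vs-target difference iteratively under physical attenuation constraints rather than solving exactly.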

  • LensChat: Sharing Photos with Strangers

    Ramesh Raskar, Rob Gens and Wei-Chao Chen
    With networked cameras in everyone's pockets, we are exploring the practical and creative possibilities of public imaging. LensChat allows cameras to communicate with each other using trusted optical communications, allowing users to share photos with a friend by taking pictures of each other, or borrow the perspective and abilities of many cameras.
  • Looking Around Corners

    Andreas Velten, Di Wu, Christopher Barsi, Ayush Bhandari, Achuta Kadambi, Nikhil Naik, Micha Feigin, Daniel Raviv, Thomas Willwacher, Otkrist Gupta, Ashok Veeraraghavan, Moungi G. Bawendi, and Ramesh Raskar
    Using a femtosecond laser and a camera with a time resolution of about one trillion frames per second, we recover objects hidden out of sight. We measure speed-of-light timing information of light scattered by the hidden objects via diffuse surfaces in the scene. The object data are mixed up and are difficult to decode using traditional cameras. We combine this "time-resolved" information with novel reconstruction algorithms to untangle image information and demonstrate the ability to look around corners.
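A 2D toy version of the untangling step (my sketch, not the paper's algorithm or geometry): each time-resolved sample constrains the hidden point to an ellipse whose foci are the laser spot and the sensed wall point, and backprojecting (voting) over all samples localizes the hidden scatterer.

```python
import numpy as np

# 2D toy: a hidden point scatterer, a laser spot on a diffuse wall (y=0),
# and sensor samples along the wall that record time-resolved returns.
laser = np.array([0.0, 0.0])
wall_x = np.linspace(-1.0, 1.0, 21)    # sensed positions on the wall
hidden = np.array([0.3, 0.7])          # unknown hidden point (ground truth)

# Path length laser -> hidden -> wall sample, in units where c = 1
# (constant wall-to-camera terms dropped for simplicity).
times = np.hypot(*(hidden - laser)) + np.hypot(wall_x - hidden[0], -hidden[1])

# Ellipsoidal backprojection: every candidate point consistent with a
# measured time lies on an ellipse; accumulate votes over all samples.
xs = np.linspace(-1, 1, 81)
ys = np.linspace(0.05, 1, 81)
X, Y = np.meshgrid(xs, ys)
heat = np.zeros_like(X)
sigma = 0.02                           # timing blur of the toy sensor
for wx, t in zip(wall_x, times):
    pred = np.hypot(X - laser[0], Y - laser[1]) + np.hypot(X - wx, Y)
    heat += np.exp(-((pred - t) / sigma) ** 2)

i, j = np.unravel_index(np.argmax(heat), heat.shape)
print(xs[j], ys[i])   # close to the hidden point (0.3, 0.7)
```

Only at the true hidden point do all ellipses agree, so the vote map peaks there; the real system reconstructs full hidden shapes the same way, with many more samples and a 3D voxel grid.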
  • NETRA: Smartphone Add-On for Eye Tests

    Vitor Pamplona, Manuel Oliveira, Erick Passos, Ankit Mohan, David Schafran, Jason Boggess and Ramesh Raskar

    Can a person look at a portable display, click on a few buttons, and recover their refractive condition? Our optometry solution combines inexpensive optical elements and interactive software components to create a new optometry device suitable for developing countries. The technology allows for early, extremely low-cost, mobile, fast, and automated diagnosis of the most common refractive eye disorders: myopia (nearsightedness), hypermetropia (farsightedness), astigmatism, and presbyopia (age-related visual impairment). The patient overlaps lines in up to eight meridians and the Android app computes the prescription. The average accuracy is comparable to the prior art—and in some cases, even better. We propose the use of our technology as a self-evaluation tool in homes, schools, and health centers in developing countries, and in places where an optometrist is unavailable or too expensive.

  • New Methods in Time-of-Flight Imaging

    Ramesh Raskar, Christopher Barsi, Ayush Bhandari, Anshuman Das, Micha Feigin-Almon and Achuta Kadambi

    Time-of-flight (ToF) cameras are commercialized consumer cameras that provide a depth map of a scene, with many applications in computer vision and quality assurance. Currently, we are exploring novel ways of integrating the camera illumination and detection circuits with computational methods to handle challenging environments, including multiple scattering and fluorescence emission.

  • PhotoCloud: Personal to Shared Moments with Angled Graphs of Pictures

    Ramesh Raskar, Aydin Arpa, Otkrist Gupta and Gabriel Taubin

    We present a near real-time system for interactively exploring a collectively captured moment without explicit 3D reconstruction. Our system favors immediacy and local coherency over global consistency. It is common to represent photos as vertices of a weighted graph. The weighted angled graphs of photos used in this work can be regarded as the result of discretizing the Riemannian geometry of the high-dimensional manifold of all possible photos. Ultimately, our system enables everyday people to take advantage of each other's perspectives in order to create on-the-spot spatiotemporal visual experiences similar to the popular bullet-time sequence. We believe that this type of application will greatly enhance shared human experiences, spanning from events as personal as parents watching their children's football game to highly publicized red-carpet galas.

  • Polarization Fields: Glasses-Free 3DTV

    Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar

    We introduce polarization field displays as an optically efficient design for dynamic light field display using multi-layered LCDs. Such displays consist of a stacked set of liquid crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a spatially controllable polarization rotator, as opposed to a conventional spatial light modulator that directly attenuates light. We demonstrate that such displays can be controlled, at interactive refresh rates, by adopting the SART algorithm to tomographically solve for the optimal spatially varying polarization state rotations applied by each layer. We validate our design by constructing a prototype using modified off-the-shelf panels. We demonstrate interactive display using a GPU-based SART implementation supporting both polarization-based and attenuation-based architectures.
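The SART update itself is generic and compact. Below is a sketch on a random nonnegative linear system standing in for the display's projection model (the real system maps per-layer polarization rotations to emitted light): each iteration backprojects the row-normalized residual, scaled by column sums, then projects onto the feasible range.

```python
import numpy as np

rng = np.random.default_rng(4)

# Consistent linear system A x = b with x constrained to [0, 1]
# (a stand-in for feasible per-pixel rotation states).
m, n = 60, 20
A = rng.uniform(0.0, 1.0, (m, n))
x_true = rng.uniform(0.0, 1.0, n)
b = A @ x_true

row = A.sum(axis=1)                # row sums of A
col = A.sum(axis=0)                # column sums of A
x = np.zeros(n)
lam = 1.0                          # relaxation parameter, 0 < lam < 2
for _ in range(2000):
    x = x + lam * (A.T @ ((b - A @ x) / row)) / col   # SART update
    x = np.clip(x, 0.0, 1.0)       # project onto the feasible set

rel = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(rel)  # residual shrinks toward zero for this consistent system
```

Each SART sweep is a few matrix-vector products, which is what makes a GPU implementation fast enough for interactive refresh rates.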

  • Portable Retinal Imaging

    Everett Lawson, Jason Boggess, Alex Olwal, Gordon Wetzstein, and Siddharth Khullar

    The major challenge in preventing blindness is identifying patients and bringing them to specialty care. Diseases that affect the retina, the image sensor in the human eye, are particularly challenging to address, because they require highly trained eye specialists (ophthalmologists) who use expensive equipment to visualize the inner parts of the eye. Diabetic retinopathy, HIV/AIDS-related retinitis, and age-related macular degeneration are three conditions that can be screened and diagnosed to prevent blindness caused by damage to the retina. We combine two novel ideas, simplified optics and clever illumination, that relax the constraints of traditional devices, in order to capture and visualize images of the retina in a standalone device easily operated by the user. Prototypes are conveniently embedded in either a mobile hand-held retinal camera or wearable eyeglasses.

  • Reflectance Acquisition Using Ultrafast Imaging

    Ramesh Raskar and Nikhil Naik

    We demonstrate a new technique that allows a camera to rapidly acquire reflectance properties of objects "in the wild" from a single viewpoint, over relatively long distances and without encircling equipment. This project has a wide variety of applications in computer graphics including image relighting, material identification, and image editing.

  • Second Skin: Motion Capture with Actuated Feedback for Motor Learning

    Ramesh Raskar, Kenichiro Fukushi, Christopher Schonauer and Jan Zizka

    We have created a 3D motion-tracking system with automatic, real-time vibrotactile feedback and an assembly of photo-sensors, infrared projector pairs, vibration motors, and wearable suit. This system allows us to enhance and quicken the motor learning process in a variety of fields such as healthcare (physiotherapy), entertainment (dance), and sports (martial arts).

  • Shield Field Imaging

    Jaewon Kim
    We present a single-shot, shadow-based method for scanning 3D objects. We decouple 3D occluders from 4D illumination using shield fields: the 4D attenuation function which acts on any light field incident on an occluder. We then analyze occluder reconstruction from cast shadows, leading to a single-shot light field camera for visual hull reconstruction.
  • Single Lens Off-Chip Cellphone Microscopy

    Ramesh Raskar and Aydin Arpa

    Within the last few years, cellphone subscriptions have widely spread and now cover even the remotest parts of the planet. Adequate access to healthcare, however, is not widely available, especially in developing countries. We propose a new approach to converting cellphones into low-cost scientific devices for microscopy. Cellphone microscopes have the potential to revolutionize health-related screening and analysis for a variety of applications, including blood and water tests. Our optical system is more flexible than previously proposed mobile microscopes, and allows for wide field of view panoramic imaging, the acquisition of parallax, and coded background illumination, which optically enhances the contrast of transparent and refractive specimens.

  • Skin Perfusion Photography

    Ramesh Raskar, Christopher Barsi and Guy Satat

    Skin and tissue perfusion measurements are important parameters for assessing wounds and burns, and for monitoring plastic and reconstructive surgeries. In this project, we use a standard camera and a laser in order to image blood flow in skin tissue. We show results of blood flow maps of hands, arms, and fingers. We combine the complex scattering of laser light from blood with computational techniques.

  • Slow Display

    Daniel Saakes, Kevin Chiu, Tyler Hutchison, Biyeun Buczyk, Naoya Koizumi and Masahiko Inami

    How can we show our 16-megapixel photos from our latest trip on a digital display? How can we create screens that are visible in direct sunlight as well as complete darkness? How can we create large displays that consume less than 2 W of power? How can we create design tools for digital decal application and intuitive computer-aided modeling? We introduce a display that is high resolution but updates at a low frame rate: a slow display. We use lasers and monostable light-reactive materials to provide programmable space-time resolution. This refreshable, high-resolution display exploits the time decay of monostable materials, making it attractive in terms of cost and power requirements. Our effort to repurpose these materials involves solving underlying problems in color reproduction, day-night visibility, and optimal time sequences for updating content.

  • SpeckleSense

    Alex Olwal, Andrew Bardagjy, Jan Zizka and Ramesh Raskar

    Motion sensing is of fundamental importance for user interfaces and input devices. In applications where optical sensing is preferred, traditional camera-based approaches can be prohibitive due to limited resolution, low frame rates, and the required computational power for image processing. We introduce a novel set of motion-sensing configurations based on laser speckle sensing that are particularly suitable for human-computer interaction. The underlying principles allow these configurations to be fast, precise, extremely compact, and low cost.
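The core measurement can be sketched in one dimension (my illustration, with synthetic noise standing in for a laser speckle pattern): motion translates the speckle across the sensor, and the displacement is the peak of the cross-correlation between successive frames.

```python
import numpy as np

rng = np.random.default_rng(5)

# A 1D "speckle" signal and a shifted copy (the sensor frame after motion).
n = 512
speckle = rng.standard_normal(n * 2)
shift = 17                                   # true displacement in pixels
frame0 = speckle[100:100 + n]
frame1 = speckle[100 + shift:100 + shift + n]

# Estimate the displacement by FFT cross-correlation (circular, so the
# shift must stay small relative to the window).
F0 = np.fft.fft(frame0 - frame0.mean())
F1 = np.fft.fft(frame1 - frame1.mean())
xcorr = np.real(np.fft.ifft(F0 * np.conj(F1)))
est = int(np.argmax(xcorr))
if est > n // 2:                             # unwrap negative shifts
    est -= n
print(est)  # 17: the recovered displacement matches the true shift
```

Because speckle is high-contrast and essentially random, the correlation peak is sharp, which is what lets a tiny sensor with a few photodiodes track motion precisely without full image processing.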

  • StreetScore

    Nikhil Naik, Jade Philipoom, Ramesh Raskar, and Cesar Hidalgo

    StreetScore is a machine learning algorithm that predicts the perceived safety of a streetscape. StreetScore was trained using 2,920 images of streetscapes from New York and Boston and their rankings for perceived safety obtained from a crowdsourced survey. To predict an image's score, StreetScore decomposes this image into features and assigns the image a score based on the associations between features and scores learned from the training dataset. We use StreetScore to create a collection of map visualizations of perceived safety of street views from cities in the United States. StreetScore allows us to scale up the evaluation of streetscapes by several orders of magnitude when compared to a crowdsourced survey. StreetScore can empower research groups working on connecting urban perception with social and economic outcomes by providing high resolution data on urban perception.
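The scoring pipeline (decompose an image into features, assign a score from learned feature-score associations) reduces to regression. A synthetic sketch, with random vectors standing in for image features and scores, not the released model or its actual feature set:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in: each "streetscape" is a feature vector (the real
# system extracts texture/color features from the image) paired with a
# crowdsourced perceived-safety score.
n_train, n_feat = 500, 20
X = rng.standard_normal((n_train, n_feat))
w_true = rng.standard_normal(n_feat)        # hidden feature-score relation
y = X @ w_true + 0.1 * rng.standard_normal(n_train)

# Learn the feature-score associations with ridge regression (closed form).
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# Score a new "image" from its features alone.
x_new = rng.standard_normal(n_feat)
pred_err = abs(x_new @ w - x_new @ w_true)
print(pred_err)  # small: the learned model generalizes to unseen inputs
```

Once trained, scoring an image costs one feature extraction plus one dot product, which is why the approach scales to millions of street views where a crowdsourced survey cannot.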

  • Tensor Displays: High-Quality Glasses-Free 3D TV

    Gordon Wetzstein, Douglas Lanman, Matthew Hirsch and Ramesh Raskar

    We introduce tensor displays: a family of glasses-free 3D displays comprising all architectures employing (a stack of) time-multiplexed LCDs illuminated by uniform or directional backlighting. We introduce a unified optimization framework that encompasses all tensor display architectures and allows for optimal glasses-free 3D display. We demonstrate the benefits of tensor displays by constructing a reconfigurable prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based NTF implementation enables interactive applications. In our experiments we show that tensor displays reveal practical architectures with greater depths of field, wider fields of view, and thinner form factors, compared to prior automultiscopic displays.

  • Theory Unifying Ray and Wavefront Lightfield Propagation

    George Barbastathis, Ramesh Raskar, Belen Masia, Se Baek Oh and Tom Cuypers
    This work focuses on bringing powerful concepts from wave optics to the creation of new algorithms and applications for computer vision and graphics. Specifically, ray-based, 4D lightfield representation, based on simple 3D geometric principles, has led to a range of new applications that include digital refocusing, depth estimation, synthetic aperture, and glare reduction within a camera or using an array of cameras. The lightfield representation, however, is inadequate to describe interactions with diffractive or phase-sensitive optical elements. Therefore we use Fourier optics principles to represent wavefronts with additional phase information. We introduce a key modification to the ray-based model to support modeling of wave phenomena. The two key ideas are "negative radiance" and a "virtual light projector." This involves exploiting higher dimensional representation of light transport.
  • Trillion Frames Per Second Camera

    Andreas Velten, Di Wu, Adrián Jarabo, Belen Masia, Christopher Barsi, Chinmaya Joshi, Everett Lawson, Moungi Bawendi, Diego Gutierrez, and Ramesh Raskar

    We have developed a camera system that captures movies at an effective rate of approximately one trillion frames per second. In one frame of our movie, light moves only about 0.6 mm. We can observe pulses of light as they propagate through a scene. We use this information to understand how light propagation affects image formation and to learn things about a scene that are invisible to a regular camera.
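The per-frame distance follows directly from the speed of light. Assuming an effective exposure of roughly 2 picoseconds (my assumption, chosen to match the stated ~0.6 mm figure):

```python
# Distance light travels during one effective frame of the camera.
c = 299_792_458.0          # speed of light, m/s
exposure = 2e-12           # effective exposure time in seconds (assumed)
distance_mm = c * exposure * 1e3
print(distance_mm)         # ~0.6 mm per frame
```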

  • Ultrasound Tomography

    Ramesh Raskar, Micha Feigin-Almon and Brian Anthony

    Traditional medical ultrasound assumes that we are imaging ideal liquids. We are interested in imaging muscle and bone, as well as measuring the elastic properties of tissues, settings where this assumption fails badly. Motivated by cancer detection, Duchenne muscular dystrophy, and prosthetic fitting, we use tomographic techniques, as well as ideas from seismic imaging, to address these issues.

  • Vision on Tap

    Ramesh Raskar
    Computer vision is a class of technologies that lets computers use cameras to automatically stitch together panoramas, reconstruct 3D geometry from multiple photographs, and even tell you when the water's boiling. For decades, this technology has been advancing mostly within the confines of academic institutions and research labs. Vision on Tap is our attempt to bring computer vision to the masses.
  • VisionBlocks

    Chunglin Wen and Ramesh Raskar

    VisionBlocks is an on-demand, in-browser, customizable, computer-vision application-building platform for the masses. Even without any prior programming experience, users can create and share computer vision applications. End-users drag and drop computer vision processing blocks to create their apps. The input feed could be either from a user's webcam or a video from the Internet. VisionBlocks is a community effort where researchers obtain fast feedback, developers monetize their vision applications, and consumers can use state-of-the-art computer vision techniques. We envision a Vision-as-a-Service (VaaS) over-the-web model, with easy-to-use interfaces for application creation for everyone.

  • Visual Lifelogging

    Hyowon Lee, Nikhil Naik, Lubos Omelina, Daniel Tokunaga, Tiago Lucena and Ramesh Raskar

    We are creating a novel visual lifelogging framework for applications in personal life and workplaces.