Object-Based Media
How sensing, understanding, and new interface technologies can change everyday life, the ways in which we communicate with one another, storytelling, and entertainment.
We explore the future of electronic visual communication and expression, and how the distribution of computational intelligence throughout video and audio communication systems can create a richer connection between the people at the ends of those systems, whether in a broadcast system or a peer-to-peer environment. We also develop hardware and software technologies to support such scenarios, with particular focus on new input and output technologies, advanced interfaces for consumer electronics, and self-organization among smart devices.

Research Projects

  • 3D Telepresence Chair

    V. Michael Bove Jr.

    An autostereoscopic (no glasses) 3D display engine is combined with a "Pepper's Ghost" setup to create an office chair that appears to contain a remote meeting participant. The system geometry is also suitable for other applications such as tabletop displays or automotive heads-up displays.

  • Awakened Apparel

    V. Michael Bove, Kent Larson, Jennifer Broutin Farah, Philippa Mothersill and Laura Perovich

This project investigates soft mechanisms, origami, and fashion. We created a modified Miura-fold skirt that changes shape through pneumatic actuation. In the future, our skirt could dynamically adapt to the climatic, functional, and emotional needs of the wearer: for example, it might become shorter in warm weather, or longer if the wearer felt threatened.

  • BigBarChart

    V. Michael Bove and Laura Perovich

    BigBarChart is an immersive 3D bar chart that provides a new physical way for people to interact with data. It takes data beyond visualizations to map out a new area—data experiences—which are multisensory, embodied, and aesthetic interactions. BigBarChart is made up of a number of bars that extend up to 10 feet to create an immersive experience. Bars change height and color in response to interactions that are direct (a person entering the room), tangible (pushing down on a bar to get meta information), or digital (controlling bars and performing statistical analyses through a tablet). BigBarChart helps both scientists and the general public understand information from a new perspective. Early prototypes are available.

  • Bottles&Boxes: Packaging with Sensors

    Ermal Dreshaj and Daniel Novy

    We have added inexpensive, low-power, wireless sensors to product packages to detect user interactions with products. Thus, a bottle can register when and how often its contents are dispensed (and generate side effects like causing a music player to play music when the bottle is picked up, or generating an automatic refill order when near-emptiness is detected). A box can understand usage patterns of its contents. Consumers can vote for their favorites among several alternatives simply by handling them more often.
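As an illustration, the event logic such a package might run can be sketched in a few lines. The sensor readings, thresholds, and event names below are invented for the example; they are not the project's actual firmware.

```python
# Minimal sketch of package-interaction sensing (hypothetical sensor values
# and thresholds; the real hardware and firmware are not described here).

PICKUP_ACCEL_THRESHOLD = 1.2   # g; motion above this counts as a pickup
REFILL_LEVEL_THRESHOLD = 0.10  # fraction of contents remaining

def interpret_events(samples):
    """Turn a stream of (accel_g, fill_fraction) readings into events."""
    events = []
    was_still = True
    for accel, fill in samples:
        if accel > PICKUP_ACCEL_THRESHOLD and was_still:
            events.append("pickup")   # e.g., trigger the music player
            was_still = False
        elif accel <= PICKUP_ACCEL_THRESHOLD:
            was_still = True
        if fill < REFILL_LEVEL_THRESHOLD and "reorder" not in events:
            events.append("reorder")  # e.g., generate an automatic refill order
    return events

# A bottle sitting still, picked up twice, then detected as nearly empty:
stream = [(1.0, 0.5), (1.5, 0.5), (1.0, 0.3), (1.6, 0.08)]
print(interpret_events(stream))  # → ['pickup', 'pickup', 'reorder']
```

A real deployment would debounce the motion signal and report events over the wireless link rather than returning a list, but the thresholding pattern is the same.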

  • Calliope

    V. Michael Bove Jr., Edwina Portocarrero and Ye Wang

    Calliope is the follow-up to the NeverEnding Drawing Machine. A portable, paper-based platform for interactive story making, it allows physical editing of shared digital media at a distance. The system is composed of a network of creation stations that seamlessly blend analog and digital media. Calliope documents and displays the creative process with no need to interact directly with a computer. By using human-readable tags and allowing any object to be used as material for creation, it offers opportunities for cross-cultural and cross-generational collaboration among peers with expertise in different media.

  • Consumer Holo-Video

    V. Michael Bove Jr., James D. Barabas, Sundeep Jolly and Daniel E. Smalley

The goal of this project, building upon work begun by Stephen Benton and the Spatial Imaging group, is to create an inexpensive desktop monitor for a PC or game console that displays holographic video images in real time, suitable for entertainment, engineering, or medical imaging. To date, we have demonstrated the fast rendering of holo-video images (including stereographic images that unlike ordinary stereograms have focusing consistent with depth information) from OpenGL databases on off-the-shelf PC graphics cards; current research addresses new optoelectronic architectures to reduce the size and manufacturing cost of the display system.

  • Crystal Media

    Amir Lazarovich, Daniel Novy, Andy Lippman, V. Michael Bove

We have created a hemispherical multi-touch display globe on which we present a personal view of the interconnections among data such as news and narrative entertainment, for an individual or for a group of people using it simultaneously. It works for people in the same room as well as among people in different, similarly equipped places. Multi-touch gestures expose shared interests and bring them into focus on a large 4K display. Rather than having viewers fight over a remote control, the system turns media exploration into a joint activity. The goal is to explore the universe of media as well as the themes that friends hold in common.

  • Digital Synesthesia

    V. Michael Bove and Santiago Eloy Alfaro

Digital Synesthesia seeks to evolve human-computer interfacing into human-world interacting. It aims to let users experience the world by perceiving information outside their natural sensory capabilities. Modern technology can already detect information beyond our natural sensory spectrum, but there is still no real way for our brains and bodies to incorporate that information into our sensory toolkit so that we can understand the surrounding world in new and undiscovered ways. The long-term vision is to give users the ability to turn senses on and off depending on the desired experience. This project is part of the Ultimate Media initiative and will be applied to the navigation and discovery of media content.

  • Direct Fringe Writing of Computer-Generated Holograms

    V. Michael Bove Jr., Sundeep Jolly and University of Arizona College of Optical Sciences

    Photorefractive polymer has many attractive properties for dynamic holographic displays; however, the current display systems based around its use involve generating holograms by optical interference methods that complicate the optical and computational architectures of the systems and limit the kinds of holograms that can be displayed. We are developing a system to write computer-generated diffraction fringes directly from spatial light modulators to photorefractive polymers, resulting in displays with reduced footprint and cost, and potentially higher perceptual quality.

  • Dressed in Data

    V. Michael Bove and Laura Perovich

This project steps beyond data visualizations to create data experiences. It aims to engage not only the analytic mind, but also the artistic and emotional self. In this project, chemicals found in people’s bodies and homes are turned into a series of fashions. Quantities, properties, and sources of chemicals are represented through various parameters of the fashion, such as fabric color, texture, and size. Wearing these outfits allows people to live the data: to experience tangibly the findings from their homes and bodies. This is the first project in a series of works that seek to create aesthetic data experiences that prompt researchers and laypeople to engage with information in new ways.

  • Drift Bottle

    V. Michael Bove, Lingyun Sun and Zhejiang University

How can emotions be conveyed, expressed, and felt? Drift Bottle is a project exploring interfaces that allow users to "feel" others’ emotions in order to promote communication. We have developed a voice message-exchange web service and, on top of it, several terminals whose interfaces convey emotions via media such as light, smell, and motion. One solution conveys the emotions in voice messages via different colors of light. Our latest effort conveys emotions via smells, with the intention of arousing the same emotions in the receiver.

  • EmotiveModeler: Tactile Allegory Design Framework

    V. Michael Bove and Philippa Mothersill

We have an unconscious understanding of the meaning of different physical objects through our extensive interactions with them. Designers can extend and adapt these existing symbolic meanings, adding a layer of emotive expression by manipulating objects' forms. Tactile Allegory explores the physical design language encoded into objects and asks: how can objects be computationally designed to communicate specific information through their very forms? This research into the underlying design "grammar" of object form informs a computational design tool that helps people create expressively shaped objects conveying the higher-level sentiments of their ideas through aesthetic form.

  • Everything Tells a Story

    V. Michael Bove Jr., David Cranor and Edwina Portocarrero

Building on work begun in the Graspables project, we are exploring what happens when a wide range of everyday consumer products can sense, interpret what they sense into human terms (using pattern-recognition methods), and retain memories, such that users can construct a narrative with the aid of the recollected "diaries" of their sporting equipment, luggage, furniture, toys, and other items with which they interact.

  • Guided-Wave Light Modulator

    V. Michael Bove Jr., Daniel Smalley, Sundeep Jolly, and Quinn Smithwick

We are developing inexpensive, efficient, high-bandwidth light modulators based on lithium niobate guided-wave technology. These modulators are suitable for demanding, specialized applications such as holographic video displays, as well as other light modulation uses such as compact video projectors.

  • Holoshop

    Paula Dawson, Masa Takatsuka, Hiroshi Yoshikawa, Brian Rogers, V. Michael Bove Jr.

This project aims to make it easy to create 3D drawings that have the highly nuanced qualities of handmade drawings. Typically, 2D drawing relies on the conjunction of the friction and pressure of the medium (pencil and paper) to enable a sensitive registration of the gesture. When drawing in 3D, however, there is not necessarily a “support.” Holoshop software uses the forces and magnetism of open and closed fields to enable the user to locate fixed and semipermeable “supports” within the 3D environment. Holoshop is being developed for use in conjunction with a haptic device, the Phantom, enabling the user to navigate 3D space through both touch and vision. In addition, real-time modulation of lines by velocity and pressure enables responsive drawings that can be exported for holograms, 3D prints, and other 3D displays. This research is supported under the Australian Research Council's Discovery Projects funding scheme (DP1094613).

  • Infinity-by-Nine

    V. Michael Bove Jr. and Daniel Novy

We are expanding the home-video viewing experience by generating imagery to extend the TV screen and give the impression that the scene wraps completely around the viewer. Optical flow, color analysis, and heuristics extrapolate beyond the screen edge, where projectors provide the viewer's peripheral vision with low-detail dynamic patterns that are perceptually consistent with the video imagery and increase the sense of immersive presence and participation. We perform this processing in real time using standard microprocessors and GPUs.
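A toy, pure-Python sketch of the edge-extrapolation notion: average the pixel columns nearest the screen edge and replicate that low-detail color outward. The group's actual pipeline uses optical flow, heuristics, and GPU processing; this stand-in only illustrates the simplest case.

```python
# Toy sketch: extrapolate low-detail color past the right screen edge.
# A frame is a list of rows, each row a list of (r, g, b) tuples.
# This is an illustrative stand-in, not the project's real algorithm.

def extend_right(frame, width, smooth=4):
    """Return `width` extrapolated columns beyond the right edge of `frame`."""
    halo = []
    for row in frame:
        tail = row[-smooth:]  # sample the columns nearest the edge
        avg = tuple(sum(px[c] for px in tail) / len(tail) for c in range(3))
        halo.append([avg] * width)  # flat color; a real system animates it
    return halo

frame = [[(0.2, 0.4, 0.6)] * 8 for _ in range(3)]  # a tiny uniform "frame"
print(len(extend_right(frame, width=5)[0]))  # → 5
```

A production version would run per video field, add motion from optical flow, and blur aggressively, since the projected halo only needs to satisfy peripheral vision.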

  • ListenTree: Audio-Haptic Display in the Natural Environment

    V. Michael Bove, Joseph A. Paradiso, Gershon Dublon and Edwina Portocarrero

ListenTree is an audio-haptic display embedded in the natural environment. A visitor to our installation notices a faint sound appearing to emerge from a tree. By resting their head against the tree, they are able to hear sound through bone conduction. To create this effect, an audio exciter transducer is weatherproofed and attached to the tree's roots, transforming it into a living speaker that channels audio through its branches and provides vibrotactile feedback. In one deployment, we used the ListenTree to display live sound from an outdoor ecological monitoring sensor network, bringing a faraway wetland into the urban landscape. Our intervention is motivated by a need for forms of display that fade into the background, inviting attention rather than requiring it. We consume most digital information through devices that alienate us from our surroundings; ListenTree points to a future where digital information might become enmeshed in the material world.

  • Narratarium

    V. Michael Bove Jr., Catherine Havasi, Fransheska Colon, Katherine (Kasia) Hayden, Daniel Novy, Jie Qi, and Robert H. Speer

    Remember telling scary stories in the dark with flashlights? Narratarium is an immersive storytelling environment to augment creative play using texture, color, and image. We are using natural language processing to listen to and understand stories being told, and thematically augment the environment using color and images. As a child tells stories about a jungle, the room is filled with greens and browns and foliage comes into view. A traveling parent can tell a story to a child and fill the room with images, color, and presence.

  • Networked Playscapes: Dig Deep

    V. Michael Bove and Edwina Portocarrero

Networked Playscapes re-imagines outdoor play by merging the flexible, fantastical qualities of the digital world with the tangible, sensorial properties of physical play, creating hybrid interactions for the urban environment. Dig Deep takes the classic sandbox found in children's playgrounds and merges it with the common fantasy of "digging your way to the other side of the world" to create a networked interaction in tune with child cosmogony.

  • Pillow-Talk

    V. Michael Bove Jr., Edwina Portocarrero and David Cranor

Pillow-Talk is the first of a series of objects designed to aid creative endeavors through the unobtrusive acquisition of unconscious, self-generated content, permitting reflexive self-knowledge. Composed of a seamless recording device embedded in a pillow and a playback and visualization system in a jar, Pillow-Talk crystallizes that which we normally forget. It allows users to capture their dreams in a less mediated way, aiding recollection by priming the experience, while embodied interaction keeps recall and capture free of distraction.

  • ProtoTouch: Multitouch Interfaces to Everyday Objects

    V. Michael Bove Jr. and David Cranor

    An assortment of everyday objects is given the ability to understand multitouch gestures of the sort used in mobile-device user interfaces, enabling people to use such increasingly familiar gestures to control a variety of objects, and to "copy" and "paste" configurations and other information among them.

  • ShakeOnIt

    V. Michael Bove Jr. and David Cranor

    We are exploring ways to encode information exchange into preexisting natural interaction patterns, both between people and between a single user and objects with which he or she interacts on a regular basis. Two devices are presented to provoke thoughts regarding these information interchange modalities: a pair of gloves that requires two users to complete a "secret handshake" in order to gain shared access to restricted information, and a doorknob that recognizes the grasp of a user and becomes operational only if the person attempting to use it is authorized to do so.

  • Simple Spectral Sensing

    Andrew Bardagjy

    The availability of cheap LEDs and diode lasers in a variety of wavelengths enables creation of simple and cheap spectroscopic sensors for specific tasks such as food shopping and preparation, healthcare sensing, material identification, and detection of contaminants or adulterants.
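The principle can be sketched as a two-wavelength ratio test, in the spirit of NDVI-style indices. The wavelengths, reflectance values, and threshold below are illustrative stand-ins, not calibrated measurements from the project.

```python
# Hedged sketch of two-wavelength ratio sensing. The wavelengths (660 nm red
# LED, 850 nm near-IR LED), signals, and threshold are invented for the
# example; a real sensor would be calibrated against known samples.

def reflectance_ratio(signal_660nm, signal_850nm):
    """Normalized difference of two reflectance readings (NDVI-style)."""
    return (signal_850nm - signal_660nm) / (signal_850nm + signal_660nm)

def classify_produce(signal_660nm, signal_850nm, threshold=0.3):
    """Crude illustrative ripeness check based on the normalized ratio."""
    ratio = reflectance_ratio(signal_660nm, signal_850nm)
    return "ripe" if ratio > threshold else "unripe"

print(classify_produce(0.2, 0.8))  # → ripe
```

The appeal of the approach is that a task-specific pair (or handful) of LED wavelengths replaces a full spectrometer, keeping the sensor cheap enough to embed in kitchen or healthcare devices.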

  • Slam Force Net

    V. Michael Bove Jr., Santiago Alfaro, and Daniel Novy

    A basketball net incorporates segments of conductive fiber whose resistance changes with degree of stretch. By measuring this resistance over time, hardware associated with this net can calculate force and speed of a basketball traveling through the net. Applications include training, toys that indicate the force and speed on a display, “dunk competitions,” and augmented reality effects on television broadcasts. This net is far less expensive and more robust than other approaches to measuring data about the ball (e.g., photosensors or ultrasonic sensors) and doesn’t require a physical change to the hoop or backboard other than providing electrical connections to the net. Another application of the material is a flat net that can measure velocity of a ball hit or pitched into it (as in baseball or tennis); it can measure position as well (e.g., for determining whether a practice baseball pitch would have been a strike).
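A back-of-envelope sketch of the signal chain: map resistance to stretch with an assumed linear calibration, then differentiate over time to estimate the peak stretch rate. All constants here are invented for illustration; the real calibration would come from the conductive fiber itself.

```python
# Illustrative sketch of recovering ball speed from net stretch.
# r0 (rest resistance) and k (ohms-to-stretch slope) are hypothetical.

def resistance_to_stretch(r_ohms, r0=100.0, k=0.5):
    """Map fiber resistance to stretch in meters (assumed linear model)."""
    return max(0.0, (r_ohms - r0) * k / 1000.0)

def ball_speed(resistances, dt=0.01):
    """Estimate peak stretch rate (m/s) from resistance samples dt apart."""
    stretches = [resistance_to_stretch(r) for r in resistances]
    rates = [(b - a) / dt for a, b in zip(stretches, stretches[1:])]
    return max(rates, default=0.0)

samples = [100, 120, 180, 260, 300, 280]  # ohms as the ball passes through
print(round(ball_speed(samples), 2))  # → 4.0
```

Force could be estimated the same way, by differentiating the stretch rate once more and multiplying by an assumed ball mass; in practice the net would be calibrated against balls of known speed.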

  • SurroundVision

    V. Michael Bove Jr. and Santiago Alfaro

Adding augmented reality to the living room TV, we are exploring the technical and creative implications of using a mobile phone or tablet (and possibly also dedicated devices like toys) as a controllable "second screen" for enhancing television viewing. Thus, a viewer could use the phone to look beyond the edges of the television to see the audience for a studio-based program, to pan around a sporting event, to take snapshots for a scavenger hunt, or to simulate binoculars to zoom in on a part of the scene. Recent developments include the creation of a mobile device app for Apple products and user studies involving several genres of broadcast television programming.

  • The "Bar of Soap": Grasp-Based Interfaces

    V. Michael Bove Jr. and Brandon Taylor

We have built several handheld devices that combine grasp and orientation sensing with pattern recognition in order to provide highly intelligent user interfaces. The Bar of Soap is a handheld device that senses the pattern of touch and orientation when it is held, and reconfigures to become one of a variety of devices, such as phone, camera, remote control, PDA, or game machine. Pattern-recognition techniques allow the device to infer the user's intention based on grasp. Another example is a baseball that determines a user's pitching style as an input to a video game.
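
A minimal nearest-centroid classifier illustrates the grasp-recognition idea. The sensor layout and training centroids below are invented for the example; the actual device uses a denser touch-sensor array and richer pattern-recognition techniques.

```python
# Minimal nearest-centroid sketch of grasp classification. The 4-element
# sensor vectors and per-mode centroids are invented for illustration.

import math

# Mean touch-sensor activation per device mode, e.g. from training grasps.
CENTROIDS = {
    "phone":  [0.9, 0.1, 0.8, 0.2],   # thumb-heavy vertical grip
    "camera": [0.5, 0.9, 0.5, 0.9],   # two-handed horizontal grip
    "remote": [0.2, 0.2, 0.9, 0.9],   # loose one-handed grip
}

def classify_grasp(reading):
    """Return the mode whose centroid is nearest to the sensor reading."""
    return min(CENTROIDS, key=lambda mode: math.dist(reading, CENTROIDS[mode]))

print(classify_grasp([0.85, 0.15, 0.75, 0.25]))  # → phone
```

The published Bar of Soap work used more capable classifiers trained on many users' grasps; nearest-centroid is just the smallest example of the same infer-intent-from-grasp pattern.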
  • Ultra-High Tech Apparel

    V. Michael Bove, Philippa Mothersill, Laura Perovich, Christopher Bevans (CBAtelier), and Philipp Schmidt

    The classic lab coat has been a reliable fashion staple for scientists around the world. But Media Lab researchers are not only scientists—we are also designers, tinkerers, philosophers, and artists. We need a different coat! Enter the Media Lab coat. Our lab coat is uniquely designed for, and with, the Media Lab community. It features reflective materials, new bonding techniques, and integrated electronics. Each Labber has different needs. Some require access to Arduinos, others need moulding materials, yet others carry around motors or smart tablets. The lab coat is a framework for customization. The coat is just the start. Together with some of the innovative member companies of the MIT Media Lab, we are exploring protective eyewear, footwear, and everything in between.

  • Vidplora: Exploring the World through Video

    Andy Lippman, V. Michael Bove, and Jonathan Speiser

What is happening in the world now? It is a question whose answer is scattered across the web, and whose opaque surface can only be scratched via traditional search-based interfaces. The goal of this project is to develop a window for exploration into current world happenings, enabling people to answer visually the question "What is happening now?" without a specific, a priori search term. To do this, we have created a web crawler geared specifically toward extracting recent video content, along with an accompanying user interface that lets users virtually explore the world through video. (Ultimate Media Program)

  • Vision-Based Interfaces for Mobile Devices

    V. Michael Bove Jr. and Santiago Alfaro
    Mobile devices with cameras have enough processing power to do simple machine-vision tasks, and we are exploring how this capability can enable new user interfaces to applications. Examples include dialing someone by pointing the camera at the person's photograph, or using the camera as an input to allow navigating virtual spaces larger than the device's screen.