Tangible Media
How to design seamless interfaces between humans, digital information, and the physical environment.
We live between two worlds: our physical environment and cyberspace. The Tangible Media group focuses on the design of seamless interfaces between humans, digital information, and the physical environment. People have developed sophisticated skills for sensing and manipulating their physical environments; however, most of these skills are not employed by traditional GUIs (Graphical User Interfaces). The Tangible Media group is designing a variety of "tangible interfaces" that build on these skills by giving physical form to digital information, seamlessly coupling the dual worlds of bits and atoms. The goal is to change the "painted bits" of GUIs to "tangible bits," taking advantage of the richness of multimodal human senses and skills developed through a lifetime of interaction with the physical world.

Research Projects

  • Coded Lens

    Yusuke Sekikawa, Koichiro Suzuki and Sang-won Leigh

    We propose Coded Lens, a novel system for lensless photography. Rather than requiring highly calibrated optics, the system uses a coded aperture to guide light onto the sensor. Compressed sensing (CS) is used to reconstruct the scene from the raw image captured through the coded aperture. Experimenting with synthetic and real scenes, we show the applicability of the technique and also demonstrate additional functionality, such as changing focus programmatically. We believe this will lead to more compact, cheaper, and more versatile imaging systems.
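
    As a rough illustration of the compressed-sensing step, the sketch below recovers a sparse signal from coded measurements with ISTA. The random measurement matrix, the solver choice, and the problem sizes are assumptions for illustration only, not the system's actual aperture model or reconstruction pipeline.

      # Minimal sketch: sparse reconstruction via ISTA, standing in for the
      # compressed-sensing step.  The measurement matrix A is a hypothetical
      # stand-in for the coded aperture's light-transport model.
      import numpy as np

      def soft_threshold(v, t):
          """Proximal operator of the L1 norm (keeps sparse entries)."""
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, y, lam=0.1, n_iter=200):
          """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
          shrinkage-thresholding."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)           # gradient of the data-fit term
              x = soft_threshold(x - grad / L, lam / L)
          return x

      rng = np.random.default_rng(0)
      n, m = 256, 96                                 # scene pixels, sensor measurements
      A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy coded-aperture model
      x_true = np.zeros(n)
      x_true[rng.choice(n, 8, replace=False)] = 1.0  # sparse toy scene
      y = A @ x_true                                 # raw sensor reading
      x_hat = ista(A, y)                             # reconstructed (sparse) scene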

  • FocalSpace

    Hiroshi Ishii, Anthony DeVincenzi and Lining Yao

    FocalSpace is a system for focused collaboration that uses spatial depth and directional audio. Participants, tools, and other physical objects within the space are treated as interactive objects that can be detected, selected, and augmented with metadata. We also demonstrate several interaction scenarios as concrete examples. By using diminished reality to suppress unwanted background surroundings with synthetic blur, the system aims to draw participants' attention to foreground activity.
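
    A minimal sketch of the synthetic-blur idea, assuming an RGB frame with an aligned depth map; the depth band, blur radius, and NumPy/SciPy implementation are illustrative choices, not the system's actual pipeline.

      # Minimal sketch of diminished reality via synthetic blur: pixels whose
      # depth falls outside a foreground band are replaced with a blurred copy.
      # Thresholds, blur strength, and array shapes are illustrative assumptions.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def diminish_background(rgb, depth, near=0.5, far=1.5, sigma=6.0):
          """rgb: HxWx3 float image, depth: HxW metres.  Returns an image in
          which everything outside [near, far] metres is synthetically blurred."""
          blurred = np.stack(
              [gaussian_filter(rgb[..., c], sigma) for c in range(3)], axis=-1)
          foreground = (depth >= near) & (depth <= far)      # HxW boolean mask
          return np.where(foreground[..., None], rgb, blurred)

      # Toy usage with random data standing in for a depth-camera frame.
      rng = np.random.default_rng(0)
      frame = rng.random((120, 160, 3))
      depth = rng.uniform(0.3, 3.0, (120, 160))
      composited = diminish_background(frame, depth)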

  • inFORM

    Hiroshi Ishii, Alex Olwal, Daniel Leithinger and Sean Follmer

    Shape displays can be used to render both 3D physical content and user interface elements. We propose to use shape displays in three different ways to mediate interaction: facilitate, providing dynamic physical affordances through shape change; restrict, guiding users through dynamic physical constraints; and manipulate, actuating passive physical objects on the interface surface. We demonstrate this on a new, high-resolution shape display.
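
    A minimal sketch of how a height map and a simple UI element might be mapped onto a generic pin-based shape display; the pin grid size, travel range, and helper functions are hypothetical, not inFORM's actual rendering code.

      # Minimal sketch of driving a pin array from a height field and raising a
      # rectangular patch as a dynamic physical affordance (a pressable button).
      import numpy as np

      PINS_X, PINS_Y = 30, 30          # assumed pin grid resolution
      MAX_TRAVEL_MM = 100.0            # assumed vertical travel per pin

      def render_heightmap(height_mm):
          """Clamp a (PINS_Y, PINS_X) height field in mm to the pin travel range
          and return per-pin actuator targets in [0, 1]."""
          clamped = np.clip(height_mm, 0.0, MAX_TRAVEL_MM)
          return clamped / MAX_TRAVEL_MM

      def add_button(targets, x, y, w, h, raised=0.3):
          """Raise a rectangular patch of pins so it can be pressed like a button."""
          targets[y:y + h, x:x + w] = np.maximum(targets[y:y + h, x:x + w], raised)
          return targets

      targets = render_heightmap(np.zeros((PINS_Y, PINS_X)))   # flat surface
      targets = add_button(targets, x=5, y=5, w=6, h=4)        # raised UI element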

  • jamSheets: Interacting with Thin Stiffness-Changing Material

    Jifei Ou, Lining Yao, Daniel Tauber, Juergen Steimle, Ryuma Niiyama and Hiroshi Ishii

    This project introduces layer jamming as an enabling technology for designing deformable, stiffness-tunable, thin sheet interfaces. Interfaces with tunable stiffness can provide dynamic haptic feedback and shape deformation capabilities. In comparison to particle jamming, layer jamming allows for thin and lightweight interface form factors. We propose five layer structure designs and an approach that composites multiple materials to control the deformability of the interfaces. We also present methods to embed different types of sensing and pneumatic actuation layers in the layer-jamming unit. Through three application prototypes we demonstrate the benefits of using layer jamming in interface design. Finally, we provide a survey of materials that have proven successful for layer jamming.
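
    A minimal sketch of the stiffness-tuning idea, assuming that bending stiffness rises with the vacuum applied to the jamming layer; the pressure range and the set_vacuum_kpa pump interface are hypothetical placeholders, not the project's hardware API.

      # Minimal sketch: map a normalized stiffness command to a vacuum set-point.
      # Higher vacuum presses the layers together and raises bending stiffness.
      MIN_VACUUM_KPA = 0.0      # sheet fully compliant
      MAX_VACUUM_KPA = 80.0     # assumed vacuum at which the sheet feels rigid

      def stiffness_to_vacuum(stiffness):
          """Map a stiffness command in [0, 1] to a vacuum set-point in kPa."""
          s = min(max(stiffness, 0.0), 1.0)
          return MIN_VACUUM_KPA + s * (MAX_VACUUM_KPA - MIN_VACUUM_KPA)

      def set_vacuum_kpa(kpa):
          """Placeholder for a pump/valve driver; here it only logs the command."""
          print(f"vacuum set-point: {kpa:.1f} kPa")

      set_vacuum_kpa(stiffness_to_vacuum(0.0))   # floppy, easy to deform
      set_vacuum_kpa(stiffness_to_vacuum(1.0))   # jammed, holds its shape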

  • MirrorFugue III

    Xiao Xiao and Hiroshi Ishii

    MirrorFugue III is an installation for a player piano that evokes the impression that the "reflection" of a disembodied pianist is playing the physically moving keys. Live music emanates from a grand piano, whose keys move under the supple touch of a pianist's hands reflected on the lacquered surface of the instrument. The pianist's face is displayed on the music stand, with subtle expressions projecting the emotions of the music. MirrorFugue recreates the feeling of a live performance, but no one is actually there. The pianist is an illusion of light and mirrors, a ghost both present and absent. Viewing MirrorFugue evokes the sense of walking into a memory, where the pianist plays without awareness of the viewer's presence; or, it is as if viewers were ghosts in another's dream, able to sit down in place of the performing pianist and play along.

  • Pneumatic Shape-Changing Interfaces

    Hiroshi Ishii, Jifei Ou, Lining Yao, Ryuma Niiyama and Sean Follmer

    This project presents an enabling technology for building shape-changing interfaces from pneumatically driven soft composite materials. The composites integrate both input sensing and active shape output. We explore four applications: a multi-shape mobile device, tabletop shape-changing tangibles, dynamically programmable texture for gaming, and a shape-shifting lighting apparatus.
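
    A minimal sketch of how input sensing and shape output might be coupled in such a composite, assuming a simple proportional control loop; the sensor and valve functions are hypothetical placeholders, not the project's hardware interface.

      # Minimal sketch: inflate or deflate toward a target curvature using a
      # proportional step.  read_curvature and drive_valve are stand-ins for an
      # embedded sensor layer and a pneumatic valve driver.
      def read_curvature():
          """Placeholder for an embedded bend/strain sensor reading (1/m)."""
          return 0.0

      def drive_valve(duty):
          """Placeholder valve driver; positive duty inflates, negative deflates."""
          print(f"valve duty: {duty:+.2f}")

      def step_toward(target_curvature, gain=0.5):
          """One proportional control step toward the requested shape."""
          error = target_curvature - read_curvature()
          drive_valve(max(-1.0, min(1.0, gain * error)))

      step_toward(target_curvature=1.2)   # e.g. curl part of the device's surface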

  • Radical Atoms

    Hiroshi Ishii, Leonardo Bonanni, Keywon Chung, Sean Follmer, Jinha Lee, Daniel Leithinger and Xiao Xiao

    Radical Atoms is our vision of interactions with future material.

  • Sublimate

    Hiroshi Ishii, Sean Follmer, Daniel Leithinger, Samuel Luescher, Alex Olwal, Akimitsu Hogge and Jinha Lee

    Recent research in 3D user interfaces has pushed in two directions: immersive graphics and actuated tangible shape displays. We seek a hybrid of the two by treating physical material density as a parameter in 3D rendering. We explore how digital models, handles, and controls can be rendered either as virtual 3D graphics or as dynamic physical shapes, and how they can move fluidly and quickly between these states, so that physical affordances are rendered only when needed. We were inspired by the different states of water: solid, liquid, and gas. We view digital computation and models as liquid, which can be vaporized into mid-air graphics or solidified into dynamic physical shape. We also investigate the transitions between solid and gas: sublimation and deposition. To explore this, we have implemented a system that combines an actuated shape display with a spatial augmented reality display, rendering physical shapes and volumetric graphics co-located in the same space. We present interaction techniques and motivating demonstration applications that explore 3D interaction across these boundaries. We also present results of a user study showing that freehand interaction with a physical shape display and co-located graphics outperforms wand-based direct interaction with 3D graphics alone.
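
    A minimal sketch of the state metaphor as a dispatch between the two render targets; the stub functions and the decision rule are illustrative assumptions, not the system's rendering pipeline.

      # Minimal sketch: a model "solidifies" onto the shape display when physical
      # affordances are needed and "sublimates" back into co-located graphics
      # when only viewing is required.  Both targets below are hypothetical stubs.
      from enum import Enum

      class State(Enum):
          SOLID = "physical shape display"
          GAS = "mid-air / spatial AR graphics"

      def render(model_name, state):
          """Dispatch a model to one of the two co-located render targets."""
          print(f"rendering {model_name} as {state.value}")

      def update(model_name, needs_physical_handle):
          """Solidify when the user needs to grasp or press the model;
          sublimate it back to graphics otherwise."""
          state = State.SOLID if needs_physical_handle else State.GAS
          render(model_name, state)

      update("terrain patch", needs_physical_handle=True)    # solidify
      update("terrain patch", needs_physical_handle=False)   # sublimate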

  • Tangible Bits

    Hiroshi Ishii, Sean Follmer, Jinha Lee, Daniel Leithinger and Xiao Xiao

    People have developed sophisticated skills for sensing and manipulating their physical environments, but traditional GUIs (Graphical User Interfaces) do not employ most of them. Tangible Bits builds upon these skills by giving physical form to digital information, seamlessly coupling the worlds of bits and atoms. We are designing "tangible user interfaces" that employ physical objects, surfaces, and spaces as tangible embodiments of digital information. These include foreground interactions with graspable objects and augmented surfaces, exploiting the human senses of touch and kinesthesia. We also explore background information displays that use "ambient media"—light, sound, airflow, and water movement—to communicate digitally mediated senses of activity and presence at the periphery of human awareness. We aim to change the "painted bits" of GUIs to "tangible bits," taking advantage of the richness of multimodal human senses and skills developed through our lifetimes of interaction with the physical world.

  • Tangible CityScape

    Hiroshi Ishii, Sean Follmer, Felix Heibeck, Daniel Leithinger, Philipp Schoessler, Yusuke Sekikawa and Sheng Kai Tang

    Tangible CityScape is a platform that lets users explore a 3D cityscape for collaborative review of urban plans by changing parameters such as population, building capacity, traffic, energy consumption, and shadow simulation. By integrating a 2.5D actuated shape display, immersive 2D displays, 3D projection mapping, and handheld AR, CityScape combines the strengths of bits (pixels) and atoms (tangibles) to represent city models at different scales and to relate the tangible view to a larger underlying data set.
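
    A minimal sketch of how building heights and one data layer might be blended into a 2.5D height field for the shape display; the grid size, normalization, and layer weighting are illustrative assumptions, not the platform's data model.

      # Minimal sketch: building heights give the base relief and a selected data
      # layer (population, traffic, energy, ...) modulates it, producing per-pin
      # heights for an actuated 2.5D display.
      import numpy as np

      GRID = (30, 30)                  # assumed pin grid of the shape display

      def to_height_field(building_m, layer, layer_weight=0.3):
          """Normalize building heights and a data layer to [0, 1] and blend
          them into per-pin heights for the actuated display."""
          base = building_m / max(building_m.max(), 1e-9)
          overlay = layer / max(layer.max(), 1e-9)
          return np.clip((1 - layer_weight) * base + layer_weight * overlay, 0, 1)

      rng = np.random.default_rng(0)
      buildings = rng.uniform(0, 120, GRID)        # stand-in building heights (m)
      population = rng.uniform(0, 5000, GRID)      # stand-in population layer
      pin_heights = to_height_field(buildings, population)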