Research Projects
aireForm: Refigured Shape-Changing Fashion
Henry Holtzman, Hiroshi Ishii, Leah Buechley, Jennifer Jacobs, Philippa Mothersill, Ryuma Niiyama and Xiao Xiao
aireForm is a dress of many forms that fluidly morph from one to another, animated by air. Its forms evoke classic feminine silhouettes, from sleek to supple to striking. Garments are a medium through which we may alter our apparent forms to project different personas. As our personas shift from moment to moment, so too does aireForm, living and breathing with us.
Ambient Furniture
Hiroshi Ishii, David Rose, and Shaun Salzberg
Furniture is the infrastructure for human activity. Every day we open cabinets and drawers, pull up to desks, recline in recliners, and fall into bed. How can technology augment these everyday rituals in elegant and useful ways? The Ambient Furniture project mixes apps with the IKEA catalog to make couches more relaxing, tables more conversational, desks more productive, lamps more enlightening, and beds more restful. With input from Vitra and Steelcase, we are prototyping a line of furniture to explore ideas about peripheral awareness (Google Latitude doorbell), incidental gestures (Amazon restocking trash can and the Pandora lounge chair), pre-attentive processing (energy clock), and eavesdropping interfaces (Facebook photo coffee table).
Beyond: A Collapsible Input Device for 3D Direct Manipulation
Jinha Lee and Hiroshi Ishii
Beyond is a collapsible input device for direct 3D manipulation. When pressed against a screen, Beyond collapses in the physical world and extends into the digital space of the screen, giving users the illusion that they are inserting the tool into the virtual space. Beyond allows users to interact directly with 3D media without having to wear special glasses, avoiding inconsistencies between input and output. Users can select, draw, and sculpt in 3D virtual space, and seamlessly transition between 2D and 3D manipulation.
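The collapse-to-depth mapping at the heart of this illusion can be sketched in a few lines. This is a hypothetical illustration only; the stylus length, units, and function names are assumptions, not details of the Beyond implementation:

```python
# Hypothetical sketch, not the actual Beyond implementation: as the stylus
# collapses against the screen, the "lost" physical length reappears as
# depth behind the screen plane, so the virtual tip appears to extend
# into the display.

FULL_LENGTH_MM = 150.0  # assumed resting length of the collapsible stylus


def virtual_tip_depth(current_length_mm: float) -> float:
    """Depth of the virtual tip behind the screen plane, in millimeters.

    Zero while the stylus is fully extended (tip at the screen surface);
    it grows as the stylus collapses against the screen.
    """
    return max(0.0, FULL_LENGTH_MM - current_length_mm)


def virtual_tip_position(contact_xy, current_length_mm):
    """3D position of the virtual tip: the contact point on the screen
    plane plus the depth contributed by the physical collapse."""
    x, y = contact_xy
    return (x, y, virtual_tip_depth(current_length_mm))
```

Under this 1:1 mapping, collapsing the stylus by 30 mm places the virtual tip 30 mm behind the glass, which is what sustains the sensation of pushing the tool through the screen.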
FocalSpace
Hiroshi Ishii, Anthony DeVincenzi and Lining Yao
FocalSpace is a system for focused collaboration that utilizes spatial depth and directional audio. We present a space in which participants, tools, and other physical objects are treated as interactive objects that can be detected, selected, and augmented with metadata, and we demonstrate several concrete interaction scenarios. By using diminished reality to remove unwanted background surroundings with synthetic blur, the system draws participants' attention to foreground activity.
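The synthetic-blur step can be sketched as follows. This is a minimal illustration assuming a per-pixel depth map and a simple box blur; the threshold, function names, and blur choice are assumptions, not details of the FocalSpace system:

```python
import numpy as np


def box_blur(image: np.ndarray, radius: int = 2) -> np.ndarray:
    """Naive box blur over a 2D grayscale image (edge-padded)."""
    h, w = image.shape
    padded = np.pad(image, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)


def diminish_background(image, depth, depth_threshold):
    """Keep pixels nearer than depth_threshold sharp; blur the rest,
    de-emphasizing the background so attention stays on the foreground."""
    return np.where(depth > depth_threshold, box_blur(image), image)
```

A real system would use the depth stream from the camera and a stronger blur, but the structure is the same: depth segments the scene, and the "diminished" layer is composited back per pixel.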
GeoSense
Hiroshi Ishii, Anthony DeVincenzi and Samuel Luescher
GeoSense is an open publishing platform for the visualization, social sharing, and analysis of geospatial data.
IdeaGarden
Hiroshi Ishii, David Lakatos, and Lining Yao
The IdeaGarden allows participants in creative activities to collectively capture, select, and share (CCSS) the stories, sketches, and ideas they produce in physical and digital spaces. The iGarden attempts to optimize the CCSS loop, reducing it from hours to seconds in order to turn asynchronous collaborative thought processes into synchronous, real-time cognitive flows. The iGarden system is composed of a tangible capturing system with recording devices always "at hand"; a selection workflow that allows the group to reflect on and reduce the complexity of captured data in real time; and a sharing module that connects socially selected information to the cloud.
Jamming User Interfaces
Hiroshi Ishii, Sean Follmer, Daniel Leithinger, Alex Olwal and Nadia Cheng
Malleable user interfaces have the potential to enable radically new forms of interaction and expressiveness through flexible, free-form, and computationally controlled shapes and displays. This work focuses on particle jamming as a simple, effective method for flexible, shape-changing user interfaces, where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances, and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming's potential in HCI, including applications for tabletops, tablets, and portable shape-changing mobile devices.
Kinected Conference
Anthony DeVincenzi, Lining Yao, Hiroshi Ishii and Ramesh Raskar
How could we enhance the video-conferencing experience with an interactive display? Using a Kinect camera and sound sensors, we explore how expanding a system's understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information about their material properties and location. Four features have been implemented: Talking to Focus, Freezing Former Frames, Privacy Zone, and Spatial Augmented Reality.
MirrorFugue II
Xiao Xiao and Hiroshi Ishii
MirrorFugue is an interface for the piano that bridges the gap of location in music playing by connecting pianists in a virtual shared space reflected on the piano. Built on a previous design that showed only the hands, our new prototype displays both the hands and the upper body of the pianist. MirrorFugue may be used for watching a remote or recorded performance, taking a remote lesson, and remote duet playing.

MirrorFugue III
Xiao Xiao and Hiroshi Ishii
MirrorFugue is an installation for a player piano that evokes the impression that the "reflection" of a disembodied pianist is playing the physically moving keys. Live music emanates from a grand piano whose keys move under the supple touch of a pianist's hands reflected on the lacquered surface of the instrument. On the music stand, the pianist's face is visible, its subtle expressions projecting the emotions of the music. MirrorFugue recreates the feeling of a live performance, but no one is actually there. The pianist is but an illusion of light and mirrors, a ghost at once present and absent. Viewing MirrorFugue evokes the sense of walking into a memory, where the pianist plays with no awareness of the viewer's presence. Or it is as if viewers were ghosts in another's dream, themselves incorporeal, able to sit down in place of the performing pianist and play along.
Peddl
Andy Lippman, Hiroshi Ishii, Matthew Blackshaw, Anthony DeVincenzi and David Lakatos
Peddl creates a localized, perfect market. All offers are broadcast, allowing users to spot trends, bargains, and opportunities. With GPS- and Internet-enabled mobile devices in almost every pocket, we see an opportunity for a new type of marketplace that takes into account your physical location, availability, and open negotiation. As with other real-time activities, we are exploring transactions as an organizing principle among people; like barter, such relationships may be strong, rich, and long-lived.
PingPongPlusPlus
Hiroshi Ishii, Xiao Xiao, Michael Bernstein, Lining Yao, Dávid Lakatos, Kojo Acquah, Jeff Chan, Sean Follmer and Daniel Leithinger
PingPong++ (PingPongPlusPlus) builds on PingPongPlus (1998), a ping pong table that could sense ball hits and use that data to control visualizations projected on the table. We have redesigned the system using open-source hardware and software platforms so that anyone in the world can build their own reactive table. We are exploring ways for people to customize their ping pong game experience. This kiosk allows players to create their own visualizations based on a set of templates. For more control over custom visualizations, we have released a software API based on the popular Processing language that enables users to write their own. We are always looking for collaborators! Visit pppp.media.mit.edu to learn more.
Pneumatic Shape-Changing Interfaces
Hiroshi Ishii, Lining Yao, Ryuma Niiyama and Sean Follmer
An enabling technology for building shape-changing interfaces from pneumatically driven soft composite materials. The composite materials integrate the capabilities of both input sensing and active shape output. We explore four applications: a multi-shape mobile device, tabletop shape-changing tangibles, dynamically programmable texture for gaming, and a shape-shifting lighting apparatus.

Radical Atoms
Hiroshi Ishii, Leonardo Bonanni, Keywon Chung, Sean Follmer, Jinha Lee, Daniel Leithinger and Xiao Xiao
Radical Atoms is our vision of interactions with future materials.

Recompose
Matthew Blackshaw, Anthony DeVincenzi, David Lakatos, and Hiroshi Ishii
Human beings have long shaped the physical environment to reflect designs of form and function. As an instrument of control, the human hand remains the most fundamental interface for affecting the material world. In the wake of the digital revolution, this is changing, bringing us to reexamine tangible interfaces. What if we could now dynamically reshape, redesign, and restructure our environment using the functional nature of digital tools? To address this, we present Recompose, a framework allowing direct and gestural manipulation of our physical environment. Recompose complements the highly precise yet concentrated affordance of direct manipulation with a set of gestures, allowing functional manipulation of an actuated surface.
Relief
Hiroshi Ishii and Daniel Leithinger
Relief is an actuated tabletop display, able to render and animate 3D shapes with a malleable surface. It allows users to experience and form digital models, such as geographical terrain, in an intuitive manner. The tabletop surface is actuated by an array of motorized pins, which can be addressed individually and can sense user input such as pulling and pushing. Our current research focuses on using freehand gestures to interact with content on Relief.

Rope Revolution
Jason Spingarn-Koff (MIT), Hiroshi Ishii, Sayamindu Dasgupta, Lining Yao, Nadia Cheng (MIT Mechanical Engineering) and Ostap Rudakevych (Harvard University Graduate School of Design)
Rope Revolution is a rope-based gaming system for collaborative play. After identifying popular rope games and activities from around the world, we developed a generalized tangible rope interface that includes a compact motion-sensing and force-feedback module that can be used for a variety of rope-based games, such as rope jumping, kite flying, and horseback riding. Rope Revolution is designed to foster both co-located and remote collaborative experiences by using actual rope to connect players in physical activities across virtual spaces.
SandScape
Carlo Ratti, Assaf Biderman and Hiroshi Ishii
SandScape is a tangible interface for designing and understanding landscapes through a variety of computational simulations using sand. The simulations are projected onto the surface of a sand model representing the terrain. Users can choose among simulations highlighting the height, slope, contours, shadows, drainage, or aspect of the landscape model, and can alter its form by manipulating the sand while seeing the resulting computational analysis projected onto the sand surface in real time. SandScape demonstrates an alternative form of computer interface (the tangible user interface) that takes advantage of our natural ability to understand and manipulate physical forms while harnessing the power of computational simulation to aid our understanding of a model representation.

Second Surface: Multi-User Spatial Collaboration System Based on Augmented Reality
Shunichi Kasahara, Hiroshi Ishii, Pattie Maes, Austin S. Lee and Valentin Heun
An environment for creative collaboration is significant for enhancing human communication and expressive activities, and many researchers have explored different collaborative spatial interaction technologies. However, most of these systems require special equipment and cannot adapt to everyday environments. We introduce Second Surface, a novel multi-user augmented reality system that fosters real-time interactions for user-generated content on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. Our system allows users to place 3D drawings, texts, and photos relative to such objects and to share these expressions with anyone who uses the same software at the same spot. Second Surface explores a vision that integrates collaborative virtual spaces into the physical space. Our system can provide an alternate reality that generates playful and natural interaction in an everyday setup.
Sensetable
James Patten, Jason Alonso and Hiroshi Ishii
Sensetable is a system that wirelessly, quickly, and accurately tracks the positions of multiple objects on a flat display surface. The tracked objects have a digital state, which can be controlled by physically modifying them using dials or tokens. We have developed several new interaction techniques and applications on top of this platform. Our current work focuses on business supply-chain visualization using system-dynamics simulation.

Sourcemap
Hiroshi Ishii and Leonardo Amerigo Bonanni
Sourcemap.com is the open directory of supply chains and environmental footprints. Consumers use the site to learn where products come from, what they're made of, and how they impact people and the environment. Companies use Sourcemap to communicate transparently with consumers and tell the story of how products are made. Thousands of maps have already been created for food, furniture, clothing, electronics, and more. Behind the website is a revolutionary social network for supply-chain reporting. The real-time platform gathers information from every stakeholder so that, one day soon, you'll be able to scan a product on a store shelf and know exactly who made it.

T(ether)
Hiroshi Ishii, Andy Lippman, Matthew Blackshaw and David Lakatos
T(ether) is a novel spatially aware display that supports intuitive interaction with volumetric data. The display acts as a window, affording users a perspective view of three-dimensional data through tracking of head position and orientation. T(ether) creates a 1:1 mapping between real and virtual coordinate space, allowing immersive exploration of the joint domain. Our system creates a shared workspace in which co-located or remote users can collaborate in both the real and virtual worlds. The system allows input through capacitive touch on the display and a motion-tracked glove. When placed behind the display, the user's hand extends into the virtual world, enabling the user to interact with objects directly.
Tangible Bits
Hiroshi Ishii, Sean Follmer, Jinha Lee, Daniel Leithinger and Xiao Xiao
People have developed sophisticated skills for sensing and manipulating our physical environments, but traditional GUIs (graphical user interfaces) do not employ most of them. Tangible Bits builds upon these skills by giving physical form to digital information, seamlessly coupling the worlds of bits and atoms. We are designing "tangible user interfaces" that employ physical objects, surfaces, and spaces as tangible embodiments of digital information. These include foreground interactions with graspable objects and augmented surfaces, exploiting the human senses of touch and kinesthesia. We also explore background information displays that use "ambient media" (light, sound, airflow, and water movement) to communicate digitally mediated senses of activity and presence at the periphery of human awareness. We aim to change the "painted bits" of GUIs to "tangible bits," taking advantage of the richness of multimodal human senses and skills developed through our lifetimes of interaction with the physical world.

Topobo
Hayes Raffle, Amanda Parkes and Hiroshi Ishii
Topobo is a 3D constructive assembly system embedded with kinetic memory: the ability to record and play back physical motion. Unique among modeling systems are Topobo's coincident physical input and output behaviors. By snapping together a combination of passive (static) and active (motorized) components, users can quickly assemble dynamic, biomorphic forms such as animals and skeletons, animate those forms by pushing, pulling, and twisting them, and observe the system repeatedly playing back those motions. For example, a dog can be constructed and then taught to gesture and walk by twisting its body and legs. The dog will then repeat those movements.

Video Play
Sean Follmer, Hayes Raffle and Hiroshi Ishii
Long-distance families are increasingly staying connected through free video-conferencing tools. However, these tools are not designed to accommodate children's or families' needs. We explore how play can be a means of communication at a distance. Our Video Play prototypes are simple video-conferencing applications built with play in mind, creating opportunities for silliness and open-ended play between adults and young children. They include simple games, such as Find It, as well as shared activities like book reading, where users' videos appear as characters in a storybook.