Fluid Interfaces
How to integrate the world of information and services more naturally into our daily physical lives, enabling insight, inspiration, and interpersonal connections.
The Fluid Interfaces research group is radically rethinking the ways we interact with digital information and services. We design interfaces that are more intuitive and intelligent, and better integrated in our daily physical lives. We investigate ways to augment the everyday objects and spaces around us, making them responsive to our attention and actions. The resulting augmented environments offer opportunities for learning and interaction and ultimately for enriching our lives.

Research Projects

  • Augmented Product Counter

    Natan Linder, Pattie Maes and Rony Kubat

    We have created an augmented reality (AR) based product display counter that transforms any surface or object into an interactive surface, blending digital media and information with physical space. This system enables shoppers to conduct research in the store, learn about product features, and talk to a virtual expert to get advice via built-in video conferencing. The Augmented Product Counter is based on LuminAR technology, which enables shoppers to get detailed product information, access the web for unbiased reviews, compare prices, and conduct research while they interact with real products.

  • Blossom

    Pattie Maes and Sajid Sadi
    Blossom is a multiperson awareness system that uses ioMaterials-based techniques to connect distant friends and family. It provides an awareness medium that does not rely on the attention- and reciprocity-demanding interfaces provided by digital communication media such as mobile phones, SMS, and email. Combining touch-based input with visual, haptic, and motile feedback, Blossoms are created as pairs that communicate over the network, echoing each other's conditions and forming an implicit, always-there link that physically expresses awareness while retaining the instantaneous capabilities that define digital communication.
  • Brainstorming with Someone Else's Mind

    Pattie Maes and Cassandra Xia

    This project examines how a file system can be organized for brainstorming. We perform a keyword search against a file system of information written by a particular user, as well as external material that the user has deemed inspirational. When tasked with generating new ideas, we search the user's files for relevant thoughts endorsed by that user. If multiple people organize their file systems in this way, it becomes possible to brainstorm with someone else's mind by running the same search over the other person's file system.
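
    One way to read the mechanism above is as a plain keyword search over a directory of a user's notes, ranked by term overlap with the prompt. The following Python is a minimal sketch under that assumption; the file layout and scoring are illustrative, not the project's actual implementation.

    ```python
    import re
    from pathlib import Path

    def tokenize(text):
        """Lowercase and split text into a set of word tokens."""
        return set(re.findall(r"[a-z']+", text.lower()))

    def brainstorm(query, notes_dir, top_k=5):
        """Rank a user's note files by keyword overlap with the query.

        `notes_dir` stands in for the user's curated file system of writings
        and material they found inspirational (an assumption about layout).
        """
        query_terms = tokenize(query)
        scored = []
        for path in Path(notes_dir).rglob("*.txt"):
            overlap = len(query_terms & tokenize(path.read_text(errors="ignore")))
            if overlap:
                scored.append((overlap, path))
        scored.sort(reverse=True)
        return [path for _, path in scored[:top_k]]

    # Brainstorming "with someone else's mind" is the same call, pointed at
    # the other person's notes directory, e.g.:
    # ideas = brainstorm("sustainable packaging", "/notes/alice")
    ```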

  • Community Data Portrait

    Pattie Maes and Doug Fritz

    As research communities grow, it is becoming increasingly difficult to understand the dynamics of the community: its history and the varying perspectives with which it is interpreted. As our information becomes more digital, the histories and artifacts of the community become increasingly hidden. The purpose here is to show researchers how they fit into the background of a larger community, hopefully strengthening weak ties and understanding. At a high level, this project is intended to help the Media Lab community reflect on what it has been working on over the past 25 years and where it should be heading next. On a more individual level, it is intended to help researchers within the community situate themselves by better understanding the research directions and interests of their collaborators.

  • Cornucopia: Digital Gastronomy

    Marcelo Coelho
    Cornucopia is a concept design for a personal food factory, bringing the versatility of the digital world to the realm of cooking. In essence, it is a 3D printer for food that works by storing, precisely mixing, depositing, and cooking layers of ingredients. Cornucopia's cooking process starts with an array of food canisters that refrigerate and store a user's favorite ingredients. These are piped into a mixer and extruder head that can accurately deposit elaborate combinations of food; while this takes place, the food is heated or cooled. This fabrication process not only allows for the creation of flavors and textures that would be completely unimaginable through other cooking techniques, but it also allows the user to have ultimate control over the origin, quality, nutritional value, and taste of every meal.
  • Defuse

    Aaron Zinman, Judith Donath and Pattie Maes
    Defuse is a commenting platform that rethinks the medium's basic interactions. In a world where a single article in The New York Times can attract 3,000 comments, the original design of public asynchronous text systems has reached its limit; it needs more than social convention. Defuse uses context to change the basics of navigation and message posting, combining machine learning, visualization, and structural changes to achieve this goal.
  • Display Blocks

    Pattie Maes and Pol Pla i Conesa

    Display Blocks is a novel approach to display technology that consists of arranging six organic light-emitting diode screens in a cubic form factor. The aim of the project is to explore the possibilities that this type of display holds for data visualization, manipulation, and exploration. The research focuses on exploring how the physicality of the screen can be leveraged to better interpret its contents. To this end, the physical design is accompanied by a series of applications that demonstrate the advantages of this technology.

  • EyeRing: A Compact, Intelligent Vision System on a Ring

    Suranga Nanayakkara and Roy Shilkrot

    EyeRing is a wearable intuitive interface that allows a person to point at an object to see or hear more information about it. We came up with the idea of a micro camera worn as a ring on the index finger with a button on the side, which can be pushed with the thumb to take a picture or a video that is then sent wirelessly to a mobile phone to be analyzed. The user receives information about the object in either auditory or visual form. Future versions of our proposed system may include more sensors to allow non-visual data capture and analysis. This finger-worn configuration of sensors opens up a myriad of possible applications for the visually impaired as well as the sighted.
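
    Read as an architecture, the interaction above is a button-triggered capture on the ring, a wireless hop to the phone, analysis on the phone, and spoken (or displayed) feedback. The Python below is only a hedged sketch of that flow; the Ring and Phone classes are stand-ins, not EyeRing's actual components or API.

    ```python
    class Ring:
        """Stand-in for the finger-worn micro camera."""
        def capture_frame(self):
            return b"<jpeg bytes>"            # placeholder for real camera data

    class Phone:
        """Stand-in for the phone-side analysis and feedback."""
        def recognize(self, image, mode):
            # A real system would run computer vision here (e.g., currency,
            # color, or text recognition, depending on the selected mode).
            return f"unrecognized object ({mode} mode)"

        def speak(self, text):
            print("speaking:", text)          # placeholder for text-to-speech

    def on_button_press(ring, phone, mode="currency"):
        """Thumb press on the ring: capture, send to the phone, analyze, speak."""
        image = ring.capture_frame()          # captured by the index-finger camera
        label = phone.recognize(image, mode)  # sent wirelessly and analyzed
        phone.speak(label)                    # auditory feedback for the user

    on_button_press(Ring(), Phone())
    ```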

  • FlexDisplays

    Pattie Maes, Juergen Steimle, and Simon Olberding

    We believe that in the near future many portable devices will have resizable displays. This will allow for devices with a very compact form factor, which can unfold into a large display when needed. In this project, we design and study novel interaction techniques for devices with flexible, rollable, and foldable displays. We are exploring a number of scenarios, including personal and collaborative uses.

  • Flexpad

    Pattie Maes, Jürgen Steimle and Andreas Jordt

    Flexpad is a highly flexible display interface. Using a Kinect camera and a projector, Flexpad transforms virtually any sheet of paper or foam into a flexible, highly deformable, and spatially aware handheld display. It uses a novel approach for tracking deformed surfaces from depth images in real time. This approach captures deformations in high detail, is very robust to occlusions created by the user’s hands and fingers, and does not require any kind of markers or visible texture. As a result, the display is considerably more deformable than previous work on flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation.

  • Hyperego

    Aaron Zinman

    When we meet new people in real life, we assess them using a multitude of signals relevant to our upbringing, society, and our experiences and disposition. When we encounter a new individual virtually, usually we are looking at a single communication instance in bodiless form. How can we gain a deeper understanding of this individual without the cues we have in real life? Hyperego aggregates information across various online services to provide a more uniform data portrait of the individual. These portraits are at the user's control, allowing specific data to be hidden, revealed, or grouped in aggregate using an innovative privacy model.

  • Inktuitive: An Intuitive Physical Design Workspace

    Pranav Mistry and Kayato Sekiya
    Despite the advances and advantages of computer-aided design tools, pencil and paper remain the most important tools in the early stages of design. Inktuitive is an intuitive physical design workspace that aims to bring together conventional design tools such as paper and pencil with the power and convenience of digital tools for design. Inktuitive also extends the natural work practice of using physical paper by giving the pen the ability to control the design in physical 3D, freeing it from its tie to the paper. The intuition of pen and paper is still present, but lines are captured and translated into shapes in the digital world. The physical paper is augmented with overlaid digital strokes. Furthermore, the platform provides a novel interaction mechanism for drawing and designing using above-the-surface pen movements.
  • InReach

    Anette von Kapri, Seth Hunter, and Pattie Maes

    Remote collaboration systems are still far from offering the same rich experience that collocated meetings provide. Collaborators can transmit their voice and face at a distance, but it is very hard to point at physical objects and interpret gestures. InReach explores how remote collaborators can "reach into" a shared digital workspace where they can manipulate virtual objects and data. Collaborators see live, 3D-reconstructed meshes of themselves in a shared virtual space and can point at data or 3D models. They can grab digital objects with their bare hands, and translate, scale, and rotate them.

  • InterPlay: Full-Body Interaction Platform

    Pattie Maes, Seth Hunter and Pol Pla i Conesa

    InterPlay is a platform for designers to create dynamic social simulations that transform public spaces into immersive environments where people become the central agents. It uses computer vision and projection to facilitate full-body interaction with digital content. The physical world is augmented to create shared experiences that encourage active play, negotiation, and creative composition.

  • ioMaterials

    Pattie Maes, Sajid Sadi and Amir Mikhak
    ioMaterials is a project encompassing a variety of collocated sensing-actuation platforms. The project explores various aspects of dense sensing for human communication, memory, and remote awareness. Using dense, collocated sensing and actuation, we can turn common objects into interfaces capable of hiding unobtrusively in plain sight. Relational Pillow and Blossom are instantiations of this ideal.
  • Liberated Pixels

    Susanne Seitinger
    We are experimenting with systems that blur the boundary between urban lighting and digital displays in public spaces. These systems consist of liberated pixels, which are not confined to rigid frames as typical urban screens are. Liberated pixels can be applied to existing horizontal and vertical surfaces in any configuration, and communicate with each other to enable a different repertoire of lighting and display patterns. We have developed Urban Pixels, a wireless infrastructure for liberated pixels. Composed of autonomous units, the system presents a programmable and distributed interface that is flexible and easy to deploy. Each unit includes an on-board battery, RF transceiver, and microprocessor. The goal is to incorporate renewable energy sources in future versions.
  • Light.Bodies

    Susanne Seitinger, Alex S. Taylor, and Microsoft Research
    Light.Bodies are mobile, portable, hand-held lights that respond to audio and vibration input. The motivation to build these devices is grounded in a historical reinterpretation of street lighting. Before fixed infrastructure illuminated cities at night, people carried lanterns with them to make their presence known. Using this as our starting point, we asked how we might engage people in more actively shaping the lightscapes that surround them. A first iteration of responsive, LED-based colored lights was designed for use in three different settings: a choreographed dance performance, an outdoor public installation, and an audio-visual event.
  • LuminAR

    Natan Linder, Pattie Maes and Rony Kubat

    LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into a new category of robotic, digital information devices. The LuminAR Bulb combines a pico-projector, camera, and wireless computer in a compact form factor. This self-contained system provides users with just-in-time projected information and a gestural user interface, and it can be screwed into standard light fixtures everywhere. The LuminAR Lamp is an articulated robotic arm, designed to interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment their environments with media and information, while seamlessly connecting with laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces and objects into interactive spaces that blend digital media and information with the physical space. The project radically rethinks the design of traditional lighting objects, and explores how we can endow them with novel augmented-reality interfaces.

  • MARS: Manufacturing Augmented Reality System

    Rony Daniel Kubat, Natan Linder, Niaja Farve, Yihui Saw and Pattie Maes

    Projected augmented reality in the manufacturing plant can increase worker productivity, reduce errors, gamify the workspace to increase worker satisfaction, and collect detailed metrics. We have built new LuminAR hardware customized for the needs of the manufacturing plant, along with software for a specific manufacturing use case.

  • MemTable

    Pattie Maes, Seth Hunter, Alexandre Milouchev and Emily Zhao
    MemTable is a table with a contextual memory. The goal of the system is to facilitate reflection on the long-term collaborative work practices of a small group by designing an interface that supports meeting annotation, process documentation, and visualization of group work patterns. The project introduces a tabletop designed both to remember how it is used and to provide an interface for contextual retrieval of information. MemTable examines how an interface that embodies the history of its use can be incorporated into our daily lives in more ergonomic and meaningful contexts.
  • Mouseless

    Pranav Mistry and Pattie Maes

    Mouseless is an invisible computer mouse that provides the familiar interaction of a physical mouse without requiring any actual mouse hardware. Despite advances in computing hardware technologies, the two-button computer mouse has remained the predominant means of interacting with a computer. Mouseless removes the requirement of having a physical mouse altogether, while preserving the intuitive interaction with which users are familiar.

  • Moving Portraits

    Pattie Maes
    A Moving Portrait is a framed portrait that is aware of and reacts to viewers’ presence and body movements. A portrait represents a part of our lives and reflects our feelings, but it is completely oblivious to the events that occur around it or to the people who view it. By making a portrait interactive, we create a different and more engaging relationship between it and the viewer.
  • MTM "Little John"

    Natan Linder

    MTM "Little John" is a multi-purpose, mid-size, rapid prototyping machine with the goal of being a personal fabricator capable of performing a variety of tasks (3D printing, milling, scanning, vinyl cutting) at a price point in the hundreds rather than thousands of dollars. The machine was designed and built in collaboration with the MTM (Machines that Make) Project at MIT Center for Bits and Atoms.

  • Perifoveal Display

    Valentin Heun, Anette von Kapri and Pattie Maes

    Today's GUIs are designed for small screens that show only a little information at a time. Real-time data that exceeds one small screen must be continuously scanned with our eyes in order to build an abstract model of it in our minds, so GUIs do not scale to huge amounts of data. The Perifoveal Display takes over this abstraction and visualizes data so that the full range of vision can be used for monitoring. It does so by exploiting the different visual systems in the eye: our field of view spans about 120 degrees and is highly sensitive to motion, while only about six degrees of central vision are slow but detailed enough to read text.

  • PreCursor

    Pranav Mistry and Pattie Maes

    PreCursor is an invisible layer that hovers in front of the screen and enables novel interaction that reaches beyond current touchscreens. Using a computer mouse provides two levels of depth when interacting with content on a screen: one can either hover or click. Hovering reveals short descriptions, while clicking selects or performs an action. PreCursor provides this missing sense of interaction to touchscreens. PreCursor technology has the potential to expand beyond a basic computer screen: it can also be applied to mobile touchscreens or to objects in the real world, or can be the launching pad for creating a 3D space for interaction.
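
    A minimal way to express the hover-versus-click distinction is to classify the finger's distance from the screen into three states. The thresholds and state names below are illustrative assumptions, not PreCursor's implementation.

    ```python
    def classify_finger(distance_mm, touch_mm=2, hover_mm=40):
        """Map a finger's height above the screen to an interaction state.

        Within `touch_mm` counts as a click, within `hover_mm` as a hover,
        and anything farther is ignored (thresholds are made up here).
        """
        if distance_mm <= touch_mm:
            return "click"
        if distance_mm <= hover_mm:
            return "hover"
        return "idle"

    # Example stream of finger heights from a depth or capacitive sensor (mm):
    for d in (120, 35, 12, 1):
        print(d, "->", classify_finger(d))   # idle, hover, hover, click
    ```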

  • Pulp-Based Computing: A Framework for Building Computers Out of Paper

    Marcelo Coelho, Pattie Maes, Joanna Berzowska and Lyndl Hall
    Pulp-Based Computing is a series of explorations that combine smart materials, papermaking, and printing. By integrating electrically active inks and fibers during the papermaking process, it is possible to create sensors and actuators that behave, look, and feel like paper. These composite materials not only leverage the physical and tactile qualities of paper, but can also convey digital information, spawning new and unexpected application domains in ubiquitous and pervasive computing at extremely affordable costs.
  • Quickies: Intelligent Sticky Notes

    Pranav Mistry and Pattie Maes
    The goal of Quickies is to bring one of the most useful inventions of the twentieth century into the digital age: the ubiquitous sticky note. Quickies enriches the experience of using sticky notes by linking hand-written sticky notes to mobile phones, digital calendars, task lists, email, and instant messaging clients. By augmenting the familiar, ubiquitous sticky note, Quickies leverages existing patterns of behavior, merging paper-based sticky note usage with the user's informational experience. The project explores how artificial intelligence (AI), natural language processing (NLP), RFID, and ink-recognition technologies can make it possible to create intelligent sticky notes that can be searched and located, can send reminders and messages, and, more broadly, can act as an I/O interface to the digital information world.
  • ReflectOns: Mental Prostheses for Self-Reflection

    Pattie Maes and Sajid Sadi
    ReflectOns are objects that help people think about their actions and change their behavior based on subtle, ambient nudges delivered at the moment of action. Certain tasks, such as figuring out the number of calories consumed or the amount of money spent eating out, are generally difficult for the human mind to grapple with. By using in-place sensing combined with gentle feedback and an understanding of users' goals, we can recognize behaviors and trends and provide users with a reflection of their own actions, tailored both to improve their understanding of the repercussions of those actions and to help them change their behaviors to better match their own goals.
  • Remnant: Handwriting Memory Card

    Pattie Maes and Sajid Sadi
    Remnant is a greeting card that merges the affordances of physical materials with the temporal malleability of digital systems to create, enshrine, and reinforce the very thing that makes a greeting personal: the hand of the sender. The card records both the timing and the form of the sender's handwriting when it is first used. At a later time, collocated output recreates the handwriting, allowing the invisible, memorized hand of the sender to write his or her message directly in front of the recipient.
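
    The record-and-replay idea reduces to storing timestamped pen samples and replaying them with their original pacing. A minimal sketch, with a print statement standing in for the card's physical output:

    ```python
    import time

    def record(samples):
        """Store (x, y, t) pen samples; `t` is seconds since the stroke began."""
        return sorted(samples, key=lambda s: s[2])

    def replay(stroke, draw=lambda x, y: print(f"pen at ({x}, {y})")):
        """Re-create the handwriting with its original timing."""
        start = time.monotonic()
        for x, y, t in stroke:
            delay = t - (time.monotonic() - start)   # wait for the original offset
            if delay > 0:
                time.sleep(delay)
            draw(x, y)   # in the card, this would drive the collocated output

    stroke = record([(0, 0, 0.0), (4, 1, 0.15), (9, 3, 0.4)])
    replay(stroke)
    ```
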
  • Second Surface: Multi-User Spatial Collaboration System Based on Augmented Reality

    Shunichi Kasahara, Hiroshi Ishii, Pattie Maes, Austin S. Lee and Valentin Heun

    An environment for creative collaboration is important for enhancing human communication and expression. While many researchers have explored different collaborative spatial interaction technologies, most of these systems require special equipment and cannot adapt to everyday environments. Second Surface is a novel multi-user augmented reality system that fosters real-time interactions. These interactions take place in the physical surroundings of everyday objects, such as trees or houses. Our system allows users to share 3D drawings, texts, and photos relative to such objects with any other person who uses the same software at the same spot. It can provide an alternate reality that generates playful and natural interaction in an everyday setting, exploring a vision that integrates collaborative virtual spaces into the physical space.

  • Sensei: A Mobile Tool for Language Learning

    Pattie Maes, Suranga Nanayakkara and Roy Shilkrot

    Sensei is a mobile interface for language learning (words, sentences, pronunciation). It combines techniques from computer vision, augmented reality, speech recognition, and common-sense knowledge. In the current prototype, the user points his cell phone at an object and then sees the word and hears it pronounced in the language of his choice. The system also shows more information pulled from a common-sense knowledge base. The interface is primarily designed to be used as an interactive and fun language-learning tool for children. Future versions will be applied to other contexts such as real-time language translation for face-to-face communication and assistance to travelers for reading information displays in foreign languages; in addition, future versions will provide feedback to users about whether they are pronouncing words correctly. The project is implemented on a Samsung Galaxy phone running Android, donated by Samsung Corporation.
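
    The pipeline described above can be summarized as: recognize the object in the camera frame, translate its label into the chosen language, pronounce it, and attach related common-sense facts. The Python below is a hedged sketch of that pipeline only; the tiny dictionaries and stub functions stand in for the real vision, translation, text-to-speech, and common-sense components.

    ```python
    # Illustrative stand-ins for the real recognition, translation, TTS, and
    # common-sense knowledge components.
    TRANSLATIONS = {("apple", "es"): "manzana", ("apple", "fr"): "pomme"}
    COMMON_SENSE = {"apple": ["is a fruit", "can be eaten"]}

    def recognize_object(frame):
        """Placeholder for the computer-vision step (returns an English label)."""
        return "apple"

    def translate(label, language):
        return TRANSLATIONS.get((label, language), label)

    def speak(text):
        print("pronouncing:", text)          # placeholder for text-to-speech

    def point_and_learn(frame, language="es"):
        """Point the phone at an object: see the word, hear it, get related facts."""
        label = recognize_object(frame)
        word = translate(label, language)
        speak(word)
        return word, COMMON_SENSE.get(label, [])

    print(point_and_learn(frame=None))
    ```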

  • Shutters: A Permeable Surface for Environmental Control and Communication

    Marcelo Coelho and Pattie Maes
    Shutters is a permeable kinetic surface for environmental control and communication. It is composed of actuated louvers (or shutters) that can be individually addressed for precise control of ventilation, daylight incidence, and information display. By combining smart materials, textiles, and computation, Shutters builds upon other facade systems to create living environments and work spaces that are more energy efficient, while being aesthetically pleasing and considerate of their inhabitants' activities.
  • Siftables: Physical Interaction with Digital Media

    Pattie Maes
    Siftables are compact electronic devices with motion sensing, graphical display, and wireless communication. One or more Siftables may be physically manipulated to interact with digital information and media. A group of Siftables can thus act in concert to form a physical, distributed, gesture-sensitive, human-computer interface. Each Siftable object is stand-alone (battery-powered and wireless); Siftables do not require installed infrastructure such as large displays, instrumented tables, or cameras. Siftables' key innovation is to give direct physical embodiment to information items and digital media content, allowing people to use their hands and bodies to manipulate these data instead of relying on virtual cursors and windows. By leveraging people’s ability to manipulate physical objects, Siftables radically simplify the way we interact with information and media.
  • Six-Forty by Four-Eighty: An Interactive Lighting System

    Marcelo Coelho and Jamie Zigelbaum

    Six-Forty by Four-Eighty is an interactive lighting system composed of an array of magnetic physical pixels. Individually, pixel-tiles change their color in response to touch and communicate their state to each other by using a person's body as the conduit for information. When grouped together, the pixel-tiles create patterns and animations that can serve as a tool for customizing our physical spaces. By transposing the pixel from the confines of the screen and into the physical world, focus is drawn to the materiality of computation and new forms for design emerge.

  • SixthSense

    Pranav Mistry
    Information is often confined to paper or computer screens. SixthSense frees data from these confines and seamlessly integrates information and reality. With the miniaturization of computing devices, we are always connected to the digital world, but there is no link between our interactions with these digital devices and our interactions with the physical world. SixthSense bridges this gap by augmenting the physical world with digital information, bringing intangible information into the tangible world. Using a projector and camera worn as a pendant around the neck, SixthSense sees what you see and visually augments surfaces or objects with which you interact. It projects information onto any surface or object, and allows users to interact with the information through natural hand gestures, arm movements, or with the object itself. SixthSense makes the entire world your computer.
  • Smarter Objects: Using AR technology to Program Physical Objects and their Interactions

    Pattie Maes, Valentin Heun and Shunichi Kasahara

    The Smarter Objects system explores a new method for interaction with everyday objects. The system associates a virtual object with every physical object to support an easy means of modifying the interface and the behavior of that physical object as well as its interactions with other "smarter objects." As a user points a smart phone or tablet at a physical object, an augmented reality (AR) application recognizes the object and offers an intuitive graphical interface to program the object's behavior and interactions with other objects. Once reprogrammed, the Smarter Object can then be operated with a simple tangible interface (such as knobs or buttons). Smarter Objects combine the adaptability of digital objects with the simple tangible interface of a physical object. We have implemented several Smarter Objects and usage scenarios demonstrating the potential of this approach.

  • SPARSH

    Pranav Mistry, Suranga Nanayakkara, and Pattie Maes

    SPARSH explores a novel interaction method to seamlessly transfer data among multiple users and devices in a fun and intuitive way. A user touches a data item to be copied from a device, conceptually saving the item in his or her body. Next, the user touches the other device to which he or she wants to paste/pass the saved content. SPARSH uses touch-based interactions as indications for what to copy and where to pass it. Technically, the actual transfer of media happens via the information cloud.
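
    Since the description notes that the actual transfer happens via the information cloud, the touch interactions can be modeled as a per-user cloud clipboard: "copy" stores the item under the user's identity, and "paste" retrieves it on any other device. A minimal in-memory sketch of that idea (a real deployment would use a networked store and user authentication):

    ```python
    # In-memory stand-in for the information cloud, keyed by user identity.
    CLOUD_CLIPBOARD = {}

    def copy(user_id, item):
        """Called when the user touches an item on the source device."""
        CLOUD_CLIPBOARD[user_id] = item

    def paste(user_id):
        """Called when the same user touches the destination device."""
        return CLOUD_CLIPBOARD.pop(user_id, None)

    copy("alice", {"type": "photo", "name": "sunset.jpg"})
    print(paste("alice"))   # the item "follows" the user to the next device
    ```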

  • Spotlight

    Pattie Maes and Sajid Sadi
    Spotlight explores an artist's ability to create new meaning by combining interactive portraits with diptych or polyptych layouts. The mere placement of two or more portraits near each other is a known technique for creating new meaning in the viewer's mind. Spotlight takes this concept into the interactive domain, creating interactive portraits that are aware of each other's state and gestures, so that not only the visual layout but also the interaction between portraits creates new meaning for the viewer. Using a combination of interaction techniques, Spotlight engages the viewer at two levels. At the group level, the viewer influences the portraits' "social dynamics." At the individual level, a portrait's "temporal gestures" expose much about the subject's personality.
  • Sprout I/O: A Texturally Rich Interface

    Marcelo Coelho and Pattie Maes
    Sprout I/O is a kinetic fur that can capture, mediate, and replay the physical impressions we leave in our environment. It combines embedded electronic actuators with a texturally rich substrate that is soft, fuzzy, and pliable to create a dynamic structure where every fur strand can sense physical touch and be individually moved. By developing a composite material that collocates kinetic I/O, while preserving the expectations that we normally have from interacting with physical things, we can more seamlessly embed and harness the power of computation in our surrounding environments to create more meaningful interfaces for our personal and social activities.
  • Surflex: A Shape-Changing Surface

    Marcelo Coelho and Pattie Maes
    Surflex is a programmable surface for the design and visualization of physical objects and spaces. It combines the different memory and elasticity states of its materials to deform and gain new shapes, providing a novel alternative for 3-D fabrication and the design of physically adaptable interfaces.
  • Swyp

    Natan Linder and Alexander List

    With Swyp you can transfer any file from any app to any app on any device, simply with a swipe of a finger. Swyp is a framework facilitating cross-app, cross-device data exchange using physical "swipe" gestures. Our framework allows any number of touch-sensing, collocated devices to establish file exchange and communications with no pairing other than a physical gesture. With this inherently physical paradigm, users can immediately grasp the concepts behind device-to-device communications. Our prototype application, Postcards, explores touch-enabled mobile devices connected to the LuminAR augmented surface interface; Postcards allows users to collaborate and create digital postcards using Swyp interactions. We demonstrate how Swyp-enabled interfaces can make a new generation of interactive workspaces possible by allowing pairing-free, gesture-based communications to and from collocated devices.
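
    One simple way to realize "no pairing other than a physical gesture" is to match a swipe that exits one device's screen with a swipe that enters another device at nearly the same moment. The matching rule below is an illustrative assumption about how such a rendezvous could work, not Swyp's actual protocol.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SwipeEvent:
        device: str
        kind: str         # "exit" (swipe left the screen) or "enter" (swipe arrived)
        timestamp: float  # seconds, from a shared or synchronized clock

    def match_swipes(events, max_gap=0.5):
        """Pair exit and enter swipes that occur within `max_gap` seconds.

        Each matched pair nominates a (sender, receiver) file-transfer channel
        between two collocated devices.
        """
        exits = [e for e in events if e.kind == "exit"]
        enters = [e for e in events if e.kind == "enter"]
        return [(out.device, inn.device)
                for out in exits for inn in enters
                if out.device != inn.device
                and abs(inn.timestamp - out.timestamp) <= max_gap]

    events = [SwipeEvent("tablet", "exit", 10.20), SwipeEvent("phone", "enter", 10.35)]
    print(match_swipes(events))   # [('tablet', 'phone')]
    ```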

  • TaPuMa: Tangible Public Map

    Pranav Mistry and Tsuyoshi Kuroki
    TaPuMa is a digital, tangible, public map that allows people to use the everyday objects they carry to access relevant, just-in-time information and to find locations of places or people. TaPuMa envisions that conventional maps can be augmented with the unique identities and affordances of these objects. TaPuMa uses an environment in which a map and dynamic content are projected onto a tabletop. A camera mounted above the table identifies the objects placed on the surface and tracks their locations, and software registers each object and provides relevant information visualizations directly on the table. The projector augments both object and table with projected digital information. TaPuMa explores a novel interaction mechanism where physical objects are used as interfaces to digital information: it allows users to acquire information through tangible media, the things they carry.
  • TeleStudio

    Seth Hunter

    TeleKinect is peer-to-peer software for creative tele-video interactions. The environment can be used to interact with others at a distance within the same digital window: presenting a PowerPoint deck together, broadcasting your own news, creating an animation, acting or dancing along with any online video, overdubbing commentary, teaching, putting on a puppet show, storytelling, watching TV socially, and exercising together. The system tracks gestures and objects in the local environment and maps them to virtual objects and characters. It allows users to creatively bridge physical and digital meeting spaces by defining their own mappings.

  • Textura

    Pattie Maes, Marcelo Coelho and Pol Pla i Conesa

    Textura is an exploration of how to enhance white objects with textures. By projecting onto any white surface, we can simulate different textures and materials. We envision this technology to have great potential for customization and personalization, and to be applicable to areas such as industrial design, the game industry, and retail scenarios.

  • The Design of Artifacts for Augmenting Intellect

    Pattie Maes and Cassandra Xia

    Fifty years ago, Doug Engelbart created a conceptual framework for augmenting human intellect in the context of problem-solving. We expand upon Engelbart's framework and use his concepts of process hierarchies and artifact augmentation for the design of personal intelligence augmentation (IA) systems within the domains of memory, decision making, motivation, and mood. We propose a systematic design methodology for personal IA devices, organize existing IA research within a logical framework, and uncover underexplored areas of IA that could benefit from the invention of new artifacts.

  • The Relative Size of Things

    Marcelo Coelho and Pattie Maes

    The Relative Size of Things is a low-cost 3D scanner for the microscopic world. It combines a webcam, a three-axis computer-controlled plotter, and image processing to merge hundreds of photographs into a single three-dimensional scan of surface features that are invisible to the naked eye.
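
    One plausible way to implement the merge step is depth-from-focus: the plotter sweeps the camera through a stack of heights at each position, and the height at which an image patch is sharpest becomes that patch's depth. The sketch below assumes the stack has already been captured as grayscale NumPy arrays; the focus measure (variance of the Laplacian) is a common choice, not necessarily the project's.

    ```python
    import numpy as np
    from scipy.ndimage import laplace

    def sharpness(patch):
        """Focus measure: variance of the Laplacian (higher means sharper)."""
        return laplace(patch.astype(float)).var()

    def height_map(z_stack, z_positions, tile=16):
        """Build a height map from a focal stack of grayscale images.

        z_stack: list of 2D arrays, one per camera height in `z_positions`.
        For each tile, pick the height whose image is sharpest there.
        """
        h, w = z_stack[0].shape
        heights = np.zeros((h // tile, w // tile))
        for i in range(heights.shape[0]):
            for j in range(heights.shape[1]):
                win = (slice(i * tile, (i + 1) * tile),
                       slice(j * tile, (j + 1) * tile))
                scores = [sharpness(img[win]) for img in z_stack]
                heights[i, j] = z_positions[int(np.argmax(scores))]
        return heights
    ```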

  • thirdEye

    Pranav Mistry and Pattie Maes
    thirdEye is a new technique that enables multiple viewers to see different things on the same display screen at the same time. With thirdEye, a public signboard can show a Japanese tourist instructions in Japanese and an American the same instructions in English; games no longer need a split screen, since each player can see his or her personal view of the game on the screen; two people watching TV can each watch their favorite channel on a single screen; a public display can show secret messages or patterns; and people in the same movie theater can see different endings of a suspense movie.
  • Transitive Materials: Towards an Integrated Approach to Material Technology

    Pattie Maes, Marcelo Coelho, Neri Oxman, Sajid Sadi, Amit Zoran and Amir Mikhak
    Transitive Materials is an umbrella project encompassing novel materials, fabrication technologies, and traditional craft techniques that can operate in unison to create objects and spaces that realize truly omnipresent interactivity. We are developing interactive textiles, ubiquitous displays, and responsive spaces that seamlessly couple input, output, processing, communication, and power distribution, while preserving the uniqueness and emotional value of physical materials and traditional craft. Life in a Comic, Physical Heart in a Virtual Body, Augmented Pillows, Flexible Urban Display, Shutters, Sprout I/O, and Pulp-Based Computing are current instantiations of these technologies.
  • VisionPlay

    Pattie Maes and Seth Hunter

    VisionPlay is a framework to support the development of augmented play experiences for children. We are interested in exploring mixed reality applications enabled by web cameras, computer vision techniques, and animations that are more socially oriented and physically engaging. These include using physical toys to control digital characters, augmenting physical play environments with projection, and merging representations of the physical world with virtual play spaces.

  • Watt Watcher

    Pattie Maes, Sajid Sadi and Eban Kunz

    Energy is the backbone of our technological society, yet we have great difficulty understanding where and how much of it is used. Watt Watcher provides in-place feedback on aggregate energy use per device in a format that is easy to understand and compare intuitively. Energy is inherently invisible, and its use is often sporadic and difficult to gauge. How much energy does your laptop use compared to your lamp? Or perhaps your toaster? By giving users some intuition regarding these basic questions, this ReflectOn allows users both to understand their use patterns and to form new, more informed habits.
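
    The aggregation behind such feedback is straightforward: integrate each device's power samples over time into energy, then express one device's total relative to another's so the comparison is intuitive. A minimal sketch with made-up sample readings:

    ```python
    def energy_kwh(samples):
        """Integrate (average watts, seconds) samples into kilowatt-hours."""
        return sum(watts * seconds for watts, seconds in samples) / 3_600_000

    # Made-up readings per device: (average watts, duration in seconds).
    readings = {
        "laptop":  [(45, 4 * 3600)],   # 4 hours at ~45 W
        "lamp":    [(11, 6 * 3600)],   # 6 hours at ~11 W
        "toaster": [(900, 5 * 60)],    # 5 minutes at ~900 W
    }
    totals = {name: energy_kwh(s) for name, s in readings.items()}
    for name, kwh in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {kwh:.3f} kWh ({kwh / totals['lamp']:.1f}x the lamp)")
    ```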

  • Wear Someone Else's Habits

    Pattie Maes and Cassandra Xia

    This project explores the idea that some "intelligence" is encoded in the habits that people assume in daily life. Adopting someone else's habits might allow you to break out of a personal rut, glean some success tactics from someone you admire, or empathize with someone you care about. The project is a wearable system with a Google Calendar backend that actively alerts users to perform a habit based on the events on that calendar.
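
    Given the Google Calendar backend mentioned above, the alerting loop can be sketched as: fetch the habit owner's upcoming events and notify the wearer when one is about to begin. The functions below are hypothetical placeholders for the calendar query and the wearable's alert channel, not the project's actual code.

    ```python
    import datetime as dt

    def fetch_upcoming_events(calendar_id, window_minutes=5):
        """Hypothetical stand-in for a calendar query: returns habit events
        starting within the next `window_minutes`."""
        now = dt.datetime.now()
        return [{"summary": "Morning run", "start": now + dt.timedelta(minutes=2)}]

    def notify_wearable(message):
        """Hypothetical stand-in for the wearable's alert (vibration, display)."""
        print("ALERT:", message)

    def check_habits(calendar_id="mentor@example.com"):
        """Poll the habit owner's calendar and prompt the wearer to follow along."""
        for event in fetch_upcoming_events(calendar_id):
            minutes = round((event["start"] - dt.datetime.now()).total_seconds() / 60)
            notify_wearable(f"{event['summary']} starts in {minutes} min")

    check_habits()
    ```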

  • Wearables for Emotion Capture

    Pattie Maes and Cassandra Xia

    We are exploring the use of wearable objects for capturing emotion. When the user experiences a particular emotion, she triggers the wearable object, which generates unique haptic sensations that come to be associated with that particular emotion. We explore the use of these haptic emotion-capture devices triggered by natural gestures, such as a knee-slapping laugh or a congratulatory high-five.