Fluid Interfaces
How to integrate the world of information and services more naturally into our daily physical lives, enabling insight, inspiration, and interpersonal connections.
The Fluid Interfaces research group is radically rethinking the ways we interact with digital information and services. We design interfaces that are more intuitive and intelligent, and better integrated in our daily physical lives. We investigate ways to augment the everyday objects and spaces around us, making them responsive to our attention and actions. The resulting augmented environments offer opportunities for learning and interaction and ultimately for enriching our lives.

Research Projects

  • Adventure Game for Reducing Cognitive Biases

    Pattie Maes and Cassandra Xia

    This project explores the possibility of reducing cognitive biases (logical reasoning errors) through the use of a game to introduce appropriate statistical mental models.

  • Augmented Airbrush

    Pattie Maes, Joseph A. Paradiso, Roy Shilkrot and Amit Zoran

    We present an augmented handheld airbrush that allows unskilled painters to experience the art of spray painting. Inspired by similar smart tools for fabrication, our handheld device uses 6DOF tracking, mechanical augmentation of the airbrush trigger, and a specialized algorithm to let the painter apply color only where indicated by a reference image. It acts both as a physical spraying device and as an intelligent digital guiding tool that provides manual and computerized control. Using an inverse rendering approach allows for a new augmented painting experience with unique results. We present our novel hardware design, control software, and a discussion of the implications of human-computer collaborative painting.
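
    The control algorithm itself is only named above; as a minimal sketch of the gating idea, assuming the 6DOF pose has already been projected into canvas pixel coordinates (all names and thresholds are hypothetical):

      import numpy as np

      def spray_allowed(pose_xy, reference, canvas, threshold=0.1):
          """Gate the augmented trigger: spray only where the reference
          image still calls for more paint than has been deposited."""
          x, y = int(pose_xy[0]), int(pose_xy[1])
          h, w = reference.shape
          if not (0 <= x < w and 0 <= y < h):
              return False  # nozzle is off the canvas: never spray
          return (reference[y, x] - canvas[y, x]) > threshold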

  • Augmenting Reflection and Awareness with Wearable Devices

    Pattie Maes, Sophia Brueckner, Tinsley Galyean and The Dalai Lama Center for Ethics and Transformative Values

    With this project, we are building wearable devices that can be used to encourage connectedness, awareness, and reflection, providing a radically different type of technology for social networking.

  • AutoEmotive: Bringing Empathy to the Driving Experience

    Pattie Maes, Rosalind W. Picard, Judith Amores Fernandez, Xavier Benavides Palos, Javier Hernandez Rivera and Daniel Jonathan McDuff

    Regardless of the emotional state of drivers, current cars feel impassive and disconnected. We believe that by adding emotion-sensing technologies inside the car, we can dramatically improve the driving experience while increasing the safety of drivers. This work explores the landscape of possible applications when incorporating stress-sensing devices in the car.

  • Clearcut: Bringing Direct Manipulation to Laser Cutter Using a Transparent Display

Pattie Maes, Anirudh Sharma and Meghana Bhat

    Clearcut is a system for direct manipulation with laser cutters. The system consists of a semi-transparent, projected display and an optical tracking system interfaced with a laser cutter, so that users can draw and edit virtual graphics on top of their workpiece, which the laser cutter can then cut. The system also helps users make virtual copies of physical artifacts. Clearcut offers advantages currently only available in hand fabrication, such as letting users adjust designs quickly and iteratively with a stylus based on visual feedback, and letting them use a ruler, protractor, and other physical tools as if they were working on the physical piece directly.
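
    A core ingredient of such a system is a calibration that maps stylus positions on the display to laser-bed coordinates. A minimal sketch using a homography (the point values are hypothetical; OpenCV is our assumption):

      import cv2
      import numpy as np

      # Four stylus points observed on the transparent display, and the
      # corresponding positions on the laser cutter's bed, in millimeters.
      display_pts = np.float32([[102, 88], [918, 95], [910, 700], [96, 693]])
      bed_pts     = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])

      H, _ = cv2.findHomography(display_pts, bed_pts)

      def to_bed(x, y):
          """Map a stylus coordinate on the display to laser-bed mm."""
          p = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)
          return p[0, 0]  # (x_mm, y_mm)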

  • Data Objects

    Pattie Maes and Sang-won Leigh

    With the advent of touch screens and gesture systems, computing has become a matter of manipulating digital data by tapping, dragging, and making hand gestures. However, one of the most crucial functions of hands is missing in this context: carrying. Data has an un-carry-able form, existing only as visualized shapes on 2D displays. Data cannot leave a display while retaining its visual shape: it becomes invisible, traveling over Internet cables and wireless signals. This project aims to transform digital data into a more objectified form that we can carry with our hands and in our pockets, without any digital device or storage medium acting as its "container." The short-term goal of this project is to create proof-of-concept interactions based on existing platforms, and then to create new technologies that bring an authentic experience of manipulating digital data as physical objects.

  • Empathy Box

    Pattie Maes and Sophia Brueckner

    The Empathy Box is an appliance inspired by Philip K. Dick's science fiction novel, Do Androids Dream of Electric Sheep? It connects the user to one or many anonymous people using touch.

  • Enlight

    Pattie Maes, Yihui Saw, Tal Achituv, Natan Linder, and Rony Kubat

    In physics education, virtual simulations give us the ability to show and explain phenomena that are otherwise invisible to the naked eye. However, experiments with analog devices still play an important role: they allow us to verify theories and discover ideas through experiments that are not constrained by software. What if we could combine the best of both worlds? We do so by building our applications on a projected augmented reality system. By projecting onto physical objects, we can paint the phenomena that are otherwise invisible. With our system, we have built "physical playgrounds": simulations that are projected onto the physical world and respond to detected objects in the space. Thus, we can draw virtual field lines on real magnets, track and display the history of a pendulum's location, or even build circuits with both physical and virtual components.
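
    As a sketch of the pendulum example, assuming a camera calibrated to the projector and a brightly colored bob (the color range, trail length, and names are our guesses), the projected history trail could be computed like this:

      import collections
      import cv2
      import numpy as np

      trail = collections.deque(maxlen=120)  # ~4 s of positions at 30 fps

      def update_overlay(frame):
          """Track a yellow pendulum bob in a BGR camera frame and return
          an overlay image (to be projected) showing its recent path."""
          hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, (20, 120, 120), (35, 255, 255))
          ys, xs = np.nonzero(mask)
          if len(xs):
              trail.append((int(xs.mean()), int(ys.mean())))  # bob centroid
          overlay = np.zeros_like(frame)
          for p, q in zip(trail, list(trail)[1:]):
              cv2.line(overlay, p, q, (0, 255, 0), 2)  # history trail
          return overlay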

  • EyeRing: A Compact, Intelligent Vision System on a Ring

    Roy Shilkrot and Suranga Nanayakkara

    EyeRing is a wearable, intuitive interface that allows a person to point at an object to see or hear more information about it. The device is a micro camera worn as a ring on the index finger, with a button on the side that can be pushed with the thumb to take a picture or a video, which is then sent wirelessly to a mobile phone to be analyzed. The user speaks to tell the system what information they are interested in and receives the answer in either auditory or visual form; the device also provides some simple haptic feedback. This finger-worn configuration of sensors and actuators opens up a myriad of possible applications for the visually impaired as well as for sighted people.
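
    On the phone side, the flow amounts to dispatching the photo to an analyzer chosen by the spoken request. A toy sketch (the analyzers are stand-ins for real computer-vision modules, and all names are hypothetical):

      # Stand-in analyzers; a real system would run vision models here.
      ANALYZERS = {
          "currency": lambda img: "twenty dollar bill",
          "color":    lambda img: "dark blue",
          "text":     lambda img: "EXIT",
      }

      def handle_photo(image, spoken_mode, speak):
          """Route a photo from the ring to the analyzer the user asked
          for, then return the answer through text-to-speech."""
          analyzer = ANALYZERS.get(spoken_mode)
          if analyzer is None:
              speak("Sorry, I can't answer that yet.")
          else:
              speak(analyzer(image))

      handle_photo(None, "color", print)  # prints: dark blue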

  • FingerReader

    Pattie Maes, Jochen Huber, Roy Shilkrot, Connie K. Liu and Suranga Nanayakkara

    FingerReader is a finger-worn device that helps the visually impaired read paper-printed text effectively and efficiently. It scans text in a local-sequential manner, enabling the user to read single lines or blocks of text, or to skim the text for important sections, all while receiving auditory and haptic feedback.
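
    One plausible form of the haptic guidance, assuming the finger-mounted camera reports how far the current text line has drifted from the image center (the motor names and threshold are hypothetical):

      def guidance_feedback(line_offset_px, tolerance=8):
          """Map the text line's vertical offset in the finger camera's
          view to a haptic cue. A positive offset means the line sits
          below image center, i.e., the finger has drifted upward."""
          if line_offset_px > tolerance:
              return "vibrate_down"  # cue the user to move the finger down
          if line_offset_px < -tolerance:
              return "vibrate_up"    # cue the user to move the finger up
          return None                # on the line: keep reading aloud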

  • Flexpad

    Pattie Maes, Jürgen Steimle and Andreas Jordt

    Flexpad is a highly flexible display interface. Using a Kinect camera and a projector, Flexpad transforms virtually any sheet of paper or foam into a flexible, highly deformable, and spatially aware handheld display. It uses a novel approach for tracking deformed surfaces from depth images in real time. This approach captures deformations in high detail, is very robust to occlusions created by the user’s hands and fingers, and does not require any kind of markers or visible texture. As a result, the display is considerably more deformable than previous work on flexible handheld displays, enabling novel applications that leverage the high expressiveness of detailed deformation.

  • Glassified: Explorations with Transparent Displays

    Pattie Maes and Anirudh Sharma

    This project explores techniques and applications for using truly transparent displays for augmented reality. The user looks at the physical world through a piece of glass or a lens and perceives relevant digital annotations. Unlike traditional handheld augmented-reality solutions such as Layar or Wikitude, the user can see through the screen to look at and interact with the objects behind it. Among other applications, we have built a prototype of a "smart ruler" that can augment paper drawings with measurements, physical simulations, and more.

  • GlassProv Improv Comedy System

    Pattie Maes, Scott Greenwald, Baratunde Thurston and Cultivated Wit

    As part of a Google-sponsored Glass developer event, Scott Greenwald produced a Glass-enabled improv comedy show together with noted comedians from ImprovBoston and Big Bang Improv. The actors, all wearing Glass, received cues in real time over the course of their improvisation. In contrast with the traditional model for improv comedy, which is punctuated by “freezing” and audience members shouting suggestions, Glass allowed the actors to integrate audience suggestions seamlessly. Actors and audience members agreed that this was a fresh take on improv comedy. It was also a powerful demonstration that cues on Glass are suitable for performance: actors could become aware of a cue's presence without having their concentration or flow interrupted, and then view it at an appropriate time.

  • InterPlay: Full-Body Interaction Platform

    Pattie Maes, Seth Hunter and Pol Pla i Conesa

    InterPlay is a platform for designers to create interactive experiences that transform public spaces into immersive environments where people become the central agents. It uses computer vision and projection to facilitate full-body interaction with digital content. The physical world is augmented to create shared experiences that encourage active play, negotiation, and creative composition.

  • Limbo: Reprogramming Body-Control System

    Sang-won Leigh, Ermal Dreshaj, Pattie Maes, and Mike Bove

    This project aims to create technologies that support people who have lost the ability to control a certain part of their body, or who are attempting sophisticated tasks beyond their capabilities, using wearable sensing and actuation techniques. Our strategy is to combine a body gesture/signal detection system with a muscle actuating/limiting system, and thereby reprogram the way human body parts are controlled. For example, individuals with paralysis could regain the experience of grasping with their hands by actuating hand muscles based on gaze gestures. Individuals who have lost leg control could control their legs with finger movements, and so be able to drive a car without special assistance. Individuals handling very fragile objects could limit their grasping strength using voice commands (e.g., "not stronger," "weaker"). A person with a specific skill could help another complete a complicated task through the other's body-control system.
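
    At its core, this "reprogramming" can be pictured as a remapping table from detected body signals to actuators on other body parts. A toy sketch with entirely hypothetical device names:

      class PrintActuator:  # stand-in for a real FES or grip-limiter driver
          def __init__(self, name):
              self.name = name

          def send(self, command):
              print(f"{self.name}: {command}")

      # Remapping rules: (signal source, gesture) -> (device, command)
      REMAP_RULES = {
          ("gaze", "look_left"):  ("hand_fes", "grasp"),
          ("gaze", "look_right"): ("hand_fes", "release"),
          ("finger", "tap"):      ("leg_fes", "press_pedal"),
          ("voice", "weaker"):    ("grip_limiter", "reduce_force"),
      }

      actuators = {n: PrintActuator(n)
                   for n in ("hand_fes", "leg_fes", "grip_limiter")}

      def route(signal_source, gesture):
          """Forward a detected gesture to whichever actuator the rules
          currently assign it to."""
          target = REMAP_RULES.get((signal_source, gesture))
          if target:
              device, command = target
              actuators[device].send(command)

      route("gaze", "look_left")  # prints: hand_fes: grasp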

  • LuminAR

    Natan Linder, Pattie Maes, and Rony Kubat

    LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into a new category of robotic, digital information devices. The LuminAR Bulb combines a pico projector, camera, and wireless computer in a compact form factor. This self-contained system provides users with just-in-time projected information and a gestural user interface, and it can be screwed into standard light fixtures everywhere. The LuminAR Lamp is an articulated robotic arm, designed to interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment their environments with media and information, while seamlessly connecting with laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces and objects into interactive spaces that blend digital media and information with the physical space. The project radically rethinks the design of traditional lighting objects, and explores how we can endow them with novel augmented-reality interfaces.

  • MARS: Manufacturing Augmented Reality System

    Rony Daniel Kubat, Natan Linder, Ben Weissmann, Niaja Farve, Yihui Saw and Pattie Maes

    Projected augmented reality in the manufacturing plant can increase worker productivity, reduce errors, gamify the workspace to increase worker satisfaction, and collect detailed metrics. We have built new LuminAR hardware customized for the needs of the manufacturing plant and software for a specific manufacturing use case.

  • Move Your Glass

    Pattie Maes and Niaja Farve

    Move Your Glass is an activity and behavior tracker that aims to increase wellness by reinforcing positive behaviors.

  • Moving Portraits

    Pattie Maes

    A Moving Portrait is a framed portrait that is aware of and reacts to viewers’ presence and body movements. A portrait represents a part of our lives and reflects our feelings, but it is completely oblivious to the events that occur around it or to the people who view it. By making a portrait interactive, we create a different and more engaging relationship between it and the viewer.

  • Musical Paintings

    Eric Rosenbaum and Sophia Brueckner

    Touch the painting to release its music. Slide your finger across it to play melodies, play chords with your palm, improvise a duet. We've combined traditional painting techniques with conductive paint and capacitive touch sensing. The result is a new form of visual music, combining composition and instrument into a playable score.
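
    A minimal sketch of the sensing-to-sound mapping, assuming each conductive region reports touch events and a MIDI output is available (the mido library and the note assignments are our assumptions):

      import mido  # assumes a MIDI backend such as python-rtmidi

      # Each conductive region of the painting is wired to one note.
      PAD_TO_NOTE = {0: 60, 1: 62, 2: 64, 3: 67, 4: 69, 5: 72}

      out = mido.open_output()  # default system MIDI output

      def on_touch_change(pad, touched):
          """Called by the capacitive sensor when a region's state flips."""
          msg = mido.Message("note_on" if touched else "note_off",
                             note=PAD_TO_NOTE[pad], velocity=96)
          out.send(msg)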

  • Ogle: A Mobile Eye-Tracking Application Platform

    Pattie Maes and Niaja Farve

    Although the eye, its movements, and how it works have been studied since the late 1800s, eye-tracking studies have been constrained primarily to stationary applications, due to limitations in computational power as well as the bulky size of the equipment. Recent advances in hardware have made it possible to use eye-tracking technology in a mobile context. Ogle is a platform for compact, mobile eye tracking for everyday applications.

  • OpenShades: Augmented Reality with Google Glass

    Brandyn White, Scott Greenwald, Andrew Miller, Kurtis Nelson, and Pattie Maes

    We are exploring the use of Google Glass for augmented reality. The Glass display allows us to overlay information on the world. By streaming data from Glass and communicating back to the user through the display, we can enable remote people to see and participate in activities: to teach, learn, and collaborate. Using a combination of software and hardware tools, we aim to facilitate this type of interaction.

  • Perifoveal Display

    Valentin Heun, Anette von Kapri and Pattie Maes

    Today's computer screens are ill-suited for displaying large amounts of dynamic data such as stock market updates or control room data. The Perifoveal Display is a multi-screen display that is aware of the user's focus of attention and adapts its rendering of the data accordingly. Information in the user's visual periphery is displayed using motion and large black-and-white blocks, while information in the user's foveal area is displayed in very fine detail. This lets the user's entire visual system be employed in monitoring large amounts of changing data.
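
    The core decision, choosing a rendering style by visual eccentricity, can be sketched as follows (the radius and mode names are placeholders):

      import math

      def render_mode(cell_center, gaze_point, foveal_radius=300):
          """Pick how to draw one data cell given the tracked gaze point
          (all values in screen pixels)."""
          dist = math.hypot(cell_center[0] - gaze_point[0],
                            cell_center[1] - gaze_point[1])
          if dist < foveal_radius:
              return "detailed_text"  # fine-grained numbers and labels
          return "motion_block"       # large block that moves on change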

  • Reality Editor: Programming Smarter Objects

    Valentin Heun, James Hobin, Pattie Maes

    The Reality Editor system supports editing the behavior and interfaces of so-called “smart objects”: objects or devices that have an embedded processor and communication capability. Using augmented reality techniques, the Reality Editor maps graphical elements directly on top of the tangible interfaces found on physical objects, such as push buttons or knobs. It allows flexible reprogramming of the objects' interfaces and behavior, as well as the definition of relationships between smart objects, making it easy to create new functionality.
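
    The relationship mechanism can be pictured as links that forward one object's output to another object's input. A toy sketch with a hypothetical smart-object interface:

      class SmartObject:  # stand-in for a networked device wrapper
          def __init__(self):
              self.props = {}

          def read(self, prop):
              return self.props.get(prop, 0)

          def write(self, prop, value):
              self.props[prop] = value

      class Link:
          """A relationship drawn in the AR view: forward one object's
          output property to another object's input property."""
          def __init__(self, src, src_prop, dst, dst_prop):
              self.src, self.src_prop = src, src_prop
              self.dst, self.dst_prop = dst, dst_prop

          def propagate(self):
              self.dst.write(self.dst_prop, self.src.read(self.src_prop))

      radio, lamp = SmartObject(), SmartObject()
      link = Link(radio, "volume", lamp, "brightness")
      radio.write("volume", 7)
      link.propagate()  # the lamp's brightness now follows the knob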

  • ShowMe: Immersive Remote Collaboration System with 3D Hand Gestures

    Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

    ShowMe is an immersive mobile collaboration system that allows remote users to communicate with peers using video, audio, and gestures. With this research, we explore the use of head-mounted displays and depth-sensor cameras to create a system that (1) enables remote users to be immersed in another person’s view, and (2) offers a new way of sending and receiving an expert's guidance through 3D hand gestures. With our system, both users share the same physical environment and can perceive real-time input from each other.

  • Six-Forty by Four-Eighty: An Interactive Lighting System

    Marcelo Coelho and Jamie Zigelbaum

    Six-Forty by Four-Eighty is an interactive lighting system composed of an array of magnetic physical pixels. Individually, pixel-tiles change their color in response to touch and communicate their state to each other by using a person's body as the conduit for information. When grouped together, the pixel-tiles create patterns and animations that can serve as a tool for customizing our physical spaces. By transposing the pixel from the confines of the screen and into the physical world, focus is drawn to the materiality of computation and new forms for design emerge.

  • SixthSense

    Pranav Mistry

    Information is often confined to paper or computer screens. SixthSense frees data from these confines and seamlessly integrates information and reality. With the miniaturization of computing devices, we are always connected to the digital world, but there is no link between our interactions with these digital devices and our interactions with the physical world. SixthSense bridges this gap by augmenting the physical world with digital information, bringing intangible information into the tangible world. Using a projector and camera worn as a pendant around the neck, SixthSense sees what you see and visually augments surfaces or objects with which you interact. It projects information onto any surface or object, and allows users to interact with the information through natural hand gestures, arm movements, or with the object itself. SixthSense makes the entire world your computer.
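
    One simple way for a worn camera to track the hands, in the spirit of early prototypes of this kind, is to follow colored marker caps on the fingertips. A minimal OpenCV sketch (the HSV range is a guess for a green cap; tune per marker and lighting):

      import cv2
      import numpy as np

      def find_fingertip(frame_bgr, lo=(40, 80, 80), hi=(80, 255, 255)):
          """Locate a colored marker cap worn on a fingertip and return
          its image position, or None if it is not in view."""
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, lo, hi)
          ys, xs = np.nonzero(mask)
          if len(xs) < 50:  # too few matching pixels: marker not visible
              return None
          return int(xs.mean()), int(ys.mean())  # marker centroid (x, y)
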
  • Smarter Objects: Health

    Pattie Maes, Valentin Heun, James Hobin and Benjamin Reynolds

    Alongside an overall framework for Smarter Objects, we work on individual themes that illustrate the Smarter Objects potential. The Health example breaks the bubble that can form around us when we measure our life data but have no connection to the physical world around us. With a Smarter Health Object, you can connect yourself to the objects around you in order to communicate with your peers, or empower your physical environment to support you.

  • Smarter Objects: Home Automation

    Pattie Maes, Valentin Heun and James Hobin

    Alongside an overall framework for Smarter Objects, we work on individual themes that illustrate the Smarter Objects potential. The Home Automation example provides a powerful yet intuitive way to customize and personalize lighting, kitchen appliances, air conditioning, and smart-grid applications.

  • Smarter Objects: Play

    Pattie Maes, Valentin Heun and James Hobin

    Alongside an overall framework for Smarter Objects, we work on individual themes that illustrate the Smarter Objects potential. The Play example provides a simple and intuitive interface for connecting and programming a LEGO Mindstorms component, an interface so simple that a child can build robots with it.

  • STAT: Statistical Augmentation Tool

    Pattie Maes, Niaja Farve, David Hill and Paul Dawson

    STAT is a Glass app that allows coaches to get real-time statistical information about players simply by looking at them.

  • TagMe

    Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

    TagMe is an end-user toolkit for easy creation of responsive objects and environments. It consists of a wearable device that recognizes the object or surface the user is touching: everyday objects come to life through RFID tag stickers, which are read by an RFID bracelet whenever the user touches the object. We present a novel approach for creating simple, customizable rules based on emotional attachment to objects and people's social interactions. Using this simple technology, users can extend their application interfaces to include physical objects and surfaces in their personal environment, allowing people to communicate through everyday objects with very little effort.
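
    The rule layer can be pictured as a table from tag IDs to user-defined actions. A toy sketch with hypothetical tag IDs and stand-in actions:

      def send_message(person, text):  # stand-in for a messaging backend
          print(f"-> {person}: {text}")

      def toggle_lamp(name):           # stand-in for a home-automation call
          print(f"toggling the {name} lamp")

      # Hypothetical IDs for stickers the user has placed on objects.
      RULES = {
          "04:a2:9f:11": lambda: send_message("partner", "thinking of you"),
          "04:7b:03:5c": lambda: toggle_lamp("desk"),
      }

      def on_bracelet_read(tag_id):
          """Fired when the RFID bracelet reads a sticker the user touched."""
          action = RULES.get(tag_id)
          if action:
              action()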

  • TeleStudio

    Seth Hunter

    TeleKinect is peer-to-peer software for creative tele-video interactions. The environment can be used to interact with others in the same digital window at a distance: presenting a PowerPoint together, broadcasting your own news, creating an animation, acting or dancing with any online video, overdubbing commentary, teaching, putting on a puppet show, storytelling, social TV viewing, and exercising. The system tracks gestures and objects in the local environment and maps them to virtual objects and characters. It allows users to creatively bridge physical and digital meeting spaces by defining their own mappings.

  • Text Editor for Augmented Writing

    Pattie Maes and Cassandra Xia

    We study augmentation of the creative writing process through a new text editor. This text editor presents short snippets from books and Internet sources such as blogs and Twitter at the moment of writing. The snippets are selected to be relevant to the content the user is trying to produce. The editor explores a number of questions: Is it possible to augment creativity by introducing just-in-time information? Is it more efficient to consume content in the moment of creation? Is it possible to develop a notion of “books” that do not have to be consumed serially?
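
    One plausible way to pick relevant snippets is lexical similarity between the text being written and an indexed corpus. A minimal sketch using TF-IDF (scikit-learn is our assumption, and the tiny corpus is a stand-in for books, blogs, and tweets):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      snippets = [
          "The sea was angry that day, my friends.",
          "Creativity is just connecting things.",
          "The best way out is always through.",
      ]

      vectorizer = TfidfVectorizer(stop_words="english")
      snippet_vecs = vectorizer.fit_transform(snippets)

      def suggest(current_paragraph, k=2):
          """Return the k snippets most similar to the text in progress."""
          query = vectorizer.transform([current_paragraph])
          scores = cosine_similarity(query, snippet_vecs)[0]
          return [snippets[i] for i in scores.argsort()[::-1][:k]]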

  • THAW

    Hiroshi Ishii, Pattie Maes, Sang-won Leigh and Philipp Schoessler

    We present a novel interaction system that allows collocated screen devices to work together. The system tracks the position of a smartphone placed on a host computer's screen, so the smartphone can interact directly with data displayed on the host computer, opening up a novel interaction space. We believe that the space on and above the computer screen holds huge possibilities for new types of interactions. What makes this technology especially interesting is today’s ubiquity of smartphones, together with the fact that the tracking is achieved solely by installing additional software on potentially any phone or computer.
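
    The description leaves the tracking technique open; one software-only approach is to have the host paint a position-encoding color gradient and let the phone's back-facing camera decode it. A deliberately simplistic sketch of that idea (not the system's actual method):

      def decode_position(avg_rgb, screen_w, screen_h):
          """Decode screen coordinates from a position-encoding gradient.

          Assumes the host paints the screen so that red increases
          left-to-right and green top-to-bottom; avg_rgb is the camera's
          mean color in 0-255. A real system would need a far more
          robust encoding."""
          r, g, _ = avg_rgb
          x = r / 255.0 * screen_w
          y = g / 255.0 * screen_h
          return x, y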

  • Thermal Interfaces

    Pattie Maes and Jochen Huber

    In this project, we explore interfaces that can heat up and cool down. These thermal interfaces can be embedded into garments or displays. They are particularly suitable for providing subtle yet expressive on-body feedback for interfaces that rely on alternative feedback modalities.
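
    Such feedback is commonly produced with Peltier elements, whose current direction selects heating or cooling. A minimal control sketch, assuming a hypothetical H-bridge driver object:

      def set_thermal_output(h_bridge, level):
          """Drive a Peltier element through an H-bridge driver (the
          driver object and its methods are hypothetical).

          level is in [-1, 1]: positive warms the skin-facing side,
          negative cools it, and the magnitude sets the PWM duty cycle.
          A real device also needs temperature limits for safety."""
          h_bridge.set_direction("heat" if level >= 0 else "cool")
          h_bridge.set_duty_cycle(min(abs(level), 1.0))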

  • VisionPlay

    Pattie Maes and Seth Hunter

    VisionPlay is a framework to support the development of augmented play experiences for children. We are interested in exploring mixed reality applications enabled by web cameras, computer vision techniques, and animations that are more socially oriented and physically engaging. These include using physical toys to control digital characters, augmenting physical play environments with projection, and merging representations of the physical world with virtual play spaces.

  • WearScript for Glass

    Pattie Maes, Scott Greenwald, Brandyn White and UMD

    WearScript is an open-source platform for rapid prototyping and sharing of networked applications for Android and Glass. Code is edited in a web-based development environment, and stored in the cloud for easy sharing. Applications can easily be networked with other Android devices, peripherals, or web applications.