Fluid Interfaces
Integrating digital interfaces more naturally into our physical lives, enabling insight, inspiration, and interpersonal connections.
The Fluid Interfaces research group is radically rethinking the ways we interact with digital information and services. We design interfaces that are more intuitive and intelligent, and better integrated into our daily physical lives. We investigate ways to augment the everyday objects and spaces around us, making them responsive to our attention and actions. The resulting augmented environments offer opportunities for learning and interaction, and ultimately for enriching our lives.

Research Projects

  • Augmented Airbrush

    Pattie Maes, Joseph A. Paradiso, Roy Shilkrot and Amit Zoran

    We present an augmented handheld airbrush that allows unskilled painters to experience the art of spray painting. Inspired by similar smart tools for fabrication, our handheld device uses 6DOF tracking, mechanical augmentation of the airbrush trigger, and a specialized algorithm to let the painter apply color only where indicated by a reference image. It acts both as a physical spraying device and as an intelligent digital guiding tool that provides manual and computerized control. Using an inverse rendering approach allows for a new augmented painting experience with unique results. We present our novel hardware design, control software, and a discussion of the implications of human-computer collaborative painting.
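
    A minimal sketch of the gating idea behind such a control algorithm, assuming a grayscale reference image and a simulated paint layer (the function names, neighborhood radius, and threshold are our illustrative assumptions, not the project's code): the trigger is opened only where the reference calls for more paint than the canvas already holds.

      import numpy as np

      def trigger_gate(reference, canvas, nozzle_xy, radius=5, threshold=0.05):
          """Open the airbrush trigger at the tracked nozzle position only
          if the reference image wants more paint there than the canvas
          (as estimated by the inverse-rendering step) currently holds."""
          x, y = nozzle_xy
          h, w = reference.shape
          # Sample a small neighborhood around the projected nozzle position.
          ys = slice(max(0, y - radius), min(h, y + radius + 1))
          xs = slice(max(0, x - radius), min(w, x + radius + 1))
          deficit = reference[ys, xs] - canvas[ys, xs]
          # Open the trigger only where the average paint deficit is large enough.
          return float(deficit.mean()) > threshold

      # Toy example: a dark square in the reference, an empty canvas.
      reference = np.zeros((100, 100)); reference[30:70, 30:70] = 1.0
      canvas = np.zeros_like(reference)
      print(trigger_gate(reference, canvas, (50, 50)))  # True: paint here
      print(trigger_gate(reference, canvas, (10, 10)))  # False: stay masked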

  • Enlight

    Pattie Maes, Yihui Saw, Tal Achituv, Natan Linder, and Rony Kubat

    In physics education, virtual simulations have given us the ability to show and explain phenomena that are otherwise invisible to the naked eye. However, experiments with analog devices still play an important role. They allow us to verify theories and discover ideas through experiments that are not constrained by software. What if we could combine the best of both worlds? We achieve that by building our applications on a projected augmented reality system. By projecting onto physical objects, we can make otherwise invisible phenomena visible. With our system, we have built "physical playgrounds": simulations that are projected onto the physical world and respond to detected objects in the space. Thus, we can draw virtual field lines on real magnets, track and provide history on the location of a pendulum, or even build circuits with both physical and virtual components.
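
    To illustrate the "virtual field lines on real magnets" example, here is a minimal sketch (the dipole model, seed point, and function names are our own illustrative assumptions): it traces a 2D field line for a tracked magnet, producing a polyline that a real system would then warp into projector coordinates.

      import numpy as np

      def dipole_field(p, m):
          """Dipole field direction at points p (N, 2) for moment m (2,);
          B is proportional to (3(m.rhat)rhat - m) / |r|^3."""
          rn = np.linalg.norm(p, axis=1, keepdims=True) + 1e-9
          rhat = p / rn
          return (3 * (rhat @ m)[:, None] * rhat - m) / rn**3

      def trace_field_line(seed, m, step=0.02, n_steps=500):
          """Integrate one field line from a seed point with Euler steps."""
          pts = [np.asarray(seed, float)]
          for _ in range(n_steps):
              b = dipole_field(pts[-1][None, :], m)[0]
              b /= np.linalg.norm(b) + 1e-9   # unit step along the field
              pts.append(pts[-1] + step * b)
          return np.array(pts)

      m = np.array([0.0, 1.0])                # magnet orientation from tracking
      line = trace_field_line([0.1, 0.0], m)  # one of several seed points
      print(line.shape)                       # (501, 2): a polyline to project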

  • EyeRing: A Compact, Intelligent Vision System on a Ring

    Roy Shilkrot and Suranga Nanayakkara

    EyeRing is a wearable, intuitive interface that allows a person to point at an object to see or hear more information about it. We came up with the idea of a micro-camera worn as a ring on the index finger, with a button on the side that can be pushed with the thumb to take a picture or a video, which is then sent wirelessly to a mobile phone to be analyzed. The user instructs the system by voice as to what information they are interested in, and receives the answer in either auditory or visual form. The device also provides some simple haptic feedback. This finger-worn configuration of sensors and actuators opens up a myriad of possible applications for the visually impaired as well as for sighted people.
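
    The interaction pipeline (point, click, ask, hear) can be summarized in a short sketch; all functions below are hypothetical stubs standing in for the ring hardware, the phone-side analysis, and text-to-speech, since the actual API is not described here.

      import time

      # Hypothetical stubs for the ring camera, phone-side vision analysis,
      # and audio output; the real system sends images wirelessly to a phone.
      def button_pressed():        return True           # thumb button on the ring
      def capture_photo():         return b"jpeg-bytes"  # ring micro-camera
      def listen_for_intent():     return "currency"     # spoken query from the user
      def recognize(image, task):  return "20 dollars"   # phone-side analysis
      def speak(text):             print("TTS:", text)   # auditory answer
      def buzz():                  pass                  # simple haptic feedback

      def eyering_loop():
          while True:
              if button_pressed():
                  buzz()                      # confirm the shutter haptically
                  image = capture_photo()     # taken at the pointing finger
                  task = listen_for_intent()  # what the user wants to know
                  speak(recognize(image, task))
                  break                       # one interaction for this demo
              time.sleep(0.01)

      eyering_loop()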

  • FingerReader

    Pattie Maes, Jochen Huber, Roy Shilkrot, Connie K. Liu and Suranga Nanayakkara

    FingerReader is a finger-worn device that helps visually impaired users read printed text effectively and efficiently. It scans text in a local-sequential manner, enabling the user to read single lines or blocks of text, or to skim the text for important sections, while receiving auditory and haptic feedback.
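
    A minimal sketch of the local-sequential reading loop, with the camera and OCR stubbed out (the function names and the simulated line of text are our own): speak the word currently under the fingertip, and use haptics to steer the finger back when it drifts off the line.

      # Hypothetical stubs: a real implementation would use the finger-worn
      # camera plus an OCR engine; here we simulate one line of text.
      LINE = "the quick brown fox jumps over the lazy dog".split()

      def word_under_finger(i):  return LINE[i] if i < len(LINE) else None
      def finger_off_line(i):    return i == 3          # simulated drift
      def speak(word):           print("speak:", word)
      def haptic_nudge(side):    print("vibrate:", side)

      def read_line():
          i = 0
          while (word := word_under_finger(i)) is not None:
              if finger_off_line(i):
                  haptic_nudge("up")  # steer the finger back onto the line
              speak(word)             # sequential audio feedback, word by word
              i += 1

      read_line()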

  • GlassProv Improv Comedy System

    Pattie Maes, Scott Greenwald, Baratunde Thurston and Cultivated Wit

    As part of a Google-sponsored Glass developer event, we created a Glass-enabled improv comedy show together with noted comedians from ImprovBoston and Big Bang Improv. The actors, all wearing Glass, received cues in real time in the course of their improvisation. In contrast with the traditional model for improv comedy, punctuated by "freezing" and audience members shouting suggestions, using Glass allowed actors to seamlessly integrate audience suggestions. Actors and audience members agreed that this was a fresh take on improv comedy. The show was a powerful demonstration that cues on Glass are suitable for performance: actors could notice that new cues had arrived without their concentration or flow being interrupted, and then view them at an appropriate moment.

  • HandsOn: Collaborative 3D Augmented Reality System

    Pattie Maes and Kevin Wong

    2D screens, even stereoscopic ones, limit our ability to interact with 3D data. We believe that an augmented reality solution, where 3D data is seamlessly integrated into the real world, is a promising alternative. We are exploring a collaborative augmented reality system for visualizing and manipulating 3D data using a head-mounted, see-through display, where data can be manipulated collaboratively using natural hand gestures.

  • JaJan!: Remote Language Learning in Shared Virtual Space

    Pattie Maes and Kevin Wong

    JaJan! is a virtual language-learning application built on a telepresence system in which users can learn a language together in the same shared virtual space. JaJan! supports five aspects of language learning: learning in context; personalization of learning materials; learning with cultural information; enacting language-learning scenarios; and supporting creativity and collaboration. Although JaJan! is still at an early stage, we believe it can meaningfully change the way we experience language learning and make a real contribution to the field of language education.

  • Limbo: Reprogramming Body-Control System

    Sang-won Leigh, Ermal Dreshaj, Pattie Maes, and Mike Bove

    This project aims to create technologies that support people who have lost the ability to control a certain part of their body, or who are attempting sophisticated tasks beyond their capabilities, using wearable sensing and actuating techniques. Our strategy is to combine a body gesture/signal detection system with a muscle actuating/limiting system, and to reprogram the way human body parts are controlled. For example, individuals with paralysis could regain the experience of grasping with their hands by actuating hand muscles based on gaze gestures. Individuals who have lost leg control could control their legs with finger movements, and so be able to drive a car without special assistance. Individuals handling very fragile objects could limit their grasping strength using voice commands (e.g., "no stronger," "weaker"). A person with a specific skill could help another person complete a complicated task through that person's body-control system.
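
    The reprogramming idea can be made concrete with a small sketch: an input channel is routed through a user-chosen mapping to an output actuator, with a safety cap. The sensor and actuator calls below are hypothetical stubs, not the project's interfaces.

      # Hypothetical routing table: reprogram which input signal drives
      # which body actuator; all sensor/actuator calls are stubs.
      def read_gaze():                  return 0.8    # 0..1 gaze gesture strength
      def read_finger():                return 0.3    # 0..1 finger flexion
      def actuate_hand_grip(level):     print("grip:", level)
      def actuate_leg_extension(level): print("leg:", level)

      ROUTES = [
          # (input channel, output actuator, safety cap on output level)
          (read_gaze,   actuate_hand_grip,     0.6),  # gaze -> grasping
          (read_finger, actuate_leg_extension, 1.0),  # finger -> leg control
      ]

      def control_step():
          for read, actuate, cap in ROUTES:
              level = min(read(), cap)  # clamp, e.g. to protect fragile objects
              actuate(level)

      control_step()  # grip: 0.6, leg: 0.3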

  • LuminAR

    Natan Linder, Pattie Maes, and Rony Kubat

    LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into a new category of robotic, digital information devices. The LuminAR Bulb combines a pico-projector, camera, and wireless computer in a compact form factor. This self-contained system provides users with just-in-time projected information and a gestural user interface, and it can be screwed into standard light fixtures everywhere. The LuminAR Lamp is an articulated robotic arm, designed to interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment their environments with media and information, while seamlessly connecting with laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces and objects into interactive spaces that blend digital media and information with the physical space. The project radically rethinks the design of traditional lighting objects, and explores how we can endow them with novel augmented-reality interfaces.
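
    One standard building block of such projector-camera systems is a homography that maps points detected by the camera onto projector pixels. The sketch below shows only that mapping step, with an assumed, pre-calibrated matrix (the values are illustrative, not LuminAR's calibration).

      import numpy as np

      # Assumed calibration: a 3x3 homography H mapping camera pixels to
      # projector pixels (illustrative values).
      H = np.array([[1.1,  0.02, -30.0],
                    [0.01, 1.05, -12.0],
                    [1e-5, 2e-5,   1.0]])

      def camera_to_projector(pt):
          """Map a camera-space point (e.g., a detected fingertip) into
          projector space via the homography."""
          x, y, w = H @ np.array([pt[0], pt[1], 1.0])
          return (x / w, y / w)  # perspective divide

      # A fingertip detected by the bulb's camera at pixel (320, 240):
      print(camera_to_projector((320, 240)))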

  • MARS: Manufacturing Augmented Reality System

    Rony Daniel Kubat, Natan Linder, Ben Weissmann, Niaja Farve, Yihui Saw and Pattie Maes

    Projected augmented reality in the manufacturing plant can increase worker productivity, reduce errors, gamify the workspace to increase worker satisfaction, and collect detailed metrics. We have built new LuminAR hardware customized for the needs of the manufacturing plant and software for a specific manufacturing use case.

  • Move Your Glass

    Pattie Maes and Niaja Farve

    Move Your Glass is an activity and behavior tracker that aims to increase wellness by encouraging positive behaviors.

  • Reality Editor: Programming Smarter Objects

    Valentin Heun, James Hobin, and Pattie Maes

    The Reality Editor system supports editing the behavior and interfaces of so-called "smart objects": objects or devices that have an embedded processor and communication capability. Using augmented reality techniques, the Reality Editor maps graphical elements directly on top of the tangible interfaces found on physical objects, such as push buttons or knobs. The Reality Editor allows flexible reprogramming of the interfaces and behavior of such objects, and lets users define relationships between smart objects in order to easily create new functionalities.
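
    Conceptually, a relationship routes an output value of one smart object into an input of another. The sketch below is our own minimal model of that idea (the object names and message format are hypothetical, not the Reality Editor protocol).

      # Minimal model of smart objects linked by user-defined relationships.
      class SmartObject:
          def __init__(self, name):
              self.name, self.inputs, self.links = name, {}, []

          def link(self, out_name, target, in_name):
              """Relationship: route this object's output to another's input."""
              self.links.append((out_name, target, in_name))

          def emit(self, out_name, value):
              for out, target, inp in self.links:
                  if out == out_name:
                      target.inputs[inp] = value
                      print(f"{self.name}.{out} -> {target.name}.{inp} = {value}")

      knob = SmartObject("knob")
      lamp = SmartObject("lamp")
      knob.link("rotation", lamp, "brightness")  # created via the AR interface
      knob.emit("rotation", 0.75)                # turning the physical knob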

  • ShowMe: Immersive Remote Collaboration System with 3D Hand Gestures

    Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

    ShowMe is an immersive mobile collaboration system that allows remote users to communicate with peers using video, audio, and gestures. With this research, we explore the use of head-mounted displays and depth-sensor cameras to create a system that (1) enables remote users to be immersed in another person's view, and (2) offers a new way of sending and receiving an expert's guidance through 3D hand gestures. With our system, both users share the same physical environment and can perceive real-time inputs from each other.
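
    Streaming 3D hand gestures between peers reduces to serializing joint positions every frame. The sketch below (JSON over UDP on localhost, with stubbed joint data) is our own illustration of the expert-to-viewer direction, not the paper's wire format.

      import json, socket

      def capture_hand_joints():
          """Stub for the depth camera: 21 joints as (x, y, z) in metres."""
          return [(0.01 * i, 0.0, 0.4) for i in range(21)]

      def send_frame(sock, addr, joints):
          sock.sendto(json.dumps({"hand": joints}).encode(), addr)

      def recv_frame(sock):
          data, _ = sock.recvfrom(65535)
          return json.loads(data)["hand"]  # overlay these on the HMD view

      addr = ("127.0.0.1", 9999)
      rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); rx.bind(addr)
      tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

      send_frame(tx, addr, capture_hand_joints())    # expert side
      print(len(recv_frame(rx)), "joints received")  # viewer side: 21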

  • SmileCatcher

    Pattie Maes and Niaja Farve

    SmileCatcher is a game, played solo or in groups, that attempts to increase happiness. Previous research has shown that smiling correlates directly with happiness and can even produce happiness in a person. A user playing the game tries to collect as many smiles as possible from the people they interact with during a set period of time. In single-player mode, the user compares their scores across subsequent days; in multiplayer mode, players compare their scores with one another over a set period of time. The objective of the tool is to encourage positive social interactions through gamification.
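
    A toy sketch of the scoring mechanic, with smile detection stubbed out by a random draw (all names and numbers below are our own illustrative choices): count smiles per session, then compare today's score against previous days.

      import random

      def smile_detected():  # stub standing in for a real smile detector
          return random.random() < 0.3

      def play_session(interactions=20):
          """One play period: count smiles caught across interactions."""
          return sum(smile_detected() for _ in range(interactions))

      history = [play_session() for _ in range(3)]  # one score per day
      today = play_session()
      print("today:", today, "| best so far:", max(history))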

  • STEM Accessibility Tool for the Visually Impaired

    Pattie Maes and Rahul Namdev

    We are developing an intuitive and interactive platform to make complex information (especially science, technology, engineering, and mathematics material) truly accessible to blind and visually impaired students, using a tactile device with no loss of information compared with printed materials. An important goal of this project is to develop tactile information-mapping protocols through which the tactile interface can best convey educational and other graphical materials.
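
    One small piece of such a mapping protocol is rasterizing a graph onto the pin grid of a refreshable tactile display. The sketch below downsamples y = f(x) onto a coarse boolean pin matrix; the grid size and the example function are illustrative assumptions.

      import math

      ROWS, COLS = 16, 32  # assumed pin-grid resolution of a tactile display

      def rasterize(f, x0, x1, y0, y1):
          """Map y = f(x) onto a ROWS x COLS boolean pin grid."""
          grid = [[False] * COLS for _ in range(ROWS)]
          for c in range(COLS):
              x = x0 + (x1 - x0) * c / (COLS - 1)
              r = round((f(x) - y0) / (y1 - y0) * (ROWS - 1))
              if 0 <= r < ROWS:
                  grid[ROWS - 1 - r][c] = True  # row 0 is the top of the display
          return grid

      pins = rasterize(math.sin, 0, 2 * math.pi, -1.2, 1.2)
      for row in pins:  # 'o' = raised pin, '.' = lowered pin
          print("".join("o" if p else "." for p in row))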

  • TagMe

    Pattie Maes, Judith Amores Fernandez and Xavier Benavides Palos

    TagMe is an end-user toolkit for easy creation of responsive objects and environments. It consists of a wearable device that recognizes the object or surface the user is touching. The user can make everyday objects come to life through the use of RFID tag stickers, which are read by an RFID bracelet whenever the user touches the object. We present a novel approach to creating simple and customizable rules based on emotional attachment to objects and people's social interactions. Using this simple technology, users can extend their application interfaces to include physical objects and surfaces in their personal environment, allowing people to communicate through everyday objects in very low-effort ways.
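
    The rule system can be pictured as a table from tag IDs to user-defined actions, dispatched whenever the bracelet reads a tag. The sketch below is our own minimal model; the tag IDs and actions are hypothetical.

      # Hypothetical mapping from RFID tag IDs to user-defined actions.
      RULES = {
          "tag:door":  lambda: print("texting partner: on my way"),
          "tag:mug":   lambda: print("logging a coffee break"),
          "tag:photo": lambda: print("calling grandma"),
      }

      def bracelet_reads():
          """Stub for the RFID bracelet: tags touched by the user."""
          yield from ["tag:mug", "tag:photo", "tag:unknown"]

      for tag in bracelet_reads():
          action = RULES.get(tag)
          if action:
              action()                   # fire the user's rule for this object
          else:
              print("no rule for", tag)  # untagged or unconfigured object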

  • THAW

    Sang-won Leigh, Philipp Schoessler, Hiroshi Ishii, and Pattie Maes

    We present a novel interaction system that allows collocated screen devices to work together. The system tracks the position of a smartphone placed on a host computer's screen. As a result, the smartphone can interact directly with data displayed on the host computer, which opens up a novel interaction space. We believe that the space on and above the computer screen opens up huge possibilities for new types of interactions. What makes this technology especially interesting is the ubiquity of smartphones today, and the fact that the tracking can be achieved solely by installing additional software on potentially any phone or computer.
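
    The description above does not spell out how the tracking works, so the sketch below shows one plausible software-only scheme, not necessarily THAW's: the host renders a position-encoding color gradient beneath the phone, and the phone's camera decodes its location from the sampled color.

      # Illustrative only: red encodes x, green encodes y across the screen.
      SCREEN_W, SCREEN_H = 1920, 1080

      def gradient_color(x, y):
          """Host side: color displayed at screen pixel (x, y)."""
          return (int(255 * x / SCREEN_W), int(255 * y / SCREEN_H), 0)

      def decode_position(rgb):
          """Phone side: recover (x, y) from a color sampled by the camera."""
          r, g, _ = rgb
          return (r / 255 * SCREEN_W, g / 255 * SCREEN_H)

      sampled = gradient_color(960, 540)  # what the phone camera sees
      print(decode_position(sampled))     # approximately (960, 540)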