Responsive Environments
How sensor networks augment and mediate human experience, interaction, and perception.
The Responsive Environments group explores how sensor networks augment and mediate human experience, interaction, and perception, while developing new sensing modalities and enabling technologies that create new forms of interactive experience and expression. Our current research encompasses the development and application of various types of sensor networks, energy harvesting and power management, and the technical foundations of ubiquitous computing. Our work spans diverse application areas, including automotive systems, smart highways, medical instrumentation, RFID, wearable computing, and interactive media.

Research Projects

  • A Machine Learning Toolbox for Musician Computer Interaction

    Joe Paradiso and Nick Gillian

    The SEC is an extension to the free, open-source program EyesWeb that contains a large number of machine learning and signal processing algorithms specifically designed for real-time pattern and gesture recognition. Each algorithm in the SEC is encapsulated as an individual block, and the user connects the output of one block to the input of another to create a signal flow chain. A user can thus quickly build and train a custom gesture recognition system without writing a single line of code or needing to understand how the underlying machine learning algorithms work.

  • Beyond the Light Switch: New Frontiers in Dynamic Lighting

    Matthew Aldrich

    Advances in building technology and sensor networks offer a chance to imagine new forms of personalized and efficient utility control. One such area is lighting control. With the aid of sensor networks, these new control systems not only offer lower energy consumption, but also enable new ways to specify and augment lighting. We believe that dynamic lighting, whether controlled by a single user or shared across an entire office floor, is the frontier of future intelligent and adaptive systems.

  • Chameleon Guitar: Physical Heart in a Virtual Body

    Joe Paradiso and Amit Zoran

    How can traditional values be embedded in a digital object? We explore this concept by implementing a special guitar that combines physical acoustic properties with virtual capabilities. The acoustical values will be embodied by a wooden heart: a replaceable piece of wood that gives each guitar its own unique sound. The acoustic signal created by this wooden heart will be digitally processed to enable flexible sound design.

  • Customizable Sensate Surface for Music Control

    Joe Paradiso, Nan-Wei Gong and Nan Zhao

    We developed a music control surface that can be integrated with any musical instrument via a versatile, customizable, and inexpensive user interface. This sensate surface allows capacitive sensor electrodes and connections between electronic components to be printed onto a large roll of flexible substrate, unrestricted in length. The high-dynamic-range capacitive sensing electrodes can infer not only touch, but also near-range, non-contact gestural nuance in a music performance. With this sensate surface, users can “cut” out desired shapes, “paste” in the desired number of inputs, and customize their controller interfaces, which can then send signals wirelessly to effects or software synthesizers. We seek to integrate the form factor of traditional music controllers seamlessly on top of one’s instrument, while adding expressiveness to performance by sensing movements and gestures and using them to manipulate the musical output.

  • Data-Driven Elevator Music

    Joe Paradiso, Gershon Dublon, Nicholas Joliat, Brian Mayton and Ben Houge (MIT Artist in Residence)

    Our new building lets us see across spaces, extending our visual perception beyond the walls that enclose us. Yet, invisibly, networks of sensors, from HVAC and lighting systems to Twitter and RFID, control our environment and capture our social dynamics. This project proposes extending our senses into this world of information, imagining the building as glass in every sense. Sensor devices distributed throughout the Lab transmit privacy-protected audio streams and real-time measurements of motion, temperature, humidity, and light levels. The data are composed into an eight-channel audio installation in the glass elevator that turns these dynamic parameters into music, while microphone streams are spatialized to simulate their real locations in the building. A pressure sensor in the elevator provides us with fine-grained altitude to control the spatialization and sonification. As visitors move from floor to floor, they hear the activities taking place on each.
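
    The altitude sensing follows from the standard barometric formula. A minimal C++ sketch (the reference pressure, ground-floor calibration, and floor height are illustrative, not the installation's actual values):

    ```cpp
    #include <cmath>

    // International barometric formula: altitude in meters from pressure in hPa.
    double pressureToAltitudeM(double pHPa, double p0HPa = 1013.25) {
        return 44330.0 * (1.0 - std::pow(pHPa / p0HPa, 1.0 / 5.255));
    }

    // Map altitude to a continuous floor position to drive the spatialization.
    double elevatorFloor(double pHPa, double groundAltM, double floorHeightM = 4.0) {
        return (pressureToAltitudeM(pHPa) - groundAltM) / floorHeightM;
    }
    ```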

  • Dense, Low-Power Environmental Monitoring for Smart Energy Profiling

    Nan-Wei Gong, Ashley Turza, David Way and Joe Paradiso with: Phil London, Gary Ware, Brett Leida and Tim Ren (Schneider Electric); Leon Glicksman and Steve Ray (MIT Building Technologies)

    We are working with sponsor Schneider Electric to deploy a dense, low-power wireless sensor network for environmental monitoring and smart energy profiling. This distributed sensor system measures temperature, humidity, and 3D airflow, and transmits this information over the ZigBee wireless protocol. The sensing units are currently deployed in the lower atrium of E14. The data are being used to inform CFD models of airflow in buildings and to evaluate the efficiency of commercial HVAC systems, the energy performance of different building materials, and lighting choices in novel architectural designs.

  • DoppelLab: Experiencing Multimodal Sensor Data

    Joe Paradiso, Gershon Dublon and Brian Dean Mayton

    Homes and offices are being filled with sensor networks to answer specific queries and solve pre-determined problems, but no comprehensive visualization tools exist for fusing these disparate data to examine relationships across spaces and sensing modalities. DoppelLab is a cross-reality virtual environment that represents the multimodal sensor data produced by a building and its inhabitants. Our system encompasses a set of tools for parsing, databasing, visualizing, and sonifying these data; by organizing data by the space from which they originate, DoppelLab provides a platform to make both broad and specific queries about the activities, systems, and relationships in a complex, sensor-rich environment.

  • DoppelLab: Spatialized Sonification in a 3D Virtual Environment

    Joe Paradiso, Nicholas Joliat, Brian Mayton, Gershon Dublon, and Ben Houge (MIT Artist in Residence)

    In DoppelLab, we are developing tools that intuitively and scalably represent the rich, multimodal sensor data produced by a building and its inhabitants. Our aims transcend the traditional graphical display, in terms of the richness of data conveyed and the immersiveness of the user experience. To this end, we have incorporated 3D spatialized data sonification into the DoppelLab application, as well as in standalone installations. Currently, we virtually spatialize streams of audio recorded by nodes throughout the physical space. By reversing and shuffling short audio segments, we distill the sound to its ambient essence while protecting occupant privacy. In addition to the sampled audio, our work includes abstract data sonification that conveys multimodal sensor data.
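
    A minimal C++ sketch of the reverse-and-shuffle treatment described above (the 250 ms grain length is an assumption, not the deployed value):

    ```cpp
    #include <algorithm>
    #include <random>
    #include <vector>

    std::vector<float> obfuscate(const std::vector<float>& audio, int sampleRate) {
        const std::size_t grain = sampleRate / 4;          // ~250 ms grains
        const std::size_t numGrains = audio.size() / grain;

        std::vector<std::vector<float>> grains(numGrains);
        for (std::size_t i = 0; i < numGrains; ++i) {
            grains[i].assign(audio.begin() + i * grain, audio.begin() + (i + 1) * grain);
            std::reverse(grains[i].begin(), grains[i].end());  // reverse each grain
        }
        std::shuffle(grains.begin(), grains.end(), std::mt19937{std::random_device{}()});

        std::vector<float> out;                            // reassemble the stream
        out.reserve(numGrains * grain);
        for (const auto& g : grains) out.insert(out.end(), g.begin(), g.end());
        return out;
    }
    ```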

  • Expressive Re-Performance

    Joe Paradiso, Nick Gillian and Laurel Smith Pardue

    Expressive musical re-performance is about enabling a person to experience the creative aspects of playing a favorite song regardless of technical expertise. This is done by providing users with computer-linked electronic instruments that distill the instrument's interface but still allow expressive gesture. The next note in an audio source is triggered on the instrument, with the computer providing correctly pitched audio and mapping the expressive content onto it. Thus, the physicality of the instrument remains, but far less technique is required. We are implementing an expressive re-performance system using commercially available, expressive electronic musical instruments and an actual recording as the basis for deriving audio. Performers will be able to select a voice within the recording and re-perform the song, with the targeted line subject to their own creative and expressive impulses.

  • Feedback Controlled Solid State Lighting

    Joe Paradiso and Matt Aldrich

    At present, luminous efficacy and cost remain the greatest barriers to broad adoption of LED lighting. However, it is anticipated that within several years, these challenges will be overcome. While we may think our basic lighting needs have been met, this technology offers many more opportunities than just energy efficiency: this research attempts to alter our expectations for lighting and cast aside our assumptions about control and performance. We will introduce new, low-cost sensing modalities that are attuned to human factors such as user context, circadian rhythms, or productivity, and integrate these data with atypical environmental factors to move beyond traditional lux measurements. To research and study these themes, we are focusing on the development of superior color-rendering systems, new power topologies for LED control, and low-cost multimodal sensor networks to monitor the lighting network as well as the environment.
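
    As one illustration of closed-loop lighting, a simple PI controller can servo an LED channel toward a target illuminance from a lux sensor. This is a generic sketch, not the group's actual controller; the gains are placeholders to be tuned per fixture:

    ```cpp
    #include <algorithm>

    struct PiDimmer {
        double kp = 0.001, ki = 0.0005;  // illustrative gains
        double integral = 0.0;

        // Call at a fixed rate with the setpoint and the sensor reading.
        // Returns a PWM duty cycle in 0..1 (anti-windup omitted for brevity).
        double step(double targetLux, double measuredLux, double dt) {
            const double error = targetLux - measuredLux;
            integral += error * dt;
            return std::clamp(kp * error + ki * integral, 0.0, 1.0);
        }
    };
    ```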

  • FreeD

    Joe Paradiso and Amit Zoran

    The FreeD is a hand-held, digitally controlled milling device that is guided and monitored by a computer while still preserving the craftsperson's freedom to sculpt and carve. The computer intervenes only when the milling bit approaches the planned model, either by slowing the spindle speed or by drawing back the shaft; the rest of the time it allows complete freedom, letting the user manipulate and shape the work in any creative way.
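
    A sketch of this intervention logic; the distance query, spindle, and retraction calls below are hypothetical stand-ins for the FreeD's actual tracking and firmware interface:

    ```cpp
    #include <cstdio>

    const double kSlowdownMm = 3.0;      // easing margin near the model (assumed)
    const double kFullRpm    = 20000.0;  // free-carving spindle speed (assumed)

    // Stubs standing in for tool tracking and actuation.
    double signedDistanceToModelMm() { return 5.0; }  // >0 while outside the model
    void   setSpindleRpm(double rpm) { std::printf("rpm %.0f\n", rpm); }
    void   retractShaft()            { std::printf("retract\n"); }

    void controlStep() {
        const double d = signedDistanceToModelMm();
        if (d <= 0.0) {
            retractShaft();                             // bit reached the surface
        } else if (d < kSlowdownMm) {
            setSpindleRpm(kFullRpm * d / kSlowdownMm);  // ease off near the model
        } else {
            setSpindleRpm(kFullRpm);                    // complete freedom
        }
    }
    ```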

  • Gesture Recognition Toolkit

    Joe Paradiso and Nick Gillian

    The Gesture Recognition Toolkit (GRT) is a cross-platform, open-source C++ machine-learning library that has been specifically designed for real-time gesture recognition. The GRT was created as a general-purpose tool that lets programmers with little or no machine-learning experience develop their own recognition systems in just a few lines of code, while also enabling machine-learning experts to precisely customize their own recognition systems and easily incorporate their own algorithms within the GRT framework. In addition to helping developers quickly create gesture-recognition systems, the machine-learning algorithms at the core of the GRT are designed to be rapidly trained with just a few examples of each gesture. The GRT therefore allows a more diverse group of users to easily integrate gesture recognition into their own projects.
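
    A minimal example of this few-lines-of-code workflow, based on the GRT's documented pipeline API (class and method names may differ across versions):

    ```cpp
    #include <GRT/GRT.h>
    using namespace GRT;

    int main() {
        // A few labeled examples per gesture are enough to train the pipeline.
        ClassificationData trainingData;
        trainingData.setNumDimensions(3);      // e.g., a 3-axis accelerometer
        VectorFloat sample(3, 0.0);            // would be filled from the sensor
        trainingData.addSample(1, sample);     // label 1 = first gesture; repeat

        GestureRecognitionPipeline pipeline;
        pipeline.setClassifier(ANBC());        // adaptive naive Bayes classifier
        if (!pipeline.train(trainingData)) return 1;

        // At runtime, classify each incoming sensor frame.
        if (pipeline.predict(sample)) {
            UINT gesture = pipeline.getPredictedClassLabel();
            (void)gesture;                     // route to the application
        }
        return 0;
    }
    ```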

  • Gestures Everywhere

    Joseph A. Paradiso and Nicholas Gillian

    Gestures Everywhere is a multimodal framework for supporting ubiquitous computing. Our framework aggregates real-time data from a wide range of heterogeneous sensors and provides an abstraction layer through which other ubiquitous applications can request information about an environment or a specific individual. The framework supports both low-level spatio-temporal properties, such as presence, count, orientation, location, and identity, and higher-level descriptors, including movement classification, social clustering, and gesture recognition.

  • Grassroots Mobile Power

    Joe Paradiso, Ethan Zuckerman, Pragun Goyal and Nathan Matias

    We want to help people in nations where electric power is scarce to sell power to their neighbors. We’re designing a piece of prototype hardware that plugs into a diesel generator or other power source, distributes the power to multiple outlets, monitors how much power is used, and uses mobile payments to charge the customer for the power consumed.
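
    A minimal sketch of the per-outlet metering and billing logic (the interface, tariff handling, and payment hook are assumptions, not the project's actual design):

    ```cpp
    struct OutletMeter {
        double energyKWh = 0.0;

        // Called periodically with instantaneous voltage (V) and current (A).
        void sample(double volts, double amps, double dtSeconds) {
            energyKWh += volts * amps * dtSeconds / 3.6e6;  // watt-seconds -> kWh
        }

        // Settle the session; the returned amount would go to a mobile-money charge.
        double settle(double tariffPerKWh) {
            const double owed = energyKWh * tariffPerKWh;
            energyKWh = 0.0;
            return owed;
        }
    };
    ```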

  • Hackable, High-Bandwidth Sensory Augmentation

    Joe Paradiso and Gershon Dublon

    The tongue is known to have an extremely dense sensing resolution, as well as an extraordinary degree of neuroplasticity, the ability to adapt to and internalize new input. Research has shown that electro-tactile tongue displays paired with cameras can be used as vision prosthetics for the blind or visually impaired; users quickly learn to read and navigate through natural environments, and many describe the signals as an innate sense. However, existing displays are expensive and difficult to adapt. Tongueduino is an inexpensive, vinyl-cut tongue display designed to interface with many types of sensors besides cameras. Connected to a magnetometer, for example, the system provides a user with an internal sense of direction, like a migratory bird. Plugged into weighted piezo whiskers, a user can sense orientation, wind, and the lightest touch. Through tongueduino, we hope to bring electro-tactile sensory substitution beyond vision replacement, towards open-ended sensory augmentation.
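
    As an illustration of a non-camera mapping like the compass example, a magnetometer heading can be quantized to an electrode index; the 16-electrode ring layout here is an assumption:

    ```cpp
    #include <cmath>

    const int    kNumElectrodes = 16;               // assumed ring layout
    const double kTwoPi         = 6.28318530717959;

    // Activate the electrode that currently faces magnetic north.
    int headingToElectrode(double mx, double my) {
        double heading = std::atan2(my, mx);        // radians, -pi..pi
        if (heading < 0) heading += kTwoPi;         // wrap to 0..2*pi
        return static_cast<int>(heading / kTwoPi * kNumElectrodes) % kNumElectrodes;
    }
    ```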

  • Living Observatory Installation: A Transductive Encounter with Ecology

    Joseph A. Paradiso, Brian Mayton and Gershon Dublon

    In the Living Observatory installation at the Other Festival, we invite participants into a transductive encounter with a wetland environment in flux. Our installation brings sights, smells, sounds, and a bit of mud from a peat bog undergoing restoration near Plymouth, MA to the MIT Media Lab. As part of the Living Observatory initiative, we are developing sensor networks that document ecological processes and allow people to experience the data at different spatial and temporal scales. Small, distributed sensor devices capture climate and other environmental data, while others stream audio from high in the trees and underwater. Visit at any time from dawn till dusk and again after midnight, and check the weather report on our website (tidmarsh.media.mit.edu) for highlights; if you’re lucky you might just catch an April storm.

  • Living Observatory: Arboreal Telepresence

    Joseph A. Paradiso, V. Michael Bove, Edwina Portocarrero, Gershon Dublon, Brian Mayton and Glorianna Davenport

    Extending the Living Observatory installation, we have instrumented the roots of several trees outside of E15 with vibratory transducers that excite the trees with live streaming sound from a forest near Plymouth, MA. Walking through the trees just outside the Lab, you won't notice anything, but press your ear up against one of them and you'll feel vibrations and hear sound from a tree 60 miles away. Visit at any time from dawn till dusk and again after midnight; if you’re lucky you might just catch an April storm, a flock of birds, or an army of frogs.

  • Living Observatory: Sensor Networks for Documenting and Experiencing Ecology

    Glorianna Davenport, Joe Paradiso, Gershon Dublon, Pragun Goyal and Brian Dean Mayton

    Living Observatory is an initiative for documenting and interpreting ecological change that will allow people, individually and collectively, to better understand relationships between ecological processes, human lifestyle choices, and climate change adaptation. As part of this initiative, we are developing sensor networks that document ecological processes and allow people to experience the data at different spatial and temporal scales. Low-power sensor nodes capture climate and other data at high spatiotemporal resolution, while others stream audio. Sensors on trees measure transpiration and other cycles, while fiber-optic cables in streams capture high-resolution temperature data. At the same time, we are developing tools that allow people to explore this data, both remotely and onsite. The remote interface allows for immersive 3D exploration of the terrain, while visitors to the site will be able to access data from the network around them directly from wearable devices.

  • PrintSense: A Versatile Sensing Technique to Support Flexible Surface Interaction

    Joseph A. Paradiso and Nan-wei Gong

    Touch sensing has become established for a range of devices and systems, both commercially and in academia. In particular, multi-touch scenarios based on flexible sensing substrates are popular for products and research projects. We leverage recent developments in single-layer, off-the-shelf, inkjet-printed conductors on flexible substrates as a practical way to prototype the necessary electrode patterns, and combine this with our custom-designed PrintSense hardware module, which supports a full range of capacitive sensing techniques. Beyond touch detection, the system can in many scenarios also sense pressure, flexing, and close-proximity gestures.
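
    One common technique such a module can use is loading-mode (charge-time) capacitive sensing: the time to charge an electrode through a large resistor rises with nearby capacitance. A sketch with illustrative thresholds (the module's actual methods may differ):

    ```cpp
    #include <cstdint>

    enum class Proximity { None, Near, Touch };

    // Stub for the platform-specific measurement: timer counts needed to charge
    // the electrode; grows as a finger adds capacitance.
    uint32_t chargeTimeCounts(int electrode) { (void)electrode; return 0; }

    Proximity classify(int electrode, uint32_t baseline) {
        const uint32_t t = chargeTimeCounts(electrode);
        if (t > baseline + 400) return Proximity::Touch;  // large shift: contact
        if (t > baseline + 60)  return Proximity::Near;   // small shift: hover
        return Proximity::None;
    }
    ```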

  • Prosthetic Sensor Networks: Factoring Attention, Proprioception, and Sensory Coding

    Gershon Dublon

    Sensor networks permeate our built and natural environments, but our means for interfacing to the resultant data streams have not evolved much beyond HCI and information visualization. Researchers have long experimented with wearable sensors and actuators on the body as assistive devices. A user’s neuroplasticity can, under certain conditions, transcend sensory substitution to enable perceptual-level cognition of “extrasensory” stimuli delivered through existing sensory channels. Still, there remains a huge gap between data and human sensory experience. We are exploring the space between sensor networks and human augmentation, in which distributed sensors become sensory prostheses; conventional user interfaces, in contrast, remain largely unincorporated by the body, and our relationship to them is never fully pre-attentive. Attention and proprioception are key, not only to moderate and direct stimuli, but also to enable users to move through the world naturally, attending to the sensory modalities relevant to their specific contexts.

  • Rapidnition: Rapid User-Customizable Gesture Recognition

    Joe Paradiso and Nick Gillian

    Rapidnition is a new way of thinking about gesturally controlled interfaces. Rather than forcing users to adapt their behavior to a predefined gestural interface, Rapidnition frees users to define their own gestures, which the system rapidly learns. The machine learning algorithms at the core of Rapidnition enable it to quickly infer a user’s gestural vocabulary, using a small number of user-demonstrated examples of each gesture. Rapidnition is capable of recognizing not just static postures but also dynamic temporal gestures. In addition, Rapidnition allows the user to define complex, nonlinear, continuous-mapping spaces. Rapidnition is currently being applied to the real-time recognition of musical gestures to rigorously test both the discrete and continuous recognition abilities of the system.

  • RElight: Exploring Pointing and Other Gestures for Appliance Control

    Joseph A. Paradiso, Brian Mayton, Nan Zhao and Nicholas Gillian

    Increasing numbers of networked appliances are bringing about new opportunities for control and automation. At the same time, an increase in multifunctional appliances is creating a complex and often frustrating environment for the end-user. Motivated by these opportunities and challenges, we are exploring the potential for sensor fusion to increase usability and improve user experience while retaining the user in the control loop. We have developed a novel, camera-less, multi-sensor solution for intuitive gesture-based indoor lighting control, called RElight. Using a wireless handheld device, the user simply points at a light fixture to select it and rotates their hand to continuously adjust the dimming level. Pointing is a universal gesture that communicates one’s interest in or attention to an object. Advanced machine learning algorithms allow rapid training of gestures, as well as continuous control that supplements gesture classification.
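
    A sketch of the continuous dimming gesture: estimate the hand's roll from the handheld device's accelerometer and map it to a dim level (axis conventions and the +/-90 degree range are assumptions):

    ```cpp
    #include <algorithm>
    #include <cmath>

    const double kPi = 3.14159265358979;

    // ay, az: accelerometer components perpendicular to the pointing axis.
    double rollToDimLevel(double ay, double az) {
        const double roll = std::atan2(ay, az);                // radians
        return std::clamp((roll + kPi / 2.0) / kPi, 0.0, 1.0); // -90..+90 deg -> 0..1
    }
    ```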

  • Scalable and Versatile Surface for Ubiquitous Sensing

    Joe Paradiso, Nan-Wei Gong and Steve Hodges (Microsoft Research Cambridge)

    We demonstrate the design and implementation of a new versatile, scalable, and cost-effective sensate surface. The system is based on a new conductive inkjet technology, which allows capacitive sensor electrodes and different types of RF antennas to be cheaply printed onto a roll of flexible substrate that may be many meters long. By deploying this surface on (or under) a floor, it is possible to detect the presence and whereabouts of users through both passive and active capacitive coupling schemes. We have also incorporated GSM and NFC electromagnetic radiation sensing and piezoelectric pressure and vibration detection. We believe that this technology has the potential to change the way we think about covering large areas with sensors and associated electronic circuitry: not just floors, but potentially desktops, walls, and beyond.

  • SEAT-E: Solar Power for People - Big Data for Cities

    Kent Larson, Joseph A. Paradiso, Sandra Richter, Nan Zhao and Ines Gaisset

    The SEAT-E provides free access to renewable energy to charge smart phones and small electronic devices in cities. This brings cities one step closer to fulfilling a key UN goal: sustainable energy access for all. The seats are off-grid and entirely autonomous. Fully integrated solar panels store energy in Li-ion batteries, which can be accessed through weatherproof USB ports. The batteries also power lighting and sensing. Each seat has an ID and forms part of the SEAT-E network. The seats gather location-based data on air quality. Cities typically measure air quality at only one or two locations; however, levels vary significantly depending on traffic and other factors, and as a result both policymakers and citizens are often uninformed. Public engagement with this sensor data has the potential to create a platform for real dialogue between cities and their citizens about the air we share.

  • Sensor Fusion for Gesture Analyses of Baseball Pitch

    Joseph A. Paradiso and Carolina Brum Medeiros

    Current sports-medicine practices for understanding the motion of athletes while engaged in their sport of choice are limited to camera-based marker-tracking systems that generally lack the fidelity and sampling rates necessary to make medically usable measurements; they also typically require a structured, stable "studio" environment and considerable time to set up and calibrate. Our system instead provides data for understanding the forces and torques that an athlete's joints and body segments undergo during activity, and allows precise biomechanical modeling of the athlete's motion. Sensor fusion techniques are essential for optimal extraction of kinetic and kinematic information, and they also provide an alternative measurement method that can be used in out-of-lab scenarios.
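
    As a minimal illustration of inertial fusion in this spirit (a complementary filter rather than the project's actual estimator), gyroscope integration is blended with a drift-free accelerometer tilt estimate for one joint angle:

    ```cpp
    struct ComplementaryFilter {
        double angle = 0.0;   // radians
        double alpha = 0.98;  // trust in the integrated gyro each step (illustrative)

        // gyroRate: rad/s from the gyroscope; accelAngle: tilt from the accelerometer.
        double update(double gyroRate, double accelAngle, double dt) {
            angle = alpha * (angle + gyroRate * dt) + (1.0 - alpha) * accelAngle;
            return angle;
        }
    };
    ```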

  • Sticky Circuits

    Joseph A. Paradiso, Leah Buechley, Jie Qi and Nan-wei Gong

    Sticky Circuits is a toolkit for creating electronics using circuit board stickers. Circuit stickers are created by printing traces on flexible substrates and adding conductive adhesive. These lightweight, flexible, and sticky circuit boards allow us to begin sticking interactivity onto new spaces and interfaces such as clothing, instruments, buildings, and even our bodies.

  • TRUSS: Tracking Risk with Ubiquitous Smart Sensing

    Joe Paradiso, Gershon Dublon and Brian Dean Mayton

    We are developing a system for inferring safety context on construction sites by fusing data from wearable devices, distributed sensing infrastructure, and video. Wearable sensors stream real-time levels of dangerous gases, dust, noise, light quality, precise altitude, and motion to base stations that synchronize the mobile devices, monitor the environment, and capture video. Context mined from these data is used to highlight salient elements in the video stream for monitoring and decision support in a control room. We tested our system in an initial user study on a construction site, instrumenting a small number of steel workers and collecting data. A recently completed hardware revision will be followed by further user testing and interface development.

  • Virtual Messenger

    Joe Paradiso and Nick Gillian

    The virtual messenger system acts as a portal to subtly communicate messages and pass information between the digital, virtual, and physical worlds, using the Media Lab’s Glass Infrastructure system. Users who opt into the system are tracked throughout the Media Lab by a multimodal sensor network. If a participating user approaches any of the Lab’s Glass Infrastructure displays, they are met by their virtual personal assistant (VPA), who exists in DoppelLab’s virtual representation of the current physical space. Each VPA acts as a mediator to pass on any messages or important information from the digital world to the user in the physical world. Participating users can interact with their VPA through a small set of hand gestures, allowing the user to read any pending messages or notices, or to inform their virtual avatar not to bother them until later.

  • Wearable, Wireless Sensor System for Sports Medicine and Interactive Media

    Joe Paradiso, Michael Thomas Lapinski, Dr. Eric Berkson and MGH Sports Medicine

    This project is a system of compact, wearable, wireless sensor nodes, equipped with full six-degree-of-freedom inertial measurement units and node-to-node capacitive proximity sensing. A high-bandwidth, channel-shared RF protocol has been developed to acquire data from many (e.g., 25) of these sensors at 100 Hz full-state update rates, and software is being developed to fuse this data into a compact set of descriptive parameters in real time. A base station and central computer clock the network and process received data. We aim to capture and analyze the physical movements of multiple people in real time, using unobtrusive sensors worn on the body. Applications abound in biomotion analysis, sports medicine, health monitoring, interactive exercise, immersive gaming, and interactive dance ensemble performance.
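
    A back-of-the-envelope check of the shared-channel bandwidth these numbers imply, assuming 16-bit samples and a small per-packet header (both assumptions):

    ```cpp
    #include <cstdio>

    int main() {
        const int nodes = 25, rateHz = 100;
        const int channels       = 6;   // 3-axis accelerometer + 3-axis gyroscope
        const int bytesPerSample = 2;   // assumed 16-bit samples
        const int headerBytes    = 4;   // assumed per-packet overhead
        const int packetBytes    = channels * bytesPerSample + headerBytes;

        const long bps = 8L * packetBytes * rateHz * nodes;
        std::printf("aggregate: %ld kbps\n", bps / 1000);  // ~320 kbps
        return 0;
    }
    ```
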
  • WristQue: A Personal Wristband for Sensing and Smart Infrastructure

    Joe Paradiso and Brian Mayton

    While many wearable sensors have been developed, few are actually worn by people on a regular basis. WristQue is a wristband sensor that is comfortable and customizable, to encourage widespread adoption. The hardware is 3D printable, giving users a choice of materials and colors. Internally, the wristband will include a main board with a microprocessor, standard sensors, and localization/wireless communication, plus an expansion board that can be swapped to customize the device for a wide variety of applications. Environmental sensors (temperature, humidity, light) combined with fine-grained indoor localization will enable smarter building infrastructure, allowing HVAC and lighting systems to optimize for the locations and ways in which people are actually using the space. Users' preferences can be input through buttons on the wristband. Fine-grained localization also opens up possibilities for larger applications, such as visualizing building usage through DoppelLab and smart displays that react to users' presence.