Opera of the Future
Extending expression, learning, and health through innovations in musical composition, performance, and participation.
The Opera of the Future group (also known as Hyperinstruments) explores concepts and techniques to help advance the future of musical composition, performance, learning, and expression. Through the design of new interfaces for both professional virtuosi and amateur music-lovers, the development of new techniques for interpreting and mapping expressive gesture, and the application of these technologies to innovative compositions and experiences, we seek to enhance music as a performance art, and to develop its transformative power as counterpoint to our everyday lives. The scope of our research includes musical instrument design, concepts for new performance spaces, interactive touring and permanent installations, and "music toys." It ranges from extensions of traditional forms to radical departures, such as the Brain Opera, Toy Symphony, and Death and the Powers.

Research Projects

  • Ambisonic Surround-Sound Audio Compression

    Tod Machover and Charles Holbrow

    Traditional music production and studio engineering depend on dynamic range compression—audio signal processors that precisely and dynamically control the gain of an audio signal in the time domain. This project expands on the traditional dynamic range compression model by adding a spatial dimension. Ambisonic Compression allows audio engineers to dynamically control the spatial properties of a three-dimensional sound field, opening new possibilities for surround-sound design and spatial music performance.
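
    For context, here is a minimal sketch of the conventional dynamic range compression model that the project extends; the spatial (ambisonic) extension itself is not shown, and the threshold and ratio values are arbitrary illustration choices.

    ```python
    # Minimal static dynamic range compressor: gain control in the time domain.
    # Threshold and ratio are arbitrary values for illustration; a production
    # compressor would also smooth the gain with attack/release envelopes.
    import numpy as np

    def compress(signal: np.ndarray, threshold_db: float = -20.0, ratio: float = 4.0) -> np.ndarray:
        eps = 1e-12
        level_db = 20.0 * np.log10(np.abs(signal) + eps)     # per-sample level in dB
        over_db = np.maximum(level_db - threshold_db, 0.0)   # amount above threshold
        gain_db = -over_db * (1.0 - 1.0 / ratio)             # reduce only the excess
        return signal * 10.0 ** (gain_db / 20.0)
    ```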

  • Breathing Window

    Tod Machover and Rebecca Kleinberger

    Breathing Window is a tool for non-verbal dialogue that reflects your own breathing back to you while also offering a window onto another person's respiration. This prototype is an example of shared human experiences (SHEs) crafted to improve the quality of human understanding and interactions. Our work on SHEs focuses on first encounters with strangers. We meet strangers every day, and without prior background knowledge of the individual we often form opinions based on prejudices and differences. In this work, we bring respiration to the foreground as an experience common to all living creatures.

  • City Symphonies: Massive Musical Collaboration

    Tod Machover, Akito Van Troyer, Benjamin Bloomberg, Charles Holbrow, David Nunez, Simone Ovsey, Sarah Platte, Bryn Bliska, Rébecca Kleinberger, Peter Alexander Torpey and Garrett Parrish

    Until now, the impact of crowdsourced and interactive music projects has been limited: the public contributes a small part of the final result, and is often disconnected from the artist leading the project. We believe that a new musical ecology is needed for true creative collaboration between experts and amateurs. Toward this goal, we are creating "city symphonies," each collaboratively composed with the population of an entire city. We designed the infrastructure needed to bring together an unprecedented number of people, including a variety of web-based music composition applications, a social media framework, and real-world community-building activities. We have premiered city symphonies in Toronto, Edinburgh, Perth, and Lucerne. With support from the Knight Foundation, our first city symphony for the US, A Symphony for Detroit, premiered in fall 2015. We are also working on scaling this process by mentoring independent groups, beginning with Akron, Ohio.

  • Death and the Powers: Global Interactive Simulcast

    Tod Machover, Peter Torpey, Ben Bloomberg, Elena Jessop, Charles Holbrow, Simone Ovsey, Garrett Parrish, Justin Martinez, and Kevin Nattinger

    The live global interactive simulcast of the final February 2014 performance of "Death and the Powers" in Dallas made innovative use of satellite broadcast and Internet technologies to expand the boundaries of second-screen experience and interactivity during a live remote performance. In the opera, Simon Powers uploads his mind, memories, and emotions into The System, represented onstage through reactive robotic, visual, and sonic elements. Remote audiences, via simulcast, were treated as part of The System alongside Powers and the operabots. Audiences had an omniscient view of the action of the opera, as presented through the augmented, multi-camera video and surround sound. Multimedia content delivered to mobile devices through the Powers Live app gave remote audiences privileged perspectives from within The System. Mobile devices also allowed audiences to influence The System by affecting the illumination of the Winspear Opera House's Moody Foundation Chandelier.

  • Death and the Powers: Redefining Opera

    Tod Machover, Ben Bloomberg, Peter Torpey, Elena Jessop, Bob Hsiung, Akito van Troyer

    "Death and the Powers" is a groundbreaking opera that brings a variety of technological, conceptual, and aesthetic innovations to the theatrical world. Created by Tod Machover (composer), Diane Paulus (director), and Alex McDowell (production designer), the opera uses the techniques of tomorrow to address age-old human concerns of life and legacy. The unique performance environment, including autonomous robots, expressive scenery, new Hyperinstruments, and human actors, blurs the line between animate and inanimate. The opera premiered in Monte Carlo in fall 2010, with additional performances in Boston and Chicago in 2011 and a new production with a global, interactive simulcast in Dallas in February 2014. The DVD of the Dallas performance of Powers was released in April 2015.

  • Empathy and the Future of Experience

    Tod Machover, Akito Van Troyer, Benjamin Bloomberg, Bryn Bliska, Charles Holbrow, David Nunez, Rébecca Kleinberger, Simone Ovsey, Sarah Platte, Peter Torpey, Kelly Donovan, Meejin Yoon and the Empathy and Experience class

    Nothing is more important in today's troubled world than the process of eliminating prejudice and misunderstanding, and replacing them with communication and empathy. We explore the possibility of creating public experiences to dramatically increase individual and community awareness of the power of empathy on an unprecedented scale. We draw on numerous precedents from the Opera of the Future group that have proposed concepts and technologies to inspire and intensify human connectedness (such as Sleep No More, Death and the Powers, Vocal Vibrations, City Symphonies, and Hyperinstruments) and from worldwide instances of transformative shared human experience (such as the Overview Effect, Human Libraries, Immersive Theatre, and non-sectarian spiritual traditions). The objective is to create a model of a multisensory, participatory, spatially radical installation that will break down barriers between people of immensely different backgrounds, providing instantaneous understanding of—as well as long-term commitment to—empathic communication.

  • Fablur

    Tod Machover, Rebecca Kleinberger and Alisha Panjwani

    Fablur explores the limits of the self in its relationship to others through the medium of clothing. The augmented gown uses a rear dome projection system on the surface of the fabric. The system comprises laser projectors and mirror structures that communicate wirelessly with a computer containing both the content and the warp projection-mapping software. This novel technological interface offers both a performative element and seamless integration into a woman's life experience. This wearable project questions the boundary between the self and others, between the individual and society, and between the body and nature.

  • Fensadense

    Tod Machover, Ben Bloomberg, Peter Torpey, Garrett Parrish, Kevin King

    Fensadense is a new work for 10-piece ensemble composed by Tod Machover, commissioned for the Lucerne Festival in summer 2015. The project represents the next generation of hyperinstruments, measuring relationships among many performers where previous systems tracked only a single player. Off-the-shelf components were used to collect data about the movement and muscle tension of each musician. These data were analyzed with the Hyperproduction platform to create meaningful production control for lighting and sound systems based on the interplay of the performers, with a focus on qualities such as the momentum, connection, and tension of the ensemble as a whole. The project premiered at the Lucerne Festival, and a European tour is scheduled for spring 2016.
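
    The sketch below shows one way ensemble-level qualities of this kind could be computed from per-performer sensor streams; the feature definitions are assumptions for illustration, not the actual Fensadense mapping.

    ```python
    # Assumed sketch: turn per-performer motion/muscle-tension windows into
    # ensemble-level features that could drive lighting and sound control.
    import numpy as np

    def ensemble_features(sensor_frames: np.ndarray) -> dict:
        """sensor_frames: array of shape (n_performers, n_samples) for one time window."""
        per_player_energy = np.mean(np.abs(sensor_frames), axis=1)
        momentum = float(per_player_energy.mean())    # overall ensemble activity
        tension = float(per_player_energy.max())      # peak individual effort
        # "connection": how similarly the players move, via mean pairwise correlation
        corr = np.corrcoef(sensor_frames)
        n = corr.shape[0]
        connection = float((corr.sum() - n) / (n * (n - 1)))
        return {"momentum": momentum, "tension": tension, "connection": connection}
    ```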

  • Hyperinstruments

    Tod Machover

    The Hyperinstruments project creates expanded musical instruments, using technology to give extra power and finesse to virtuosic performers. Hyperinstruments were designed to augment a wide range of traditional musical instruments and have been used by some of the world's foremost performers (Yo-Yo Ma, the Los Angeles Philharmonic, Peter Gabriel, and Penn & Teller). Research focuses on designing computer systems that measure and interpret human expression and feeling, exploring appropriate modalities and content for interactive art and entertainment environments, and building sophisticated interactive musical instruments for non-professional musicians, students, music lovers, and the general public. Recent projects include the production of a new version of the "classic" Hyperstring Trilogy for the Lucerne Festival, and the design of a new generation of Hyperinstruments, for Fensadense and other projects, that emphasizes the measurement and interpretation of inter-player expression and communication rather than simply the enhancement of solo performance.

  • Hyperproduction: Advanced Production Systems

    Tod Machover and Benjamin Bloomberg

    Hyperproduction is a conceptual framework and software toolkit that allows producers to specify a descriptive computational model, and consequently an abstract state, for a live experience through traditional operating paradigms such as mixing audio or operating lighting, sound, and video systems. The hyperproduction system interprets this universal state and automatically drives additional production systems, allowing a small number of producers to cohesively guide the attention and perspective of an audience even when many or very complex production systems run simultaneously. The toolkit is under active development; it has been used for new pieces such as Fensadense, and to recreate older systems such as those for the original Hyperstring Trilogy as part of the Lucerne Festival in 2015. Work continues to enable new structures and abstractions within the framework.
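
    As a conceptual illustration only, the sketch below shows the core idea of a single abstract show state driving several production subsystems at once; the class and function names are invented for illustration and are not the Hyperproduction API.

    ```python
    # Conceptual sketch: one operator edits an abstract show state,
    # and every subsystem (lighting, sound, ...) follows automatically.
    from dataclasses import dataclass

    @dataclass
    class ShowState:
        intensity: float   # 0..1, dramatic intensity of the moment
        focus: str         # which performer or area holds the audience's attention

    def update_lighting(state: ShowState) -> None:
        print(f"lighting: level {state.intensity:.2f}, spot on {state.focus}")

    def update_sound(state: ShowState) -> None:
        print(f"sound: emphasize {state.focus}, reverb {1.0 - state.intensity:.2f}")

    def render(state: ShowState) -> None:
        # Each registered subsystem interprets the same shared state.
        for subsystem in (update_lighting, update_sound):
            subsystem(state)

    render(ShowState(intensity=0.8, focus="Simon Powers"))
    ```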

  • Hyperscore

    Tod Machover

    Hyperscore is an application to introduce children and non-musicians to musical composition and creativity in an intuitive and dynamic way. The "narrative" of a composition is expressed as a line-gesture, and the texture and shape of this line are analyzed to derive a pattern of tension-release, simplicity-complexity, and variable harmonization. The child creates or selects individual musical fragments in the form of chords or melodic motives, and layers them onto the narrative-line with expressive brushstrokes. The Hyperscore system automatically realizes a full composition from a graphical representation. Currently, Hyperscore uses a mouse-based interface; the final version will support freehand drawing, and integration with the Music Shapers and Beatbugs to provide a rich array of tactile tools for manipulation of the graphical score.
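
    The sketch below illustrates the general idea of deriving a tension curve from a drawn line and mapping it to harmony; it is a simplified stand-in for illustration, not Hyperscore's actual analysis.

    ```python
    # Illustrative sketch: map a drawn narrative line to a tension curve,
    # then pick chords from a hypothetical palette ordered relaxed -> tense.
    import numpy as np

    def tension_from_line(points):
        """points: list of (x, y) samples of the drawn gesture."""
        y = np.array([p[1] for p in points], dtype=float)
        slope = np.abs(np.gradient(y))          # steeper gesture -> more tension
        return (slope - slope.min()) / (slope.max() - slope.min() + 1e-9)

    CHORDS = ["C", "Am", "F", "G7", "Bdim"]     # hypothetical chord palette

    def harmonize(points):
        tension = tension_from_line(points)
        return [CHORDS[int(t * (len(CHORDS) - 1))] for t in tension]

    print(harmonize([(0, 0), (1, 2), (2, 5), (3, 5), (4, 1)]))
    ```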
  • Maestro Myth: Exploring the Impact of Conducting Gestures on Musician's Body and Sounding Result

    Tod Machover and Sarah Platte

    Expert or fraud, the powerful figure in front of an orchestra or choir attracts both hate and admiration. But what actual influence does a conductor have on the musicians and the sounding result? To shed light on the fundamental principles of this special gestural language, we investigate whether there is a direct correlation between the conductor's gestures and muscle tension and the physically measurable reactions of musicians in onset precision, muscle tension, and sound quality. We also measure whether the mere form of these gestures causes different levels of stress or arousal. With this research we aim not only to contribute to the development of a theoretical framework on conducting, but also to enable a precise mapping of gestural parameters in order to develop and demonstrate a new system for enhancing musical learning, performance, and expression.
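
    As a sketch of the kind of analysis involved, the example below correlates a hypothetical gesture feature with onset-timing deviation across excerpts; the numbers and feature names are made up for illustration and do not come from the study.

    ```python
    # Assumed analysis sketch: Pearson correlation between a conducting-gesture
    # feature and the musicians' onset-timing deviation, per excerpt.
    import numpy as np

    gesture_amplitude = np.array([0.4, 0.9, 0.6, 1.2, 0.3])      # hypothetical, arbitrary units
    onset_deviation_ms = np.array([35.0, 18.0, 27.0, 12.0, 41.0])  # hypothetical measurements

    r = np.corrcoef(gesture_amplitude, onset_deviation_ms)[0, 1]
    print(f"Pearson r between gesture amplitude and onset deviation: {r:.2f}")
    ```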

  • Media Scores

    Tod Machover and Peter Torpey

    Media Scores extends the concept of a musical score to other modalities, facilitating the process of authoring and performing multimedia compositions and providing a medium through which to realize a modern-day Gesamtkunstwerk. The web-based Media Scores environment and related show control systems leverage research into multimodal representation and encoding of expressive intent. Using such a tool, the composer will be able to shape an artistic work that may be performed through a variety of media and modalities. Media Scores offer the potential for authoring content using live performance data as well as audience participation and interaction. This paradigm bridges the extremes of the continuum from composition to performance, allowing for improvisation. The Media Score also provides a common point of reference in collaborative productions, as well as the infrastructure for real-time control of technologies used during live performance.

  • Music Visualization via Musical Information Retrieval

    Tod Machover and Thomas Sanchez

    This project studies human perception of music in relation to different kinds of video graphics, exploring real-time, automatic synchronization between audio and image with the aim of making the relationship between the two feel tighter and more consistent. The connection is made with audio signal processing techniques that automatically extract data from the music, which are then mapped onto visual objects. The visual elements are driven by data obtained from various Musical Information Retrieval (MIR) techniques, as sketched below. By visualizing music, one can stimulate the nervous system to recognize different musical patterns and extract new features.
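
    A minimal sketch of such a pipeline, assuming librosa for feature extraction; the specific features, file name, and mapping to visual parameters are illustrative choices, not the project's actual implementation.

    ```python
    # Extract a few MIR features and map each to a visual parameter per frame.
    import librosa
    import numpy as np

    y, sr = librosa.load("performance.wav")                      # hypothetical audio file

    rms = librosa.feature.rms(y=y)[0]                            # loudness proxy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # spectral brightness
    onsets = librosa.onset.onset_strength(y=y, sr=sr)            # attack salience

    def normalize(x: np.ndarray) -> np.ndarray:
        return (x - x.min()) / (x.max() - x.min() + 1e-9)

    size = normalize(rms)        # louder frames -> larger visual objects
    hue = normalize(centroid)    # brighter spectrum -> shift in color
    flash = normalize(onsets)    # strong onsets -> brief flashes
    ```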

  • Remote Theatrical Immersion: Extending "Sleep No More"

    Tod Machover, Punchdrunk, Akito Van Troyer, Ben Bloomberg, Gershon Dublon, Jason Haas, Elena Jessop, Brian Mayton, Eyal Shahar, Jie Qi, Nicholas Joliat, and Peter Torpey

    We have collaborated with London-based theater group Punchdrunk to create an online platform connected to their NYC show, Sleep No More. In the live show, masked audience members explore and interact with a rich environment, discovering their own narrative pathways. We have developed an online companion world to this real-life experience, through which online participants partner with live audience members to explore the interactive, immersive show together. Pushing the current capabilities of web standards and wireless communications technologies, the system delivers personalized multimedia content, allowing each online participant to have a unique experience co-created in real time by his own actions and those of his onsite partner. This project explores original ways of fostering meaningful relationships between online and onsite audience members, enhancing the experiences of both through the affordances that exist only at the intersection of the real and the virtual worlds.

  • SIDR: Deep Learning-Based Real-Time Speaker Identification

    Tod Machover, Rebecca Kleinberger and Clement Duhart

    Consider each of our individual voices as a flashlight that illuminates how we project ourselves in society and how much sonic space we give ourselves or others. Turn-taking computation through speaker recognition systems has therefore been used as a tool to understand social situations such as work meetings. We present SIDR, a deep learning-based, real-time speaker recognition system designed for real-world settings. The system is resilient to noise, changes in room acoustics, different languages, and overlapping dialogues. Where existing systems require several microphones (one per speaker), or coupled video and sound recordings, for accurate recognition of a speaker, SIDR requires only a single medium-quality or computer-embedded microphone.
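
    A minimal sketch of a generic single-microphone speaker-identification pipeline of this kind (spectral features feeding a small neural classifier over enrolled speakers); this is an assumed illustration, not the actual SIDR architecture.

    ```python
    # Generic speaker-ID sketch: MFCC features from one microphone frame feed a
    # small classifier; in practice the weights would be trained on enrollment data.
    import torch
    import torch.nn as nn
    import torchaudio

    N_SPEAKERS = 4        # hypothetical number of enrolled speakers
    SAMPLE_RATE = 16000

    mfcc = torchaudio.transforms.MFCC(sample_rate=SAMPLE_RATE, n_mfcc=40)

    classifier = nn.Sequential(
        nn.Linear(40, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, N_SPEAKERS),
    )

    def identify(frame: torch.Tensor) -> int:
        """Return the most likely speaker index for one mono audio frame (1-D tensor)."""
        feats = mfcc(frame)          # shape (n_mfcc, time)
        feats = feats.mean(dim=-1)   # average over time -> (n_mfcc,)
        logits = classifier(feats)
        return int(logits.argmax())
    ```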

  • Sound Cycles

    Tod Machover and Charles Holbrow

    Sound Cycles is a new interface for exploring, remixing, and composing with large volumes of audio content. The project presents a simple and intuitive interface for scanning through long audio files or pre-recorded music. Sound Cycles integrates with existing digital audio workstations for on-the-fly editing, audio analysis, and feature extraction.

  • Using the Voice As a Tool for Self-Reflection

    Tod Machover and Rebecca Kleinberger

    Our voice is an important part of our individuality. From the voices of others, we understand a wealth of non-linguistic information, such as identity, socio-cultural cues, and emotional state. But the relationship we have with our own voice is less obvious. We don't hear it the way others do, and our brain treats it differently from any other sound. Yet its sonority is deeply connected with how we are perceived by society and how we see ourselves, body and mind. This project comprises software, devices, installations, and reflections that challenge us to gain new insights into our voices. To increase self-awareness, we propose different ways to extend, project, and visualize the voice. We show how our voices sometimes escape our control, and we explore the consequences in terms of self-reflection, cognitive processes, therapy, visualization of affective features, and communication improvement.

  • Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing

    Tod Machover, Charles Holbrow, Elena Jessop, Rebecca Kleinberger, Le Laboratoire, and the Dalai Lama Center at MIT

    Vocal Vibrations explores relationships between human physiology and the vibrations of the voice. The voice is an expressive instrument that nearly everyone possesses and that is intimately linked to the physical form. In collaboration with Le Laboratoire and the MIT Dalai Lama Center, we examine the hypothesis that voices can influence mental and physical health through physico-physiological phenomena. The first Vocal Vibrations installation premiered in Paris, France, in March 2014. The public "Chapel" space of the installation encouraged careful meditative listening. A private "Cocoon" environment guided an individual to explore his/her voice, augmented by tactile and acoustic stimuli. Vocal Vibrations then had a successful showing as the inaugural installation at the new Le Laboratoire Cambridge from November 2014 through March 2015. The installation was incorporated into Le Laboratoire's Memory/Witness of the Unimaginable exhibit, April 17-August 16, 2015.