Opera of the Future
How musical composition, performance, and instrumentation can lead to innovative forms of expression, learning, and health.
The Opera of the Future group (also known as Hyperinstruments) explores concepts and techniques to help advance the future of musical composition, performance, learning, and expression. Through the design of new interfaces for both professional virtuosi and amateur music-lovers, the development of new techniques for interpreting and mapping expressive gesture, and the application of these technologies to innovative compositions and experiences, we seek to enhance music as a performance art, and to develop its transformative power as counterpoint to our everyday lives. The scope of our research includes musical instrument design, concepts for new performance spaces, interactive touring and permanent installations, and "music toys." It ranges from extensions of traditional forms to radical departures, such as the Brain Opera, Toy Symphony and Death and the Powers.

Research Projects

  • A Toronto Symphony: Massive Musical Collaboration

    Tod Machover, Peter Torpey, Akito Van Troyer, and Ben Bloomberg

    Thus far, crowdsourced and interactive music have been limited: the public contributes only a small part of the final musical result and is often disconnected from the artist leading the project. We believe that a new “musical ecology” is needed for true creative collaboration between experts and amateurs, one that benefits both. For this purpose, we created a new symphony in collaboration with the entire city of Toronto. Called “A Toronto Symphony,” the work, commissioned by the Toronto Symphony Orchestra, premiered in March 2013. We designed the necessary infrastructure, a variety of web-based music composition applications, a social media framework, and real-world community-building activities to bring together an unprecedented number of people from diverse backgrounds. This process establishes a new model for creating complex collaborations between experts and everyone else.

  • Cauldron

    Tod Machover and Akito Van Troyer

    With this interactive musical web app, users created and experienced the sounds of Edinburgh for Festival City, composed by Tod Machover in collaboration with the public for the 2013 Edinburgh International Festival.

  • Cauldron Connector

    Tod Machover and Akito Van Troyer

    How good are your ears and your musical acumen? Test them with the Cauldron Connector quiz by finding the correct path through the sounds of Tod Machover’s Festival City.

  • Constellation

    Tod Machover and Akito Van Troyer

    Constellation was designed especially for A Toronto Symphony and Festival City. It lets users take musical material and reshape, modify, morph, and personalize it to create the version they prefer.
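
    As a very rough illustration of "morphing" musical material, the sketch below blends two parameter sets by linear interpolation; the parameter names, values, and the interpolation itself are hypothetical assumptions for illustration, not Constellation's actual mechanics.

    ```python
    # Hypothetical sketch: morph between two musical "versions" by interpolating
    # a few numeric parameters. Names and values are illustrative only.

    def morph(source: dict, target: dict, amount: float) -> dict:
        """Blend two parameter sets; amount=0 keeps the source, amount=1 yields the target."""
        amount = max(0.0, min(1.0, amount))
        return {key: (1 - amount) * source[key] + amount * target[key] for key in source}

    original = {"tempo_bpm": 96.0, "note_density": 0.4, "brightness": 0.7}
    user_edit = {"tempo_bpm": 120.0, "note_density": 0.8, "brightness": 0.3}

    # A listener's personalized version, halfway between the two.
    print(morph(original, user_edit, 0.5))
    ```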

  • Crenulations and Excursions

    Tod Machover and Elena Jessop

    We are developing a new set of movement experiences titled Crenulations and Excursions. This work consists of two aspects: a public installation that allows visitors to explore a rich sonic space through their expressive movement, and a short dance performance that allows a trained performer to explore the expressive capabilities of the installation environment. With a tiny, energetic gesture or with a fluid and sweeping movement, a performer can create and shape layers of sound around herself. Both the installation and performance will explore the body as a subtle and powerful instrument, providing continuous control of continuous expression. This piece is related to Elena Jessop's doctoral work on new high-level analysis frameworks for recognition and extension of expressive physical performance.
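
    To make the movement-to-sound idea concrete, the sketch below maps two assumed movement qualities, energy and fluidity, onto the gains of three sound layers; the feature names and mapping curves are illustrative guesses, not the installation's actual analysis.

    ```python
    import math

    # Hypothetical sketch: map two normalized movement qualities (0..1) onto
    # the levels of three sound layers surrounding the performer.

    def layer_gains(energy: float, fluidity: float) -> dict:
        """Return per-layer gains in 0..1 for a simple three-layer texture."""
        return {
            "sustained_pad": fluidity * (1.0 - 0.5 * energy),  # smooth motion sustains the pad
            "percussive": energy ** 2,                          # sharp gestures drive percussion
            "shimmer": math.sqrt(energy * fluidity),            # both qualities open a bright layer
        }

    # A tiny, energetic gesture versus a fluid, sweeping one.
    print(layer_gains(energy=0.9, fluidity=0.2))
    print(layer_gains(energy=0.3, fluidity=0.9))
    ```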

  • Death and the Powers: Global Interactive Simulcast

    Tod Machover, Peter Torpey, Ben Bloomberg, Elena Jessop, Charles Holbrow, Simone Ovsey, Garrett Parrish, Justin Martinez, and Kevin Nattinger

    The live global interactive simulcast of the final February 2014 performance of Death and the Powers in Dallas made innovative use of satellite broadcast and Internet technologies to expand the boundaries of second-screen experience and interactivity during a live remote performance. In the opera, Simon Powers uploads his mind, memories, and emotions into The System, represented onstage through reactive robotic, visual, and sonic elements. Remote audiences, via simulcast, were treated as part of The System alongside Powers and the operabots. They had an omniscient view of the action of the opera, presented through augmented, multi-camera video and surround sound, while multimedia content delivered to mobile devices through the Powers Live app gave them privileged perspectives from within The System. Mobile devices also allowed audiences to influence The System by affecting the illumination of the Winspear Opera House’s Moody Foundation Chandelier.
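
    The audience-to-chandelier link can be thought of as an aggregation step: many small app inputs are reduced to a few smoothed lighting values. The sketch below illustrates that general idea with invented names and numbers; it is not the Powers Live implementation.

    ```python
    # Hypothetical sketch: fold many audience inputs (each 0..1) into a single
    # chandelier intensity, smoothed so the lighting responds without flicker.

    class AudienceLightBridge:
        def __init__(self, smoothing: float = 0.9):
            self.smoothing = smoothing  # closer to 1.0 = slower, smoother response
            self.intensity = 0.0

        def update(self, audience_inputs: list) -> float:
            """Blend one frame of audience input into the running intensity."""
            if audience_inputs:
                frame = sum(audience_inputs) / len(audience_inputs)
                self.intensity = self.smoothing * self.intensity + (1 - self.smoothing) * frame
            return self.intensity

    bridge = AudienceLightBridge()
    for frame in ([0.2, 0.4, 0.9], [0.8, 0.7, 0.6], [1.0, 0.9, 0.95]):
        print(round(bridge.update(frame), 3))
    ```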

  • Death and the Powers: Redefining Opera

    Tod Machover, Ben Bloomberg, Peter Torpey, Elena Jessop, Bob Hsiung, and Akito van Troyer

    "Death and the Powers" is a groundbreaking opera that brings a variety of technological, conceptual, and aesthetic innovations to the theatrical world. Created by Tod Machover (composer), Diane Paulus (director), and Alex McDowell (production designer), the opera uses the techniques of tomorrow to address age-old human concerns of life and legacy. The unique performance environment, including autonomous robots, expressive scenery, new Hyperinstruments, and human actors, blurs the line between animate and inanimate. The opera premiered in Monte-Carlo in fall 2010, with additional performances in Boston and Chicago in 2011 and continuing engagements worldwide, including performances in Dallas in February 2014.

  • Disembodied Performance

    Tod Machover, Peter Torpey, and Elena Jessop

    Early in the opera "Death and the Powers," the main character, Simon Powers, is subsumed into a technological environment of his own creation. The set comes alive through robotic, visual, and sonic elements that allow the actor to extend his range and influence across the stage in unique and dynamic ways. This environment assumes the behavior and expression of the absent Simon; to distill the essence of this character, we recover performance parameters in real time from physiological sensors, voice, and vision systems. Gesture and performance parameters are then mapped to a visual language that allows the off-stage actor to express emotion and interact with others on stage. To accomplish this, we developed a suite of innovative analysis, mapping, and rendering software systems.
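
    A deliberately simplified sketch of the kind of real-time mapping described above: a few recovered performance parameters drive a few rendering parameters once per frame. The particular features and visual targets are assumptions for illustration, not the production's mapping.

    ```python
    # Hypothetical sketch: map recovered performance parameters (normalized 0..1)
    # to parameters of the stage's visual language, once per rendering frame.

    def map_performance_to_visuals(vocal_intensity: float,
                                   breath_rate: float,
                                   gesture_energy: float) -> dict:
        """Return one frame of visual parameters driven by the off-stage performer."""
        return {
            "panel_brightness": 0.2 + 0.8 * vocal_intensity,  # louder voice, brighter walls
            "pulse_rate_hz": 0.5 + 2.0 * breath_rate,         # breathing sets the set's pulse
            "motion_amount": gesture_energy,                   # physical effort animates the set
        }

    print(map_performance_to_visuals(vocal_intensity=0.7, breath_rate=0.4, gesture_energy=0.9))
    ```
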
  • DrumTop

    Tod Machover and Akito Oshiro van Troyer

    This project aims to transform everyday objects into percussive musical instruments, encouraging people to rediscover their surroundings through musical interactions with the objects around them. DrumTop is a drum machine built from eight transducers. Placing an object on top of a transducer triggers a "hit," so that the sound comes from the object itself. Users can program drum patterns by pushing on a transducer, and the weight of an object can be measured to control the strength of each "hit."
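
    As a rough illustration of the sequencing logic described above, the sketch below steps through an eight-slot pattern and scales each "hit" by the weight sensed under that transducer; the data structures and the weight-to-velocity curve are assumptions, not DrumTop's firmware.

    ```python
    import time

    # Hypothetical sketch of DrumTop-style sequencing: eight pads, each armed or
    # not, with the sensed weight of the object scaling the strength of the hit.

    PATTERN = [True, False, True, False, True, True, False, True]  # user-programmed steps
    WEIGHTS_G = [120, 0, 450, 0, 80, 300, 0, 950]                   # grams sensed per pad

    def hit_velocity(weight_g: float, max_weight_g: float = 1000.0) -> float:
        """Heavier objects are driven harder, capped at full strength."""
        return min(weight_g / max_weight_g, 1.0)

    def play_pattern(bpm: float = 120.0) -> None:
        step_duration = 60.0 / bpm / 2  # eighth notes
        for step, armed in enumerate(PATTERN):
            if armed and WEIGHTS_G[step] > 0:
                print(f"step {step}: drive transducer {step} at velocity {hit_velocity(WEIGHTS_G[step]):.2f}")
            time.sleep(step_duration)

    play_pattern()
    ```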

  • Figments

    Peter Torpey

    Figments is a theatrical performance that tells a story inspired by a variety of source texts, including Dante Alighieri's prosimetrum La Vita Nuova. Framed by a woman's accidental discovery of the compelling journals of the Dante-archetype, three inner vignettes reveal the timeless tribulations of the memoir's author(s). Figments was created using Media Scores, a framework in development that facilitates the composition of a Gesamtkunstwerk using parametric, score-like visual notation. The Media Score for Figments is realized in this production through the performance of actors, light, and visuals, and through the generation of musical accompaniment in response to the expressive qualities represented in the score. The score served as a reference during the creation and design of the piece, as a guide during rehearsals, and as show control for the final production.

  • Gestural Media Framework

    Tod Machover and Elena Jessop

    We are all equipped with two extremely expressive instruments for performance: the body and the voice. By using computer systems to sense and analyze human movement and voices, artists can take advantage of technology to augment the body's communicative powers. However, the sophistication, emotional content, and variety of expression possible through these physical channels are often not captured by the technologies used to analyze them, and thus cannot be transferred intuitively from body to digital media. To address these issues, we are developing systems that use machine learning to map continuous input data, whether of gesture or voice, to a space of expressive, qualitative parameters. We are also developing a new framework for expressive performance augmentation, allowing users to create clear, intuitive, and comprehensible mappings by using high-level qualitative movement descriptions rather than low-level descriptions of sensor data streams.
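
    A minimal sketch of the learned-mapping idea, assuming a handful of recorded gestures have been hand-labeled with qualitative ratings; the feature vectors, labels, and the choice of a ridge regressor are illustrative stand-ins, not the group's actual models.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    # Hypothetical sketch: learn a mapping from low-level motion features
    # (mean speed, acceleration variance, spatial extent) to qualitative
    # expressive parameters (fluidity, intensity) from labeled examples.

    X_train = np.array([  # one row of motion features per recorded gesture
        [0.10, 0.02, 0.30],
        [0.80, 0.50, 0.90],
        [0.40, 0.10, 0.60],
        [0.95, 0.70, 0.40],
    ])
    y_train = np.array([  # hand-labeled qualitative ratings: [fluidity, intensity]
        [0.9, 0.1],
        [0.2, 0.9],
        [0.7, 0.4],
        [0.1, 1.0],
    ])

    model = Ridge(alpha=1.0).fit(X_train, y_train)

    # At performance time, incoming features are mapped to expressive qualities,
    # which then drive sound or visuals instead of raw sensor values.
    fluidity, intensity = model.predict(np.array([[0.55, 0.20, 0.75]]))[0]
    print(f"fluidity={fluidity:.2f}, intensity={intensity:.2f}")
    ```
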
  • Hyperinstruments

    Tod Machover

    The Hyperinstruments project creates expanded musical instruments, using technology to give extra power and finesse to virtuosic performers. Hyperinstruments were designed to augment a wide range of traditional musical instruments and have been used by some of the world's foremost performers (Yo-Yo Ma, the Los Angeles Philharmonic, Peter Gabriel, and Penn & Teller). Research focuses on designing computer systems that measure and interpret human expression and feeling, exploring appropriate modalities and content for interactive art and entertainment environments, and building sophisticated interactive musical instruments for non-professional musicians, students, music lovers, and the general public. Recent projects involve both new hyperinstruments for children and amateurs, and high-end hyperinstruments capable of expanding and transforming a symphony orchestra or an entire opera stage.

  • Hyperproduction: Advanced Production Systems

    Tod Machover and Benjamin Bloomberg

    Hyperproduction is a conceptual framework and software toolkit that lets producers specify a descriptive computational model, and with it an abstract state, for a live experience through traditional operating paradigms such as audio mixing or the operation of lighting, sound, and video systems. The hyperproduction system interprets this shared state and automatically drives additional production systems, allowing a small number of producers to cohesively guide the attention and perspective of an audience using many, or very complex, production systems simultaneously. We focus on the relationship between conventional production systems and techniques and abstract computational models of live performance, with attention and perspective as the cornerstones of this exploration.
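
    One way to read the description above is as a shared abstract state that is written by conventional operator actions and read by any number of downstream systems. The sketch below is a bare-bones illustration of that pattern with invented parameter names; it is not the Hyperproduction toolkit's API.

    ```python
    # Hypothetical sketch: operator actions update an abstract "show state";
    # each production subsystem derives its own control values from that state.

    class ShowState:
        def __init__(self):
            self.params = {"focus": "stage_left", "energy": 0.3}
            self.listeners = []

        def update(self, **changes):
            """An operator action (e.g., pushing a fader) updates the abstract state."""
            self.params.update(changes)
            for listener in self.listeners:
                listener(self.params)

    def lighting_system(state):
        print(f"lighting: spot on {state['focus']}, intensity {state['energy']:.2f}")

    def camera_system(state):
        print(f"camera: reframing on {state['focus']}")

    show = ShowState()
    show.listeners += [lighting_system, camera_system]

    # A single operator gesture now guides several systems at once.
    show.update(focus="soloist", energy=0.8)
    ```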

  • Hyperscore

    Tod Machover

    Hyperscore is an application that introduces children and non-musicians to musical composition and creativity in an intuitive and dynamic way. The "narrative" of a composition is expressed as a line-gesture, and the texture and shape of this line are analyzed to derive a pattern of tension-release, simplicity-complexity, and variable harmonization. The child creates or selects individual musical fragments in the form of chords or melodic motives, and layers them onto the narrative line with expressive brushstrokes. The Hyperscore system then automatically realizes a full composition from this graphical representation. Currently, Hyperscore uses a mouse-based interface; the final version will support freehand drawing and integration with the Music Shapers and Beatbugs to provide a rich array of tactile tools for manipulating the graphical score.
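
    To make the line-analysis idea concrete, the sketch below derives a rough tension curve from the height and local curvature of a drawn line; the specific formula is an illustrative assumption, not Hyperscore's harmonization engine.

    ```python
    import numpy as np

    # Hypothetical sketch: treat the drawn narrative line as y-values sampled
    # along time and derive a tension curve from height and local curvature.

    def tension_curve(line_y: np.ndarray) -> np.ndarray:
        """Higher and more sharply bending regions of the line read as more tense."""
        span = line_y.max() - line_y.min()
        height = (line_y - line_y.min()) / (span + 1e-9)       # normalize to 0..1
        curvature = np.abs(np.gradient(np.gradient(line_y)))   # second difference
        curvature = curvature / (curvature.max() + 1e-9)
        return np.clip(0.6 * height + 0.4 * curvature, 0.0, 1.0)

    # A line that rises, spikes, and settles: tension peaks near the spike.
    drawn_line = np.array([0.0, 0.1, 0.2, 0.8, 0.9, 0.3, 0.2, 0.1])
    print(np.round(tension_curve(drawn_line), 2))
    ```
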
  • Looking for the Persona through the Voice

    Tod Machover and Rebecca Kleinberger

    How does your voice reflect how you, as an individual, interact with society? As a way to reveal the persona, this project explores the voice as a complex ecosystem in which psychology, cognition, and physiology interact with the physicality of the world, and how this interaction loops back to influence cognitive processes. The journey is composed of software, devices, installations, and thoughts used together to challenge us to see our vocal apparatus from a new point of view through a process of "estrangement." The enterprise starts by building awareness of one's own voice, proposing different ways to extend, project, and visualize it. A second stage will disrupt expectancy, showing how our voices can escape our control and exploring the consequences for self-reflection, therapy, visualization of affective features, and improved communication.

  • Media Scores

    Tod Machover and Peter Torpey

    Media Scores extends the concept of a musical score to other modalities, facilitating the process of authoring and performing multimedia compositions and providing a medium through which to realize a modern-day Gesamtkunstwerk. The web-based Media Scores environment and related show control systems leverage research into multimodal representation and encoding of expressive intent. Using such a tool, the composer is able to shape an artistic work that may be performed through a variety of media and modalities. Media Scores also offer the potential to author content that takes live performance data, as well as audience participation and interaction, into account. This paradigm bridges the extremes of the continuum from composition to performance, allowing for improvisation. The Media Score also provides a common point of reference in collaborative productions, as well as the infrastructure for real-time control of the technologies used during live performance.
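
    A very small sketch of what a parametric, multimodal score could look like as a data structure: named tracks of time-stamped control points that any medium can sample at show time. The track names, values, and linear interpolation are assumptions for illustration.

    ```python
    from bisect import bisect_right

    # Hypothetical sketch: a media score as named tracks of (time, value)
    # breakpoints that lighting, sound, or visuals can all sample in real time.

    SCORE = {
        "intensity": [(0.0, 0.1), (10.0, 0.9), (20.0, 0.4)],  # dramatic intensity
        "warmth":    [(0.0, 0.5), (15.0, 0.2), (20.0, 0.8)],  # color/timbre warmth
    }

    def sample(track, t):
        """Linearly interpolate a track of (time, value) points at time t (seconds)."""
        times = [point[0] for point in track]
        i = bisect_right(times, t)
        if i == 0:
            return track[0][1]
        if i == len(track):
            return track[-1][1]
        (t0, v0), (t1, v1) = track[i - 1], track[i]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    # At show time, every medium reads the same score.
    t = 12.5
    print({name: round(sample(track, t), 2) for name, track in SCORE.items()})
    ```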

  • Personal Opera

    Tod Machover and Peter Torpey

    Personal Opera is a radically innovative creative environment that enables anyone to create musical masterpieces that share personal thoughts, feelings, and memories. Based on our design of, and experience with, such projects as Hyperscore and the Brain Opera, we are developing a totally new environment that allows the incorporation of personal stories, images, and both original and well-loved music and sounds. Personal Opera builds on our guiding principle that active music creation yields far more powerful benefits than passive listening. Using music as the through-line for assembling and conveying our own individual legacies, Personal Opera represents a new form of expressive archiving, easy to use and powerful to experience. In partnership with the Royal Opera House in London, we have begun conducting Personal Opera workshops specifically targeting seniors, helping them tell their own meaningful stories through music, text, visuals, and acting.

  • Powers Sensor Chair

    Elena Jessop and Tod Machover

    The Powers Sensor Chair gives visitors a special glimpse into Tod Machover’s robotic opera Death and the Powers, by providing a new way to explore the sonic world of the opera. A solo participant sitting in a chair discovers that when she moves her hands and arms, the air in front of her becomes an instrument. With a small, delicate gesture, a sharp and energetic thrust of her hand, or a smooth caress of the space around her, she can use her expressive movement and gesture to play with and sculpt a rich sound environment drawn from the opera, including vocal outbursts and murmurs and the sounds of the show’s special Hyperinstruments. This installation explores the body as a subtle and powerful instrument, providing continuous control of continuous expression, and incorporates Elena Jessop’s high-level analysis frameworks for recognition and extension of expressive movement.

  • Remote Theatrical Immersion: Extending "Sleep No More"

    Tod Machover, Punchdrunk, Akito Van Troyer, Ben Bloomberg, Gershon Dublon, Jason Haas, Elena Jessop, Brian Mayton, Eyal Shahar, Jie Qi, Nicholas Joliat, and Peter Torpey

    We have collaborated with London-based theater group Punchdrunk to create an online platform connected to their NYC show, Sleep No More. In the live show, masked audience members explore and interact with a rich environment, discovering their own narrative pathways. We have developed an online companion world to this real-life experience, through which online participants partner with live audience members to explore the interactive, immersive show together. Pushing the current capabilities of web standards and wireless communications technologies, the system delivers personalized multimedia content, allowing each online participant to have a unique experience co-created in real time by his own actions and those of his onsite partner. This project explores original ways of fostering meaningful relationships between online and onsite audience members, enhancing the experiences of both through the affordances that exist only at the intersection of the real and the virtual worlds.
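
    At its core, the pairing described above can be sketched as a registry that links each online participant to an onsite partner and relays show events between them; the sketch below uses plain in-memory structures and invented identifiers and event names purely for illustration.

    ```python
    # Hypothetical sketch: pair online participants with onsite (masked) audience
    # members and route show events so each online visitor gets a personalized feed.

    class PairingServer:
        def __init__(self):
            self.pairs = {}  # online_id -> onsite_id
            self.feeds = {}  # online_id -> list of delivered events

        def pair(self, online_id: str, onsite_id: str) -> None:
            self.pairs[online_id] = onsite_id
            self.feeds.setdefault(online_id, [])

        def onsite_event(self, onsite_id: str, event: dict) -> None:
            """Something happened near an onsite audience member; tell their partner."""
            for online_id, partner in self.pairs.items():
                if partner == onsite_id:
                    self.feeds[online_id].append(event)

    server = PairingServer()
    server.pair(online_id="web_visitor_7", onsite_id="mask_42")
    server.onsite_event("mask_42", {"room": "ballroom", "cue": "letter_discovered"})
    print(server.feeds["web_visitor_7"])
    ```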

  • Repertoire Remix

    Tod Machover, Benjamin Bloomberg and Akito Van Troyer

    Repertoire Remix was a live-streamed event in which pianist Tae Kim wove repertoire fragments together in constantly evolving ways based on website visitors' input. The web interface showed a colored bubble for each composer. Stirring the screen with the mouse pointer caused the nearest composer bubbles to grow, while picking up and stirring a bubble directly amplified that composer's presence. Composer Tod Machover simultaneously controlled a second interface, determining how the composer fragments overlapped, bounced against each other, or fused together. Visitors' input, together with Machover's, immediately changed the size, appearance, and behavior of the bubbles, and this in turn became Kim's "score" for the improvisation: he followed the community's preferences and performed accordingly.
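
    One way to picture the bubble interface driving the improvisation is as a set of weights over composers: stirring increases a weight, and fragments are then drawn in proportion to the weights. The sketch below is an illustrative guess at that logic, not the event's actual software.

    ```python
    import random

    # Hypothetical sketch: each composer bubble carries a weight; stirring near a
    # bubble grows it, and repertoire fragments are then drawn in proportion.

    weights = {"Bach": 1.0, "Chopin": 1.0, "Stravinsky": 1.0, "Machover": 1.0}

    def stir(composer: str, amount: float = 0.5) -> None:
        """A website visitor stirs near (or grabs) a composer's bubble."""
        weights[composer] += amount

    def next_fragment() -> str:
        """Pick the composer whose fragment the pianist weaves in next."""
        composers = list(weights)
        return random.choices(composers, weights=[weights[c] for c in composers])[0]

    stir("Chopin", 1.5)
    stir("Machover", 0.8)
    print([next_fragment() for _ in range(5)])
    ```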

  • Vocal Vibrations: Expressive Performance for Body-Mind Wellbeing

    Tod Machover, Charles Holbrow, Elena Jessop, Rebecca Kleinberger, Le Laboratoire, and the Dalai Lama Center at MIT

    Vocal Vibrations is exploring the relationships between human physiology and the resonant vibrations of the voice. The voice and body are instruments everyone possesses: they are incredibly individual, infinitely expressive, and intimately linked to the physical form. In collaboration with Le Laboratoire in Paris and the Dalai Lama Center at MIT, we are exploring the hypothesis that the singing voice can influence mental and physical health through physicochemical phenomena and in ways consistent with contemplative practices. We are developing a series of multimedia experiences, including individual "meditations," a group "singing circle," and an iPad application, all effecting mood modulation and spiritual enhancement in an enveloping context of stunningly immersive, responsive music. For spring 2014, we are developing a vocal art installation in Paris where a private "grotto" environment allows individual visitors to meditate using vibrations generated by their own voice, augmented by visual, acoustic, and physical stimuli.
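
    As a small technical illustration of working with the vibrations of the voice, the sketch below estimates the fundamental frequency of a short voiced frame by autocorrelation, the kind of low-level measurement an installation like this could build on; it is a generic method, not a description of the project's actual processing.

    ```python
    import numpy as np

    # Generic sketch: estimate the fundamental frequency of a voiced frame by
    # autocorrelation. Offered only to illustrate measuring vocal vibrations.

    def estimate_f0(frame: np.ndarray, sample_rate: int,
                    f_min: float = 70.0, f_max: float = 400.0) -> float:
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lag_min = int(sample_rate / f_max)
        lag_max = int(sample_rate / f_min)
        best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
        return sample_rate / best_lag

    # A synthetic 160 Hz "voice" for demonstration.
    sr = 16000
    t = np.arange(0, 0.05, 1 / sr)
    voiced = np.sin(2 * np.pi * 160 * t) + 0.3 * np.sin(2 * np.pi * 320 * t)
    print(round(estimate_f0(voiced, sr), 1))  # close to 160.0
    ```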