An Interview with Barry Vercoe
Nyssim Lefford, Eric D. Scheirer, and Barry L. Vercoe
Machine Listening Group

MIT Media Laboratory

Cambridge, MA 02139-4307

{nyssim, eds, bv}@media.mit.edu

(617) 253-0112


Professor Barry L. Vercoe is one of the great developers and popularizers of computer-music technology. He is best known as the inventor of the Music-360, Music-11, Csound, and RTCsound languages for digital music synthesis, which have been used by thousands of composers around the world. He is a respected composer, teacher, and software developer, and a broad thinker who was one of the founding faculty members of MIT's renowned Media Laboratory.

Professor Vercoe was born in New Zealand in 1937. He received bachelor's degrees in music and mathematics from the University of Auckland, followed by the Mus. D. from the University of Michigan, where he studied under Ross Lee Finney. After brief stints at Princeton, Oberlin, and Yale, he settled at MIT in 1971, where he was granted tenure in 1974 and became a full professor. Professor Vercoe founded the MIT Experimental Music Studio (EMS) in 1973, the first facility in the world to dedicate digital computers full-time to research and composition in computer music.

The EMS was one of the great innovating studios in the field, and it was responsible for developing or significantly improving technologies such as real-time digital synthesis, live keyboard input, graphical score editing, graphical patching languages, synchronization between natural and synthetic sound in composition, and advanced music languages. In 1976, the EMS hosted the First International Conference on Computer Music; in 1981, Professor Vercoe's support encouraged the MIT Press to take over publication of the Computer Music Journal beginning with Volume 4. In 1985, the EMS was integrated into the new MIT Media Laboratory, to carry on its work in a new, cross-disciplinary context of multimedia research.

At the Media Lab, Professor Vercoe has directed research groups on Music and Cognition, Synthetic Listeners and Performers, and Machine Listening. His own publications span many fields of research, from music theory and signal processing to music perception and audio coding. His students from the EMS and the Media Lab have seeded the academic and industrial worlds of computer music and music technology.

Among his many honors, Professor Vercoe was awarded a Guggenheim Fellowship in 1982-83 for his innovative work exploring Synthetic Performers and other forms of automatic accompaniment systems. In 1992, he received the Computer World/Smithsonian Award in Arts and Media. The study of interaction between human and computer performers remains his closest research interest.

This interview was conducted by Nyssim Lefford in February 1999, as part of the Media Lab's Digital Rewind celebration of the 25th anniversary of the Experimental Music Studio.

Lefford: Describe the first MIT experimental studio. Where was it located?

Vercoe: We began that work when I first arrived in 1971. The first studio we had was in the basement of Building 26, where we had a computer given to MIT by Max Mathews, the Honeywell DDP-24. Max initially developed his Groove system on this machine and was kind enough to give it to MIT when I joined the faculty. It was a well-worn piece of hardware, but I was ably assisted in its maintenance and software development by MIT graduate Steve Haflich.

That computer was put into the basement room at the far end of Building 26, just at the time they started constructing Buildings 36 and 38, the new EE buildings. There was lots of construction going on next door to Building 26; they had to dig down deep. The roads here aren't too high above the water level and the river, so there was lots of flooding. I think when they drove in the piles for the new buildings, Building 26 actually cracked a little. Anyway, we had basement flooding. Fortunately, the DDP-24 was high enough off the ground that we never had problems from the six or eight inches of water that we would sometimes find in the studio. That was our first place!

At the time, the composers taking my composition class were primarily MIT students plus a couple of outsiders. They were actually generating sound using my Music 360 program, which ran on the IBM 360 or 370 system in the adjacent building. Mostly, they lived in the keypunch room in the EE computer facility, but used our DDP-24 for digital-to-audio conversion.

In '73, after spending two years developing a real-time digital synthesizer design, which I still think was the best thing I ever did at MIT, I received an offer from Jerry Wiesner [President of MIT at the time] to have Digital Equipment build this machine for us. This would have been the world's first real-time digital synthesizer, but I never followed up on it because along came another offer, from Jerry and Ed Fredkin, who was the head of the Computer Science Lab at the time.

Jerry and Ed asked DEC to give us an entire PDP-11/50, the top of the line at the time, so that we could use it to build the whole synthesizer in software. We would never have to worry about hardware obsolescence, and this had a certain appeal to me. So I accepted that solution in the summer of 1973.

This coincided with Professor [Amar] Bose moving out of some space on the third floor of Building 26. He had stopped doing research at MIT at that point; he had been given what I think remains the only post of a full professor at MIT who teaches just one class and doesn't do any research on campus. He moved out of his lab and we promptly moved in, because it had a nice acoustic treatment and was a much better environment for the new PDP-11.

For the next 12 years, this was the area that the composers lived in.

Lefford: What influenced your decision to build an all-digital experimental music studio? Did you draw from the experiences you had at other institutions?

Vercoe: As a composer I had no experiences in electronic music whatsoever in New Zealand where I did my undergraduate degrees. When I arrived at Ann Arbor, Michigan in 1962, there was a visiting composer by the name of Mario Davidovsky who had helped Ross Finney do some work at the Columbia University Studios. Ross asked Mario to come and help set up the studio at Ann Arbor.

So for my three years at the University of Michigan, there was an analog studio under development. I had a little bit of experience with the cut-and-splice technique, but I wasn't really enamored of that--although Mario was clearly a master at that classical studio method of constructing pieces, and I admired his music. In fact, years later, on several occasions, I conducted his music, which I still like very much.

In parallel with this, I was supporting myself as a grad student at the University of Michigan working as a statistician. My degree in mathematics got me a job in human genetics and muscular dystrophy research. I learned to program out of sheer need. Fortran wasn't around; well, maybe it was, but I learned to program using the Michigan Algorithmic Decoder or MAD language. So, this MAD language got me first into computing.

By this time I was finishing my doctoral work at Ann Arbor, and had accepted a teaching job at Oberlin Conservatory; but I was already familiar with the possibilities of digital sound synthesis, and spent the next summer at Princeton with Godfrey Winham. There was no DAC there, so we would travel to Bell Labs to do digital-to-audio conversions from a big 2,400-foot reel. By the end of the summer, I had written parts and pieces of things that ended up getting incorporated into real pieces I wrote later that year as the composer-in-residence for the city of Seattle. I wrote a big piece for the MENC, the Music Educators National Convention, which was in Seattle that year, for a very large orchestra: wind ensemble, brass band, string ensemble, two choirs (one singing Latin, the other a text by the Japanese Toyohiko Kagawa), percussion ensemble, soloist, and computer sounds [Digressions, recording, Crest Records 1968]. Those computer sounds had come from my work at Princeton that summer.

After my year as composer-in-residence in Seattle, '67-'68, I decided to go to Princeton and work seriously on digital sound synthesis. I did this for the next two years, the result of which was Music 360. Next I spent a year at Yale where I taught sound synthesis, as well as Schenkerian analysis and theory. By the time I was invited to MIT, I was totally committed to digital synthesis, as opposed to constantly trying to keep analog devices in tune. There was no doubt in my mind that it was a digital studio that I was going to develop.

I think my appointment here came about because MIT's John Harbison and David Epstein had visited Bell Labs at Jerry Wiesner's urging to see what they could find out about the electronic music scene. They apparently heard one sound that Max Mathews made on his Groove system under computer control, looked at each other, and said, "That's the future."

I was appointed because I was so totally committed and, by that time, somewhat expert in digital sound synthesis. That affected the nature of the studio that took off at MIT. Whereas some people might have thought in '71 that starting an electronic music studio at MIT would mean buying a Moog synthesizer, putting it in the basement, and handing keys out to undergraduates, my idea was very different. So, we started out with a computer studio from the word "go."

Lefford: Do you think it's important to build tools that can be used directly by the composers without the aid of an engineer?

Vercoe: Yes, in the sense that I've always practiced that. Whenever I've run summer workshops for composers, I've forced them to come to grips with the computer language. It's not very difficult to patch oscillators and filters together using some kind of symbolic representation. Some of them find it difficult to get into, but if they have some degree of control over their instrument, they are able to intuit more naturally, to be more imaginative and to be more creative.

Moreover, they have faith in the system, and believe it will be able to do what they want it to do. I think when you buy an innovative but shaky black box, particularly a computing device that sometimes crashes and needs rebooting, if you don't know the technology inside at some level, you're just in fear and dread that this thing is suddenly going to gobble up your favorite tune.

Some kind of familiarity is essential. Even during the seventies, although we had developed quite an orchestra instrument library that one could tap into, whenever I taught a course to composers I never told them about the instrument library. I just told them about the language and how to put things together - oscillators and filters - and said, "Okay, go to it." Only later would they discover the library and say, "Well, this could have saved us a lot of time." But also, it would have saved them learning things, and I think it was the knowledge that gave them power.
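[Editor's note: as a concrete illustration of the kind of symbolic oscillator-and-filter patch Vercoe describes, the following minimal sketch is written in present-day Csound syntax, a descendant of the Music 360 and Music-11 languages discussed here; the particular opcodes and values are illustrative only, not the notation composers used in the 1970s.]

    ; orchestra: one instrument built from an oscillator and a filter
    sr     = 44100
    ksmps  = 32
    nchnls = 1
    0dbfs  = 1

    instr 1
      kenv  linen  0.4, 0.05, p3, 0.2   ; amplitude envelope over the note duration p3
      asig  oscili kenv, p4, 1          ; sine oscillator (function table 1) at p4 Hz
      afilt tone   asig, 2000           ; first-order low-pass filter at 2 kHz
            out    afilt
    endin

    ; score: a sine-wave table and two notes
    f1 0 16384 10 1
    i1 0 2 440
    i1 2 2 660
    e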

Lefford: Did the EMS have a unique compositional aesthetic? If so, what characterized the works created there?

Vercoe: I think the relatively few pieces that I composed at the studio tended to combine computer-processed sound with live instruments. I come from a tradition of live performance, and in particular, as a conductor, I'm still in love with choir music. I've always been in love with the live aspects of music and see them as an extension of natural body motion.

I don't think computers will ever replace that. There's a contribution that humans do make, either through vocal cords or tactile control or whatever, that is an essential human communication. Despite some claims to the contrary, I've never found it a really successful venture to replace this with cybernetic activity.

My work, at that time, was writing pieces like the one I did for [MIT Professor] Marcus Thompson, which was Synapse for viola and computer (recording, CRI 393, 1981). It was rather like a viola concerto with the computer playing the orchestra part. But you can tell from the name "Synapse" that I was very interested in the real touchy-feely contact between the live acoustic instrument and the computer. That is, whatever that could mean in an environment where the computer part had to be pre-computed, pre-processed, pre-recorded and canned on a fixed tape, invariant under any performance conditions.

At that time, human/computer interaction, or the relation of the two forces, was strictly on the shoulders of the live performer. This obviously deserved some attention, so when I went over to Paris [on the Guggenheim Fellowship] to work with Boulez and his group, I developed an automatic synthetic following system. Notice that I describe that as a following system, since I regard the human performer as the leader and the computer as responding with accompaniment. I have never really successfully modeled, nor even had very much faith in, the notion of the computer leading the way. It doesn't seem very appealing to me. It might occasionally be an interesting thing to experience, but I still have primary interest in humans as the entities I'm communicating with and hearing from.

Lefford: And the other composers that came to work here? Did this characterize their work as well?

Vercoe: It was about half and half. Half of the people were inspired by the models I had given them, which were, in fact, based on Mario Davidovsky's models. He was writing his Synchronisms, which, as their name implies, seek some sort of synchronization between the tape part and the live instruments. In point of fact, the synchronization is [often] very sparse and not tightly organized at all.

What I set out to do in the seventies with my pieces was to engage the live performer and the computer in a tightly constructed composition where the performer had to listen very carefully to what was going on in order to achieve the proper ensemble. The pieces demanded that.

I think those two models, either Mario's or mine, inspired the pieces by composers like Peter Child and Martin Brody. Their pieces gave the same challenge to performers. They had a canned, pre-computed part to synchronize with, so that the entire load of synchronization fell on the shoulders of the performers. The idea of seeing these as two forces of equal importance but of very different natures made for a more interesting piece. This was at the back of the composers' minds.

The other half of the pieces we produced were for computer alone. Since we had very high quality four-channel floating-point D-to-A converters (due to the work of Richard Steiger), we could get wonderful sounds which had their own appeal to composers. For instance, when I went to Paris in the '83-'84 year, I invited Jim Dashow to come and take over my classes. Jim was a Brandeis-trained composer, living and working in Italy. He had been a real expert user of Music 360, and while at MIT wrote a wonderful piece for computer alone called In Winter Shine. This work is a big four-channel piece with lots of interaction between all four channels.

This piece was an example of just what a carefully-constructed computer piece could do. It controls the timbres in very special ways, where the harmonic overtones of one sound are carefully related to the harmonic overtones of an adjacent sound. Jim was building complex tone structures that related to adjacent tone structures in the way that Schubert often related one key area to another key area using pivot note modulations. I found this work of Jim's quite fascinating, the sort of thing that only a really carefully crafted computer synthesis system could deliver.

We were also challenged to deliver that sound in concert form. So, every summer we put on big concerts that came from this repertoire of EMS pieces; the concerts were very popular and successful. They always presented a mix between live instrument participation and computer alone. I found from a concert programming standpoint you could not have more than two computer-alone pieces played back to back in concerts before the audience became a little restless; they needed something to look at. So, we would typically alternate them with pieces involving live players, too.

I think the composers felt the same way. They might do one piece for computer alone; the next time they would do a piece including live instruments which they were probably born and bred to love. They would never just change over completely if they had any sense.

Lefford: Did the EMS collaborate with other institutions and studios?

Vercoe: Not directly. I visited places, of course: I had visited Dartmouth; I spent two and a half years at Princeton; and I taught at Yale for a bit. So, there was some interaction even before I came to MIT. After I came here, people from other institutions would come and visit and perhaps take the summer workshops that I was conducting every year.

I don't think one would say there was any real collaboration until three of the leading studios at the time, MIT, San Diego, and Stanford, each got funding from the System Development Foundation. In our case, we already had our own computer. Stanford had been using the PDP-10 that belonged to their AI Lab, which was moving onto the main campus, so they were in real need. San Diego had just purchased a VAX computer, but had no way to maintain or support it.

The support from the System Development Foundation was timely in that regard. We'd already had the PDP-11 running for some time, so we didn't need a new VAX, but the funding did enable us to support some grad students working on serious research projects. John Stautner and Miller Puckette did a lot of their early work in this field, and they turned out some first-rate papers on various topics.

Lefford: Were most of the composers-in-residence familiar with computer assisted composition when they arrived, or were they exploring the medium for the first time?

Vercoe: Well, the composer-in-residence notion didn't really crystallize until we moved to the Media Lab in 1984-85. Prior to that, the composers came during the summer as part of my composers' workshop, and paid a summer workshop fee to come and work here, typically for six weeks or so.

The first two of these workshops included an in-depth crash course on how to use computer techniques. There would be about thirty-five people in the workshop, twelve or maybe fifteen being serious composers. The others would be from industry, looking into this new world of digital audio and wishing to get a head start in it.

The money from the industry attendees, who usually paid full freight of a thousand dollars or so, helped support the composers, who were mostly without industry support. The composers would not only attend that workshop but were then able to stay on for another three or four weeks and write a piece. In that way you might say those early composers were composers-in-residence because they were being assisted by industry to stay for the additional time, to get a piece out the door and onto a concert, plus a review in the Boston Globe.

I didn't make the final decision as to who got to stay on until the second week of the workshop. There was nothing guaranteed, but towards the end of the second week it became clear to me who had the potential to really do pieces for our public concert. It was typically the trained composers who had the musical maturity to get a piece out the door. But sometimes the people from industry were musically surprising, and some composers turned out to be technologically inept and simply hopeless in the medium. I think only one of the selected people ever failed to complete a piece in time during several years of summer concerts.

By the time we moved to the Media Lab, I had been giving these summer workshops for six to eight years, and I was quite worn out from it. After I came back from Paris, and my Guggenheim at IRCAM, I really was looking to do something different.

At that time, we had Judy Whipple, who was a wonderful assistant and a real source of ideas for our continuation. Judy wrote wonderful proposals to the National Endowment for the Arts based on the workshops we had been giving. As a result of the music from the workshops, we managed to get money from both the Massachusetts Arts Council and the National Endowment for the Arts to bring in real composers-in-residence for three or four months at a time. Some would come for one month, get to know things, then go away, write a piece, and come back and spend three months putting it together. Others would be in residence throughout.

What I enjoyed about those interactions was that having composers around for that long enabled the Media Lab students to really rub shoulders with them, to become assistants in their endeavors, to interact with them, and to learn a lot from the whole creative process. This doesn't happen these days. We don't have artists coming through and being in residence for periods like that. In part this is because the Endowment for the Arts has ceased to be so active in areas like this.

But also, it has had to do with the fact that in the early days of the Media Lab we were looking for any points of application that would set us apart from other activities on campus or even around the country. I think we've become a little too set in our ways of late. We've settled down into doing much more fundamental research as opposed to artistic exploration. Maybe that's what the world believes to be the only way to have technology transfer from a lab into industry. But I think the most progress is made when the technology is being pushed by the creative artist.

I do miss the creative challenges that went along with the early days when we were exploring all kinds of things, where the visiting composers had good support from the Endowments. We would typically obtain for them an $8,000-10,000 stipend for a three- to four-month residency, plus additional money to produce the piece. We had good rehearsals and smashing performances.

When I conducted the premiere of Jonathan Harvey's "From Silence", I had some wonderful players-Lucy Stoltzman, Marcus Thompson-superb performers, with adequate rehearsal time which most contemporary music performances around town don't get. We were able to put on a really definitive performance of those pieces. But, that phase has passed. I have less energy now, and we don't have the financial support to keep doing that.

Besides, running over and putting on concerts at Kresge [the main MIT performance hall] on the other side of campus was always such an effort. We did operate in the Media Lab's Cube for a few years (where the assisting computers could be right in the next room), but now we've lost that as a convenient performance space [the Villers Experimental Performance Space at the Media Lab was converted into offices in 1998].

Lefford: What technological revolutions have been important in shaping the EMS and the field of computer music?

Vercoe: In the early days at Princeton we initially worked on the IBM 7094 and IBM 360 computers, which were by today's standards very slow; to synthesize a good three-minute high-fidelity piece took some 3-6 hours of computer time. We must have had some belief in where it was all going; the difference these days is that you can now do pieces of that complexity at real time or better.

Once the level of synthesis gets to be computable in real time, you have a very different situation. If it's 99% up to real time then it has to be recorded and played back later. But once it hits 100%, you're suddenly in a situation where you can play the entire piece and interact with it while it's actually being created. That induces a whole new way of thinking about things, and it engenders new, interactive opcodes in the computer music language.
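[Editor's note: as an illustration of the kind of interactive opcode Vercoe refers to, a present-day Csound instrument can take its pitch and loudness directly from a live MIDI keyboard while the audio is computed. The sketch below assumes the orchestra header and sine table of the earlier example, plus real-time audio and MIDI input (for example, the -odac and -M0 flags); this is later syntax, not the RTCsound of the period.]

    instr 1                        ; instantiated by each incoming MIDI note
      icps  cpsmidi                ; pitch of the played key, in Hz
      iamp  ampmidi 0.3            ; key velocity scaled to an amplitude
      kenv  linenr iamp, 0.01, 0.3, 0.01   ; envelope with a release stage for held notes
      asig  oscili kenv, icps, 1   ; sine oscillator reading function table 1
            out    asig
    endin

    ; score: the sine table, and an f0 statement to keep the performance running
    f1 0 16384 10 1
    f0 3600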

Pieces these days, on either big expensive workstations, which most composers don't have access to, or relatively inexpensive PCs with good, fast DSP cards, can happen in real time. This invites all manner of interaction and ways of thinking about audio music processing and control processing, and enables a very different aesthetic. I think, if anything, the technological breakthrough has just been the natural one of computers getting faster and changing our low-expectancy ideas into real-time, right-now things. That has been the breakthrough.

Of course, there have always been new techniques for synthesis, from phase vocoders to physical modeling. The practical signal processing techniques are driven not by music/audio needs alone, but by DSP research. We then become the beneficiaries, just as composers in every day and age have always been the beneficiaries of the technology of their time. People have always developed musical instruments with responsive control: they built violins instead of viols, made instruments that could stand out from an ensemble as solo instruments, and put valves on trumpets.

When you have a technology that enables composers and performers to do new and different things, that will always excite the imagination. That happened with instruments in the Western culture around 1600, when they became a real independent force, able to do more than just double the voices. What we have in this day and age, besides voices and instruments, is a third medium - the electronic medium. It is still in its relative youth but it will benefit from parallel industrial developments, which composers and artists, who are always the scavengers of contemporary technologies, will also put to creative use.

Lefford: What sort of research did you pursue during your Guggenheim Fellowship?

Vercoe: The Guggenheim Fellowship was really for composition, but when I got to Paris, I developed a system using computers to follow live performers. On going back and reading my Guggenheim application, it was quite clear that this was on my mind at the time, but I had only sketchy ideas on how to do it.

I'm quite sure I didn't have the technological credentials to convince a Guggenheim scientific committee that this project would be a good thing to back. Nor did I yet have enough proof that this was a sure-fire way to the future. I think I was a good bet because of the strength of our summer workshops for composers, and because we had caused a lot of innovative computer music to be produced.

I've always been torn, devoting half my life to one thing and half to other things. My misspent youth was to spend hours making music (my dad was a jazz performer) and the other hours exploring math and science, the two never seeming to have a future together until they suddenly found confluence in this new field of computer music. I was fortuitously equipped in both fields, able to guide the synthesis of their combination.

Lefford: What were the early days of the Media Lab like? What was the EMS's involvement?

Vercoe: In my early years at MIT there were several activities on campus that were using computers in one way or another. One was the EMS. In parallel, Nicholas [Negroponte] was developing the Architecture Machine, which began as a sophisticated architectural drafting system, then got increasingly into graphics and human interaction. Muriel Cooper and Ron McNeil were getting computers to control mural painting, and Ricky Leacock had pioneered film-making with the Super-8 camera. Each of us was struggling to apply technology to artistic expression. But fortunately, we were perceived by the MIT president, Jerry Wiesner, to be people that, again, were worth betting on.

What would happen if we put all these people in one place? The notion of a Media Lab came from the desire to bring these disparate efforts from the different corners of campus to one place so they could gain strength in numbers, and to foster some interactions between them.

The initiative was first called Arts and Media Technology, or Computers in the Arts. The word Media was there, but the idea of calling it a Media Lab was Nicholas's. MIT is, in many ways, very territorial. You can't suddenly call yourself a Computer Science Department. Similarly combining "computers" and something like "art" will immediately raise eyebrows in some other corner. But when Nicholas proposed the word Media, which carried such bad connotations around MIT, everyone around campus said "Sure, you're welcome to it." I think they simply lacked the foresight to realize that Media was going to be big business in the near future.

We got our own space and our own curriculum because no one wanted the association. We were expected to fail, of course. We had lots of meetings on building plans, and how the groups were going to interact. Then on moving day everyone moved the old equipment from their labs in here and spent time making it all work the same way again. The six originals were Marvin [Minsky], Seymour [Papert], Nicholas [Negroponte], Ricky Leacock, Muriel Cooper, and myself. We were six almost-independent entities just operating under the same roof, but the difference was that we were now able to share students. And while the senior faculty rarely got to collaborate, the students were the ones who combined the individual territories and put together something greater than the parts.

Lefford: How have your research interests changed since the Media Lab opened?

Vercoe: I don't think they have. I have always been interested in finding out enough about how music works to make technology a part of it. The musically responsible thing is to avoid having the technology tail wagging the artistic dog, but rather to have the technology following and even enabling the musician.

What has changed and will continue to change is how I implement that research interest. Getting computers to follow live performers is one thing, but there are still some real problems such as: how do you separate one sound from another? If computers are hearing music the way humans do, then they must be able to do signal separation and instrument recognition.

These are things that I had to gloss over in my early instantiation. Now, I'm happy to see the students deeply involved in what we call Machine Listening. For me, Machine Listening is a subset of the larger goal of Synthetic Listeners and Performers in a total musical sense.

Lefford: You continue to have a closer working relationship with industry than some of the other experimental music groups. How does this impact your research?

Vercoe: The original Experimental Music Studio was not supported by industry at all, except for the original gift from Digital. The first research support was from the System Development Foundation, which had been a spin-off of the Rand Corporation. I suppose you might call that an industrial relation, but I don't think that what we were doing in music research and music production was of serious interest to industry except in very small ways.

In the early days, we needed a reliable, high-quality digital-to-audio conversion system. The best DACs were those manufactured by companies like Analogic Corp, north of Boston, and Analog Devices, south of Boston. We went to Analogic president Bernie Gordon and said, "You may make the best DACs in the industry but they don't have enough dynamic range for real music." To which Bernie responded, "How come?"

We explained that the ear has a sensitivity to distortion of about 70 dB but also an adaptability to local loudness levels that moves really fast. We outlined for him [graduate student] Rick Steiger's notion of building a floating-point D/A system, which would convert a sound from digital to audio and also scale it the way the ear scales incoming signals, a sort of automatic gain control. You could have very loud or very soft representations with the same signal-to-noise ratio. This is basically a floating-point representation.

He got very interested in this problem of creating a floating-point D/A converter, which he helped us build. He put his top designer Bob Talimberus on this, and we built the first floating-point digital audio converters. They were wonderful gold-plated things, beautifully engineered, with very wide range, clean, four-channel sound, and they served us well for many years.

We thought that when the audio world ultimately went digital, which it had to eventually, it would be with a floating-point representation that covered all of the capacities of the ear in the most efficient way. We had shown that to be about 13 bits of SNR as the fractional part and 3 bits (a base-2 factor of eight) of floating-point dynamic range. In effect a 21-bit dynamic range, which is about 125 dB. But we were wrong. The industry eventually went for 16-bit linear (96 dB), which never made any sense to me because the ear is not linear. This leads to all sorts of noise-floor problems in a 16-bit DAC. At that time, it was very hard to guarantee the stability of the least-significant bit, because that requires the most-significant-bit resistor to maintain its accuracy to one part in 65 thousand, which was beyond consumer technology at the time.
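[Editor's note: the arithmetic behind these figures, using the usual approximation of about 6.02 dB, i.e. 20 log10(2), of dynamic range per bit of resolution:

    floating point:  13-bit fraction + 3-bit exponent (eight binary scale steps) ≈ 21 bits of range
                     21 x 6.02 dB ≈ 126 dB, the "about 125 dB" quoted above
    linear PCM:      16 bits
                     16 x 6.02 dB ≈ 96 dB]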

One of the MIT professors who was an early skeptic of our building an all-digital studio was Francis Lee, who was teaching a digital course in EE. Soon after we had built our floating-point D/A converters, Francis used them in his own research on digital audio reproduction. A bit later he was doing audio work in his new company, which he called Lexicon. So, despite his initial skepticism, he must have felt we were on the right track, and saw there was opportunity here for industry.

Although the fractional part of our D/A converters was from Analogic, the floating point scaling was from Analog Devices (using their multiplying DAC). We showed Analog Devices that this could be used potentially in a consumer context.

I think what artists often do as a function in society is show engineers new ways of doing things-creative things. Engineers like to feel they're creative too, but they must realize that artists are creative in a different way. Perhaps only artists know how to push the limits of devices and thereby engender a rethinking of how they can be used.

Lefford: What is the next step in Synthetic Performer/Machine Listening research?

Vercoe: I think progress in Synthetic Listeners and Performers will depend on how much energy there is, and how many people become committed to the work of ensuring that technology remains a tool in the hands of the human artists. Human arts and crafts are, after all, what's important: people's ways of communicating.

It's so easy to turn on a bit of technology these days, just let it go "widdly-widdly," and some will think that's music or art. To some degree that's okay, but I like to believe there will always be people who have a higher sense of what art and expression can be, and that technology must be a servant of man's art and craft. To make sure it's a servant, it needs to be well-schooled and well-trained and practiced in serving our needs in the musical arts-to serve the needs of composers and performers in their goal of communicating with audiences.

If computers can find a way to be active or to have responsible roles in collaborative production, then they can be musically mature fountains, just like us. But if computers remain rack-mounted buttons and switches at arm's length, then that will be their only role. It's a question of where people think technology can go, what they aspire technology to be. It also depends on whether the composers, the creative artists, are willing to give up some composing time to make sure technology does get there. It won't happen without the artists' participation, since the engineers don't know what we need. But if we put some energy into redirecting the technology, then this can be the third medium we have been waiting for.

Closing remarks about the EMS:

It was a lot of fun. There's a lot of music and reliable software out there in the world now that wouldn't have happened without it. While it was very time-consuming and I gave up a lot of composing time to make it happen, it was a sure way to move the technology ahead in artistic ways. I think a lot of people have benefited from this, and I'm hopeful that the goals will continue to be a driving force for some people. They are the ones who will make sure that technology is of the form that society really needs.
 
 

REFERENCES

Vercoe, B. L. (1968). Reviews of books on computers and music. Perspectives of New Music 10(1), 323-330.
Vercoe, B. L. (1973). The MUSIC 360 Language for Digital Sound Synthesis. Program reference manual, Cambridge MA, MIT Experimental Music Studio.
Vercoe, B. L. (1978). The MUSIC-11 Language for Digital Sound Synthesis. Program reference manual, Cambridge MA, MIT Experimental Music Studio.
Vercoe, B. L. (1982). New dimensions in computer music. Trends and Perspectives in Signal Processing 2(2), 15-23.
Vercoe, B. L. (1984). The synthetic performer in the context of live performance. In Proceedings of the 1984 International Computer Music Conference (pp. 199-200).  Paris: International Computer Music Association.
Vercoe, B. L. (1985). Csound: A Manual for the Audio-Processing System (rev. 1996). Program reference manual, Cambridge MA, MIT Media Lab.
Vercoe, B. L. (1997). Computational auditory pathways to music understanding.  In I. Deliège & J. Sloboda (eds.), Perception and Cognition of Music (pp. 307-326). London: Psychology Press.
Vercoe, B. L. & Ellis, D. P. W. (1990). Real-time CSound: Software synthesis with sensing and control. In Proceedings of the 1990 Int. Comp. Music Conf. (pp. 209-211).  San Francisco: ICMA.
Vercoe, B. L., Gardner, W. G. & Scheirer, E. D. (1998). Structured audio: The creation, transmission, and rendering of parametric sound representations. Proceedings of the IEEE 86(5), 922-940.
Vercoe, B. L. & Puckette, M. S. (1985). Synthetic rehearsal: Training the synthetic performer. In Proceedings of the 1985 ICMC (pp. 275-278).  Burnaby BC, Canada.