
Mapping Human Gesture into Electronic Media

Saggini: What are the main influences on your work?

Coniglio: Certainly for me, my teacher Morton Subotnick has been a major influence. It was because of him that I became interested in interactive performance in the first place. Mort was doing this kind of work in the mid-sixties, well before the advent of readily available digital computers. There are many ways in which he has influenced me along the way, including some of the stylistic influences of his compositional techniques and the way in which performers are related to the media that they control. But perhaps the most significant was that I always perceived him as an artist first, technologist second. The artworks always received the attention they deserved, even in the midst of very complex technical situations. This is something that I always strive for.

Saggini: Please, describe the goal, though mutable, of your artistic and technological research.

Coniglio: I would like very much to continue to improve the MidiDancer system. To me, it is an instrument that shows much promise, but there are many improvements in sensitivity that can be made. If you look at traditional instruments that have survived the test of time, they are all excellent transducers of the performer’s most subtle intention. The MidiDancer does a good job of this, but it needs to be better. If the instrument is not as sensitive as the performer, the performer will eventually be limited by the instrument. Also, my work with interactive robotic devices (like the light sculpture in the Electronic Disturbance) is extremely interesting to me, because these devices give a physical presence to the electronic data flying around the stage. I would very much like to see some little robotic performers wandering around the stage with the dancers.

Saggini: Your work with the MidiDancer presents similarities – as much as differences – with that carried on by Léon Theremin during the twenties and the thirties with the Theremin and the Terpsitone. It seems to me that the main difference is that the MidiDancer is not conceived as a musical instrument but rather as a tool to control scenic devices. Do you think that this could change in the future? And have you ever played a theremin?

Coniglio: I’m afraid that I cannot say that I have ever played a Theremin, though I would enjoy the opportunity to try. But what you say is true: the MidiDancer is a device that measures the flexion of the joints of the performer’s body and reports that data to a computer, and no more. It only becomes a musical (or, if you will, media) instrument when the data reported to the computer is interpreted and used to manipulate sound, video, light… whatever. However, it seems to me that the Theremin (actually, all musical instruments) exhibits several properties that one should keep in mind when attempting to make use of a device that maps human gesture into another medium.
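The division of labor Coniglio describes — a sensor that only reports flexion, and software that turns those numbers into a musical gesture — can be sketched in a few lines. The joint names, value range, and controller assignments below are invented for illustration; they are not the actual MidiDancer protocol.

```python
# Hypothetical sketch: interpreting joint-flexion readings as MIDI control
# data. The sensor names and mappings are assumptions, not MidiDancer's.

def flexion_to_midi(joint: str, flexion: float) -> tuple[int, int]:
    """Map a normalized flexion reading (0.0 = straight, 1.0 = fully bent)
    to a (controller_number, controller_value) MIDI pair."""
    controllers = {"right_elbow": 1, "left_elbow": 2,
                   "right_knee": 3, "left_knee": 4}  # assumed assignments
    cc = controllers[joint]
    value = max(0, min(127, round(flexion * 127)))   # clamp to MIDI 0-127
    return cc, value

# A half-bent right elbow becomes a mod-wheel-style control message:
print(flexion_to_midi("right_elbow", 0.5))  # (1, 64)
```

The key point the example makes concrete is that the mapping lives entirely in software: swapping the dictionary, or routing the values to video parameters instead of MIDI controllers, changes the "instrument" without touching the hardware.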

First, it is consistent in its interpretation of the gesture, i.e., the performer can depend on a particular gesture, or combination of gestures, to produce a particular result. Without this quality, it becomes impossible to practice and become more skilled as a performer of the instrument.

Second, there is a clear visual correlation between the gesture and the sonic/visual result. (This operates in tandem with my first point.) As the audience witnesses the performance, they can come to an understanding of how the instrument is played, even if the instrument is not familiar to them. This is important because one of the reasons we go to hear music played live is that we know it is not a recording! It has been my experience that, if the audience cannot perceive the relationship between gesture and result, it will be their assumption that the performer is not actually playing the instrument. When I see a live performance, I sit in anticipation of seeing the performer exhibit skill and improvisational ability (even if it is only in the form of interpreting pre-composed material) as they perform. It is possible that, on this night, the performer may give the performance of a lifetime, and I will be there to witness this one-time event.

So, there is no reason that, with the appropriate programming of the software, MidiDancer could not be used purely to control the performance of music. And I have done that on many occasions. But typically, I am attempting to give the performer the ability to control an entire environment of media, because it is this sense of hybrid control that interests me at the moment.

Saggini: Through TroikaTronix you sell software designed for artists who want to incorporate digital media into their live performances or installations. What has the feedback been like? And are you aware of artists using your software?

Coniglio: Well, the response to Isadora® (my real-time video and audio manipulation environment) has been very positive, which is very pleasing to me. I wrote Isadora primarily for my own use, and to use in teaching our workshops. But, as people saw the software, many wanted to have it. With funding for the arts very hard to come by in the United States, I saw that there might be a possibility to support Troika Ranch indirectly through the sales of Isadora, so I began to sell it online. There are about one hundred users out there now, and the number is growing daily. I would say that 75 percent are artists of one kind or another: many who are doing installations, a smaller percentage who are doing live performance. The other 25 percent are VJs (Video Jockeys) who use the software to create visuals that accompany the music in clubs and discotheques. (One of my most ardent beta testers is an Italian VJ by the name of Giorgio Oliverio of softlykicking.com.) This latter group was a surprise to me, as that is not a world that I know much about. But the number of users in this area is growing. Another notable user is Jean-Baptiste Barriére from France. He is using Isadora in combination with visual-sensing software like David Rokeby’s Very Nervous System as a way of generating visual material for his installation work.

Saggini: Is Isadora an evolution of Interactor? Can you describe this application?

Coniglio: Isadora is a graphic programming environment that provides interactive control over digital media, with special emphasis on the real-time manipulation of digital video. An Isadora program is created by linking together graphically represented building blocks, each of which performs a specific function: playing or manipulating digital video, capturing live video, looking for MIDI input, controlling a DV camera, etc. By linking the modules together you can create complex interactive relationships that can be controlled in real time, either with the mouse and keyboard, or with external MIDI devices.
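The patch-style idea Coniglio describes — building blocks linked so that one module's output feeds the next — can be sketched as a tiny dataflow chain. The actor names and frame representation below are illustrative only, not Isadora's actual module API.

```python
# Minimal sketch of a patch-style dataflow: each "actor" is a function,
# and a patch is an ordered chain of actors applied to a video frame.
# Frames are modeled as flat lists of 8-bit pixel values for simplicity.

def brightness(frame, amount):
    """Raise every pixel by `amount`, clamped to the 0-255 range."""
    return [min(255, p + amount) for p in frame]

def invert(frame):
    """Invert the image: white becomes black and vice versa."""
    return [255 - p for p in frame]

def run_patch(frame, actors):
    """Apply each linked actor to the frame, in order."""
    for actor in actors:
        frame = actor(frame)
    return frame

# A "patch" of two linked modules: brighten, then invert.
patch = [lambda f: brightness(f, 50), invert]
print(run_patch([0, 100, 250], patch))  # [205, 105, 0]
```

In a real environment the links are drawn graphically and the chain runs once per incoming frame, with parameters like the brightness amount exposed for live control from the mouse or a MIDI device.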

Yes, I do see it as the evolution of Interactor (I actually like to think of it as Interactor’s daughter). I wanted to take the best parts of Interactor (scene-based structure, easy-to-use interface) and add the capability of processing visual information, and the ability for third parties to write their own plugins. I will be releasing the SDK for Isadora soon, and this will allow anyone who can write code in C or C++ to make their own Isadora modules.

Saggini: What are your plans for the future?

Coniglio: Currently we are hard at work preparing for the premiere of our newest evening-length work, “The Future of Memory.” It will combine dance, theater, video projections, and interactive digital media to explore how memories are created, stored, romanticized, repressed and lost. The premiere will take place at The Duke on 42nd Street in Manhattan in February 2003.

One of the things I am doing with Isadora for this performance is to capture live video and store it in the computer as the work is performed. These visual materials reappear and are manipulated by the performers as the piece progresses. I am using Isadora and the computer as a metaphor for our own memory. It allows me to recall and present material the audience has seen earlier in the piece while simultaneously distorting it temporally or visually.

We will also be spending 2003 in residence at Dance Theater Workshop, an important presenter of dance in New York City. They have recently rebuilt their theater from the ground up, and added a media lab for use by their choreographer/dancer members. During the residency, we will begin work on our next piece, and spend some time researching possible improvements to the MidiDancer system. Measuring the angular flexion of joints has been the most obvious and useful place to begin. But now I am interested in 1) increasing the resolution of the data that I am capturing, and 2) beginning to use accelerometers to measure the “impulse” of gesture from any part of the body. This will allow me to do things like react to the speed at which the torso moves. This research will then be incorporated into the new work, which we expect to premiere sometime in 2004.
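The accelerometer idea above — measuring the “impulse” of a gesture rather than a joint angle — can be sketched as follows. The threshold, gravity handling, and sample data are invented for illustration; the actual MidiDancer research may work quite differently.

```python
import math

# Hedged sketch: detecting sharp gestures ("impulse") from a 3-axis
# accelerometer. Values and threshold are illustrative assumptions.

def impulse(sample, gravity=9.81):
    """Magnitude of the acceleration vector minus gravity: a rough
    measure of how sharply a body part is moving (m/s^2)."""
    ax, ay, az = sample
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - gravity)

def detect_hits(samples, threshold=5.0):
    """Return indices of samples whose impulse exceeds the threshold,
    e.g. moments where a sudden torso movement could trigger media."""
    return [i for i, s in enumerate(samples) if impulse(s) > threshold]

# Resting body, gentle sway, then a sharp accent on the third sample:
samples = [(0, 0, 9.81), (1, 2, 10.0), (12.0, 5.0, 9.0)]
print(detect_hits(samples))  # [2]
```

Unlike flexion sensing, this measure is about the energy of a movement rather than a pose, which is what makes it suitable for reacting to the speed of the torso.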
