The recent evolution of technology allows a broad community to create live interactive works, based on gesture-capturing devices and real-time sound generation, which are presented in different contexts, such as interactive installations and dance or instrumental pieces. Due to its inherent multidisciplinarity, the creation of an interactive system draws on numerous fields of expertise, requiring the collaboration of artists and engineers. In order to design interactive systems controlled by gesture, it is important to analyze current works with the help of theoretical elements. Interactive systems can indeed be seen and evaluated from different perspectives. In this talk we consider two main perspectives, covering technological and semantic aspects. The technological perspective encompasses topics such as sensors, analysis techniques for sensor outputs, mapping strategies between sensors and sound generation, and sound synthesis methods. Semantic considerations, on the other hand, focus on the goal of the interaction, the role of users, the types of actions, gestures, or postures, and the role of the sound output. We will further analyze each of these perspectives by considering two case studies: sound installations and instrumental performances.
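To make the mapping topic above concrete, here is a minimal sketch, not drawn from the talk itself, of a one-to-many mapping strategy: a single normalized sensor reading drives two synthesis parameters at once. The function name, ranges, and parameter choices are illustrative assumptions, not part of the original abstract.

```python
# Hypothetical one-to-many mapping: one normalized sensor value
# controls both frequency and amplitude of a synthesized tone.

def map_sensor(value: float) -> tuple[float, float]:
    """Map a sensor reading in [0, 1] to (frequency_hz, amplitude)."""
    v = min(max(value, 0.0), 1.0)             # clamp out-of-range readings
    frequency_hz = 220.0 * 2.0 ** (2.0 * v)   # sweep two octaves up from A3
    amplitude = v * v                         # squared ramp for a gentler fade-in
    return frequency_hz, amplitude
```

In practice such a mapping layer sits between sensor analysis and the synthesis engine, and its design (one-to-one, one-to-many, or many-to-many) strongly shapes how expressive the resulting instrument feels.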
- Marcelo M. Wanderley is currently finishing his Ph.D. at Ircam on acoustics, signal processing and computer science applied to music. His thesis deals with gestural control of sound synthesis. He is the co-editor of the electronic publication Trends in Gestural Control of Music, published by Ircam, and is the coordinator of the ICMA/EMF Working Group on Interactive Systems and Instrument Design in Music.