We present a novel approach for live performances that gives musicians or dancers extended control over the sound rendering of their performance. In contrast to the usual sound rendering of a performance, where sounds are externally triggered by specific events in the scene, or to typical augmented instruments, which track the gestures used to play the instrument in order to expand its possibilities, our approach lets performers configure the sound effects they produce in a way that involves the whole body.
We developed a Max/MSP toolbox to receive, decode and analyze the signals from a set of lightweight wireless sensors that can be worn by performers. Each sensor node contains a digital 3-axis accelerometer, magnetometer and gyroscope, as well as up to 6 analog channels for connecting additional external sensors (pressure, flexion, light, etc.). The received data is decoded and scaled, and reliable posture information is extracted by fusing the data from the sensors mounted on each node. A visualization system displays the posture/attitude of each node, as well as the smoothed and maximum values of the individual sensing axes. Unlike most commercial systems, our Max/MSP toolbox makes it easy for users to define the many available parameters, allowing them to tailor the system and optimize the bandwidth. Finally, we provide a real-time implementation of a gesture recognition tool based on Dynamic Time Warping (DTW), with an original "multi-grid" DTW algorithm that does not require prior segmentation. We offer users several mapping tools for interactive projects, integrating 1-D, 2-D and 3-D interpolation tools.
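To make the gesture-recognition step concrete, here is a minimal sketch of standard Dynamic Time Warping between a recorded gesture template and an incoming sequence. This illustrates only the classic dynamic-programming formulation; the "multi-grid" variant described above, which avoids prior segmentation of the sensor stream, is specific to the authors' implementation and is not reproduced here. The function name and 1-D feature representation are assumptions for illustration.

```python
import numpy as np

def dtw_distance(template, stream):
    """Classic DTW cost between two 1-D feature sequences.

    A lower cost means the incoming stream is a closer (possibly
    time-stretched) match to the recorded gesture template.
    """
    n, m = len(template), len(stream)
    # D[i, j] = minimal accumulated cost to align the first i
    # template samples with the first j stream samples.
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(template[i - 1] - stream[j - 1])
            # Allowed steps: insertion, deletion, diagonal match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because DTW tolerates local time stretching, a gesture performed slightly slower than its template still matches: `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0.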
We focused on extracting short-term features that detect hits and provide information about their intensity and direction in order to drive percussive synthesis models. In contrast to available systems, we propose a sound synthesis that takes into account the changes of direction and orientation immediately preceding the detected hits, so that the produced sounds depend on the preparation gestures. Because of real-time performance constraints, we direct our sound synthesis towards a granular approach, which manipulates atomic sound grains to compose sound events. Our synthesis procedure specifically targets consistent sound events, sound variety and expressive rendering of the composition.
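The hit-detection step above can be sketched as a simple peak detector on the frame-to-frame change of the accelerometer magnitude. This is an illustrative assumption, not the authors' actual detector: the function name, threshold and refractory window are hypothetical parameters, and the real system also analyzes direction and the preceding preparation gesture.

```python
import numpy as np

def detect_hits(accel, threshold=2.0, refractory=10):
    """Naive percussive-hit detector on 3-axis accelerometer data.

    accel: (N, 3) array of accelerometer samples.
    Returns the sample indices where the jump in acceleration
    magnitude exceeds `threshold`, ignoring further candidates
    within `refractory` samples of the previous hit (debouncing).
    """
    mag = np.linalg.norm(accel, axis=1)   # per-sample magnitude
    jump = np.diff(mag)                   # frame-to-frame change
    hits, last = [], -refractory
    for i, j in enumerate(jump):
        if j > threshold and i - last >= refractory:
            hits.append(i)
            last = i
    return hits
```

The size of the detected jump can then serve as an intensity value for selecting or scaling sound grains in the granular synthesis stage.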
- Todor Todoroff. Born in 1963, Todor Todoroff (B) received an Engineering degree from the Free University of Brussels, then a First Prize and a Higher Degree in Electroacoustic Composition from the Royal Conservatoires of Music in Brussels and Mons. After research in the field of speech processing at the FUB, he directed a Computer Music Research program at the Polytechnic Faculty and the Royal Conservatoire of Music in Mons for five years. He is currently a researcher on the Numediart Project at the University of Mons and a teacher at the ESAPV (Ecole Supérieure des Arts Plastiques et Visuels). Since 1993 he has, in parallel, developed interactive systems at ARTeM (Art, Recherche, Technologie et Musique). His electroacoustic music, live or acousmatic, shows a special focus on sound spatialisation and on research into new forms of sound transformation with gestural control. Intrigued by the dialogue between electroacoustic music and other art forms, he composes for video, film, theatre and contemporary dance, mainly through his long-term collaboration with choreographer Michèle Noiret, as well as for installations, often with other artists, among whom Marie-Jo Lafontaine, FOAM, Fred Vaillant and Laura Colmenares Guerra.
- He received commissions from the Paris Opera, IMEB, Art Zoyd, Musiques Nouvelles, Festival van Vlaanderen, ZKM and Musiques & Recherches. Prize of the Audience at Noroit (France, 1991), First Prize (2007) and Mentions (2002, 2005, 2009) at the Bourges Competitions (France), and several times finalist. Video: European Joystick Orchestra – Métaboles compositeurs.be/en/composers/todor_todoroff/48/bio numediart.org tcts.fpms.ac.be/homepage.php?Firstname=Todor&Lastname=Todoroff
- Cécile Picard-Limpens is a post-doctoral researcher, working part-time at the Haute Ecole de Musique de Genève (HEM) (CH) and at UMons/TCTS, within the Numediart research program (BE). Her current research focuses on sound interaction, augmented instruments and gesture analysis. She obtained her Ph.D. in Computer Science at INRIA, France, in the REVES team in December 2009. Her Ph.D. research focused on real-time sound rendering for virtual reality. After receiving her diploma in Mechanical Engineering with a specialization in Acoustics in 2005 at the Université de Technologie de Compiègne (UTC), France, she continued in the domain of sound and obtained in 2006 an MSc. in Sound and Vibration at Chalmers University in Göteborg, Sweden.
Full text (PDF) p. 2389-2395