[FISEA 1988] Paper: Philippe Menard — Towards a Universal and Intelligent MIDI-Based Stage System: A Composer/Performer’s Testimony


There is a straight line linking the ‘cybernetic’ paradigm of the mid-1950s to the various ‘robotic’ applications of the 1980s. During the last three decades, in the most vital research in sound synthesis, sound processing and sound recording, there have been continual attempts to bring ‘control’ and ‘auto-control’ into the field of music. I remember, in the early 1970s, when I was still a student, being thoroughly impressed by Peter Beyls’s and Joel Chadabe’s experiments; they were real ‘control-voltage sorcerers’ to me. During that glorious period of analog electronics, its role was huge compared to that of digital technology. This ratio has been completely reversed since then.
In the past decade there has been an explosion of control experiments, as never before — an eagerness to apply to the arts, and especially to music, what had been or was being developed in other, usually less peaceful fields, such as the military/industrial field. I am thinking in particular of pattern and speech recognition, artificial vision and audition, and the like. I suspect that one of the main reasons for this explosion is the shift from heavy electronics to digital computing and microcomputing. A great deal of electronic work has shifted to programming, making this world accessible to many more people. I would say, ironically, that the control field has become affordable to ‘ordinary’ researchers, in the sense of ‘computer-literate individuals not necessarily supported by large research and teaching institutions’.