A basic set of terms and models is developed to describe a range of possible sound/action models. Mention is made of how existing technologies could serve to implement these models.
Most discussions concerning the rapid technological evolution of desktop and network-based multimedia have focused on the domain of progress, a "progress" given by sets of numbers that are clearly greater in magnitude with every passing year. This numerophilia is understood palpably as a Being-towards-convergence, in particular, as the hope that multimedia over the Internet will, once we have conquered the problems of bandwidth and the right feature set, provide as flexible and useful a medium as desktop multimedia has been. Java, ShockWave, and a host of nascent networking technologies exist as the current emissaries of this hope, though they have yet to advance any real works of quality multimedia.

Underlying this hope, however, is a fundamental myopia that reinforces a human-computer interaction model derived from a conceptually sophomoric transactional client/server model of distributed systems. Akin to the assumptions about technology-as-progress raised by our Heideggerian post-atomic consciousness, we have had no corresponding epochal awakening that the fetishization of progress in information technologies, with its relentless and demonstrable gains in bandwidth and speed, is overshadowing the exploration of creative alternative interfaces. Sound models, in particular, have evolved only in small steps from a simplistic, isomorphic, and stateless action/response model. Little thought, if any, has been given to a detailed consideration of alternative models of sound/action interaction for networked multimedia. In making an inquiry intended to be systematic and comprehensive, one must speculate intelligently upon the perceptual and psychological ramifications of different sound models upon the user.
Since these effects are not obvious, experiments could subsequently be derived to observe how users react to more complex or intelligent uses of sound in the context of their actions.

This paper articulates a detailed taxonomy that adds another level of complexity to the simple and ubiquitous action/response model: an added level of indirection, an agent, between a set of sounds and actions. This idea is developed with examples by examining potentially useful data structures to associate with an action; by considering models that recognize sound both as static sound objects and as dynamic sound streams (with modifiable real-time parameters); and by considering the different levels (microstructural/macrostructural) that the agent can affect. Relationships and interactions with bundled visible objects (sprites) are also considered. This schema, comprising seven possible models, proceeds from defining the different models to discussions of how they might be implemented in Director/Lingo, or on the web in HTML, Java, or ShockWave. Lastly, a perceptual model of reinforcement is examined in a speculation as to the perceptual ramifications of more complex sound/action relationships and their possible efficacy and uses.
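The agent-mediated indirection described above can be sketched concretely. The following is a minimal illustration in Java (one of the implementation technologies the paper names); all class and method names here (SoundAgent, StaticSound, DynamicStream, bind, act) are hypothetical inventions for this sketch, not drawn from the paper's seven models. It shows a stateful agent standing between actions and sounds: repeated actions can modulate a dynamic stream's real-time parameter (a microstructural change) rather than mapping one-to-one onto a fixed response.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an agent mediating between user actions and sounds.
interface Sound {
    String render();
}

// A static sound object: the same response every time it is triggered.
class StaticSound implements Sound {
    private final String name;
    StaticSound(String name) { this.name = name; }
    public String render() { return "play " + name; }
}

// A dynamic sound stream with a modifiable real-time parameter.
class DynamicStream implements Sound {
    private final String name;
    double amplitude = 1.0;
    DynamicStream(String name) { this.name = name; }
    public String render() { return "stream " + name + " @ " + amplitude; }
}

// The agent is the level of indirection: it holds state (per-action
// counts), so the sound/action relationship need not be stateless.
class SoundAgent {
    private final Map<String, Sound> bindings = new HashMap<>();
    private final Map<String, Integer> counts = new HashMap<>();

    void bind(String action, Sound sound) { bindings.put(action, sound); }

    String act(String action) {
        int n = counts.merge(action, 1, Integer::sum);
        Sound s = bindings.get(action);
        if (s instanceof DynamicStream) {
            // Microstructural change: attenuate the stream with repetition.
            ((DynamicStream) s).amplitude = 1.0 / n;
        }
        return s == null ? "silence" : s.render();
    }
}

public class Demo {
    public static void main(String[] args) {
        SoundAgent agent = new SoundAgent();
        agent.bind("click", new StaticSound("blip"));
        agent.bind("drag", new DynamicStream("drone"));
        System.out.println(agent.act("click")); // play blip
        System.out.println(agent.act("drag"));  // stream drone @ 1.0
        System.out.println(agent.act("drag"));  // stream drone @ 0.5
    }
}
```

A macrostructural variant of the same idea would have the agent swap which Sound an action is bound to, rather than adjusting a parameter inside one sound.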
- Brett Terry, USA, Burnett Group
Full text p.91-93