[ISEA2022] Paper: Anna Forgette, Bill Manaris, Meghan Gillikin & Samantha Ramsden — Speakers, More Speakers!!! – Developing Interactive, Distributed, Smartphone-Based Immersive Experiences for Music and Art

Abstract

Full Paper. Session: Humans and NonHumans / Sonic and Audiovisual Interfaces

Keywords: Interactive music and art, distributed music and art composition, smartphone-based interface, human-computer interaction, generative music

We present a multi-speaker, smartphone-based environment for developing interactive, distributed music and art applications. The system facilitates audience participation via smartphones, with no new software to install, and opens a new avenue for music composers and artists to design highly distributed, participatory, immersive music and art experiences using the input sensors and actuators available in today's smartphones.

We describe a multi-speaker, smartphone-based environment for developing interactive, distributed music and art applications, installations, and experiences. The system facilitates audience engagement through participation via personal smartphones, potentially connecting with traditional computing devices over the Internet, without additional software or special configurations. The proposed approach has been inspired and motivated in part by the COVID-19 pandemic and builds on earlier works and technology. It demonstrates a design approach that is more efficient than these earlier systems and provides a new avenue for music composers and artists to design highly distributed, participatory, immersive music and art experiences, utilizing the various input sensors and actuators available in today's smartphones. These include each smartphone's accelerometer, video camera, and, of course, speaker. The use of smartphones also provides relatively precise geolocation through GPS or through simple social-engineering approaches, such as dedicated QR codes for different locations (e.g., seats in an auditorium). This allows composers to create experiences rendered in a single room or auditorium, distributed widely across the Internet, or a combination of both. The paper presents the technological background and describes three case studies of such experiences, aiming to demonstrate the approach and inspire new avenues for artistic creativity and expression toward highly immersive, participatory installations and performances of music and art works for the 21st century.
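As a rough illustration of this browser-based participation model (not drawn from the paper), the sketch below shows how an audience phone could join a piece simply by scanning a seat-specific QR code: it reads its seat ID from the URL, streams accelerometer data to a coordination server, and renders a server-assigned tone through its own speaker. The WebSocket endpoint, the seat query parameter, and the message format are all assumptions for illustration; the paper does not specify its transport or protocol.

```typescript
// Hypothetical sketch of a browser-based audience client (no app install needed).
// Assumptions: a coordination server at wss://example.org/stage and a "seat"
// query parameter encoded in each seat's QR-code URL.

// Seat identity comes from the QR code the participant scanned, e.g. ?seat=B12
const seat = new URLSearchParams(window.location.search).get("seat") ?? "unknown";

// Assumed coordination server; the paper does not name its transport.
const socket = new WebSocket(`wss://example.org/stage?seat=${encodeURIComponent(seat)}`);

// Web Audio: this phone's speaker renders whatever pitch/level the server assigns it.
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();
const gain = audioCtx.createGain();
gain.gain.value = 0; // start silent until the server sends material
osc.connect(gain).connect(audioCtx.destination);
osc.start();

// Mobile browsers require a user gesture before audio can start.
document.body.addEventListener("pointerdown", () => audioCtx.resume(), { once: true });

socket.onmessage = (event: MessageEvent<string>) => {
  // Assumed message format: { "freq": 440, "amp": 0.3 }
  const msg = JSON.parse(event.data) as { freq: number; amp: number };
  osc.frequency.setTargetAtTime(msg.freq, audioCtx.currentTime, 0.05);
  gain.gain.setTargetAtTime(msg.amp, audioCtx.currentTime, 0.05);
};

// Accelerometer input: forward device motion so the composition can respond to
// how each participant moves their phone. (iOS additionally requires
// DeviceMotionEvent.requestPermission() before these events fire.)
window.addEventListener("devicemotion", (e: DeviceMotionEvent) => {
  const a = e.accelerationIncludingGravity;
  if (a && socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ seat, x: a.x, y: a.y, z: a.z }));
  }
});
```

Because the client is just a web page, the same QR-code mechanism works whether participants sit in one auditorium or are scattered across the Internet; only the seat/location identifier changes.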

  • Bill Manaris is a computing-in-the-arts researcher, educator, and musician. He is Director of the Computing in the Arts program and Professor of Computer Science at the College of Charleston, USA. His interests include computer music and art, human-computer interaction, and artificial intelligence. He explores interaction design and the modeling of aesthetics and creativity using statistical, connectionist, and evolutionary techniques, and designs systems for computer-aided analysis, composition, and performance in music and art. He studied computer science and music at the University of New Orleans and holds M.S. and Ph.D. degrees in Computer Science from the Center for Advanced Computer Studies, University of Louisiana. Manaris has published a textbook on computer music and creative computing and is co-developer of the JythonMusic environment (http://jythonmusic.org).
    http://manaris.org
  • Anna Forgette is a graphic artist and visual designer who is currently studying Computer Science and Computing in the Arts with a concentration in Digital Media at the College of Charleston, USA. She is especially interested in the intersection of visual art, graphic design, digital media, computer programming, and technology. She has been focusing on creating compelling new media art installations and performances, employing a user-centered experience design approach.
  • Meghan Gillikin is a multimedia artist studying Computing in the Arts and Studio Art at the College of Charleston, USA. She aims to create works that combine the logical methods of algorithmic media with the organic nature of traditional studio art techniques. She explores techniques that allow these elements to combine naturally into interactive, ever-evolving, immersive artworks.
  • Samantha Ramsden is a musician studying Computing in the Arts and Computer Information Systems at the College of Charleston, USA. She explores the relationship between music composition and computer programming and has developed various techniques for creating generative music through the sonification of textures and colors found in aesthetic images. She aims to create restorative, interactive experiences that bring users together in shared, community soundscapes.