[ISEA2016] Artist Talk: Andrew Blanton — Virtual 3D Sound Sculptures for Realtime Performance: Reappropriation of Game Engines for Visual Music Performance

Artist Statement

Working with game development environments and custom-built software, Microplex presents three virtual environments that produce sound. Using the conceptual framework of commonality in structure between urban environments, computer processors, and the connectivity of the human brain, Microplex exists both as a real-time cinema / visual music performance fifteen minutes in length and as an installation presented as a piece of real-time animation.

Microplex is an electroacoustic composition for percussion and real-time visualization. The work is based on transcripts of talks given by Benjamin Bratton[1] as well as Anil Bawa-Cavia[2] comparing microbiological structures, dense complex networks (such as the human brain and microprocessors), and large-scale human urban growth. The work is in three movements, each with a distinct sonic and visual environment. The first environment is a rendering of the Intel Montecito[3] chip that has been extruded in three-dimensional space to create an urban landscape. The second movement visualizes the macro-level connectivity of the human brain. The third movement visualizes cellular growth and life cycles. All three movements are built around the idea of virtual 3D sound sculptures that, when visualized, produce sound.
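The statement does not detail how the chip imagery is extruded inside the game engine; as a purely illustrative sketch, one simple approach is to treat a grayscale scan of the die as a height field, so that brighter regions rise into the skyline of the virtual city (all names and values below are assumptions, not the actual pipeline):

```python
import numpy as np

# Hypothetical height-field extrusion: pixel luminosity in a 2D die image
# becomes the height of the terrain at that grid position.
def image_to_heightfield(die_image: np.ndarray, max_height: float = 10.0):
    """Return (x, y, z) vertex grids for a height field built from an image."""
    rows, cols = die_image.shape
    x, y = np.meshgrid(np.arange(cols), np.arange(rows))
    z = die_image * max_height          # brighter regions extrude higher
    return x, y, z

# Stand-in for a normalized grayscale scan of the processor die (values 0..1).
die_image = np.random.rand(64, 64)
x, y, z = image_to_heightfield(die_image)
```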
Using four small drums, the performer plays a visual and sonic representation of each data set in real time. Custom software receives input in the form of an audio signal from each drum and excites specific parts of the visuals to create the sonic output. Each of the three movements presents a unique visual and sonic representation.
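The trigger detection itself is not described in this statement; a minimal sketch of one plausible per-drum onset detector, assuming block-based RMS thresholding with illustrative parameter values, could look like this:

```python
import numpy as np

BLOCK_SIZE = 256          # samples per analysis block (illustrative)
THRESHOLD = 0.2           # normalized amplitude threshold (illustrative)
RETRIGGER_BLOCKS = 8      # blocks to wait before the same drum can retrigger

class DrumTrigger:
    """Fires when the RMS level of an incoming audio block crosses a threshold."""
    def __init__(self, threshold=THRESHOLD, hold=RETRIGGER_BLOCKS):
        self.threshold = threshold
        self.hold = hold
        self.cooldown = 0

    def process(self, block: np.ndarray) -> bool:
        """Return True when this block contains a new onset."""
        if self.cooldown > 0:
            self.cooldown -= 1
            return False
        rms = float(np.sqrt(np.mean(block ** 2)))
        if rms > self.threshold:
            self.cooldown = self.hold
            return True
        return False

# One detector per drum; each onset would excite the corresponding
# region of the 3D scene (the scene update itself is not shown here).
triggers = [DrumTrigger() for _ in range(4)]
block = np.random.randn(BLOCK_SIZE) * 0.1   # stand-in for a live audio block
hit = triggers[0].process(block)
```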
Built around the idea of visualizing live audio feeds from each drum, the system uses the live audio signal for visualization as well as software-side threshold detection for real-time triggering of events within the 3D scene. Multiple processes are then used to extract audio information from the visualizations. The first process renders the scene into a two-dimensional matrix that maps scene luminosity to a bank of sine tone generators. The second process tracks three-dimensional points as real-time x, y, and z coordinates and drives synthesizers with that data. The third process uses topographic scan-line processing to scan the surface of the objects and derive sound.
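The underlying software is not published with this statement; as a rough illustration of the first process only, the sketch below maps the mean luminosity of vertical bands of a rendered frame onto the amplitudes of a sine-tone bank (frame size, band count, and frequency range are invented for the example):

```python
import numpy as np

SAMPLE_RATE = 44100

def luminosity_to_sines(frame: np.ndarray, freqs: np.ndarray,
                        duration: float = 0.05) -> np.ndarray:
    """Map a rendered frame's luminosity onto a bank of sine-tone generators.

    `frame` is a 2D grayscale image (values 0..1); it is split into as many
    vertical bands as there are oscillators, and each band's mean luminosity
    becomes the amplitude of one sine tone.
    """
    n = len(freqs)
    bands = np.array_split(frame, n, axis=1)
    amps = np.array([band.mean() for band in bands])
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    # Sum the amplitude-weighted sine tones and normalize by the bank size.
    audio = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    return audio / max(n, 1)

# Example: a 16-oscillator bank spanning 200 Hz to 2 kHz (illustrative values).
frame = np.random.rand(120, 160)             # stand-in for a rendered scene
freqs = np.linspace(200.0, 2000.0, 16)
chunk = luminosity_to_sines(frame, freqs)
```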
Excerpts of this work have been shown at the 2015 Transplanted Roots: Percussion Symposium in Montreal, Canada[4], the 2015 Understanding Visual Music conference in Brasília, Brazil[5], Gray Area Arts in San Francisco[6], and the McKinney Avenue Contemporary[7] in Dallas, Texas, as part of the 2015 Dallas Video Festival[8].

This work has been developed as custom software by Andrew Blanton. The framework allows for rapid prototyping and construction of audio-visual works that both visualize sound and extract sound from visuals. The principal idea of the framework is to create feedback between multiple systems and to insert the performer into the feedback loop to control it in real time. The work builds on previous work in this area by groups such as NOISEFOLD[9], Semiconductor[10], and the Vasulkas[11], among others.
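As a rough sketch of how such a feedback loop could be closed in software, the fragment below chains the hypothetical DrumTrigger and luminosity_to_sines helpers from the sketches above: drum audio excites the scene, the scene is sonified, and the performer responds to both. None of these names come from the actual framework.

```python
import numpy as np

def update_scene(scene: np.ndarray, onsets: list) -> np.ndarray:
    """Toy scene update: each detected onset brightens one band of the image."""
    bands = np.array_split(np.arange(scene.shape[1]), len(onsets))
    for cols, hit in zip(bands, onsets):
        if hit:
            scene[:, cols] = np.clip(scene[:, cols] + 0.5, 0.0, 1.0)
    return scene * 0.95                               # gradual decay between hits

def performance_step(scene, drum_blocks, triggers, freqs):
    """One iteration of the loop: drum audio -> visuals -> audio -> performer."""
    onsets = [t.process(b) for t, b in zip(triggers, drum_blocks)]
    scene = update_scene(scene, onsets)               # sound excites the visuals
    audio_out = luminosity_to_sines(scene, freqs)     # visuals are sonified
    return scene, audio_out                           # performer reacts to both
```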

References

  1. Bratton, Benjamin. “On A.I. and Cities: Platform Design, Algorithmic Perception, and Urban Geopolitics,” Het Nieuwe Instituut, published January 11, 2015, accessed January 2, 2016. bennopremselalezing2015.hetnieuweinstituut.nl
  2. Bawa-Cavia, Anil. “Microplexes,” Urbagram.net, published March 24, 2010, accessed January 2, 2016. urbagram.net/v1/show/Microplexes
  3. “Montecito (processor),” Wikipedia: The Free Encyclopedia, Wikimedia Foundation, Inc., 22 July 2004, accessed January 2, 2016. en.wikipedia.org/wiki/Montecito_(processor)
  4. transplantedroots.org
  5. uvm2015.unb.br
  6. grayarea.org/event/creative-code-meetup-xix
  7. the-mac.org
  8. videofest.org
  9. noisefold.com
  10. semiconductorfilms.com
  11. vasulka.org