[ISEA2016] Artists Statement: Roderick Coover, Arthur Nishimoto, Scott Rettberg & Daria Tsoupikova — Hearts And Minds: The Interrogations Project

Artists Statement

Virtual Reality/CAVE, 2016, Approximately 60 mins

“Hearts and Minds: The Interrogations Project” is an immersive digital media project that foregrounds veterans’ testimonies of US military interrogation practices and human rights abuses during the Iraq War, often committed by young and ill-trained soldiers who never entered the military expecting to become torturers and who find themselves struggling to reconcile themselves to the acts they were asked to carry out. Drawing upon extensive interviews with veterans carried out by political scientist John Tsukayama following the Abu Ghraib accounts of abuse, the project is unique in building understanding of how a military with a just vision of its practices might allow the conditions for human rights abuses to occur. The hybrid project was developed through a unique collaboration between filmmakers, artists, scientists, and researchers from four universities. It was built in the immersive 3D CAVE2 at the University of Illinois at Chicago (UIC) for exhibition at educational institutions, museums, and libraries, and for distribution on tablets/iPads and the Oculus Rift.

The Virtual Environment

“Hearts and Minds: The Interrogations Project” was developed at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) for the CAVE2™, the next-generation large-scale 320-degree panoramic virtual-reality environment, which presents stereoscopic 3D content on near-seamless flat LCD panels at 37 megapixels of 3D resolution, approaching human visual acuity. The CAVE immerses people in worlds too large, too small, too dangerous, too remote, or too complex to be viewed otherwise [1]. The immersive 3D environment of the CAVE is here intended to provide an affective environment that opens a space for interpretation. The visualization environment serves as a dispositif for enacting individual and cultural memory of an institutionalized atrocity.

The project presents the audience with a narrative environment that begins in a reflective temple space with four doors opening onto ordinary American domestic spaces: a boy’s bedroom, a family room, a suburban back yard, a kitchen. The user navigates the environment using motion tracking and a wand, a 3D mouse, to interact with and control the VR experience in the CAVE2™. The virtual scene is continuously updated according to the orientation and position of the head, as measured by head and arm trackers, and the 3D view of the scenes is focalized from this perspective. Moving through and exploring each of these rooms inside the virtual scene creates a sense of being immersed in the virtual environment. Using the wand’s buttons, the navigator triggers individual objects, such as a toy truck, a Boy Scout poster, or a pair of wire cutters. When an object is activated, the walls of the domestic space fall away, a surreal desert landscape is revealed as a surrounding 2D panorama, and one of the four voiceover actors is heard recounting particular acts and memories related metaphorically to the selected object. The objects also function very much like hyperlinks, moving us from one narrative element to another. Viewers travel through the domestic spaces and surreal interior landscapes of soldiers who have come home transformed by these experiences, triggering their testimonies by interacting with objects laden with loss. The project extends and makes accessible disturbing narratives based on the actual testimonies of veterans who bravely chose to share their experiences.
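To make the interaction logic described above concrete, the following is a minimal, illustrative sketch (not the production CAVE2 application, whose implementation is not detailed in this statement) of the object-as-hyperlink mechanic: a wand selection of a memory-laden object dissolves the domestic room, reveals the panoramic desert, and starts the matching testimony. All class names, object IDs, and audio file names here are hypothetical.

```cpp
// Illustrative sketch of the wand-triggered narrative transitions.
// Assumed names: MemoryObject, Scene, onWandSelect; the real EVL code is not shown here.
#include <iostream>
#include <map>
#include <string>

struct MemoryObject {
    std::string name;       // e.g. "toy truck", "wire cutters"
    std::string testimony;  // testimony audio clip the object triggers (hypothetical file names)
};

class Scene {
public:
    void addObject(int wandTargetId, MemoryObject obj) { objects_[wandTargetId] = obj; }

    // Called when the wand button is pressed while pointing at a selectable object.
    void onWandSelect(int wandTargetId) {
        auto it = objects_.find(wandTargetId);
        if (it == objects_.end()) return;  // nothing selectable at this target
        std::cout << "Walls of the domestic space fall away...\n";
        std::cout << "Surrounding desert panorama fades in.\n";
        std::cout << "Playing testimony: " << it->second.testimony
                  << " (triggered by " << it->second.name << ")\n";
    }

private:
    std::map<int, MemoryObject> objects_;
};

int main() {
    Scene boysBedroom;
    boysBedroom.addObject(1, {"toy truck", "testimony_convoy.wav"});
    boysBedroom.addObject(2, {"Boy Scout poster", "testimony_training.wav"});

    boysBedroom.onWandSelect(1);  // simulate the navigator selecting the toy truck
    return 0;
}
```

In this reading, each object behaves like a hyperlink: selection is the only input needed to move from the domestic scene to the corresponding testimony and panorama.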
The immersion the system provides allows for a different type of affective experience of these accounts, one activated viscerally through the visual and auditory environment. The work offers models for engaging with testimony and oral history. It uses visualization to build new discourse around challenging topics and to create communicative virtual environments that enable storytelling through visual metaphor. While many uses of visualization technologies focus on providing accessible representations of “big data,” here the same technologies are used to narrativize a complex contemporary issue and to provide a platform for discussion and debate of military interrogation methods and their effects on detainees, soldiers, and society.

theinterrogationsproject.com

Based on research by John Tsukayama.
Voices by Richard Garella, Jeffrey Cousar, Laurel Katz, Darin Dunston.

  • Roderick Coover is Founding Director of graduate programs in Documentary Arts and Visual Research and in mediaXarts: cinema for emerging technologies and environments at Temple University, USA. A pioneer in interactive documentary films, installations, and webworks, his works are distributed through Video Data Bank, DER, and Eastgate Systems. Coover is the recipient of Whiting, Mellon, LEF, and SPIRE awards, among others, and his works are exhibited internationally.
  • Scott Rettberg is Professor of Digital Culture in the Department of Linguistic, Literary, and Aesthetic Studies at the University of Bergen, Norway. Rettberg was the project leader of ELMCIP (Electronic Literature as a Model of Creativity and Innovation in Practice), an EU- and HERA-funded collaborative research project, and a founder of the Electronic Literature Organization.
  • Daria Tsoupikova is an Associate Professor in the School of Design and the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago, USA. Her work includes the development of virtual reality (VR) art projects and networked multi-user exhibitions for VR projection systems, such as the Cave Automatic Virtual Environment theatre (CAVE).
  • Arthur Nishimoto is a doctoral student in the Department of Computer Science and a Research Assistant at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago, USA. He has previously developed interactive applications for the EVL Cyber-Commons multi-touch wall, including the 20-foot Virtual Canvas and Fleet Commander, which have been exhibited at SIGGRAPH and Supercomputing.

Made possible with support from the Electronic Visualization Laboratory of the University of Illinois at Chicago, Temple University, and the University of Bergen.
