The work employs an individual-based evolutionary social simulation. In the simulation, virtual people are born, grow up, fall in love, bear children, age, become separated, and die over the course of multiple generations. Each simulated individual is mortal, but the genetic information it passes on to its offspring can potentially persist forever. Nevertheless, most hereditary traits are fragile and change quickly through mutation and selection. The evolutionary process running in the simulation often leads to the emergence of multiple geographically separated races. This phenomenon can remind us of the fragility of our racial identity.
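The installation's actual model is not specified in this text. As a rough, purely illustrative sketch of what an individual-based evolutionary simulation of this kind can look like (all names and parameter values below are hypothetical), mortal agents can carry a heritable trait, reproduce with geographically nearby partners, and pass mutated copies of the trait to their children:

```python
import random

# Hypothetical minimal sketch of an individual-based evolutionary
# simulation; the installation's real model is more elaborate.

class Agent:
    """One mortal individual with a heritable, mutable trait."""
    def __init__(self, position, trait, birth_year):
        self.position = position    # location on a 1-D "geography" in [0, 1]
        self.trait = trait          # heritable value that drifts by mutation
        self.birth_year = birth_year

def step(population, year, rng, birth_rate=0.02, mutation=0.05,
         lifespan=60, mating_range=0.1):
    """Advance one simulated year: remove the dead, then add newborns."""
    survivors = [a for a in population if year - a.birth_year < lifespan]
    children = []
    for a in survivors:
        # Mating is restricted to geographically nearby partners, which
        # is what allows spatially separated groups to drift apart.
        partners = [b for b in survivors
                    if b is not a and abs(b.position - a.position) < mating_range]
        if partners and rng.random() < birth_rate:
            b = rng.choice(partners)
            children.append(Agent(
                position=(a.position + b.position) / 2 + rng.gauss(0, 0.02),
                trait=(a.trait + b.trait) / 2 + rng.gauss(0, mutation),
                birth_year=year,
            ))
    return survivors + children

rng = random.Random(42)
population = [Agent(rng.random(), rng.gauss(0.0, 1.0), birth_year=0)
              for _ in range(100)]
for year in range(150):
    population = step(population, year, rng)
```

Because inheritance blends and mutates both the trait and the position, subpopulations that end up far apart on the line stop exchanging genes and their traits diverge independently, which is the mechanism behind the emergence of separated "races" described above.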
The simulation is rendered perceivable through dynamically created visuals and sounds. The acoustic content consists of short sentences spoken by the computer, either in response to events taking place in the lives of the simulated individuals or recapitulating the biographical histories of recently deceased individuals. These spoken texts are combined with a mixture of sounds: whispered proposals, sighs of disappointment, baby cries, and funeral bells. Together, these sounds express the collective atmosphere of the population. This atmosphere might sound like noise, but it also represents the harmony of the society.
The computational model of each individual in the simulation is very simple and far less intelligent than most contemporary AI systems. Nevertheless, this model is sufficient to evoke in us the impression that the simulated individuals possess intelligence and emotions. As such, it is mainly through our imagination that the simulated behaviors become elements of fictional stories of dramatic life experiences. Therefore, it is also through us that even the simplest computational entities can achieve enlightenment.
The installation includes two tablet computers that allow visitors to explore the simulation through two browsing interfaces. The first interface displays the individual stories of the agents currently shown on the main installation screen. The full names and the birth and death dates of six agents are listed in a column on the left. Selecting one of these rows shows the agent's detailed life events in a column on the right. The second interface allows visitors to exhaustively explore a database containing all the life events of a previously completed simulation run. This database collects approximately 210,000 individual life stories spanning 3,000 simulated years. The interface offers two different views: the first displays an individual together with its parents and lovers; the second displays a couple and their children. The visitor can touch the graphical depiction of an individual agent to switch to the first view, or touch a line connecting two agents to switch to the second.
Since 2003, Tatsuo and Daniel have worked together on new-media art projects. They have received several awards: an Honorary Mention from Vida 9.0 in 2006 for “Flocking Messengers”, an Excellence Award at the 10th Japan Media Art Festival in 2006 for “MediaFlies”, the Audience Prize at WRO 2011 for “Cycles”, the Best Artwork Award at ALIFE 2016 for “Visual Liquidizer”, and an Excellence Award at the 21st Japan Media Art Festival for the first version of “Rapid Biography”. They also designed interactive stage effects for four contemporary dance projects choreographed by Jirí Kylián in 2008 and 2009.
- Tatsuo Unemi was born in Kanazawa, Japan in 1956. After graduating from the Department of Control Engineering at the Tokyo Institute of Technology in 1978, he worked in the field of Artificial Intelligence as a graduate student, a research associate, and a lecturer. He received a doctoral degree from the same university in 1994. He has been teaching Computer Science at Soka University since 1992 and has been a professor since 2012. He develops artistic software such as SBArt for visuals and SBEAT for music.
- Daniel Bisig was born in Zurich, Switzerland in 1968. In 1998, he received a PhD in Protein Crystallography from the Swiss Federal Institute of Technology. He joined the Artificial Intelligence Laboratory at the University of Zurich in 2001 as a senior researcher. Since 2006, he has held a research position at the Institute for Computer Music and Sound Technology, Zurich University of the Arts. He works as an artist in the fields of artificial life and generative art and has realized algorithmic films, interactive installations, and audiovisual performances.