Modern techniques for high-resolution still-image display offer new expressive possibilities for photographic portraiture and exhibition. Responsive Portraits challenges the notion of the static photographic portrait as the unique, ideal visual representation of its subject. Editors are usually confronted with choosing ONE ideal portrait from a limited set of pictures representing poses, gestures, and expressions that ALL contribute to defining a character. In our view, the entire set of a subject's typical portraits should be kept for interactive exhibitions. A responsive portrait consists of a multiplicity of views whose dynamic presentation results from the interaction between the viewer and the image. The viewer's proximity to the image, head movements, and facial expressions elicit dynamic responses from the portrait, driven by the portrait's own set of autonomous behaviors. This type of interaction reproduces an encounter between two people: the viewer and the character portrayed. The individual's experience of the portrait is unique, because it is based on the dynamics of the encounter rather than on the existence of a unique, ideal portrait of the subject. The sensing technology we used is a computer vision system that tracks the viewer's head movements and facial expressions as she interacts with the digital portrait; the whole notion of "who is watching whom" is thereby reversed: the object becomes the subject, and the subject is observed. Face recognition techniques allow the portrayed character to keep a record of previous encounters with a visitor and to adjust its response based on the history of their interactions.
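The selection logic described above can be sketched in code. The following is a minimal illustration, not the actual system: all class and attribute names (`ViewerState`, `ResponsivePortrait`, the proximity bands and view labels) are our own hypothetical stand-ins, and real-time vision input is replaced by stub values. It shows the two ideas from the abstract: a multiplicity of views chosen from the viewer's behavior, and a per-visitor encounter history that shapes the response.

```python
# Hypothetical sketch of responsive-portrait view selection.
# Names and thresholds are illustrative assumptions, not the deployed system.
from dataclasses import dataclass, field

@dataclass
class ViewerState:
    distance_m: float    # viewer's distance from the display (from vision tracking)
    head_yaw_deg: float  # head rotation, left/right
    expression: str      # e.g. "neutral", "smile", "surprise"

@dataclass
class ResponsivePortrait:
    # Each (proximity band, expression) pair maps to one of the subject's views.
    views: dict
    encounter_counts: dict = field(default_factory=dict)  # per-visitor history

    def respond(self, visitor_id: str, state: ViewerState) -> str:
        """Pick a portrait view from the viewer's behavior and past encounters."""
        # Face recognition would supply visitor_id; here it is given directly.
        n_seen = self.encounter_counts.get(visitor_id, 0)
        self.encounter_counts[visitor_id] = n_seen + 1
        band = "close" if state.distance_m < 1.0 else "far"
        view = self.views.get((band, state.expression), "neutral_gaze")
        # Autonomous behavior: a returning visitor gets a more familiar response.
        if n_seen > 0 and band == "close":
            view += "_familiar"
        return view

portrait = ResponsivePortrait(views={
    ("close", "smile"): "smiling_closeup",
    ("close", "neutral"): "attentive_closeup",
    ("far", "neutral"): "distant_profile",
})
first = portrait.respond("visitor_A", ViewerState(0.8, 5.0, "smile"))
second = portrait.respond("visitor_A", ViewerState(0.7, 0.0, "smile"))
print(first, second)  # → smiling_closeup smiling_closeup_familiar
```

The design point is that the portrait's response is a function of both the instantaneous viewer state and the accumulated interaction history, which is what makes each encounter unique.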
- Flavia Sparacino, USA, is a graduate student at the M.I.T. Media Lab. Her main research interest is in combining computer graphics endowed with perceptual intelligence (media creatures) with film/photography, for storytelling in interactive performance spaces, web-based worlds, advertisement, and news presentation. Other interests include sensors, story interfaces, and computer-generated music. She has presented her work at Siggraph 96 (Digital Bayou), the Sixth Biennial Symposium on Arts and Technology, and IJCAI, among others. She recently completed her Master's degree at the M.I.T. Media Lab, where she built a Typographic Actor, an interactive DanceSpace that generates graphics and music according to the dancer's movement, and a voice- and gesture-driven Net Space for surfing the web. Flavia received a B.S. in Electrical Engineering from Politecnico di Milano, a B.S. in Robotics from Ecole Centrale Paris, and an M.S. in Cognitive Sciences from the Ecole Pratique des Hautes Etudes, Paris, France. She has received a number of scholarships and awards for her academic work and career, including ones from the European Community, the Italian Center for National Research, the French Center for National Research, and Fulbright, among others. She spent some time in film school and has done travel photography in many countries around the world.
- Nuria Oliver, USA, is a graduate student in the Vision and Modeling Group at the M.I.T. Media Lab. Her main research interest is understanding human behavior in video. Her most recent work is LAFTER, a real-time face detection and tracking system using an active camera. This system has been presented at Siggraph 96 (Digital Bayou), the Second IEEE International Conference on Face and Gesture Recognition (October 96), and the IEEE Computer Vision and Pattern Recognition Conference (CVPR 97), where she also presented other work on modeling human behavior and multiple-agent interactions. She received her M.S. degree in Electrical Engineering and Computer Sciences (EECS) from Madrid's Technical University (ETSIT at UPM) in 1994. For her M.S. thesis she developed a car detection and tracking system for highway traffic video sequences. Before starting her graduate studies at MIT she worked as a permanent Software Engineer at Telefonica R&D. She has received a number of scholarships and awards for her academic work and career, such as the First EECS B.S. Award to the best EECS B.S. graduate (1992), the First EECS M.S. Award to the best EECS M.S. graduate (1994), and the Spanish First National Award for EECS M.S. graduates (1994). She was a 'Siemens International Student Circle' fellow for two years (1992-1993), which offered her the opportunity to work as a Research Assistant Engineer in Munich, Germany, for two summers. Since 1994 she has been a 'La Caixa Foundation' fellow, a distinguished award given to outstanding graduate students to pursue graduate studies at American universities.
- Alex Paul Pentland, USA, is the Academic Head of the M.I.T. Media Laboratory. He is also the Toshiba Professor of Media Arts and Sciences, an endowed chair last held by Marvin Minsky. He received his Ph.D. from M.I.T. in 1982. He then worked at SRI's AI Center and as a Lecturer at Stanford University, winning the Distinguished Lecturer award in 1986. In 1987 he returned to M.I.T. to found the Perceptual Computing Section of the Media Laboratory, a group that now includes over fifty researchers in computer vision, graphics, speech, music, and human-machine interaction. He has done research in human-machine interfaces, computer graphics, artificial intelligence, and machine and human vision, and has published more than 180 scientific articles in these areas. His most recent research focus is understanding human behavior in video, including face, expression, gesture, and intention recognition, as described in the April 1996 issue of Scientific American. He has won awards from the AAAI for his research into fractals, from the IEEE for his research into face recognition, and from Ars Electronica for his work in computer vision interfaces to virtual environments.
- Glorianna Davenport, USA, is a Principal Research Associate at the MIT Media Lab, of which she is a founding member. In 1988 she formed the Interactive Cinema Group to research and prototype digital media experiences in which narration is split among authors, consumers, and computer mediators. Trained as a documentary filmmaker, Davenport focuses her stories on reinforcing the human desire to learn with and about each other. Davenport is a recent recipient of the Gyorgy Kepes Fellowship for excellence in the arts at MIT. Her work in customizable, personalizable storyteller systems has resulted in inventions at the interface (micons, the video streamer, contextual selection, concept maps, elastic media), innovations in story form (the evolving documentary, stories "with a sense of themselves," and transformational environments), and explorations of issues inherent in the collaborative co-construction of digital media. Davenport has taught, lectured, and published internationally on the subjects of interactive multimedia and story construction.