Euphoria & Dystopia: Physics, Perception, Immersion


Transcript of Euphoria & Dystopia: Physics, Perception, Immersion

Published by Banff Centre Press, Riverside Architectural Press

Copyright © 2011 Banff Centre Press, Riverside Architectural Press All rights reserved.

No part of this book may be used or reproduced in any manner without written permission from the publisher, except in the context of reviews.

Every reasonable attempt has been made to identify owners of copyright. Errors or omissions will be corrected in subsequent editions.

Publication Design and Production: Philip Beesley Architect Inc.
Art Director: Hayley Isaacs
Copy Editor: Claire Crighton
eBook Development: WildElement.ca

Riverside Architectural Press, www.riversidearchitecturalpress.com

Banff Centre Press, www.banffcentre.ca/press

Printing and binding by Regal Printing Limited
This book is set in Zurich Lt BT and Garamond

Library and Archives Canada Cataloguing in Publication

Euphoria & Dystopia : the Banff New Media Institute dialogues / edited by Sarah Cook and Sara Diamond.

Accompanied by HorizonZero DVD.
Includes bibliographical references and index.
ISBN 978-0-920159-71-2
ISBN 1-894773-24-1

1. Art and technology. 2. Banff New Media Institute. I. Cook, Sarah, 1974- II. Diamond, Sara, 1954-

N72.T4E97 2011 700.1’05 C2011-905947-9


EUPHORIA & DYSTOPIA

THE BANFF NEW MEDIA INSTITUTE DIALOGUES

EDITED BY SARAH COOK & SARA DIAMOND

Banff Centre Press · Riverside Architectural Press

Contents

2 PHYSICS, PERCEPTION, IMMERSION

SARA DIAMOND 170 Introduction

JEAN GAGNON 190 Realism at Play

BLAST THEORY: JU ROW FARR & MATT ADAMS

204 Intimate Technologies/Dangerous Zones, 2002

TONI DOVE 210 Out of the Box, 1998

CHRISTOPHER SALTER, MAJA KUZMANOVIC,

SHA XIN WEI & HARRY SMOAK 213 Inside/Outside, 2004

CLARK DODSWORTH 221 Out of the Box, 1998

MYRON KRUEGER 223 Smart, Sexy, Healthy, 2001

TAMIKO THIEL 228 Out of the Box, 1998

SHELDON BROWN 233 Out of the Box, 1998

JIM MACKIE 237 Out of the Box, 1998

SANG MAH 239 Out of the Box, 1998

THECLA SCHIPHORST 242 Out of the Box, 1998

JOHN OSWALD 244 Out of the Box, 1998

ROBERT F. NIDEFFER 246 Simulation and Other Re-enactments, 2004

BARBARA MONES-HATTAL 250 Out of the Box, 1998

LINDA WALLACE 252 Living Architectures, 2000

BILL SEAMAN 256 Living Architectures, 2000; Inside/Outside, 2004

RON WAKKARY 263 Inside/Outside, 2004

TOM DONALDSON 267 Outside/Inside, 2004

KATHERINE MORIWAKI 268 Outside/Inside, 2004

CELIA PEARCE 273 Simulation and Other Re-enactments, 2004

NIGEL GILBERT 275 Simulation and Other Re-enactments, 2004

ELIZABETH BRUCH 279 Simulation and Other Re-enactments, 2004

NINA WAKEFORD 285 Simulation and Other Re-enactments, 2004

MARK H. HANSEN 289 Simulation and Other Re-enactments, 2004

DAVID WISHART 294 Simulation and Other Re-enactments, 2004

MARCOS NOVAK ET AL. 301 Living Architectures, 2000

ANNE FAIRBROTHER 305 Quintessence, 2002

2

Physics, Perception, Immersion

SARA DIAMOND


Over the last three decades, technological change has reordered the relationship between visual perception and spatial experience, most markedly through 3D graphics production, photographic technologies, and presentation technologies, which have elided virtual and physical architectures. New tools that extend our perception include virtual-reality systems; optical, infrared, and millimetre-wave systems; and electron optical microscopy. The time-based nature of image capture and production characteristic of many of these technologies has conflated the domains of architecture, media art, and visual art. In addition, these technologies have invoked temporal and spatial considerations, as they are able to engage and immerse the full sensorium, incorporating sound, touch, and even smell. Bill Seaman defines a quadrangle of transformations as artificial and biological technologies converge on physical spaces: “The living architecture of negotiated spaces … the living architecture of information ecologies, of the definition of nature … the living architecture of consciousness.” 1 In his excerpt in this chapter, he further proposes that these technologies have allowed the extension of the body, leading to new forms of perception, consciousness, and sense of meaning.

The Banff New Media Institute developed a window on the visual through ongoing research and aligned theoretical dialogues into the nature of “physics, perception, and immersion”; the BNMI shared this line of inquiry with its predecessors, in particular the Art and Virtual Environments Project and The Bioapparatus residency. Paul Kaiser (a 3D and motion-capture digital designer who has collaborated with Merce Cunningham, William Forsythe, and Bill T. Jones) dubbed this domain “The Republic of Vision,” suggesting the omniscient power of visual technologies, whether animation, visual effects, virtual reality, or photography.2 The use of visual tools is fundamental to visual and media art, as well as to the everyday culture of scientific methods.

1 See Bill Seaman’s talk on page 256. See also Mary Anne Moser and Douglas MacLeod, eds., Immersed in Technology: Art and Virtual Environments (Banff: The Banff Centre, 1996). This chronicles the artworks of the Art and Virtual Environments Project and also provides insights into immersion and the body. Also of value is Catherine Richards and Nell Tenhaaf, eds., Virtual Seminar on the Bioapparatus (Banff: Banff Centre Press, 1991), which includes chapters such as “Natural Artifice,” “Art in the Virtual,” and “Cyborg Fictions.”

2 Paul Kaiser used this term at the Living Architectures: Designing for Immersion and Interaction summit (2000).


Artists’ and scientists’ shared reliance on visual tools in the making of art and science is inherent to both their cultures, expressed through everyday practices and institutions, hence reaffirming their habitus.3 In the instance of the BNMI, however, visual culture overlapped with scientific discovery, visual research, and critique. The BNMI did not only explore creative expressions made with emerging visual tools; from the beginning, the institute engaged with the science, mathematics, and philosophy behind the development of these technologies. It did this for two reasons: first, to ground artistic experimentation in the practices of science; second, to push scientific thinking forward through artists’ work and to contribute to scientific and cultural theory. In doing so, the BNMI inevitably uncovered significant gaps between systems of knowledge in the fields of physics, mathematics, art, design, and philosophy.

The visualization of mathematical information and visually imperceptible natural phenomena is a centuries-old practice expressed, for example, through a drawing of the cosmos. Bruno Latour proposes that the scientific revolution was a revolution in seeing.4 Other instruments beyond the naked eye—such as maps, microscopes, telescopes, and cameras—have facilitated human observation and the measurement of natural and human systems. The assumption that scientific images represent a certainty has served to create a consensus of beliefs in science, politics, and economics. Many contemporary artists and cultural theorists are constructivists influenced by the Frankfurt School’s critiques of representation and then the art world’s interest in semiotics, structuralism, and poststructuralism in the late 20th century; they therefore approach the visual as a construct, a mediation produced by specific technologies (techné),5 psychic processes, and conjunctures of cultural understanding. These methods are opposed by far more utopian analysts who have recently re-engaged phenomenology, recognizing that it can be used to analyze experience as contingent and immediate. Phenomenology—stripped of existentialism and exemplified by the works of Martin Heidegger, Edmund Husserl, Hans-Georg Gadamer, and Maurice Merleau-Ponty—was attractive because, when combined with studies of cognition and empirical evidence regarding the physical apparatus of perception, it could help to theorize immersion (rather than the mediation of the simulacra) and flow.

3 Habitus refers to unconscious social norms reinforced by institutionalization. See Pierre Bourdieu, Outline of a Theory of Practice (Cambridge, UK: Cambridge University Press, 1977).

4 See Bruno Latour, Science in Action (Cambridge, MA: Harvard University Press, 1987); Bruno Latour, “Visualisation and Cognition,” Culture Technique (June 1985), http://www.bruno.latour.fr.

5 “Techné is the rational artistic method involved in producing an object or accomplishing a goal or objective.” Joseph Dunne, Back to the Rough Ground: “Phronesis” and “Techne” in Modern Philosophy and in Aristotle (Notre Dame, Indiana: University of Notre Dame Press, 1997), 251.


This discourse extended thinking about perception to the entire affective apparatus. Pragmatism (in particular the ideas of philosopher Richard Rorty) provided an understanding of consciousness as contingent at any point in history, again challenging the mediation of representation. These ideas contradicted the philosophy of the Frankfurt School theorists and Walter Benjamin, who placed culture within a historical and materialist analysis. The critique of phenomenology is based on recognition of its inability to provoke understanding from outside an experience.

These schools of analysis of the visual have articulated varied interpretations of nature and the real that are of relevance to science. As Jean Gagnon’s essay points out, tensions around scientific realism, notions of truth, and vastly differing critical readings of imagery raced through the summits and the chapter that follows. Another dynamic friction was the suggestion that emphasis had shifted in visual art, cultural studies, social sciences, and humanities to the transactional and performative; according to some, these disciplines now favour local interaction and specificity rather than universality.6 This only played out partially in the sciences; experimental and applied physics study specific phenomena, while theoretical physics remains committed to discovering grand theories (such as string theory, most recently). At the same time, the underlying physics—whether Newtonian or quantum—played in the background, and was fundamental to the comprehension of emerging technological capacities. Unexpectedly, when the BNMI held Carbon versus Silicon: Thinking Small/Thinking Fast (2003), it created an environment in which both physicists working with big physics and those concentrating on small worlds could come together to share their work, in what turned out to be a rare encounter. Optical physicists like Pierre Boulanger7 and Marc Rioux8 exchanged knowledge about the laws of physics at the subatomic and the astrophysical levels in dialogues with molecular nano-physicists such as Russell Taylor.9

After 1995, The Banff Centre moved beyond VR and began to conduct research into the fecund world of the Internet, exploring in particular its potential for networked virtual environments, visualization, 3D online games, installations, and physical interfaces. It was possible for the BNMI to apply the lessons learned from previous VR work while embracing the dynamic new world of interconnectivity and communication.

6 In the case of actor-network theory and science and technology studies, analysts show how universality is established through the circulation of particulars.

7 Professor, Department of Computing Science, University of Alberta
8 Research Officer, National Research Council Canada, Institute for Information Technology (Ottawa)
9 Research Associate Professor, Computer Science, Physics & Astronomy, Material Science, University of North Carolina at Chapel Hill


The Summer Summit at the Summit (1998) had identified “out of the box” technologies, 3D experiences, and gesture-based and physical computing as key future directions for the BNMI.10 A series of summits then undertook the exploration of “physics, perception, and immersion,” either in whole or as part of other subject matter.11

The BNMI held an enduring interest in the engines behind 3D gaming, intelligent spaces, and emergent interactive systems (artificial intelligence and artificial life) at the philosophical, computational, and application levels. Location-based pervasive gaming and augmented reality linked physical and virtual realities, and were developed through co-productions (such as those of Blast Theory)12 and research projects—in particular, the Mobile Digital Commons Network. It soon became possible to embed intelligence in nano-materials and shape-memory metals, and to create tangible interfaces and malleable screens that could interact with graphics worlds. These opportunities brought the amalgam of sciences into play. Projects dealing with wearable or portable technologies—which allowed interaction on and between garments, and between garments and spaces, using real-time data, tactile surfaces, and the resultant memory system—were at the core of the Am-I-Able research network.13

Concurrently, explorations of these technologies heightened awareness of a shifting and contradictory politics of control. Environments were increasingly intelligent and capable of gathering the data required to build immersive environments.

10 The BNMI was successful in achieving funding from the Stentor network, Telus, Heritage Canada, and the Province of Alberta for a three-year project that would explore the future of the interface. This was augmented by The Banff Centre’s first SSHRC grant for research and development initiatives that forged new research methodologies, obtained with the goal of creating a series of interdisciplinary summits designed to facilitate experiential learning.

11 The first event was entitled Telus Presents: Out of the Box: The Future of Interface (1998). This was followed by Living Architectures: Designing for Immersion and Interaction (2000); Emotional Architectures/Cognitive Armatures/Cognitive Science in Interactive Design (2001); Quintessence: The Clumpy Matter of Art, Math and Science Visualization (2002); Carbon versus Silicon: Thinking Small/Thinking Fast (2003); Simulation and Other Re-enactments: Modeling the Unseen (2004); Outside/Inside: Boundary Crossings, a Wearable Design Workshop (2004); Inside/Outside: Responsive Environments and Ubiquitous Presence (2004); Bodies in Play: Shaping and Mapping Mobile Applications (2005); and Bodies in Motion: Memory, Personalization, Mobility and Design (2005).

12 UK–based art group
13 These were presented at Outside/Inside: Boundary Crossings, a Wearable Design Workshop (2004). Co-productions such as those of Kate Hartman explored wearable technologies. Am-I-Able at The Banff Centre, under my leadership as co-principal investigator, included designers Di Mainstone and Tamoko Hayashi, Maria Lantin, Tom Donaldson, Jan Erkku, David Gauthier, Geoff Lillemon, Tanya Woolard, Anita Johnston, Kelly Holmes, Jaroen Keijser, Greg Judelman, Catherine Thomson, Mireille Dore, and Belle Orchestre.


The very tools needed to create enhanced human experience were equally capable of undermining autonomy. Both David Lyon14 and Sarah Kember15 discuss a modern panopticon in which individuals trade their privacy for 24/7 accessibility to personalized information and services. Citizens accept ubiquitous video surveillance in return for promises of security, permitting a constant, 360-degree lens to gaze on their world. This condition describes contemporary techno-culture. Biosurveillance practices such as retinal scanning add to the surveillance-data stream, as does the dramatically expanded monitoring capacity on the global Internet. Alexander R. Galloway and Eugene Thacker propose that, with the advent of the Internet, control remains but is distributed and hence more invisible.16 These concerns filtered in and out of summits such as Living Architectures: Designing for Immersion and Interaction (2000), Emotional Architectures/Cognitive Armatures/Cognitive Science in Interactive Design (2001), Carbon versus Silicon: Thinking Small/Thinking Fast (2003), and Simulation and Other Re-enactments: Modeling the Unseen (2004). However, the BNMI philosophy was to engage in providing alternative practices with these technologies rather than to step away, with the expectation that disruptions (even if contingent) would occur.

In order to take full advantage of the potential of discussions about emerging technologies, BNMI summits often searched for new methodologies to match these new technologies. For example, when speaking about his interest in designing intelligent systems at Telus Presents: Out of the Box: The Future of Interface (1998), Richard Loveless suggested three interlocking methodologies:

The premise is to place ourselves in the realm of electronic imaging technologies and join in their play on their terms. The conceptual goal is to invert the traditional process of a human being using technology to create forms in visible light fields to one where technology uses human beings to create forms in an invisible light field. The technical goals are to create an apparatus that reciprocates a human performance by producing a delayed image of our participation.17

14 David Lyon, Surveillance after September 11 (Cambridge, UK: Polity Press, 2003); David Lyon, ed., Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination (London and New York: Routledge, 2004).

15 Sarah Kember, Cyberfeminism and Artificial Life (London: Routledge, 2003).
16 Alexander R. Galloway and Eugene Thacker, The Exploit: A Theory of Networks (Minneapolis: University of Minnesota Press, 2007).
17 Susan Aaron, Out of the Box, Banff Centre Multimedia Institute (1998), http://www.banffcentre.ca/bnmi/programs/archives/1998/out_of_the_box/reports/out_of_the_box_aaron_report_1998.pdf, 12. The report includes transcripts that are excerpted in this book.


Loveless proposed that a new kind of technology would emerge through this new method: “The technological presence will be a mechanically assembled infrared vision of our encounter with robotic motion and digital memory in video space.” 18

The BNMI encounters also changed the approaches of scientists, who would attempt to reintegrate their new experiences into the scientific method. During Living Architectures, architect Marcos Novak proposed n-dimensional architectural constructions that cannot be physically built and could only be realized through abstract imagery; a group of leading mathematicians, physicists, and computer scientists then jumped up to illustrate and exchange different conceptual models of n-dimensionality. The visual nature of the dialogue and the passionate discussion captured in this chapter engaged all participants—artists and scientists alike. Furthermore, the discussion was pointed, suggesting the value of representing that which cannot be seen in imaginative and abstract terms as much as through realistic modelling.

This chapter features Out of the Box, as it was an initial event that brought together many of the early technologies and concepts that would inform almost a decade of future thinking. It began with a poetic statement of intent: “to move the muse beyond the screen.” 19 To do so, the summit drew together a wide array of scientists, inventors, designers, and artists. The summit reflected on “sensitive interfaces that engage one’s perceptions, all the while taking advantage of technologies such as virtual reality and artificial intelligence.” 20 Out of the Box sought sweeping jurisdiction over 3D virtual environments, online virtual worlds, smart costumes, intelligent objects, ubiquity, networked performances, installations, theme parks, and robotics, defining these as interfaces potentially linked by broad bandwidth delivery systems. The event was organized thematically, beginning with the potential of synaesthesia, defined both as total sensory experience and the ability to use media to substitute one sense for the other. Other themes were visualization, sensory computing (which explored sentient expressions of artificial intelligence), touch (tactile interfaces), sound, taste (which was used as a metaphor for taste in culture and popular culture), and smell (the smell of nature and artists’ physical and interactive explorations). Case studies of immersive art and design explored the fields of cultural expressions, education, and health; the latter line of inquiry follows from the BNMI’s ongoing interest in using immersive and interactive technologies to heal.

18 Ibid.
19 Sara Diamond, program description (1998), 1.
20 Susan Aaron, Out of the Box, Banff Centre Multimedia Institute (1998), http://www.banffcentre.ca/bnmi/programs/archives/1998/out_of_the_box/reports/out_of_the_box_aaron_report_1998.pdf, 4.


There were exceptional examples of networked performing arts, with live dance performances included in the summit program. The BNMI showcased two co-productions: first, dancers performed three intimate choreographies by Jools Gilson-Ellis, in which movement and breath triggered interactive video and sound; and second, Gretchen Schiller presented a responsive dance platform through which an individual controls a video by balancing and by moving backward, forward, and sideways. There were demonstrations of intelligent stage systems and creative applications of robotics to build DJ systems. Video, dance floors, and audio-feedback tools combined technology and physical presence to create platforms for collaborative performances. David Martin of SMART Technologies launched SMART’s intelligent whiteboards at the summit, and Japanese robotics scientist Hideaki Kuzuoka and Saul Greenberg of the University of Calgary shared video-collaboration technology using devices that allowed people to subtly inform others about their level of social availability. As audience members stroked the white velvet surface of Thecla Schiphorst’s Body Maps: Artifacts of Mortality, the location and the intensity of their touch were mapped, activating images of Schiphorst’s body and the body of her son, accompanied by an audio score of breathing and water sounds.21 Participants investigated the immersive qualities of audio, in part through physical, non-digital exercises led by composer John Oswald. Taste and smell interfaces were also discussed, and these two senses were used as metaphors. On account of Out of the Box’s generative significance as an event and the continued relevance of its experiments, this chapter contains a sampling of excerpts from the summit.

As a boundary term—capturing varied meanings from dispersed fields—the discourse of “architecture” held sway at many BNMI events. Whether through method, philosophy, or metaphor, architecture bore relevance to information and communications technologies (ICT) and new media.22 The term stood in for any space that was diagrammed, planned, structured, and built, and it referred to both the space and the time of that structure. Endorsed by the Alberta Association of Architects, the Living Architectures summit played off of the migration of the term “architecture” into software, network design, and virtual spaces, as well as the broader use of the term within many forms of systems theory, especially when the structure of a system is described, whether a system of living organisms, natural systems, or human-constructed biotechnologies. The discourse revealed the intersections, parallels, and differences between these differing uses of “architecture.” 23

21 This work was exhibited at the Walter Phillips Gallery.
22 As well, architect Douglas MacLeod had led the Art and Virtual Environments Project (1992–95) and had introduced thinking about architecture into the investigation of virtual spaces.


Left Thecla Schiphorst, a veteran of many summits, discusses the physical and virtual body. Bridges II: A Conference about Cross-Disciplinary Research and Collaboration, 2002. Courtesy of The Banff Centre. Right Sha Xin Wei discusses the Topological Media Lab. Inside/Outside: Responsive Environments and Ubiquitous Presence, 2004. Courtesy of The Banff Centre.

Early in the event, Linda Wallace gave an engaging talk (excerpted in this chapter on page 252)24 in which she theorized “architectural media space” (physical spaces permeated by media), the possibility of “streaming out to simultaneous global spaces,” and the ways that subjects negotiate these spaces.25 Gathering together practicing architects, online architects, theorists of architecture, medical researchers, and scientists, Living Architectures then delved into “approaches to designing highly responsive spaces and their contents, psychogeography, utopian architecture and its relationship to traditional architecture; the needed intelligent software and tools, including surfaces, network capabilities, and cellular technologies, motion sensing systems, projection and neural networks.” 26 Discussions addressed practical matters such as building affordable online design environments and the state of 3D design tools.

23 In addition to architects, the summit attendees included: technology innovators; virtual-reality designers; technology researchers; performers designing and using new forms of staging; and animation, IMAX, and 3D directors.

24 In BNMI fashion, the summit began by setting the critical context with a panel of theorists, linguists, scientists, programmers, and curators, including Linda Wallace (Machine Hunger), N. Katherine Hayles (Department of English, Princeton University), Jean-Claude Guédon (Département de littérature comparée, Université de Montréal), and Jakub Segen (Multimedia Communications Research, Bell Labs).

25 See Linda Wallace’s talk on page 252.
26 Sara Diamond, program notes (2001). For further analysis of the outcomes of the event, see Sara Diamond, “Living Architectures,” Leonardo Online (2001).


There were overt tensions between architects working in the built space—who were slowly adopting computer-aided design (CAD) technologies—and those at work in virtual worlds. The summit considered the ways in which CAD and other design tools such as the 3D graphics program Maya27 were changing the aesthetics of the built space.

The BNMI brought performance theory into the mix, as it was a means to understand the presence and responsiveness of spaces. A panel entitled “Performing Spaces/Spaces that Perform” examined the ways in which architecture’s modernist traditions figured the body and contested the value of continuing this figuration in virtual space. A session on data visualization and the “architectures” of knowledge explored notions of the architectural diagram and model that could be used to model systems and flows of knowledge. With immersive visualizations in mind, the summit asked: “What does it mean to be inside a model?” Various tools and collaborative environments for design, work, education, and play were demonstrated and analyzed, such as CAVEs, reality centres, workstations, responsive spaces, intelligent stages, and performance tools. A final panel, co-moderated by the CodeZebra team and entitled “Putting the Living into Architectures: Biology as Metaphor,” considered the impacts of biotechnology—that is, living-systems architecture—on ecological systems. The panel explored the instances in which metaphors enabled a meeting of the fields of computer science, biology, design, and art.28

Emotional Architectures adopted a different perspective on the expanded field of architecture. It gazed on physical and virtual architecture through the lens of emotion and cognitive systems. The event description called for analysis of “both the architecture of the brain and the psyche, the ways that cognitive processes and psychological states are evoked and constructed by the emotional architectures that we design.” 29 This statement evoked dialogues about the nature of human and technological memory in built and virtual environments. Emotional Architectures continued a BNMI interest in biomimicry—design concepts and practices that were inspired by ecological circumstances—considering built structures for housing digital technology and new-media activities that could make use of natural forms and materials.

27 Maya was originally created by Alias Research (which became Alias|Wavefront) and is now owned by Autodesk. It is 3D animation software that offers an end-to-end creative workflow for the user.

28 It attempted to apply these fairly abstract notions to applications of networks for tele-health and healing, again looking at the value of architectural-systems models.

29 Sara Diamond, Program Notes (2001).


Franklin Hernandez Castro of Little Intelligent Communities and tropical architectures described a participatory-design process with Indigenous communities in the mountains of Costa Rica; this resulted in the creation of shared online computer resource centres that are housed in tropical architecture and configured to blend in with the existing culture.

New-media art had expressed a keen interest in phenomenology, hoping to understand presence and how it played out in networked environments, believing that the field could illuminate the process of interaction and connect philosophical thought to the physiological processes explored by cognitive science. Psychologist Brian Fisher was a regular contributor to summits.30 Rigorous debates occurred—in particular, about the ways that cultural specificity might mediate cognition and the normative assumptions of cognitive science. In a session entitled “Cognition, Space and Place, Memory: The Narratives of the Virtual and Real,” the lens of science and technology studies was used to suggest that cognitive science had its own narratives. Cultural theorist Warren Sack deconstructed the biases in cognitive science, and other participants offered critique from poststructuralist or psychoanalytic points of view.

Emotional Architectures was a poignant event, as a number of participants were unable to travel due to the disruptions that occurred following the events of September 11, 2001. Other participants, such as Scott Paterson, had been in close proximity to Ground Zero in New York City. The gathering paused and considered 9/11, reflecting that “spaces and our relationships to these, both real and virtual, are filled with cultural and ideological assumptions about feeling,” and asking: “Why do some spaces evoke certain feelings? How does power flow within spaces? The tragic events of the last week underscore issues of power, space, and the symbolic. As designers, what are the emotional values that we carry into spaces, virtual and real?” 31 A specific session was held to discuss 9/11.

In a presentation entitled “Autonomous Architectures, Architectures of Time and Space, Zero G,” Ted Krueger32 gave a remarkable talk about magnetism as a sixth sense, charting the movement of alternate science into science.

30 Associate Director, Media and Graphics Interdisciplinary Centre (MAGIC), University of British Columbia

31 The event program notes presented the session “Architectures of Power, Architectures of Violence, Architectures of War, Architectures of Peace” suggesting that “as an international group, air travel permitting, it feels critical to create a time on our agenda for those of us who wish to, to meet and discuss the implications of the events of last week on our understandings of global virtual and physical architectures.”

32 Associate Professor, Architecture, Rensselaer Polytechnic Institute


He discussed his NASA design work, which imagined designing for a world in which humans stayed in space and their bodies adapted to zero gravity. Other presentations explored impossible, emotionally powerful architectures that could be imagined but not yet built. The summit included presentations of practical tools as well as hands-on workshops with tools.

Smart, Sexy, Healthy (2001) brought together able-bodied and disabled designers, technology users, theorists, researchers, health-care practitioners, and technology developers to explore the capabilities of networked and immersive technologies to educate and heal, and to redefine sensory perception and experience.33 The event originated in conversations with disability ethicist Gregor Wolbring, who had suggested that a symposium might explore the discourse of empathy. An overview of the summit is provided in the introduction to Chapter 5 (“Social & Individual Identity in New Media”). A section of the summit included all manner of immersive and interactive environments that used technology to mitigate pain, to assist with healing, or to allow an Other to experience disability through media re-enactment. A highlight was a talk and demonstration by Myron Krueger, an American computer artist who is considered (along with Jaron Lanier) to be one of the first-generation virtual-reality and augmented-reality inventors and researchers. Krueger shared anecdotal histories about the development of early human-computer interfaces.34

Simulation and Other Re-enactments was an SSHRC-funded conference that extended the BNMI discussion of visualization into the sister practice of simulation.35 Visualization uses actual data to build a structural model, and then to create a visual image from that structure. Simulations may or may not use actual data. If they do, the data is often used to model a current state, with the simulation used for foresight. Simulations often use assumptions about the behaviour of a physical or abstract state or condition. Simulations have a narrative assumption embedded in their images and processes. Simulations are used to represent complex phenomena in many sciences, and to imagine the intangible—such as nano processes or n-dimensions.36 Years of evolution can thus be represented in a few minutes.

33 This is excerpted from Sara Diamond, program notes (2001).
34 As early as 1965, Krueger and a team of computer programmers developed an early form of the head-mounted display currently used in virtual reality. His research has since moved to embrace the full sensorium of smell and touch.

35 I developed this conference with a collaborative team that included Mark Hansen, W. Bradford Paley, Sarah Cook, and Mark Resch, as well as Maria Lantin and an extended conference committee.

36 For example, physicist John Dubinski spoke about the use of simulations to experiment with physical phenomena that are not accessible in the usual lab because of the disparity in time and distance scales between humans and the universe.



Simulation and Other Re-enactments paid particular attention to the increasing use of simulation in social sciences such as economics, criminology, or sociology, in which the approach is employed to simulate “historical phenomena, economic processes, linguistic occurrences and practices, historical crises and crimes.” 37 The BNMI tracked the use of simulation in popular culture (in films such as The Matrix and novels such as Permutation City) as well as in successful “sim” games, such as The Sims. The event therefore considered the seepage of simulations into everyday life, asking whether these acted back on our understanding of phenomena: “How do simulations shape our sense of space and time within physical reality?” Cultural theorist Machiko Kusahara38 suggested that, for some youth cultures, there is a thin line between belief in virtual worlds and belief in physical reality, due to the immersive quality of contemporary media. The BNMI was particularly interested in how simulations captured ideas about time, representing continuity between the past, present, and future. The summit linked these ideas to the rise of historical re-enactment in popular culture, performance art, and cinema. As in all events, the BNMI explored the computational and technical challenges behind the practice.39 It contrasted scientific realism with artistic license, evaluating the utility of each approach to understanding the phenomena behind a simulation.

37 Sara Diamond, Preface, Program Agenda, Simulations and Other Re-enactments: Modeling the Unseen.
38 Media Scholar, Media Art Curator, and Professor, School of Letters, Arts and Sciences, Waseda University
39 This summit in particular considered immersive virtual environments, next-generation 3D imaging technologies, AL agents and technologies, geo-tagging technologies, and other tools for building multimedia entertainment and computer games.

Left Christopher Salter demonstrates Sponge’s adaptations of phenomenology. Inside/Outside: Responsive Environments and Ubiquitous Presence, 2004. Courtesy of The Banff Centre. Right W. Bradford Paley describes the mechanisms of TextArc. Simulations and Other Re-Enactments, 2004. Courtesy of The Banff Centre.


Jonathan Drori40 suggested that simulations could serve to frame debate and to encourage democracy, and he discussed the ways in which simulations are deployed in science communication, television, and education. Social scientists and scientists demonstrated research that used simulation as a tool for prediction, comparing results against known real-world observations and seeking criteria with which to evaluate models. Medical researcher David Wishart, a regular at the BNMI, demonstrated the CyberCell project, a broad international effort aimed at predicting nearly everything that goes on inside a living cell through computer modeling techniques. Elizabeth Bruch41 presented her extensive research on how race plays out in neighbourhood choice over time; she modeled segregation and neighbourhood change by race and income, providing dynamic models of neighbourhood change in order to establish the relationship between individual-level behaviour and population-level processes. Her simulations were used to find new explanations for behaviour—explanations that challenged standard sociological sample approaches. Panels considered the practical work of building a simulation while acknowledging that assumptions are cultural and not objective, and hence asking: “How can we achieve a shared understanding of a model? What are the stages of building a metaphor from phenomena?” Nigel Gilbert42 revealed his concept of NEW TIES, “an ambitious project that aims at simulating the evolution of cultures in large populations of agents through individual and social learning of, e.g., behavioural skills and language.” 43 Debate occurred about the assumptions that were being programmed into NEW TIES agents as they built a society; participants suggested that preconditions for altruism and collaboration could be structured into the agents as much as conflict and competition were. Technologist Katrin Becker44 discussed the challenges of building shared models between communities when working in cross-disciplinary teams.45 Ethnographer Nina Wakeford46 warned of the hidden politics of simulation and suggested that great care be taken in the ethics and methods of speaking for others.

40 Director, Culture Online, Department of Culture, UK Government
41 Graduate Student, Department of Sociology and Statistics, University of California, Los Angeles
42 Vice-Chancellor and Professor of Sociology, University of Surrey
43 Excerpt from Nigel Gilbert’s presentation “Solving Social Issues, Collaborative Design and Stakeholder Participation: The Zurich Water Game, Music Market Simulations; Comparison with NEW TIES Project, and Agent Based Simulation of Society Building” on the panel “How Can Simulations Be Best Applied to Social Problems and Structures? What Are the Social Applications of Simulations in Promoting Democracy, Stimulating Debate?” at Simulations and Other Re-enactments: Modeling the Unseen (2004).
44 Senior Instructor, Department of Computer Science, University of Calgary
45 In a complementary discussion, W. Bradford Paley argued that data represents human communication and discussed the use of visualization as a subjective tool that may bring sensation and perception to “the knowledge-acquisition pipeline.”
46 Director, INCITE, University of Surrey/Goldsmiths College, University of London


On a panel that asked how simulation could be used to grapple with improbability, computer scientist Christian Jacob presented agent-based simulations of swarm-intelligence systems that help to understand other collective phenomena—whether traffic patterns or gene-regulation mechanisms inside a bacterial cell. He was joined by Julie Tolmie,47 who proposed that simulation and advanced gaming environments could be used to encode concepts that are not easily accessible and/or are constrained by space/time language. In order to illustrate this challenge, artist Guy Hundere showed Ornithmancy (2003), an animation of more than 200 Boeing 747s encircling each other in a chaotic but organic pattern. He creates simulations of “the impossible” by blending the simulated and the physical to explore the aesthetic outcomes of such mixed realities.

Eric Brochu demonstrated the technical means to simulate agents that were indistinguishable from humans. In the ANIMUS project, physicist Pierre Boulanger added human character traits to agents, which “mimic awareness of the environment of other agents and of human audience.” 48 Graphics scientist Ken Perlin49 shared his work on agent-driven realistic characters for the web. These presentations provoked questions about what criteria were used to define human traits. Warren Sack provided a critique of the practice of simulation, drawing on the philosophical, psychoanalytic, Foucauldian, and theatrical forms of simulation, and calling for “the non-utilitarian, reflective potential” in simulation. He reflected on the debate between Plato and the sophists, suggesting that there was a technological legacy to the 20th-century debate on artificial intelligence. Discussion went further, asking the ethical question, “Can and should we simulate the human body or mind?” 50

A lively debate occurred concerning the biases of simulation-game physics toward realism.

47 Visiting Faculty, Computer Science, Dalhousie University, and Assistant Professor, School of Interactive Arts + Technology, Simon Fraser University Surrey
48 Excerpted from the presentation “The ANIMUS Project: Synthetic Characters—Modeling Human-Like Agents From Biological Organisms for Virtual Environment” on the panel “How Can We Achieve a Shared Understanding of a Model? What Are the Stages of Building a Metaphor from Phenomena? Where Does the Original Model Fit into This (Artist’s Model or Computer Artifact)” at Simulations and Other Re-enactments: Modeling the Unseen (2004).
49 Professor, Department of Computer Science and NYU Media Research Lab, New York University
50 Excerpt from Warren Sack’s presentation “Plato Versus the Sophists: Is Mimesis a Good Thing? Plus: The Aesthetics of Information Visualization” from the panel “Is Realism and Reality the Goal of All Simulations, of Some? How Does The Suspension of Disbelief Operate in a Simulation? How Do Agency, Character, Fantasy Operate in Games and Animation?” at Simulations and Other Re-enactments: Modeling the Unseen (2004).


Panellists considered how simulation games operate as experiences and whether participants understand the game as another reality, challenging the notion that realism is the goal of simulations and analyzing how suspension of disbelief occurs in simulations. Robert F. Nideffer (excerpted in this chapter on page 246) critiqued the ways that game physics mirrored the tendency of game narratives towards realism. Panellists including game-design theorist Celia Pearce and technology developer Jim Parker51 attempted to separate simulation from realism.

The summit contrasted artistic and scientific aesthetics, asking, “If scientific realism is powerful because of its realism, then what is powerful about artistic re-creation? What simulation media are appropriate for specific phenomena?” 52 With sponsorship from Banff Mountain Culture and the BNMI, sociologist Kris Cohen53 and artist Ben Coode-Adams created a 34-hour endurance re-enactment entitled Is somebody coming to get me?, representing three simultaneous events: the first ascent to the top of Mount Everest, the memorable moments of the Queen’s coronation, and the drama of the deadly 1996 Mount Everest disaster:

The project features two wooden 1:240 scale models of the summit ridge of Mount Everest and a space representing Westminster Abbey. Every 15 minutes, the pair—who met at a Banff Centre conference last year—will move wooden blocks that represent the climbers and the royal guests through the spaces and read first-person accounts of the events. The artists will document their preparations and post the event live at: www.issomeonecomingtogetme.blogspot.com.

The assembly of the models will begin at about 6 p.m. on May 28 in the Centre’s Main Dining Room. The 1996 climbers will begin their ascent of the mountain at 11:30 p.m. The next morning, high on the South East Ridge, Hillary and Tenzing will leave their bivouac at 6:30 a.m., reaching the summit at 11:30 a.m. on May 29. As they ascend the final ridge, the crown will descend on the head of Elizabeth. The 1996 climbers will begin to summit at 1:07 p.m. from which point disaster unfolds until dawn on May 30—re-enacting a tragedy that resulted in the death of nine of the 32 climbers on the mountain at the time.54

The collaborators spoke of their method, which blends art and ethnography.55 To complement the discussion of artistic simulation, evening cultural programs were held.

51 Professor, Computer Science, University of Calgary
52 Sara Diamond, Preface, Program Agenda for Simulations and Other Re-enactments: Modeling the Unseen (2004).
53 Research Fellow, University of Surrey
54 Media release (The Banff Centre, May 21, 2003).
55 Artist Jennifer Steinkamp showed her work The Wreck of the Dumaru, which combines re-enactment and simulation and is a lyrical approach to architectural space, motion, and phenomenological perception.


Curator Sarah Cook presented the works of Jeremy Deller, Rod Dickinson, Iain Forsyth and Jane Pollard, Nina Pope and Karen Guthrie, and Marcus Coates. There were also film screenings and an evening of re-enactment performance videos and simulation works, which Paul Wong and I developed; these works included many of our projects, from our early practices in the ’70s and ’80s to our recent collaboration in and around the CodeZebra Habituation Cage performance.56

One of the highlights of Simulations and Other Re-enactments was the fact that participants collaborated with the BNMI Advanced Research Technology (A.R.T.) Labs team to build a tool that would simulate the event and the participants’ contributions. As a concept emerged from, or was used by, a group, it would be accompanied by an image of a blast of energy, and the word describing the concept would simultaneously show up for the first time or change characteristics (density and colour) and location or position in the language simulation. If a concept such as “gaming” maintained popularity, it moved to the centre of the simulation. Viewers could easily see competing ideas. The simulation ran in real time throughout much of the conference. Participants could influence the popularity, weight, and positioning of concepts by using and rating these key words. The simulation operated as a game and as a useful moderation and knowledge-generation tool. At the end of the conference, the participants and organizers could see what kind of spaces generated what kinds of concepts, where the most ideas were brainstormed, who was developing these ideas, and the frequency and intensity of these ideas’ use, and could evaluate the conference with respect to key concepts and their emotional weights. Scientist Maria Lantin and designer Greg Judelman later developed the prototype into Flower Garden, an interactive language game for conferences.
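The passage above describes the mechanic only in prose: concepts gain weight as participants use and rate them, fade if neglected, and drift toward the centre of the display as their share of attention grows. The short sketch below is a hypothetical illustration of that kind of weighting and layout rule, not the actual A.R.T. Labs prototype; the ConceptCloud class, the decay rate, and the radius formula are assumptions introduced here for clarity.

    # Hypothetical sketch of the concept-weighting mechanic described above.
    # Assumptions (not from the original tool): each mention or rating adds to a
    # concept's weight, weights decay between update steps, and a concept's
    # distance from the centre shrinks as its share of the total weight grows.

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        name: str
        weight: float = 0.0

    @dataclass
    class ConceptCloud:
        decay: float = 0.95                      # fraction of weight kept per step
        concepts: dict = field(default_factory=dict)

        def mention(self, name: str, rating: float = 1.0) -> None:
            """Register a use or rating of a concept, creating it on first use."""
            node = self.concepts.setdefault(name, Concept(name))
            node.weight += rating

        def step(self) -> None:
            """Apply decay so concepts fade unless they keep being used."""
            for node in self.concepts.values():
                node.weight *= self.decay

        def layout(self) -> dict:
            """Map each concept to a radius in [0, 1]; popular concepts sit near 0 (the centre)."""
            total = sum(n.weight for n in self.concepts.values()) or 1.0
            return {n.name: 1.0 - n.weight / total for n in self.concepts.values()}

    cloud = ConceptCloud()
    for word in ["gaming", "gaming", "simulation", "realism", "gaming"]:
        cloud.mention(word)
    cloud.step()
    print(cloud.layout())   # "gaming" ends up closest to the centre

Under these assumptions, a concept that keeps being mentioned holds its position near the centre, while neglected terms decay outward, which is the behaviour the participants reportedly played with during the conference.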

Four events brought the dialogue about immersion into the context of interaction between intelligent objects and physical space, exploring both wearable technology and location-based experiences. These were Outside/Inside: Boundary Crossings, a Wearable Design Workshop (2004), Inside/Outside: Responsive Environments and Ubiquitous Presence (2004), Bodies in Play: Shaping and Mapping Mobile Applications (2005), and Bodies in Motion: Memory, Personalization, Mobility and Design (2005). The latter events delved into issues of data management, visualization, and memory, and are discussed at length in the introduction to Chapter 1 (“The Material Known as Data”).

56 I locked artists (including Wong) and scientists (including Nina Wakeford) in a “laboratory” environment that referenced 20th-century psychology experiments.


The BNMI was able to gather many of the leading designers working in wearable computing from a fashion and technology perspective at Outside/Inside. The title of the workshop placed it within the BNMI’s ongoing discourses about ubiquity and about the boundaries between the virtual and material and the visible and invisible. This workshop considered issues of personal and collective identity and the aesthetics and physics of smart technologies, linking innovation in traditional technologies to new materials, smart fabrics and garments, and engineering techniques. The workshop contained practical training sessions, demonstrations, sketching, fast prototyping, and presentations that examined the context and theory behind garments and systems. The technologies explored were sensor systems, mobile devices, electroluminescence, inks and dyes, conductive textiles, and responsive materials (e.g., shape-memory metal).57 The workshop was framed around the following questions: In what contexts do men and women make use of wearable computers? What are creative approaches for making wearable content experiences, and what are the technical potentials and challenges? What are the markets?

Garments are physical objects that literally soak up the body essences of their wearers, evolving a physical memory over time. At the same time, garments deteriorate—signifying memory loss, in a sense. Garments also resonate with psychological memory for their wearers—not only in symbolic terms, but as mnemonic devices in which smell and tactility are as important as the visual. These elements can be amplified through the addition of sensor systems, responsiveness, and intelligence, as well as through garment changes that respond to body heat, perspiration, or breath. Garments and garment systems (in which garments respond to other garments, to architecture, or to devices) can constitute a deeply immersive context. The workshop and subsequent conference discussed the possibility that garments could represent memory—both personal and social—time, and emotion. Speakers on this topic included Joanna Berzowska, Jenny Tillotson (who works with systems of smell and memory),58 and Elise Co.59 Sha Xin Wei and Maja Kuzmanovic presented active sensate materials and switching systems. The workshop explored the mathematics and physics that inform systems of indirect response.

57 The workshop discussed the problem of scaling up fashion and technology prototypes. Many of the garments were extremely fragile and some were almost dangerous with open electrical circuitry.

58 Senior Research Fellow, Fashion Textile Design, Central Saint Martins College of Art & Design
59 Designer, Mintymonkey (Santa Monica)


Hence, design procedures that used topology to create unexpected responses to inputs were of interest to the collaborative team FoAM60 and to other designers. This was captured in the session “Data Aesthetics and Topographical Approaches to Design” with Tom Donaldson and designer Katherine Moriwaki, which is excerpted in this chapter on page 267.

Donaldson and Berzowska, both engineers, were joined by computer scientist Michael McCarthy,61 a collaborator with Blast Theory who helps to develop devices for their mobile games.62 The workshop searched for new approaches to textiles that drew not only from digitization but equally from production processes in traditional craft.

60 FoAM is a research group that “consists of a network of generalists, committed to supporting and developing a holistic culture. [Its] members include artists, facilitators, gardeners, cooks, technologists, designers, entrepreneurs, writers and scientists from all walks of life.… FoAM is committed to growing inclusive, resilient and abundant worlds. [FoAM does] this by providing a context and a structure to research, design and reflect on transdisciplinary creative practices. Our actions and their outcomes should nourish our cultural and natural environment, as well as sustain ourselves.” See http://fo.am/.

61 Research Assistant and PhD Candidate, Mobile & Wearable Computing Group, University of Bristol

62 A dialogue about communication systems, responsive environments, and fashion added designer and technologist Sabine Seymour (CEO and chief creative officer, Moondial Inc., and Design Fellow, Parsons School of Design).

Designer Victoria Lawton dresses a dancer in a CodeZebra interactive costume. Outside/Inside: Boundary Crossings, a Wearable Design Workshop, 2004. Courtesy of The Banff Centre.


A session entitled “Weaving and Knitting with New and Traditional Materials” delved into new kinds of textures and luminosities for textiles and garments. It included Barbara Layne’s garments, which deployed LED displays woven into materials; these garments stood in dramatic contrast to the products of Senegalese gallerist, textile designer, and furniture designer Aissa Dionne. Dionne had revolutionized weaving techniques in Senegal by redesigning traditional looms, and had also perfected new forms of indigo dye. Responding to the proliferation of electronic pollution from a developing-world perspective, Dionne offered insights on solar power technologies and underscored the importance of using these rather than alkaline batteries.

The workshop expanded into a summit in which participants presented their research and debated the future of fashion and technology—its trajectories and scalability—in a larger context of ubiquity and architecture that included the garment as a technology in space. During the event, the BNMI staged a fashion show that premiered interactive works produced during a CodeZebra fashion residency (led by Joanna Berzowska, Susan Jenkyn-Jones, and me, as well as by student designers from Central Saint Martins63 and other collaborators64). Garments and accessories by designers participating in the workshop were also displayed. The summit counterbalanced the optimism inherent in inventing applications of new technologies against a set of anxieties about ubiquity in the context of political and social concerns. It proposed a tension between surveillance and responsiveness, asking: “Is ubiquity frightening, desirable?” 65 The summit also explored the relationships between ecological and human environments, suggesting that the collapsing together of intelligent garment, body, and environment had disrupted cultural understandings of nature and had obscured the line between living and non-living. Speakers challenged the ecological viability of wearable technologies, with their dependencies on external power sources.

The following chapter excerpts discuss immersive experiences—interweaving VR, interactive installation, garment, mobile, and web experiences. Each excerpt dissolves the boundaries between the virtual and the real in different ways. Themes that Jean Gagnon elucidates in his essay “Realism at Play” will cycle back time and again in these excerpts: realism and play, instrumentalism and experimentation, affect and space, becoming, flux, and mobility.

63 These students were Victoria Lawton and Larissa Verdee.
64 These collaborators included Jeroen Keijer, Maria Lantin, Jhave David Johnston, Rich Lachman, and Artem Baguinski.
65 Sara Diamond, Preface, Agenda, Inside/Outside: Responsive Environments and Ubiquitous Presence (2004).


Realism at Play
JEAN GAGNON

The best I can do here is to use the words of others, since I did not attend any of the conferences. I only listened to audio files online and read a few of the transcripts that I received from Banff. From this remote position, I took notes of many words, connected them with others, and created clusters, nodes of thoughts. But do we ever do otherwise when writing? As a "thinking engine," writing, textuality, and intertextuality are the base of language games.

What fascinated me in listening to some of the presentations and in reading some of the transcripts was both the breadth of experiments in many contexts—in universities and research centers, or in art contexts—and the language used, which is that of techno-science and media arts. I was amazed by the formulations and concept formation involved in speaking about these unbound fields of computer simulation in art and science, of virtual realities, of responsive architecture. It is floating language, neither fixed nor precise—still a field of language inventions. The beauty of practitioners is that when they are eloquent communicators, it is worth observing their discourses as they unfold through words. There is an interesting language production here; there is an imaginative approach to naming things, actions, and interactions—from Bill Seaman's vuser (viewer/user), to Paul Woodrow and Alan Dunning's virtactual (virtual/actual), etc.


But while listening to and reading these presentations, two themes came to the forefront for me: realism and play. Actually, this was suggested to me by one of the panels, which asked, "Is realism and reality the goal of all simulations? How does the suspension of disbelief operate in a simulation? How do agency, character and fantasy operate in games and animation?"66 The first speaker on this panel was Robert F. Nideffer, who began by addressing the subject of "game engines, agency, and social structure." 67 Thus I associated the two notions—realism and play—that I want to highlight here. Realism, as it raises the question of the real, of reality in the context of simulations, virtual and responsive environments, augmented reality, and distributed networks. And play, as it refers to a fundamental and ontological positioning of the human subject in the world in relation to agency, fantasies, and the suspension of disbelief.

Although I cannot pretend here that I will resolve the issues raised by the question of realism—or that I even have the competence to do so—it is worth noticing that the Dictionnaire d'histoire et philosophie des sciences claims that realism is in opposition to instrumentalism.68 For the fields of research that concern many of the talks and conferences in this section, it should be of interest to explore realism at play through instruments, to look at how our instruments shape the world that is observed and described, and to examine how these instruments allow human hyper-perceptions of unseen forces, or hyper-action or -reaction in confronting matter, objects, space, and time, and the laws governing their behaviours. In describing their practices, speakers use terms like "models of behaviours," "modelling situations involving humans," "building simulators of environments," "sensuous spaces," and "seeking to affect the user or the interactor." In an exchange during a question period, Sidney Fels talked about "the possible location of emotion between two subjects." 69 Some speakers used a rather vague and possibly confused concept of "aesthetic data" 70 that does not allow us to comprehend what's at stake in these

66 The panel occurred at Simulations and Other Re-enactments, Modeling the Unseen (2004).

67 These remarks were part of Nideffer's presentation on the panel above.

68 Dominique Lecourt, ed., Dictionnaire d'histoire et philosophie des sciences (Paris: Presses Universitaires de France, 1999), 1032.

69 This exchange occurred during Smart, Sexy, Healthy (2001) after the panel "Imagining the User Experience: Intimate Technologies/Cultural Concepts." Fels presented a performance and discussion in collaboration with Sachiyo Takahashi. That panel asked the questions: "Does peer to peer engineering allow intimate experiences to be designed [as] close relationships? Does P2P enhance collaboration? Is open-source the correlative of P2P? Do these engineering models provide social and cultural models as well?" Fels addressed the same issues through the lens of technology design.

70 See Patrick Lichty’s Living Architectures Summit Paper (2000), available in the bNmi Archives.


questions but that points to a commonly accepted notion that art and aesthetics have their place in the exploration of new technologies.

The debate in science between realism and instrumentalism can be summarized as the following hermeneutic exegesis of causality: realism posits that observable facts provide data that indirectly confirm the existence of unobservable entities, and science's theories describe this unobservable reality. These theoretical entities postulated by the sciences are indispensable for their explanations and hence cannot be eliminated. The success of these theories—in particular, in proofs matching predictions—can only be explained by the fact that they are true. On the other hand, instrumentalism would counter-argue that observable facts don't allow the inference of the existence of unobservable entities, and that theories are only instruments for making predictions. Theoretical entities can be reduced and eliminated in favour of constructions built from observations; theories may be successful in their predictions without being true in any world. Thus summarized, this debate presents theories as permitting predictions and repeatability, as well as explanations. But theories can

Audio artist John Oswald and performer and keynote speaker Laurie Anderson in friendly huddle. Interactive Screen, 1996. Courtesy of The Banff Centre.


also be considered instruments—means by which science establishes parameters for comprehending the world.

The great difference between the two approaches is that realism finds recourse in idealistic entities, while instrumentalism postulates the impossibility of knowing such unobservable facts. The first one, if we push its logic to the end, could argue for the possibility of a general theory of everything, of all facts and actions—an integrated meta-theory of the world. The other postulates that you can only observe one world at a time, and that no superseding theory of the world is possible outside of the parameters provided by instruments or sets of instruments. In this last instance, this would mean that terms referring to a particular context or situation established or enhanced by instruments cannot be translated into another context or situation. Since Gaston Bachelard and Thomas Kuhn, we know that the meaning of the terms of scientific theories varies depending on theoretical contexts or paradigms, to use Kuhn's terminology. This is the relativistic and constructivist position whereby social, economic, and technical factors inform the construction of scientific theories. This position points to the constructed character of reality itself.

Here, it might be worth mentioning that a philosopher of science like Don Ihde would note that recent thinking around realism in science has increasingly focused on the instrumentation of science. His book Instrumental Realism: The Interface Between Philosophy of Science and Philosophy of Technology has a chapter devoted to "the embodiment of science in technologies." 71 In this work, Ihde shows how some philosophers of science have elaborated a notion of embodied intelligence, presenting human intelligence as embodied and therefore going against trends in the AI field. And even if wet computers one day exist,72 this does not mean that they could experience embodiment as humans do. From this point on (and here I can only touch this as an aside, as it would require too much space to explore further), Ihde demonstrates how central the question of scientific instrumentation (measuring and testing variables through using technology) is as embodiment of knowledge. Philosophically, this approach is a shift—from negative evaluations of embodiment in the Platonist and Cartesian traditions,

71 Don Ihde, Instrumental Realism: The Interface Between Philosophy of Science and Philosophy of Technology (Bloomington: Indiana University Press, 1991), 159.

72 By “wet computers,” the author is referring to computers that chemically simulate brain function and can act as smart agents inside the body. These devices are made of lipid-covered cells that handle chemical reactions similarly to neurons.


to positive ones in which the "body is seen to play a crucial role in all epistemology." 73 But this embodiment also means that technologies are "in use and in relation to users"—that they cannot be conceived "apart from their context of involvements and referentialities." 74 More importantly for my argument, Ihde affirms that while technologies and instruments can be seen as "extensions" of the body or as enhancements of human perception, there exists "a second group of relations [that] does not extend or mimic sensory-bodily capacities but, rather, linguistic and interpretive capacities." 75

He calls this a second order of hermeneutic relations. He adds that "in hermeneutic relations the technology is not experienced-through [as in the extended-body experience through technology] as experienced-with." 76 Thus, in this last instance, technology becomes a "quasi-other," just like a game is. By way of anticipating my conclusion, play shares with this conception of instruments or technologies the hermeneutic aspect.

Gaston Bachelard wrote that instruments are materialized theories.77 They are also cognitive forms and means of enhancement for the human body, which bring to light facts that are typically hidden from the normal human perceptual apparatus. If digital instruments are both gatherers of data and simulators of aspects of the world, is it not possible to conceive of artistic usages of these instruments as being somewhat à cheval (straddling) between realism and instrumentalism—in other words, where play begins? As determined equally by pre-existing theories of the functioning of the physical world or its rendering in visualization and in simulation, and by capacities of instruments to interact with humans and react or be in "dialogue" with humans, these virtual-reality technologies that produce digital spaces are based on encoded entities that form expected reality, and on assumptions about the human emotional world (i.e., technologies carry a priori assumptions that are hard-wired into them). How, then, can the affect of these environments procure aesthetic experiences? How do we define such experiences in the virtual-reality world when we are rather confronted with sensory immersions? I doubt that data can be simply called "aesthetic," or that it can produce aesthetic experiences without some form of symbolization that paves the way for hermeneutic interpretations. There must be a symbolic order for humans to comprehend anything. Aesthetic experience must be more than mere sensorial overload or hyper-surveillance.

73 Don Ihde, Instrumental Realism: The Interface Between Philosophy of Science and Philosophy of Technology (Bloomington: Indiana University Press, 1991), 73.

74 Ibid.

75 Ibid., 75.

76 Ibid.

77 Gaston Bachelard, Le nouvel esprit scientifique (Paris: Presses Universitaires de France, 1999), 47.


In his talk, Nideffer affirms that "a lot of human and social biases get embedded in the design of game engines." 78 These result in "all sorts of assumptions about what kind of functionalities are important to include" in the game engine.79 One presupposes a self-centred subject driven by power motives and self-mastery in and of the game world. Another route might be to be decentred, collective, and collaborative—which would create another frame of reference for behaviours in the game world. Nideffer adds that frames of reference can be used to realign previously fixed frames, in order to defy assumptions about humans and how they must or should react to a given environment. He concludes by pointing to the task of "context provision as opposed to contact provision." 80 Taking this line of thought further, Jacques Perrin (a collaborator of Mary Flanagan's) underlines the context of the simulation and ideas of believability and agency, suggesting that these are other parameters to consider.

Warren Sack remarks that you might program as "an AI symbolic programmer." 81 But pointing forcefully to ways of conceiving context-driven environments advanced by humans' differing preconceived assumptions, he adds that you might also program like a sophist or a psychoanalyst, or like a cyberneticist or an ethnomethodologist. Sack also asks: "Do histories add up to any History?" 82 In asking this question, he shows that he longs for a return to a History that would contradict Jean-François Lyotard's notion that everything has become data in the postmodern era. Sack tries to reformulate this desire for Historical anchoring through arguing for technologies that allow for "more elicitation than mimesis." 83 The problematic is how to conceive of interfaces that allow us to make sense out of the énormité of data.

The example of “developing interfaces for the blind people,” which Suzanne Weghorst brought forward, is significant.84 She argues that in developing interfaces for blind people, there is no way to perfect the transmission of vision of the world,

78 Robert F. Nideffer, "Separating Simulation from Realism—Biases and Potentials in the Physics and Narratives of Game Design" on the panel "Is Realism and Reality the Goal of All Simulations, of Some? How Does the Suspension of Disbelief Operate in a Simulation? How Do Agency, Character, Fantasy Operate in Games and Animation?" (lecture, Simulations and Other Re-Enactments: Modeling the Unseen, 2004).

79 Ibid.

80 Ibid.

81 Software Designer and Media Theorist

82 Ibid.

83 Ibid. See page 91 for an excerpt of this talk.

84 Suzanne Weghorst of the HIT Laboratory at the University of Washington spoke on the panel "Imagining the User Experience: Intimate Technologies/Cultural Concepts" at the Smart, Sexy, Healthy summit (2001).


as a blind subject assigns a different set of codes and meanings to "vision" (as this question may be raised in the context of "interface developments"). She mentions her "smell project" and concludes that even through the enhancement of smell, a new system of language would have to be developed in order to decipher a supposedly reliable syntax. Here, we are in the order of hermeneutic relations. One might ask why a blind person would need to assign meaning to "vision" in the first place. The person who raises such a question is certainly one who experiences seeing, or who was not blind at birth—someone for whom vision is a lived experience.

In a way, this example points to the biases of embodiment—something disabled people remind us of, sometimes painfully. It also points to two broad paradigms that organize our conception of the world and of technological media in it. One is based on vision and characterized by linearity. The other is based on what Marshall McLuhan called the aural/tactile, and is non-linear. More fundamentally: linear systems use the classic dichotomy between “free will and determinism,” where the first term must ideally dominate the second; non-linear systems use a binary and dynamic pairing, such as “organism and environment.” Even if our virtual spaces, affective spaces, and responsive environments are interactive, they may still produce worlds based on either paradigm. In programming environments or interactive

Led by Ron Wakkary, researchers explore body identity in relation to responsive garments. Am-I-Able Workshop, 2004. Courtesy of the bNmi.


systems, one paradigm posits a definite world of causes and effects that are encoded as predictable. These are based either on binary logic or on known laws of nature. The other one, while still based on logic in terms of internal machine programming, will put emphasis on variables such as "activity, relationality, on communal-collaborative values" and "will be situated." 85

At the intersection of the two paradigms lies the principal terrain of interface design—the debate between two conceptions of the human subject and of human agency in the world. While a lot of efforts are made to perfect technologies, game engines, algorithms, and sensors in order to model worlds of human behaviours through techno-instruments, it seems that little attention is paid to defining the human subject in its vulnerability, to use Sara Diamond’s term. This is an important task—especially if we want to talk about affectivity and emotions in computer-driven worlds. Is emotional and affective computing at all possible? Is it even desirable? Can aesthetic experiences be derived from encoded data? If we use a model that implies that humans are driven as much by exterior or unconscious factors as they are by free will and rationality, we enter a realm of complexity in which a language, a symbolic order and an imaginary navigation are possible.

In his presentation "Ambient Intelligent Environments, Theory and Practice," Ron Wakkary used the example of the library to illustrate this second paradigm, and Régis Debray, who uses the same example in his book Introduction à la médiologie, can help complete this idea. At the beginning of the text, Debray presents libraries as an excellent medium for transmission. Not only is a library a warehouse for memory, it is also the "matrix of a well-read community" that possesses rituals (exegesis, translation, compilation, etc.). The library's transmission is thus active and does not concern only the passive supply of books. There are also readers in our libraries who, in return, begin to write: "A library spawns writers just as a cinémathèque spawns filmmakers." 86 A library is a productive establishment originally created by an act of sovereignty, explains Debray. Libraries are always "royal, caliphate, pontifical," whether created by a congress, senate, president, or foundation. Within this institutional genealogy is the essential part of transmission—of which libraries are the medium but not the

85 Ron Wakkary of Simon Fraser University spoke at Inside/Outside: Responsive Environments and Ubiquitous Presence (2004) on the topic of “Ambient Intelligent Environments, Theory and Practice” on a panel entitled “Architectures: Public and Private Spaces: What Value Does Ubiquity Bring to Architecture? What is the Nano Dream Home of the Future? Where Does Design, Architecture and Body Meld?”

86 Régis Debray, Introduction à la médiologie (Paris: Presses Universitaires de France, 2000), 6–7.


driving force. While libraries are the "support for supports, the invisible operator of transmission," it is the community established around the library that "transforms the warehouse into a vector." The memory stored in books, which Debray calls external memory, acquires power only through "the internal memory of a group." If we don't want our new networks for transmitting information to be founded on a misunderstanding, we must realize that the physical transfer of information is often confused with the social transmission of knowledge. Debray stresses that we mustn't mix up "mnemonics and memorization"—in other words, we mustn't confuse the manipulation of information contained in a database with the assimilation of new knowledge.87

Many speakers who participated in bNmi events used the terms affective space and emotional computing. One of the problems with most discussions about affective reality is that they are imprecise and quite one-dimensional—few speakers dare to define it. Despite the many disciplines—psychology, psychoanalysis, history, philosophy, semiotics, hermeneutics, and so on—that can be called on to say something about "affective thinking," few speakers mention that this line of exploration has been discussed by many thinkers and artists. One such artist is Sergei Eisenstein; he took the contradictory concept of "affective thinking" from Lucien Lévy-Bruhl, who compared it to the behavioural patterns of primitive societies characterized by mythological and animist worldviews. It is as though speakers take refuge, for the most part, in physical or neurological descriptions of the real and of the human perceptual apparatus. They look for a model of the real and try to simulate the world, but they seem to have little concern for the articulation of the symbolic order—which is, properly speaking, the human world.

Becoming, flux, and mobility are also terms used to designate non-linear approaches. They belong to philosophical trends associated with materialism, Friedrich Nietzsche’s thinking, Henri Bergson’s redefinition of consciousness in relation to the concept of durée (duration), and reconsiderations of human time. Paul Woodrow and Alan Dunning explore the issue of how time relates to human consciousness and communication among humans. They say,

The Einstein’s Brain project is a virtual, augmented and mixed reality work predi-cated on neuroscientist Antonio Damasio’s idea of the disposal of the self. This is a continual moment by moment, construction of a self that Damasio describes as “an evanescent reference state, so continuously and constantly reconstructed that

87 Ibid.


the owner never knows that it is being remade unless something goes wrong with the remaking. Present continuously becomes past, and by the time we take stock of it we are in another present, consumed with planning for the future, which we do on stepping stones of the past. The present is never here." 88

This is exactly the aporia of human time that Paul Ricoeur has identified in his thorough philosophical analysis of told time (le temps raconté): time, narrative, fiction, and history intertwined. Woodrow and Dunning will also resort to a notion of narrative, having built "an auratic, data picture of the body" with a panoply of "biological sensors—heat, electroencephalography (EEG), carbon dioxide, aroma sniffers, sound, motion and so on." 89 Their notion of narrative is as vague and undefined as the notions of aesthetic data or aesthetic spaces proposed by other speakers. Yet Woodrow and Dunning also speak of textuality, thus entering a hermeneutic field.

The body is at the centre of all these experiments with digital technologies. The surrounding of the body with multisensing environments and the interfacing of the body with distributed networks signal a "shift in praxis from screen, to the palm, to the body, to the space." 90 In a sort of aesthetics of digital gadgetry, the artist wishes to work in the cracks of our culture and in interstitial parts of society. He wants to explore a culture of computational ubiquity and he supports moving art into the digital age in order to exploit a communicative mode of expression. Artists use modern technologies—technologies that are conceived to map out space and bodies, and designed to enhance surveillance of spaces and exercise better control over bodies and minds—in a counterfeit manner, as a détournement and as a bricolage, to borrow a term from Michel de Certeau. Artists are not necessarily cynics, but they are probably more inclined to be ironic, as irony allows for playful and distanced fascination.

Some of the speakers in the different panels—Woodrow and Dunning among them—called on the situationists and their urban dérives (drifts) in order to find a model of non-linear perusal of the world. The situationist dérive is a way to walk around (déambuler) the city, leaving the guidance to subjective drives and affectations that may lead your steps in one way or in many others. For the situationists,

88 Woodrow and Dunning provided a case study of The Einstein’s Brain Project at The Beauty of Collaboration: Methods, Manners and Aesthetics (2003) on the “Art Meets Science” panel.

89 Ibid.

90 Patrick Lichty, Living Architectures Summit Paper (2000), http://www.banffcentre.ca/bnmi/programs/archives/2000/living_architectures/reports/living_architectures_summit_paper_patrick_lichty.pdf.


this is also linked to a concept of psycho-geography that stipulates that different sectors of cities (such as Paris and Amsterdam) have different psychological effects, according to the particular architectural, social and economic functions within them. We find this definition of psycho-geography in a December 1958 issue of Internationale Situationniste: "Psycho-geography studies the laws and precise effects of a geographic milieu consciously designed or not, intervening in affective behaviour and, according to Asger Jorn, presenting itself as the science-fiction of urbanism." 91

The concept of dérive is probably rooted, at least in part, in the figure of the flâneur, first established as a figuration of modern man by Charles Baudelaire in the 19th century. In the 20th century, Walter Benjamin theorized the flâneur as a promenading city dweller witnessing the spectacle of the city. It is certainly possible and convenient to use the concept of dérive or of the flâneur to characterize the type of dwelling in cyberspace that digital environments allow. But these characterizations, I think, fall short of the most fundamental trait of digital simulation and spaces: that they must be looked at as play. By nature, they are games with interactive aspects, and they must lead us to replace the flâneur with the joueur, the player. Not surprisingly, many talks at the bNmi over the course of the decade that this book spans were about games and game design.

It so happens that the notions of play and game define the experience of art for the German philosopher Hans-Georg Gadamer. In his famous book Truth and Method, Gadamer argues that to play a game is to surrender to one of the most crucial capacities of humans—that of immersing oneself in the representation of the game.92 Despite the psychological effects that such immersion entails, it is the ontological aspects of the game and of play that Gadamer stresses. Like Maurice Blanchot's literary experience, the game or the artwork ultimately signifies its own being, the representation of being—c'est une affirmation de l'être (it is an affirmation of being). It is worth mentioning that Gadamer is the 20th-century rejuvenator of the old science of hermeneutics, modernizing and widening the concept, practice, and object of hermeneutics.

It would require too much space here to provide a complete description of Gadamer’s notions of game and play, but I want to stress a few points, as they may open up reflections in this area of virtual realities. Among the essential aspects of

91 Guy Debord, Internationale situationniste (Paris: Éditions Champ Libre, 1975), 13. Translation provided by the author.

92 Hans-Georg Gadamer, Vérité et méthode: Les grandes lignes d’une herméneutique philosophique (Paris: Seuil, 1976). Translation provided by the author.


the game for Gadamer, we find the fact that the game is a representation and, as such, it defines the playful nature of art: "To 'represent,'" Gadamer writes, "is always in essence to represent for someone. That this possibility be conceived as such constitutes the playful character of art." 93 In addition, the game is a "median process": it represents a mediated state of play between the players and between those who don't play, and between all of them and the game context, rules, and objectivity as a quasi-other—these can also be seen as symbolic or hermeneutic relations. Finally, one aspect of play that seems to be central for Gadamer is the "essential definition of the play": that the play is concerned with movement, a movement of va et vient (come and go)—it represents movement. Interestingly, Gadamer points to the fact that the signification of the word play (jeu, in French) can be found in a primitive sense to link play and dance. He adds, "antique theory of art, which bases all arts on the concept of mimesis, of imitation, manifestly has its origin in play which under the guise of dance is the representation of the divine." 94

Seen this way, the concept of play takes on a critical meaning in the context of our explorations of virtual reality and other responsive and interactive environments and technologies. The theatrical representational aspect is to be stressed. It can be claimed that dance, movement, and performance inform the field of practices covered by the different talks. Nevertheless, by injecting Gadamer's take on the notion of play into the discussions, one can properly and pertinently approach questions of aesthetics in relation to digital technologies and their relation to humans as play on realism, or as realism at play.

93 Ibid., 34.

94 Ibid., 39.

2

TRANSCRIPTS

MATT ADAMS & JU ROW FARR Intimate Technologies/Dangerous Zones, 2002
TONI DOVE Out of the Box, 1998
CHRISTOPHER SALTER, MAJA KUZMANOVIC, SHA XIN WEI & HARRY SMOAK Inside/Outside, 2004
CLARK DODSWORTH Smart, Sexy, Healthy, 2001
MYRON KRUEGER Out of the Box, 1998
TAMIKO THIEL Out of the Box, 1998
SHELDON BROWN Out of the Box, 1998
JIM MACKIE Out of the Box, 1998
SANG MAH Out of the Box, 1998
THECLA SCHIPHORST Out of the Box, 1998
JOHN OSWALD Out of the Box, 1998
ROBERT F. NIDEFFER Simulation and Other Re-enactments, 2004
BARBARA MONES-HATTAL Out of the Box, 1998
LINDA WALLACE Living Architectures, 2000
BILL SEAMAN Living Architectures, 2000 and Inside/Outside, 2004
RON WAKKARY Inside/Outside, 2004
TOM DONALDSON Outside/Inside, 2004
KATHERINE MORIWAKI Outside/Inside, 2004
CELIA PEARCE Simulation and Other Re-enactments, 2004
NIGEL GILBERT Simulation and Other Re-enactments, 2004
ELIZABETH BRUCH Simulation and Other Re-enactments, 2004
NINA WAKEFORD Simulation and Other Re-enactments, 2004
MARK H. HANSEN Simulation and Other Re-enactments, 2004
DAVID WISHART Simulation and Other Re-enactments, 2004
MARCOS NOVAK ET AL. Living Architectures, 2000
ANNE FAIRBROTHER Quintessence, 2002


MATT ADAMS & JU ROW FARR

The first excerpt, from artists involved in Blast Theory, considers immersion through the lens of performance. It looks at mechanisms that allow for the suspension of disbelief—mediated in part by technology and in part by face-to-face interactions—producing layers of complicity, identification, and engagement: those of actors, performers, media, and audiences. This is followed by an excerpt from Toni Dove, who continues Blast Theory's discussions of performance and theatricality by considering the structures of immersive experiences through concepts and case studies, and by considering the history of interaction and illusion. These dialogues bring ideas of choreographic and gestural movement into immersive experiences, technologies, and collaborations. Playfulness and improvisation are key components of interactor experience—whether performer or audience. This first section ends with a discussion led by three members of Sponge, from the panel "Topological Desire and Responsive Performance Environments," which was moderated by Magdalena Wesolkowska 95 and Sara Diamond.

BLAST THEORY: MATT ADAMS & JU ROW FARR Intimate Technologies/Dangerous Zones, 2002

Matt Adams & Ju Row Farr / Transcribed: Track 22 0:00–12:15, Track 24 in full, Track 26 in full / Talk: “Blast Theory—Case Study (Kidnap, Desert Rain… and Come—Play with Us!)” / Panel: Keynote / Event: Intimate Technologies/Dangerous Zones / Date: Thursday April 25, 2002, 8:15–10:00 p.m.

Blast Theory, a team of six that is based in Brighton, UK, is led by Matt Adams, Ju Row Farr, and Nick Tandavanitj. Blast Theory explores interactivity and the relationship between real and virtual space, with a particular focus on the social and political aspects of technology. Beginning in 1997, the group's work diversified from multimedia performances to immersive installation and interactive works, such as Kidnap and Desert Rain. Since 2000, Blast Theory has been exploring the convergence of online and mobile technologies in collaboration with the Mixed Reality Lab, University of Nottingham, to create groundbreaking new forms of performance and interactive art that mix audiences across the Internet, live performance, and digital broadcasting.96

95 During the summit, Magdalena Wesolkowska, an ethnographer, was pursuing PhD studies and was a lecturer and researcher at the Faculty of the Built Environment (Université de Montréal), where she was participating in two research groups: GRAdiENT (Advanced Research Group in Interactive and Experience Design and New Technologies) and CidER (Centre for Interior Design and Education Research). Her research concentrated on the larger issues surrounding social ecologies of enabling new technologies, design for wellbeing, and the integration of new media and interactive art practices into multi-user spatial designs, through an exploration of four P(ea)s: presence, participation, pleasure, and play. More specifically, she was researching the benefits of an inclusive design approach to responsive, multisensorial, playful, and creative environments.

96 The bNmi co-produced Uncle Roy is All Around Us with Blast Theory. Blast Theory won a Paul D. Fleck Fellowship award for their work in extended-reality gaming.


Matt Adams: In 1998, Blast Theory ran a lottery in England and Wales. For £10 registration fees, entrants would have the chance to be kidnapped for 48 hours. By paying extra, they could modify the kidnap according to their taste. These options ranged from traditional choices—such as being interrogated or kept naked—through to secretarial support, massage, and a hot bath. The most expensive options involved four fictional scenarios, such as the leftist revolutionary kidnapped by secret services, or the son and daughter of a millionaire. Ten entrants were picked at random and put under surveillance. By receiving a plain brown envelope containing an eight-by-ten photograph of themselves, they were alerted to the fact that they were on the shortlist. On the sixth of July, Russell Wood (a 19-year-old convenience-store assistant) and Debra Baird (a 26-year-old secretary, recently arrived from Australia) were abducted and taken via a circuitous route to a secret location. Throughout the two days, they were locked in a wooden room under surveillance by webcam. The camera allowed Internet users to pan, tilt, and zoom from anywhere in the world. A £500 prize was offered if either of the two kidnappees escaped, made it to a phone, and called a free phone number. Following a range of relatively discreet interventions by the kidnappers, the two were finally released at a press conference at the Institute of Contemporary Arts in London. I will show you a clip of a practice kidnap that we did on a Sunday Times journalist who gave us three pages. We thought it was a good trade.

Space is significant in Kidnap because the core event—the 48 hours of the actual kidnap—took place in a secret location, but it was observed live by an audience of thousands via the web-broadcast. As a performance, it was almost a non-event; via the media and the live web-broadcast, it had many properties of a live event. Its outcome was unknown. It was designed to be observed. Via e-mail, the watching audience could send messages to the kidnappers and influence events. Kidnap had audiences instead of a single audience. And these audiences included spectators, observers, visitors, consumers, and surfers. They ranged from the two winners watching their own incarceration, to the kidnappers watching the two kidnappees pretending to be incarcerated, to the 10 shortlisted entrants waiting to be kidnapped. Their whole world temporarily transformed into a criminal conspiracy.

Since 1998, these techniques have spread into global television collective shows like Big Brother, Survivor, Cast Away, The Mall, and dozens of others. Part of the premise is the idea that the frame of an artwork is a critical part of its construction. Any work placed in a theatre or an art gallery is immediately governed by the context. The context limits the social and political implications of the work. Artists like John Cage, Joseph Beuys, Chris Burden, and Marina Abramović push the boundaries of artistic production to their limit, yet operate within a clearly defined artistic context—usually the gallery. Kidnap blew open the boundaries between art and commerce, between the real and the fictional, between a happening, a media event and a publicity stunt, between a joke and a game. Even the two winners had no interest in the artistic elements of Kidnap. They merely felt that to pay £10 for the chance to be kidnapped was a fair transaction, a gamble worth taking.

We wanted to communicate some of the dilemmas and contradictions inherent in our attitudes to control. We became fascinated by the appeal of handing over control to someone else in a culture in which freedom and self-determination are benchmarks of successful living. From roller-coaster rides to horror films, from drug taking to sadomasochism, from militants and courts to political extremists, there seems to be abundant evidence of the attraction that people have to giving up control for a few hours or for the rest of their lives.

In England, around the same time as the Kidnap project, a trial took place, which convicted 16 men who practiced pseudo [consensual] sadomasochism. They did not have the right to decide what they could do to themselves in their own homes. Despite all these acts being consensual, four of the men went to prison. This was one of our inspirations.

Kidnapping is one of the most intimate of all crimes of violence, so much so that the victim frequently develops a profound emotional bond with their aggressor—hence the "Stockholm syndrome." Kidnappers are often looking for media coverage as a main outcome of their actions. Partly because of these reasons, it is also one of the favourite crimes for fictional treatment. The kidnappings of John McCarthy, Brian Keenan, and Terry Waite lasted over four years. The constant media interest in the unfolding of these events culminated in triumphal book releases from each of the three, and documentaries, dramatizations, and feature films of the stories. And as a true reflection of the media in the '90s, we have an article from a women's magazine in which one of the actors who portrayed one of the "kidnappees" gave an emotional interview about the horrors of pretending to be kidnapped. All three of these men were prepared to admit, however, the kidnapping gave them a chance for reflection, and hence insight into their own lives.

Kidnap allowed us to explode these complexities and to work the line between pretending to do something and actually doing something. Despite nine months of legal consultation throughout the weeks of the surveillance leading up to the kidnaps, the sound of the police siren was always a source of concern for us. We employed a psychologist to observe us and the two "kidnappees," and to monitor our mental health.

Desert Rain was the first collaboration we made with the University of Nottingham,