Special Issue: Rethinking Affordance
Rethinking Affordance Media Theory 3.1 (2019)
Edited by Ashley Scarlett & Martin Zeilinger
Introduction

Rethinking Affordance
Ashley Scarlett & Martin Zeilinger .......... 1

Articles

Once Again, the Doorknob: Affordance, Forgiveness, and Ambiguity in Human-Computer Interaction and Human-Robot Interaction
Olia Lialina .......... 49

(Digital) Media as Critical Pedagogy
Maximillian Alvarez .......... 73

Destituting the Interface: Beyond Affordance and Determination
Torsten Andreasen .......... 103

K.O. Götz's Kinetic Electronic Painting and the Imagined Affordances of Television
Aline Guillermet .......... 127

Reframing the Networked Capacities of Ubiquitous Media
Michael Marcinkowski .......... 157

Rethinking while Redoing: Tactical Affordances of Assistive Technologies in Photography by the Visually Impaired
Vendela Grundell .......... 185

The Affordances of Place: Digital Agency and the Lived Spaces of Information
Mark Nunes .......... 215

Forensic Aesthetics for Militarized Drone Strikes: Affordances for Whom, and for What Ends?
Özgün Eylül İşcen .......... 239

Take Back the Algorithms! A Media Theory of Commonistic Affordance
Shintaro Miyazaki .......... 269

The Art of Tokenization: Blockchain Affordances and the Invention of Future Milieus
Laura Lotti .......... 287
Special Issue: Rethinking Affordance
Rethinking Affordance
ASHLEY SCARLETT
Alberta University of the Arts, Canada
MARTIN ZEILINGER
Abertay University, UK
Media Theory
Vol. 3 | No. 1 | 01-48
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Fig. 1: Still image (section) from Jol Thomson’s Deep Time Machine Learning (2017-2019).
Courtesy of the artist.
Introduction
Jol Thomson’s Deep Time Machine Learning (2017-2019) is a single and multi-channel
video installation that captures, among other things, the playful investigation of a very
old mechanical device by way of a very new technological apparatus (Fig. 1). The role
of the old is filled by the first fully functional 4-stage hand-cranked calculator –
conceptualized and built by the German pastor, astronomer and inventor Phillipp-
Matthäus Hahn in the 1770s, the calculator is a wondrously intricate mechanical device
capable of addition, subtraction, multiplication, and division; Phillipp-Matthäus was
amongst the first to build a functional machine capable of all four basic arithmetical
operations, initiating the precision industry in Württemberg (Klemme & Kuehn, 2016).
The video installation captures Hahn’s device as it is scrutinized by an equally
wondrous next-generation six-axis robotic arm. Designed by Bosch GmbH engineers
specialising in ‘robot-human collaboration,’ the APAS robotic arm is a deceptively
simple-looking machine equipped with a wide range of advanced imaging optics, and
sheathed in a proximity-sensing “skin” that allows the robotic arm to operate at high
speeds in very close proximity to humans.1 In Thomson’s video, the APAS robot
subjects the mechanical calculator to a variety of different sensor-based and
computational ‘ways of seeing’ that range from regular video capture to laser-guided
3D-measurement and the recording of optical data that is invisible to the human eye.
This allows the device to observe the object before it through a perceptual apparatus
that far surpasses what human agents generally mean by ‘seeing.’ Thomson reveals this
to the viewer by pairing video documentation of the interactive environment in its
entirety with visualizations of the different forms of visual and non-visual data
captured during the project (Fig. 2). The work is punctuated with textual excerpts that
are drawn from a European Parliament report on Civil Law Rules on Robotics (2017)
and that call for a consideration of the ‘subjectivity,’ rights, and liabilities of intelligent
machines.2
Fig. 2: Still image from Jol Thomson’s Deep Time Machine Learning (2017-2019). Courtesy of
the artist.
Deep Time Machine Learning explores the speculative interfacing of the historical with
the futuristic, of the familiar with the unknown, and in doing so thematizes the
perceptual limits of what is humanly knowable about mathematics, computational
logic, machine vision, and interactions between technological devices. To this end, the
arrangement of old and new in Thomson's work, as well as the new modalities of
non-human perception that are foregrounded, press viewers to attend to the shifting
affordances of technological tools and intelligent systems, as well as of machine-human
and machine-machine interactions. From a human perspective, the interactions
depicted in the video still (Fig. 2), which rely on advanced stereoscopic vision and light
detection-based 3D measurement (essentially the same LiDAR machine vision technology used
in self-driving vehicles and other mobile, semi-autonomous devices), become a
meditation on the purposes and affordances of emerging technologies. While the
APAS arm has been praised primarily for its proximity- and touch-sensitive leather
‘skin,’ an innovation that engineers at Bosch imagine will significantly alter human-
robot relations as well as labour and industry practices (Thomson, 2017-2019), it also
triangulates a diverse range of data that enables it to navigate its surroundings in a
manner that both surpasses and marks the uncanny limits of human capability.
Thomson’s visualization of this information exposes this discrepancy and points to
alternative and augmentative means through which one might perceive, relate to, and
make use of the environment. Like the APAS robotic arm, the old mechanical
calculator was also once a cutting-edge, futuristic technology, equipped with powers
that allowed it to undertake calculations beyond normal human cognitive ability. The
mechanical calculator ‘divines’ complex mathematical truths; the robotic arm ‘feels’ its
human co-workers – both technologies produce and deploy ‘invisible’ realities that
otherwise are not immediately available to human agents.
Thomson’s work was produced as part of the “Wimmel Research-Fellowship,”
situated on the main campus of Robert Bosch GmbH's engineering arm in
Southern Germany near Stuttgart, and co-organized with nearby Akademie Schloss
Solitude, a public foundation that hosts artist and research residencies. Such
connections between experimental art and technology research centres continue a long
tradition, which includes illustrious examples such as residencies hosted at Bell Labs,
Xerox PARC, or, more recently, the Pier 9 residency program at Autodesk. Like the
Wimmelforschung residency, these predecessors sought to leverage experimentation
in art-making to help push the functional, commercial and discursive parameters of
existing and emerging technologies. Artists, in exchange for access to new tools and
technical expertise, are regularly invited to collaborate across disciplinary or medium-
specific boundaries in an effort to elicit the imagination, identification and
communication of new and unforeseen affordances (Noll, 2016; Scarlett, 2018). This
applies particularly to emergent technologies, where conceptual frameworks or
contexts for practical implementations may not yet have been determined or rendered
habitual.
Deep Time Machine Learning captures the intersecting practices and pressures that
initiated this special issue. Thomson’s work explores the horizons of possibility
associated with the uses and functions that a given technology may afford. Not only
does it employ devices that have stood at the forefront of technological innovation,
expanding the potential for human action and interaction with the environment, but it
also captures a generational shift in how and where and to what extent computational
machines are interfacing with and making use of their surroundings. Critical in this
case is the sense that these operations unfold largely beyond the limits of human
perception and would therefore have remained invisible had Thomson not provided a
visual representation of them, at least where the APAS robotic arm is concerned.
Furthermore, Deep Time Machine Learning was developed in a corporate research
environment that was designed to facilitate a reimagining of technological potential
and use by pushing participants (artists, engineers, researchers, etc.) beyond familiar
frames of reference, and into challenging new constellations of cross-disciplinary
collaboration. At stake in each of these instances appears to be a renegotiation and
reconceptualization of ‘affordance.’
‘Affordance,’ which we introduce and survey in greater detail below, features centrally
across a growing number of scholarly disciplines, including: psychology; design;
human-computer interaction (HCI); communication studies; media studies;
organizational studies; and education. As is widely acknowledged in these fields, the
term was coined by cognitive and ecological psychologist, J.J. Gibson. In transforming
the verb ‘to afford’ into a noun, Gibson sought to account for the fundamental means
through which agents (human or otherwise) navigate, conceptualize and more
generally relate to their environment. “The affordances of the environment,” he
explained, "are what it offers the animal, what it provides or furnishes, either for good
or ill” (Gibson 2015: 119). For Gibson, then, agents’ perception and implementation
of what the environment offers, provides or furnishes – ultimately what behaviours it
enables – is the primary way in which they make sense of and become enmeshed with
their surroundings. Drawing upon Gibson’s work, if only for inspiration, researchers
within the domains of design (e.g., Norman, 1988) and HCI (e.g., Haugeland, 1993;
Smith, 1996) were quick to amend, apply and popularize the term. Most prominent
amongst these scholars and practitioners was Don Norman, who argued that a
designer’s task was to make the intended uses of an object or environment – treated
here as nearly synonymous with ‘affordances’ – readily apparent to and easily enacted
by an imagined user (Stone et al., 2005). As we discuss further below, in this
configuration of affordance, Gibson’s focus on the concept’s relationality gave way to
the assumption that the concept circumscribes clearly delimited uses that can be
determined and rendered explicit by the designer in order to direct (and constrain) use
and prescribe action. In the work of Norman and others, this has increasingly included
an application of affordance theories to digital artefacts and environments.
The APAS robotic arm and hand-cranked calculator that feature centrally in Deep Time
Machine Learning offer human agents a series of affordances; for example, when
embedded in their ‘natural’ environs, both devices have enabled humans to
interactively overcome particular limits where labour and reliable calculability are
concerned. Both devices also stand as exemplars of humans’ drive to expand their field
of action through the innovation and design of new technologies and, by extension,
novel affordances. This being said, human agents are not featured in Thomson’s multi-
channel video. Instead, the APAS robotic arm surveys the machine; its
multidimensional and triangulated perception of the interaction drives (and therefore
enables) the arm’s subsequent behaviours. What becomes apparent is not only that the
robotic agent has the capacity to autonomously identify and enact environmental
affordances, something that would be required within the industrial context in which
it is intended to operate, but also that these actions mark the culmination of cascading
operations that unfold below the perceptible surfaces of mediation. Thomson’s work
suggests that the range of affordances at play here might not simply be those that
humans can perceive in the surrounding environment, but also those that exist and are
enacted within the algorithmic underbelly of digital computation.
The allusion here marks a significant departure from canonical accounts of affordance.
Despite being common parlance within contemporary design and HCI discourse
(Nagy & Neff, 2015), the original theoretical apparatus out of which the concept of
affordance emerged has yet to undergo a critical re-examination in light of the term’s
'digitization'; scholars concern themselves increasingly with the identification of
affordances associated with particular digital tools and artefacts, but rarely is the
affordance concept revisited in order to better account for the complex computational
and algorithmic grounds through which these objects of analysis are constituted.
Consequently, ‘affordance,’ rather than contending explicitly with the computational
or algorithmic, continues to operate within a conceptual framework of objects and
environments that are defined by their physicality, phenomenological accessibility, and
liveliness (e.g., Wells, 2002; Morineau et al., 2009). Many of the defining characteristics
of affordance, as it had been previously conceptualized, therefore conflict with what
have been described as the evasive realities of digital media (Lovink, 2014; Parisi, 2013;
Zielinski, 2008). Furthermore, a grounding in the physical and phenomenologically
apparent also overlooks the emerging sense that through the sensorial collection,
aggregation and enactment of data, algorithmic systems are arguably learning to
recognize and respond to virtual affordances that lie outside of the realm of human
consciousness (Gabrys, 2016; Hansen, 2015; Massumi, 2015; see also Nunes, this
issue). Developments like these, addressed in detail in the latter sections of this
introduction, call into question the extent to which the concept of affordance in its
original formulation is still useful, relevant, and meaningful, particularly in theoretical
analyses of and practical engagement with the digital.
Our aim with this special issue is, therefore, to undertake a critical and creative re-
examination of ‘affordance’ for the digital age. This means to explore the critical,
historical and contemporary valences of the concept in a manner that productively
engages with the dynamic malleability of the digital, highlighting the critical potentials
that this dynamism embodies. The contributions collected here pursue this goal by
proceeding along three vectors: historical (e.g., renegotiating the continuities and
tensions between different perspectives on the affordance concept), theoretical (i.e.,
theorizing the uses and meanings of the concept in critical dialogue between digitally-
oriented practitioners, researchers, and other stakeholders), and artistic (i.e., exploring
how media artists have engaged with, reimagined and conceptualized technological
affordances).
The remainder of this introduction will build out a conceptual framework for the
contributions to this issue. We will begin by offering comprehensive overviews of the
two earliest, and most prominent critical perspectives on ‘affordance,’ J.J. Gibson and
Don Norman. After reviewing various alignments and contrasts in their positions, as
well as their significance for a wide range of fields of research, the subsequent sections
transition from the original context of affordance theory – the relationship between
objects, environments, and their users – to consider recent efforts to identify and begin
accounting for the specific affordances attributed to particular media technologies.
Central to this discussion will be a consideration of the ‘novel’ affordances made
possible by contemporary communication technologies, as well as a realization that
these affordances emerge from and unfold in concert with auxiliary layers of
affordance that correspond with the material grounds and digital operations of
computational systems. This range of affordances is actualized despite being
technically imperceptible, marking both a departure from canonical accounts of
affordance as well as a call to ‘rethink’ the affordance concept in response to the
particularities of its computational and algorithmic realization. The final sections
identify and unpack three areas of analysis through which we might begin to answer
this call. First, building on an account of the material and formal grounds of
computation, we begin to parse and conceptualize the imperceptible configuration and
operations of computational affordances. Second, we undertake a practical and
theoretical analysis of recent efforts to instrumentalize and automate the concept and
execution of affordance through algorithmic means. Finally, we move to a sustained
discussion of how the concept of affordance figures in and resonates with
contemporary digital art. The essay will conclude with a brief introduction to each of
the contributions to this special issue.
Framing affordance: Gibson and Norman
The cognitive and ecological psychologist J.J. Gibson first coined the concept of
affordance in his book The Senses Considered as Perceptual Systems (1966). Gibson
continued to refine and expand the concept in “Affordances and Behaviour” (1975),
and finally offered his most sustained discussion of the concept in The Ecological
Approach to Visual Perception (1979).3 Focusing his discussion on interactions between
live agents (both humans and animals) and their environments, Gibson used the term
affordance to explore the actionable properties of environments and, by extension, of
physical objects. Doorknobs afford the opening of doors; steps afford climbing or
descending between floors; cliffs afford falling off. With a view to Don Norman’s
reconceptualization of affordance (see below), it is noteworthy that, for Gibson,
objects as such only represent a subset of the more general environments with which
humans can interact. The affordances of an object or an environment are thus assumed
to describe the phenomenological qualities it embodies, by projecting potential uses,
delimiting possible actions, and signalling possible functions for the object or
environment in question.
In Gibson’s original conception, affordance is a decidedly environmental (or
ecological) phenomenon. On the one hand, an affordance may exist independently of
whether or not an agent who could act upon it actually recognizes it; at the same time,
any affordance is only actualized when it is acted upon. Additionally, one and the same
object (or environment) can embody different context-specific affordances (a shoe,
for example, could protect a foot while walking, but it can also be used to hammer in
a nail, or open a bottle of wine). Affordances thus exist independently of human
intention but nevertheless cannot materialize without it. These characteristics have
also been discussed as “relational” (e.g., Hutchby, 2001) and “interactional” (e.g., Nagy
& Neff, 2015), at times with reference to the “situativity” of human-environment
interactions (see Greeno, 1994). As Gibson (1979) puts it, “An affordance cuts across
the dichotomy of subjective-objective and helps us to understand its inadequacy. It is
equally a fact of the environment and a fact of behaviour. It is both physical and
psychical, yet, neither. An affordance points both ways, to the environment and to the
observer” (129).
According to Gibson, any interaction between human agents and their environment
could be described as geared towards the manipulation of this environment, for the
purpose of shaping affordances that are more amenable to the intended uses. The oft-
invoked example of the teapot emphasizes this: the functions and uses of this object
are generally assumed to be embodied in the object’s physical characteristics – its
handle is the only spot that allows a human user to comfortably hold the teapot
without burning their fingers; the wide opening on top lends itself ideally for the action
of filling the object with liquid, while the narrow neck and mouth are ideal for
controlled pouring-out of the liquid. Often, such potential uses (but also their limits!)
may be graspable even to someone who hasn’t previously seen or used the object in
question. Nevertheless, a teapot’s affordances materialize only in and through the
actual interaction. As Gibson notes, an object’s affordances may be grounded within
its material form, but are, ultimately, realized through processes of identification and
purposeful implementation through an agent. As such, affordance is an inherently
relational concept which, for Gibson, accounts for the “middle ground wherein the
perceiver and the perceived actually meet" (Letiche & Lissack, 2009: 62); i.e., where uses
and functions are actualized through an interaction between the user and the
object/environment in question. Importantly, this focus on relationality also indicates
that affordances should not generally be considered as fixed and stable; they are, as
Gibson states, “relative to” and thus also “unique for” the agent in question (see
Gibson, 1979, Chapter 8). Because the agent must recognize an affordance in order to
realize it, Gibson’s affordance theory is intimately tied to theories of learning and
socialization – human agents learn to recognize uses, functions, and limits of objects
and environments, and consequently also strive to alter them as needed. In an
important contrast to Norman’s perspective, Gibson thus considers an environment’s
affordances to exist independently from its potential users’ ability to recognize them.
Over the last four decades, the meanings associated with the term affordance have
begun to significantly diverge from the original definitions Gibson offered. The most
noteworthy and dominant departures from the Gibsonian affordance concept are
represented by the work of cognitive scientist and design theorist Don Norman, whose
perspective is now very widely adopted in the field of design (from product design to
user experience and interface design), frequently to the point of eclipsing Gibson’s
perspective. While Norman built on Gibson’s foundational work, in part he also
negates or contradicts it. Primarily working in design contexts, Norman has developed
a perspective which, in comparison to Gibson’s, is much less focused on the
multifarious interactions between agent and environment/object (as well as the
dynamic nature of these interactions); instead, Norman foregrounds specific uses and
functions and the assumption that they can be built into an object ‘by design.’ As
McGrenere and Ho (2000) have put it, for Norman “an affordance is the design aspect
of an object which suggests how the object should be used” (n.p.). Related to this, a
key aspect underlying Norman’s work on affordance is the idea of the “conceptual
model” (e.g., Norman, 1999), which he conceives of as explanations that delineate for
users how something works, such that they are able to construct mental, interactive
models of it (2013: 25-26); design, in other words, is supposed to project conceptual
models based on which users can perceive an object’s or environment’s affordances.
The departure from Gibson’s model is significant; as Martin Oliver (2005) has
observed, “Indeed, so little of Gibson’s intended sense of the word remains that the
appropriateness of its use must be questioned” (407). Where Gibson’s perspective was
meant to open up our understanding of the relational ontology of objects and
environments, of their uses and purposes, and the limits thereof in relation to human
agents, Norman’s view narrows this ever-widening and potentially open-ended
horizon: “Affordances provide strong clues to the operations of things. Plates are for
pushing. Knobs are for turning. ... When affordances are taken advantage of, the user
knows what to do just by looking: no picture, label or instruction is required”
(Norman, 1999: 9).
Norman thus replaces Gibson's interactional, relational focus with a user-centred focus:
for him, an affordance is something with which a designer imbues an object in order
to guide and channel (some might say to control and limit) that which a user perceives
as the object’s uses and functions and, consequently, the uses which a user can imagine
to be possible. The important keyword for Norman is ‘to perceive.’ In his focus on
users (and, by extension, on by-design usability), Norman foregrounds “perceived
affordances” above all else, a designation by which he means properties of an object
that are actually perceived by a user and which can therefore be acted upon. This is in
clear contradistinction to Gibson, for whom, as noted, ‘affordance’ referred to an
interactional possibility that exists independent of an actor’s ability to perceive this
possibility. Norman here differentiates between his ‘perceived affordances’ and what
he calls ‘real affordances,’ which he describes as affordances that may exist, but which
a user cannot act upon if they cannot be perceived. This distinction is so important for
Norman that he states, “all affordances are ‘perceived affordances’” (Norman, 1999:
39).
Widely adopted in design and engineering contexts, Norman’s view now frequently
dominates discussion about and understanding of the concept of affordance, to the
point where elaboration on Norman’s perspective often takes precedence over
Gibson’s originary discussion of the term (see, for example, The Glossary of Human
Computer Interaction). While Norman adopts from Gibson the perspective that
affordances are embodied in objects and thus circumscribe an object’s
phenomenological characteristics, Norman perceives these affordances to be far
more fixed than Gibson does. Where Gibson foregrounds how affordances emerge –
necessarily and inevitably – in relational constellations of environment and agent,
Norman proposes that affordances can be designed and subsist, in an object as
abstractable as a button, independently from environments (and technologies) that
might mediate between object and user. In other words, Norman’s object-centric and
user-focused approach insists that affordances are designed, and that, if they are well-
designed, they are more or less fixed.
As an indication of the significance of this departure from Gibson’s thinking on the
subject, it may be useful to highlight that Norman’s most well-known book on the
topic was initially published as The Psychology of Everyday Things (1988), but later re-
released as The Design of Everyday Things. The changed title is programmatic for
Norman's perspective: it marks a shift away from focusing on the way in which
affordances emerge necessarily in the interactional link between object/environment,
on the one hand, and user, on the other, and towards a focus on the object itself,
which, Norman argues, projects a fixed set and quality of affordances that remain
stable across interactional configurations and events. Where Gibson would certainly
have discounted such a view, for Norman the ideal goal of design is to lock affordances
into place, aiming for them to become ‘invisible’ and ‘one’ with the object to which
they become attached. Norman’s account of the intersection between ‘affordance’ and
‘design’ renders the ideological underpinnings of affordance explicit. While Gibson’s
relational account of affordance unfolds at the ideological intersection between mind
and matter, Norman’s account hinges on the designer’s instrumentalization of the
affordance concept and, therefore, on ideologically-laden interventions that tend to
close down, rather than broaden, the interactional horizon of an object or
environment.
A call to ‘digitize’ the affordance concept
Norman’s theorization of affordance spurred its popularization. Not only did he align
the concept with the field of design, introducing and reformulating it in a manner that
appealed to scholars and practitioners working in a number of corresponding
subdisciplines (see, for example: Gaver, 1991; Haugeland, 1993; Flach and
Dominguez, 1995; Smith, 1996), but his work also identified a critical set of
connections between the concept of affordance and the burgeoning terms of
computational media. While the concept of affordance in its initial formulations
accounted for physically robust artefacts and phenomena, through Norman it was
increasingly applied to digital artefacts and environments, particularly within the fields
of interaction design (e.g., Hartson, 2003), software development (Pressman and
Maxim, 2014), information science and information architecture (e.g, Bernhard et al.,
2013; Pozzi et al., 2014), interface design (Drucker, 2014; Ruecker et al., 2011), and
user experience design (e.g., Pucillo and Cascini, 2014). Within the corresponding
context of Communication and Media Studies, the concept of affordance was adopted
as a means of making sense of the operational potential of devices and platforms (e.g.,
Gillespie, 2010; Neff et al., 2012), as well as in an effort to identify the emergent terms
through which media might, indeed, be deemed ‘new’ (see Manovich, 2001; Lister,
2009).
Yet, despite the cross-disciplinary adoption of the ‘affordance’ concept, as noted in the
introduction, very few scholars have sought to significantly update Gibson’s or
Norman’s theoretical frameworks in order to more thoroughly account for the realities
of digital, rather than physical, systems; not only does much of the contemporary
research on affordances involve a straightforward review and adoption of both
scholars' theoretical frameworks, a tendency that Evans et al. (2017) have associated
with a lack of ‘theory-building’ where contemporary accounts of ‘affordance’ are
concerned (36), but there has yet to be a critical examination of the increasingly
prominent intersection between ‘affordance’ and ‘algorithm’ (Ettlinger, 2018).4 This is
a significant oversight. As Nancy Ettlinger (2018) has articulated, “affordances as a
field of possibilities are considerably more complex in algorithmic life than in a
Gibsonian environment-actor relation…” (3). One of the reasons for this, she
explains, is that digitally mediated environments encompass an expansive and diverse
assemblage of “animate and inanimate actors in addition to public and private-sector
actors connected to them” (ibid). While the same might be said for any environment-
actor relation that is embedded within a larger assemblage of actors, objects, and
environments, Ettlinger reminds her reader that the algorithmic field of possibilities is
comprised of increasingly complex constellations of intersecting feedback loops,
driven in large part by the solicitation, aggregation, operationalization, and calculated
implementation of data (Nunes, this issue, addresses similar issues). Within this
context, human agents are increasingly encountering ‘smart’ and networked
technologies, whose potential and real affordances stretch beyond their interactive
surfaces, into the imperceptible yet affective undercurrents of their coded operations,
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
networked infrastructures, and socio-cultural apparatuses. Not only does this point to
the ‘nested’ and ‘cascading’ layers that comprise computational technologies, but it also
calls attention to the different modes and means of affordance that these technologies
have the capacity to enact. It is primarily in response to this complex situation that
Ettlinger (for whom this scenario is a feature of ‘algorithmic life’ more broadly)
determines the inadequacy of canonical conceptualizations of affordance.
Furthermore, she argues that the algorithm, as both a process and object, is a new
phenomenon that requires a phenomenon-specific theorization of affordance.
Despite the significant theoretical gap that Ettlinger identifies, numerous scholars and
practitioners have indeed begun to develop technologically oriented accounts of
affordance, with an increasing interest in contending with the forensic grounds,
algorithmic infrastructures, and digital artefacts that comprise contemporary
computation (see for example: Best, 2009; McVeigh-Schlutz & Baym, 2015; Davis &
Chouinard, 2017; Hurley, 2019). While much research has considered the affordances
that emerge from and correspond to the use of specific technologies (see for example:
Graves, 2007; Sutcliffe et al., 2011; Moloney et al., 2018), other scholars have worked
to reorient the theoretical parameters of affordance to begin grasping the particularities
of computational processing, if only in metaphorical terms (see for example: Leonardi,
2011; Nagy & Neff, 2015). In the case of the former, it is often the physical and
phenomenologically apparent surfaces of technology (and responding practices) that
are considered; in the case of the latter, attempts are made to account for the hidden,
or ‘imagined,’ dimensions of algorithmic mediation and digital artefacts. Despite an
interest in mapping the specific grounds of computational affordances, very few
accounts move from a treatment of ‘the digital’ as a sweeping cultural phenomenon to
an examination of the specific technical and algorithmic means through which the
digital operates and is materialized (boyd, 2010). Even efforts by some to grasp at the
imperceptible dimensions of computational processing forego the specific grounds of
the digital in favour of allusive language.
Following these trajectories of research, we will now turn to a selective review of
scholarly responses to the technological and increasingly digital dimensions of
affordance. In addition to providing an overview of key texts, the discussion in the
SCARLETT & ZEILINGER | Rethinking Affordance
sections below aims to contribute to scholarly accounts of the invisible and
imperceptible affordances associated with digital systems. Our objective is to begin
mapping out a theoretical scaffolding capable of accounting for the materially complex
and nested grounds of digital affordances, as well as the increasing instrumentalization,
implementation, identification and actualization of affordances through algorithmic
means.
Identifying the affordances of contemporary media
technologies
From cognitive psychology and design to social media studies, education and law, the
affordance concept has been taken up, with increasing intensity it would seem, across
an expanding array of contemporary disciplines as scholars work to make sense of the
effects that 21st century media technologies are having within their respective fields of
study (see for example: Alvarez, this issue; Diver, 2018; Costa, 2018; Carah & Angus,
2018; Heemsbergen, 2019; Hurley, 2019). Within these contexts, analysis of
‘affordances’ is often advanced as a ‘third way’ (Hutchby, 2001: 444) of approaching
media criticism; an affordance-based approach stands between and draws together
discourses of technological determinism and ‘enframement,’ on the one hand (Finn,
2017: 118; Mitchell & Hansen, 2010), and social constructivism, on the other. As
McVeigh-Schultz & Baym (2015) explain, analyses that depart from a consideration of
‘affordance’ typically address “how people make emergent meaning through
interactions with technology, while also accounting for the ways that material qualities
of those technologies constrain or enable particular practices” (1). In this vein, they
recognize that while media technologies are comprised of “a set of practices that
cannot be defined a priori, and [that] are not predetermined outside of their situated
everyday actions and habits of usage” (Costa, 2018: 3642), their material and structural
constitution “request, demand, allow, encourage, discourage, and refuse” (Davis &
Chouinard, 2017: 242) particular kinds of use.
The practices that emerge at the intersection of these differing pressures have been
conceptualized as both broad indicators of the communicative, social, and political
affordances of contemporary media technologies, as well as medium-specific
affordances (see for example: Heemsbergen, 2019; Schrock, 2015; Sutcliffe et al.,
2011). Andrew Schrock (2015), for example, has reviewed over a decade of research
into the effects of mobile media on communication practices, and argues that what is
central, but often overlooked, within this scholarship is an understanding of mobile
media’s ‘communicative affordances’ (1234). According to Schrock, the term
‘communicative affordances’ comprises an overarching class of affordances and
describes instances in which the relational intersection between “subjective perception
of [technological] utility and objective qualities of a technology … results in altered
communication and subsequent patterns of behaviour” (1239). Under the banner of
‘communicative affordances’ falls a collection of medium-specific affordances as well,
each of which affects configurations and practices of communication. For example,
reflecting on mobile media, Schrock identifies device portability, user availability and
‘locatability,’ as well as the convergence of an assortment of mediums and platforms
(1235) as key affordances that have significantly altered communication practices.
Similarly, Treem & Leonardi (2012) and Evans et al. (2017) chart a series of
communicative affordances that are specific to social media technologies (such as
blogs, wikis, and social networking sites), focusing on visibility, editability, persistence
and association (Treem & Leonardi, 2012; Evans et al., 2017) as well as anonymity
(Evans et al., 2017: 41).
Of the prominent recent accounts of affordance, boyd (2010) provides one of the very
few (if not also the most robust) considerations of how the digital and algorithmic
grounds of contemporary media technologies contribute to the affordances that they
help to realize. In “Social Network Sites as Networked Publics: Affordances,
Dynamics, and Implications,” boyd explores how the technologies that constitute and
structure so-called ‘networked publics’5 afford particular kinds of social engagement.
Rather than suggesting that social behaviours are determined through the technological
media that enable them, boyd turns to affordance theory to recognize how the
technologies that comprise ‘networked publics’ are ‘actualized’ through the very
practices that they enable and shape. While boyd is primarily concerned with parsing
the social and communicative affordances of networked publics, she begins by
differentiating between the material grounds of physical and digital technologies. This
enables her, by extension, to differentiate between the particularities of physical and
digital affordances. Perhaps obviously, boyd accomplishes this by explaining that the
physical is materially delimited by the atom while the digital is comprised of bits. “The
underlying properties of bits and atoms,” she explains, “fundamentally distinguish
these two types of environments, define what types of interactions are possible, and
shape how people engage in these spaces” (41). Unlike atoms, bits are easily
“duplicated, compressed, and transmitted through wires” (ibid.). They are also “easier
to store, distribute, and search than atoms” (46). The affordances of networked publics
are, by extension, “shaped by the properties of bits, the connections between bits, and
the way that bits and networks link people in new ways” (41).
boyd maps her claims concerning the properties of bits across a close examination of
the defining features and practices that comprise social network sites (such as profiles,
friends lists, and tools for public communication). She identifies four affordances that
“emerge out of the properties of bits,” and in turn “play a significant role in
configuring networked publics” (46). These affordances include persistence,
replicability, scalability, and searchability (46), each of which introduce[s] new
dynamics that participants in ‘networked publics’ must contend with (48). According
to boyd, these dynamics include the emergence of questions concerning visibility and
anonymity; a collapse of distinctions between the public and private sphere; and a
decontextualization of social and communicative exchanges. In this vein, boyd’s work
offers both a consideration of how the affordances of networked publics are
transforming the practices that comprise communication and everyday life as well as
an account of how the material specificity of bits gives rise to a series of affordances
that are fundamentally different from those associated with physical objects and
environments.
From the perceptible to the imperceptible: the grounds of
computational affordances
boyd’s work identifies and begins to account for the ways in which the underlying
building blocks of digital systems affect their corresponding affordances. This being
said, she does not consider the affordances of the algorithmic means through which
they work and are put to use. Her project aims instead to assess the affordances that
are materialized through the use of social media platforms and in relation to
‘networked publics,’ connecting it more thoroughly to work that identifies the specific
affordances of contemporary media technologies. This being said, boyd’s turn to bits
as the definitive grounds of digital affordances identifies how the different-yet-
intersecting layers of materiality that comprise the digital challenge the applicability of
traditional conceptualizations of affordance; each layer initiates a different sense of
where and how the affordances of digital systems arise and operate. A contemporary
account of affordance must therefore encompass the different modes of materiality
through which the digital operates, and in relation to which the affordances of digital
systems are realized. Indeed, Ian Hutchby (2001) has argued that agents’
conceptualization and use of technological artefacts are fundamentally shaped by “the
ranges of affordances that particular artefacts, by virtue of their materiality, possess” (193,
emphasis added). It is important to understand, as a result, that as the materiality of
the digital shifts, so too do its potential affordances. Bloomfield et al. (2010) use the
term “‘cascades’ of affordances” (420) to describe this phenomenon,
highlighting the co-articulatory (Latour, 1999) and processual emergence of digital
affordances, as they unfold across time and in response to shifting layers of materiality.
As Nagy & Neff (2015) suggest with their affordance-oriented assertion that
“communication theory deserves a richer theory of the materiality of media” (1), in
order to better understand the multilayered affordances of the digital, it is critical to
develop a clear understanding of digital materialism.
In Mechanisms: New Media and the Forensic Imagination, Matthew Kirschenbaum (2012)
provides a dialectical account of digital materiality, comprised of the iterative synthesis
of forensic and formal materialism. Grounded within the “richness of a physically robust
world” (9), ‘forensic materiality’ here refers to the physical and embodied dimensions
of the apparatuses, environments and practices that comprise digital technologies. For
Kirschenbaum, this includes the physical “residue of digital inscription” (10),
“surfaces, substrates, sealants, and other material that have been used … as
computational storage media” (10), as well as the “labour practices that attend [to]
computation” (ibid.). Rather than treating the physical underpinnings of computation
as the exclusive grounds of digital materiality, Kirschenbaum introduces ‘formal
materialism’ to account for “the multiple behaviours and states of digital objects and
the relational attitudes by which some are naturalized as a result of the procedural
friction, or torque … imposed by different software environments” (132-133). Formal
materialism, then, arises through the “simulation or modelling of materiality via
programmed software processes” (9). These processes impose “a specific formal regimen
on a given set of data” (13), lending it, and ‘the digital’ more broadly, an aesthetic and
material sense of cohesion and durability. Formalization does not only provide a
perceptible and seemingly stable surface through which to identify and enact the
affordances of the digital, but formalized image objects and environments also offer a
selective glimpse into the undercurrents of mediation as they are seen to index the
processual intersection between hardware, software and code (Hand & Scarlett, 2019).
Following this line of reasoning, the materiality of the digital emerges through the
‘sustained duality’ of forensic and formal modes of materialism (Kirschenbaum, 2012;
Drucker, 2009); digital objects and environments are understood in this case to be
forensically grounded, processually executed and formally durable. The intersection
and coincidence of these modes of materiality help to differentiate between the
multilayered or ‘nested’ (Gaver, 1991) affordances actualized through agents’
interactions with digital technologies, objects, and environments. The affordances of
the digital are not only shaped by the ‘forensic’ materials that undergird digital
technologies, rendering them operable, graspable, and interactable, but
Kirschenbaum’s conceptualization of digital materiality also helps to account for
affordances that are grounded within the iterative, ephemeral and seemingly
‘dematerialized’ structures of formal regimens. While afforded, to an extent, by the
forensic, the formal dimensions of digital materialism render the imperceptible layers
and processes of computation perceptible and seemingly material, and in so doing
establish the conditions of possibility for recognizing the affordances of the digital
at all.
Formal regimens do not only render the affordances of the digital apparent, they also
actively frame digital affordances in a manner that foregrounds the seemingly
‘immaterial’ qualities of formal materialism. At stake in this case is both a consideration
of the means through which digital objects and environments are ‘enformed,’ as well
as the ideological pressures that inform these processes (Chun, 2006; Galloway, 2006).
Analyses of the former might call for a close consideration of the role that code plays
in the semiotic delineation, generative execution, formalization and perceptual
stabilization of particular affordances. As we discuss in greater detail below (with
regards to software studies), this line of inquiry necessitates both a consideration of
the affordances associated with coded language, and ‘actionable’ signifiers more
broadly, as well as an analysis of how the formalized qualities that delimit digital objects
and environments inform and contribute to our sense of what the digital affords as
well as what affordances are particular to the digital as such. For example, we might
consider how it is that the formalized qualities of digital objects and environments
contribute to their perceived manipulability, scalability, deletability and undoability
(Lialina, this issue), regardless of whether this is actually the case, or not.
Deletability is particularly illustrative of this notion, as users’ sense of immediate
deletability is often a function and affordance of the formal, rather than the forensic,
level of computation. As Kirschenbaum details, when users “delete a file from their
trash or recycle bin it is not immediately expunged from their hard drive. What
happens instead is that the file’s entry in the disk’s master index … is flagged as space
for reuse” (50). As such, “the original information may yet persist for some time before
the operating system gets around to overwriting it” (50-51). As we discuss in greater
detail below, for Gaver (1991) this would likely suggest that deletability is in fact a ‘false
affordance.’ Similarly, moving beyond the isolated hard drive to consider the forensic
grounds of ‘deletability’ in networked environments, Treem & Leonardi (2012) and
Evans et al. (2017) advance the opposite affordance, highlighting the nagging
‘persistence’ of digital information rather than its erasability. This being said, as Lialina
details in this issue, at the formal level, ‘deletability’ is not only a perceived affordance,
but it is also central to the ways in which digital tools and platforms are put to use
within creative practice. In this sense, it is an affordance particular to the formal (if not
also, eventually, the forensic) dimensions and operations of digital mediation.
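Kirschenbaum’s description of deletion can be rendered concrete in a toy model. The sketch below is purely illustrative (the `ToyDisk` class and its methods are hypothetical constructions, not an account of any real filesystem): at the formal level the file disappears from the master index immediately, while at the forensic level its bytes persist on the storage substrate until the flagged space is actually reused.

```python
# Toy model of Kirschenbaum's distinction: 'deleting' a file removes its
# index entry and flags its space for reuse; the underlying bytes persist
# until the space is overwritten. Illustrative only, not a real filesystem.

class ToyDisk:
    def __init__(self, size=64):
        self.blocks = bytearray(size)   # raw storage (the 'forensic' layer)
        self.index = {}                 # master file index (the 'formal' layer)
        self.free = [(0, size)]         # regions flagged as available for reuse

    def write(self, name, data):
        start, length = self.free.pop(0)
        self.blocks[start:start + len(data)] = data
        self.index[name] = (start, len(data))
        self.free.insert(0, (start + len(data), length - len(data)))

    def delete(self, name):
        # 'Deletion' only removes the index entry and flags the space for
        # reuse -- the bytes themselves are not expunged.
        start, length = self.index.pop(name)
        self.free.insert(0, (start, length))

    def forensic_read(self, start, length):
        # Reading the raw blocks recovers 'deleted' data that has not yet
        # been overwritten.
        return bytes(self.blocks[start:start + length])

disk = ToyDisk()
disk.write("note.txt", b"secret")
disk.delete("note.txt")
assert "note.txt" not in disk.index           # formally gone
assert disk.forensic_read(0, 6) == b"secret"  # forensically persistent
disk.write("new.txt", b"other!")
assert disk.forensic_read(0, 6) == b"other!"  # only now is it overwritten
```

In this model the affordance of ‘deletability’ belongs to the index (the formal regimen), while the forensic layer exhibits precisely the ‘persistence’ that Treem & Leonardi and Evans et al. emphasize.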
Discerning the imperceptible dimensions of computational
affordances
Forensic and formal materialism render the affordances of digital technologies
perceptible. This being said, they also afford an awareness of the hidden (Gaver, 1991),
and therefore imperceptible, dimensions of computation. While some forensic
materials and processes are graspable, their blackboxed components and micro-
temporal operations render the bulk of their material grounds and operations
inaccessible to the human senses despite the perceptibility of their resulting outputs.
Similarly, while formalized materials are, by definition, apparent to the senses, their
mutable qualities and flexible materiality call attention to the imperceptible procedures
and processes that make this mode of materialism possible. The affordances of
hardware and software cascade below the surfaces of computation, whether users
perceive them directly or not. Despite Gibson’s and Norman’s focus on the
perceptible, the imperceptibility of computation’s backend does not stop users from
identifying and according ‘action possibilities,’ and therefore affordances, to it. While
boyd (2010) is largely concerned with the affordances of networked publics, her work
alludes to a series of “action possibilities” that are specific to bits. Similarly, Adrienne
Shaw (2017) calls attention to the ways in which users ‘decode’ affordances associated
with aspects of mediated experience that remain invisible to them; she uses algorithms
as an illustrative example, connecting the drive to decode their encoded affordances
and implications with the sense that they “affect what users can and cannot do in
online space, but operate out of view” (600). Indeed, Eslami et al. (2015) have also
demonstrated that whether or not users are able to decode or understand algorithms
correctly, their “perceived knowledge” of underlying computational processes affects
how they interact with devices as well as how they behave more generally (153). A
growing awareness of the presence and cultural implications of algorithmic
technologies’ submedial undercurrents (Groys, 2012), paired with a willingness to
accord affordances to their invisible operations, has coincided with scholarly efforts to
theorize and excavate the terms of imperceptible affordances more broadly. Central to
these lines of inquiry are efforts to make sense of how users ‘imagine,’ construct, and
project the affordances of computational (algorithmic) technologies. This is not only
a matter of theorizing the imperceptible, but also points to how imperceptibility, or
invisibility, might be conceived of as an affordance in and of itself.
William Gaver provided one of the first efforts to theorize the different layers of
perceptible and imperceptible affordances that unfold through the operations and use
of computational systems. In “Technology Affordances” (1991), Gaver expands upon
Gibson’s claim that “people perceive the environment directly in terms of its potential for
action, without significant intermediate stages involving memory or inferences”
(ibid., emphasis added), to advance a more fully delineated account of “perceptible
affordances,” “hidden affordances,” and “false affordances” (80). According to Gaver,
perceptible affordances are those affordances that are recognizable when “the
attributes of the object relevant for action are available for perception” (81). Hidden
affordances, by contrast, are those for which “there is no information available” and
that must therefore “be inferred from other evidence” (80). False affordances arise
when “information suggests a nonexistent affordance,” leading people to “mistakenly
try to act” (ibid.). Gaver maps his delineation of perceptible and hidden
affordances onto computational interfaces and undercurrents, respectively. Interfaces, he
explains, remediate a set of underlying and otherwise hidden affordances, rendering
the relevant and actionable properties of computational processes and objects
perceptible. Through this formulation, Gaver does not only attest to the existence of
computational affordances that remain hidden below the threshold of perceptibility,
but he also calls attention to the complexity of interfaced affordances as they comprise
both the perceptible features of the interface, such as the physical parameters of a
device or the button and scrollbar that appear on a screen (81), as well as a cross-
section of the operational undercurrents that make these computational objects and
environments work. Not only does this make it difficult to discern between perceptible
affordance and ‘evidence’ of a hidden affordance, but it also suggests that the
perceptible affordances of computation do not necessarily belong to the system, per se.
They may instead be affordances that are proper to the interface, its physical design
and representational mediations, as well as the broader socio-technical and ecological
context in which the encounter unfolds. This illustrates precisely the kind of
complexity that complicates the easy application of Gibson’s and Norman’s theories
of affordance to digital technologies and processes.
While Gaver’s interface renders hidden affordances perceptible, he does not push
aside, flatten or erase the existence of these affordances through their mediated
signification; hidden affordances unfold and come into existence through the
processual operations of computation, whether they are experienced and perceived
directly or not. Furthermore, rather than associating the perceptible affordances of the
interface exclusively with the physical design and hardware that comprise the interface
(as is often the case), Gaver advances an account of the coherent ‘image object,’ which
he explains emerges iteratively through and ‘indexes’ the procedural operations of
computation; the processual image object does not only visualize and render
imperceptible processes actionable, but in so doing, it also marks the intersection
between the potential affordances of the interface and the actualized affordances
realized through backend operations. As this suggests, Gaver’s work does not only
begin to contend with varied modes of digital materiality (as discussed above), but it
also identifies and preserves the existence of affordances that remain hidden from
view.
Prominent amongst accounts of affordance is Peter Nagy and Gina Neff’s (2015)
conceptualization of ‘imagined affordances’ – a concept developed to better account
for the role that users’ expectations and beliefs play in the identification of affordances,
as well as users’ capacity to imagine the affordances of technologies and technological
operations that remain hidden from view.6 Nagy & Neff argue that what people believe
and expect technologies to be able to do shapes “how they approach them and what
actions they think are suggested” (4). These beliefs and expectations are not, from their
perspective, restricted exclusively to that which is directly communicated or
immediately perceptible, but often correspond to what people are able to imagine a
particular technology might be used for (5). They explain:
Users may have certain expectations about their communication
technologies, data, and media that, in effect and practice, shape how they
approach them and what actions they think are suggested...This is what
we define as imagined affordance… (ibid.).
While Nagy & Neff connect imagined affordances to any and all instances in which
individuals attempt to identify the uses that a tool or medium might make available to
them, they are particularly interested in identifying the role that imagined affordances
play in shaping the relationships that comprise “complex socio-technical systems such
as machine-learning algorithms, pervasive computing, the Internet of Things, and
other such ‘smart’ innovation” (1). Rather than contending with the actual affordances
of computational hardware or algorithmic scripts, the authors parse the ways in which
users imagine and attribute affordances to these socio-technical systems, rightly or
wrongly. For example, they consider how users have imagined their social media news
feeds as offering objective access to their friends’ posts (and vice versa), despite the fact
that this information is algorithmically mediated and therefore structurally constrained.
While the objective form of communication that social media platforms are imagined
to afford indicates a false understanding of what is technically happening, Nagy & Neff
suggest that the affordances that are imagined lead to particular uses and actions
regardless of whether or not they are, in fact, misunderstandings, misperceptions,
and/or misinterpretations (5). For Nagy & Neff this is significant insofar as it suggests
that reflexive engagement with imagined affordances might enable us to better make
sense of and engage critically with the otherwise imperceptible dimensions of
computational devices, as well as the broader socio-technical systems through which
they operate.
A line of questioning that begins to emerge in response to Nagy & Neff’s work
concerns the means and pressures through which particular affordances come to be
‘imagined.’ Paul Leonardi (2011) offers one possible explanation. Working within the
context of organizational studies, Leonardi undertakes a critical examination of the
relationship between humans and techno-material agencies in the workplace; his text
aims to make sense of how the terms surrounding this relationship have the capacity
to change the routines of work and/or the predominant technologies of the workplace
(151). Leonardi deploys the concept of ‘affordance’ to capture the means through
which humans and techno-material agencies relate and become ‘imbricated’ in one
another. Drawing upon Hutchby (2001), he explains that while technological
affordances are grounded within the materials and material practices that delimit a
particular technology, individuals “actively construct perceptual affordances and
constraints” (153, emphasis added) as they interpret and attempt to reconcile the
material parameters of a particular technology with their broader “goals for action”
(ibid.). Despite recognizing that many of the technologies that we encounter have been
thoroughly ‘blackboxed,’ Leonardi does not differentiate between affordances that are
constructed in response to that which is perceptible versus that which is imperceptible.
Anticipating Nagy & Neff’s later theorization of ‘imagined affordance,’ Leonardi
argues instead that the relationship between humans and techno-material agencies
takes shape as individuals imagine how a particular device or tool might afford them
the ability to accomplish a particular goal. This suggests a kind of ‘reverse-engineering’
of affordance, as potential uses do not emanate from an artefact or environment but
are instead projected onto an artefact or environment in response to a desired result.
Leonardi’s account does not only help to make sense of the pressures that might
influence how affordances are imagined, but in focusing on goals for action, it also
offers a way of making sense of the imperceptible dimensions of techno-material
artefacts; if a device or tool enables an individual to undertake a particular action
and/or achieve a desired goal, then this can be identified as one of its affordances,
regardless of whether or not the individual is able to explicitly connect the affordance
with perceptible qualities or characteristics of the artefact itself. This, again, provides
an entry point for analyzing and critiquing the phenomenologically evasive grounds of
digital mediation, albeit indirectly.
Nagy & Neff and Leonardi grasp at the imperceptible dimensions of computational
affordance in a manner that ultimately allows the imperceptible to remain
imperceptible. There is an understanding here that the processual operations of
computation, which mark the iterative coming-together of an expansive technological
apparatus (hardware and software, socio-material discourse and practice), can never be
perceived in their entirety and are rarely perceived directly – there is always some
component or process that remains out of reach. Before proceeding to a further
consideration of the affordances of coded signifiers and representational artefacts, as
well as the algorithmic means through which these affordances are increasingly
identified and enacted, it is worth pausing for a moment to consider how the
unavoidable imperceptibility of digital processing, identified above, has come to the
fore as one of the key affordances of computational technologies. As we noted above,
imperceptibility is one of the inescapable qualities of computation; not only is
invisibility a material fact of the electronic and algorithmic operations that drive
computation, but it also facilitates many of the purposes that computation serves
within contemporary culture. Indeed, Jussi Parikka (2015) has charted the “invisible
infrastructural layers that sustain what is visible” (216), highlighting how the invisibility
of algorithmic logic and processing works to produce particular configurations of the
social and visual. Of particular interest to Parikka are the invisible means through
which algorithms produce (and in turn visualize) “financial, urban and security regimes
… as one fold in the topological continuum between spatial architecture and
informational ones” (213). Here, ‘invisibility’ describes both a quality of the interstitial
space within which algorithmic operations unfold and configure the relationship
between informational (i.e. digital) and spatial (i.e. physical) realities, as well as an
affordance that is leveraged in order to secure regimented control over how this
relationship is structured and rendered manifest. Following this line of reasoning,
Hoelzl & Marie (2015) have detailed how invisibility is in many ways that which
facilitates the collection, surveillance and commoditization of user data (101). Santos
& Faure (2018) have undertaken an analysis of WhatsApp to argue that invisibility,
framed as the ability to encrypt and render data imperceptible, has become a critical
affordance and corresponding ‘sales tactic’ in the post-Snowden era (9). Echoing these
sentiments, Parikka follows up his consideration of invisibility by identifying how
“invisibility is, in increasing ways, something that has to do with the proprietary logic
of closed platforms (software) and devices (hardware), putting a special emphasis on
critically tuning technological skills to investigate such ‘nothing to see’ logic…”
(Parikka, 2015: 216).
As these examples suggest, the invisible dimensions of algorithmic technologies, their
capacity to remain below the threshold of perceptibility by humans, are leveraged by
individuals, organizations and governments to better collect, survey, and close off data.
Grégoire Chamayou (2015) identifies and elaborates on this point in his theorization
of the drone. Calling upon Adorno, he suggests that moments of seeming transparency
and structural invisibility are in fact indicative of “a great deal of subjective activity,
involving huge efforts and enormous energy, designed to cover one’s tracks, efface
evidence, and wipe out any trace of a subject involved in action” (207). An affordance
associated with invisibility and imperceptibility might, as a result, be understood as the
capacity to erase the perception of subjective presence and interference. As Özgün
Eylül İşcen (featured in this issue) argues in her critical re-examination of the
affordance concept in response to the racialized, racializing and ultimately
dehumanizing technologies of drone warfare, the invisibility of computational
operations and algorithmic processing has the capacity, on the one hand, to afford a
degree of privacy, while on the other hand also affording the obfuscation and
foreclosing of responsibility and solid grounds for critique. İşcen asks, as a result,
affordances for whom? From this perspective, accounting for imperceptible, but real,
affordances might provide a means of triangulating, stabilizing, and in turn engaging
critically with the ever-receding yet increasingly influential grounds and subjects of
computation. This approach originates from the relational experience of humans; as
such, it reasserts the presence and significance of the human within, or in relation to,
the imperceptible dimensions of computation.
Non-human perception and algorithmic affordances
While much of the canonical scholarship on affordance is grounded within the
ecological and material, it typically assumes that the realization and actualization of
affordances hinge upon interactive relations established by (or at least in relation to)
humans. This being said, researchers working on the development of ‘smart’
computational and robotic systems are increasingly instrumentalizing the affordance
concept – understood as the ability to identify and make use of opportunities for action
within a given environment (whether real or virtual) – in an effort to build technologies
that are efficient, autonomous and responsive to “complex, unstable, and real-time
environments” (Horton et al., 2012: 70). According to Horton et al., formalising
and instrumentalising the relationship between the agent and its environment, rather
than the environment as such (70), frees agents from:
… the need to maintain complex representations of the world. The agent
can instead interact with the world as it is, allowing for more flexible and
timelier responses in a dynamic environment, with the agent able to learn
the affordances of its surroundings through first-hand exploration (79).
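The exploration-driven learning that Horton et al. describe can be loosely sketched in code: rather than maintaining a complex representation of the world, an agent records which pairings of object properties and actions produce which effects, and thereby learns affordances first-hand. The sketch below is our own minimal illustration; all names, properties, and effects are hypothetical and are not drawn from the cited studies.

```python
from collections import defaultdict

class AffordanceLearner:
    """Minimal agent that learns object affordances by exploration.

    Instead of maintaining a complex representation of the world, it
    records which (object property, action) pairings produce which
    effects, as observed first-hand.
    """

    def __init__(self):
        # (property, action) -> {observed effect: count}
        self.memory = defaultdict(lambda: defaultdict(int))

    def observe(self, obj_property, action, effect):
        """Record the effect of trying an action on an object."""
        self.memory[(obj_property, action)][effect] += 1

    def predict_effect(self, obj_property, action):
        """Most frequently observed effect, or None if never tried."""
        effects = self.memory.get((obj_property, action))
        if not effects:
            return None
        return max(effects, key=effects.get)

# A toy environment: effects follow from object properties rather than
# object identities, so learned pairings transfer to never-before-seen
# objects that share those properties.
def environment(obj_property, action):
    if obj_property == "graspable" and action == "grasp":
        return "held"
    if obj_property == "rollable" and action == "push":
        return "rolls away"
    return "no effect"

agent = AffordanceLearner()
for prop in ["graspable", "rollable"]:   # first-hand exploration
    for action in ["grasp", "push"]:
        agent.observe(prop, action, environment(prop, action))

print(agent.predict_effect("rollable", "push"))  # -> rolls away
```

The agent never models the environment itself, only the agent-environment relation: which actions, paired with which properties, yield which effects.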
This affordance-based approach has been adopted by researchers developing a wide
array of automated technologies, including autonomous driving vehicles (Chen et al.,
2015); hand-like attachments for autonomous robotic systems (Saponaro et al., 2018);
“artificial agents” capable of identifying “actionable” properties of an image (Chuang
et al., 2017); and algorithms for determining “actions afforded by a scene” (Wang et
al., 2017). Central to each of these projects are the predictive and probabilistic
affordances of an ‘affordance-based’ approach. As Saponaro et al. (2018) explain with
regards to robots working alongside humans: “A crucial ability needed by these robots
to succeed in such environment (sic) is to be able to predict the effects of their own
actions, or to give a reasonable estimate when they interact with objects that were
never seen before” (1). For Chuang et al. (2017), an affordance-based approach not
only allows for a computer’s more seamless negotiation of the image-scape, but also
enables the prediction of relationships between objects in the image (and, by extension,
objects in the world). Similarly, for Wang et al. (2017), an analysis of figure placement
within a scene (their data set comprises over 10 million stills from sitcoms) sheds
light both on the relationship between objects and environment and on the probability
that the perceived relationship (affordance) will be realized. In addition to augmenting
systems’ ability to automatically negotiate complex environments, Pirk et al. (2017)
hypothesize that the delineation of affordances might also provide indirect “insight
into the semantic identity of the object,” again contributing to the development of
increasingly ‘smart’ technologies.
As Thomson’s Deep Time Machine Learning suggests, machines can be equipped with a
variety of sensorial and algorithmic means through which to discern and enact
environmental affordances, without the intervention of human agents. This being said,
at the forefront of much of this research (as the preceding examples of recent
innovations suggest) are efforts to leverage technologies of machine vision as well as
machine learning algorithms to automate the identification and actualization of
affordances associated with visual data. In each of the cases discussed above,
autonomous computational agents are being developed and trained to identify and
respond to image-based affordances. This is to say that while the authors attribute the
identified affordances to the actual environment depicted in the images, the
environment is in fact the pixelated and patterned landscape of the
image file itself. As N. Katherine Hayles has detailed, a slippage occurs here between
reality and abstraction, where an abstraction (the image-object) first stands in for and
is then mistaken for actual reality (Hayles, 1999). While the systems’ responsive actions
may appear to attest to the affordances of the actual environment, the apparent
coincidence is an indicator of the accuracy of the image-model, rather than being an
indicator of the actual affordances of the environment (or the system’s ability to
recognize them without mediation). What these examples reveal, therefore, relates to
the affordances of pixel artefacts (a formalized material, in Kirschenbaum’s terms) and
encoded pixel data, as well as of the machine vision algorithms that discern and attach
relationally derived meaning to these coded artefacts.
According to Hoelzl & Marie (2015), digitization has resulted in a significant shift in
the “photographic paradigm of the image” (100), from a representational landscape
replete with signifying pictures to one comprised of algorithmically operationalized,
collated, and (at times) visualized data sets. Following Harun Farocki, Hoelzl & Marie
explain that digital images are no longer “visual entities, aimed at a human mind, but
visual patterns recognized and interpreted by a computer” (101). As the authors
articulate, what computational technologies recognize, read and aggregate are the
sampled and quantized bits of information (datasets) that comprise and render digital
images operative and actionable. When these technologies identify the affordances of
an environment through the means of a digital image, what is realized is an
algorithmically discerned pairing between patterns within the pixel data and encoded
(or learned) criteria delimiting ‘opportunities for action.’ On the one hand, these
‘opportunities for action’ are rendered, as in the examples reviewed above, into actual
behaviours, marking a shift in the traditional ‘agent’ of affordance: no longer an
organic being, the agent now takes the form of algorithmic means of perception and
discernment, increasingly adept at identifying and enacting environmental and
object-oriented affordances. On the
other hand, this situation also calls attention – once again – to the imperceptible
unfolding of algorithmic, code- and bit-based affordances. While boyd (2010) provides
an account of the material grounds and affordances of bits, these affordances cannot
be computationally realized without the coinciding affordances of code and algorithm.
How then, might we begin to parse the affordances of algorithms and,
correspondingly, code?
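The pairing described above, in which patterns within pixel data are matched against encoded criteria delimiting ‘opportunities for action,’ can be made concrete in a few lines. The sketch below is a deliberately simple illustration of our own; the grid, labels, and rule are hypothetical and stand in for the far more elaborate learned models of the cited projects.

```python
# 'Affordances' are identified here as pattern matches over pixel data,
# not over the world the image depicts. The grid and rule are hypothetical.

SCENE = [  # 0 = empty, 1 = 'surface' pixel in a tiny image
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
]

def afforded_actions(image, min_width=3):
    """Return encoded 'opportunities for action' discerned from pixel patterns.

    An unbroken horizontal run of surface pixels at least min_width wide is
    paired with the criterion 'supports placement'. Whether the depicted
    environment actually affords placement is never tested: the match is
    between data and criteria only.
    """
    actions = []
    for y, row in enumerate(image):
        run = 0
        for px in row:
            run = run + 1 if px == 1 else 0
            if run >= min_width:
                actions.append(("place-object", y))
                break
    return actions

print(afforded_actions(SCENE))  # -> [('place-object', 1), ('place-object', 2)]
```

What the function ‘perceives’ is the pixelated landscape of the image file itself; any correspondence with the actual environment is an artefact of how the image was produced, which is precisely the slippage Hayles identifies.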
The field of software studies has sought to expose the programmed undercurrents that
enable and constrain computational processes, implicitly suggesting that the
affordances of computation can be explained (at least in part) through a close reading
of computer code and algorithms. Where ‘code’ here refers to the basic
representational building blocks that comprise and structure programming languages,
algorithms are the instructional means through which code is harnessed, “focalized
and instantiated in a particular program, interface, or user experience” (Finn, 2018: 35).
Reflecting on the resulting intersection between algorithm and affordance, Shintaro
Miyazaki (featured in this issue) argues that “algorithms, when stored and not-yet-
unfolded, have affordances, since they are made of instructions to structure and move
hard-, soft- and wetware…” (n.p.). Whether executed or not, algorithms bear the
capacity to “put things forth, forward or further” (n.p.); they possess the potential to
enact a cascading series of relational actions. Similarly, David Gauthier (2018) has
explained that algorithmic commands “request and constrain action to fulfil the
promise of its execution which, in turn, should shed expected effects” (74). He explains
that “the command itself does not act per se, but rather prescribes an action that it, in
turn, assesses or judges (‘correct value’)” (ibid.). Both Miyazaki and Gauthier point here
to the instrumentalization and subsequent representation of affordances within the
algorithms that drive contemporary computation. To unearth and analyze these textual
undercurrents is, then, to identify and begin parsing the structural affordances that are
embedded within and enacted by computational systems. There are two critical
implications here.
First, there is an understanding that much of this activity unfolds within purely
computational environments, without experiential output.7 Instead, algorithms often
operate in recursive and inter-algorithmic feedback loops, executing and establishing
connections between component parts of computation (e.g., bits of data, code, and
programs) as well as between computational processes. Even outside of its specific
formulation and operation, there is a manner in which this might be understood as
one of the fundamental affordances of the algorithm, namely its actualization of
connections between components and layers of computation. While “the command
itself does not act,” it does enable and initiate – and therefore afford – relation.
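Miyazaki’s claim that algorithms ‘have affordances’ even when stored and not-yet-unfolded, and Gauthier’s claim that a command prescribes and then assesses an action rather than acting per se, can be loosely illustrated in code. The following is our own free interpretation, not an example drawn from either author; the function name and state are hypothetical.

```python
# A stored instruction: it exists as a definition -- potential action --
# until something executes it.
def increment_counter(state):
    """Prescribes an action on `state` and then assesses its effect,
    echoing Gauthier's command that 'does not act per se' but judges
    the 'correct value' its execution should produce."""
    expected = state["count"] + 1
    state["count"] = expected          # the prescribed action
    assert state["count"] == expected  # the command assesses its own effect
    return state

# Stored but not-yet-unfolded: defining the function changed nothing.
state = {"count": 0}
assert state["count"] == 0

# Execution 'unfolds' the instruction, enacting a relational cascade that
# connects program text, interpreter, and data held in memory.
increment_counter(state)
print(state["count"])  # -> 1
```

The definition alone affects nothing, yet its capacity to restructure the program’s state is already present in it; execution realizes that capacity by establishing connections between components of the computation.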
Second, this sentiment reifies some of the basic principles of software studies,
inevitably necessitating a critical examination of the broader apparatuses within which
algorithmically encoded affordances are developed and deployed. As Norman
promised, the action possibilities of contemporary computation have been severely
restricted through the algorithmic encoding of predetermined affordances, many of
which are designed in response to what programmers are able to recognize (or imagine)
as possible uses for the system and broader apparatus, as well as in response to what
is socio-politically and economically desirable. (This is discussed in greater detail by a
number of authors featured in this issue, including Maximillian Alvarez, Özgün Eylül
İşcen, and Vendela Grundell.)
Algorithms operate through coded means, and therefore leverage the affordances of
code. As noted, computer code abstracts and ascribes representational signs – language
– to the messy realities of hardware and software operations. While code helps to
establish the relational means through which users interact with and attempt to harness
the capabilities of computing, it is also that which fundamentally delimits the expansive
potential of computation. “By isolating, stratifying, discretising, categorizing and
foreclosing the spatiotemporal continuum the process of execution articulates”
(Gauthier, 2018: 81), code erases users’ perception of the messiness of electronic
processing, and of the slippages between what the code and its symbolic extensions
stipulate and what actually occurs (72).
While Chun (2011) has critiqued the code-enabled desire to erase execution, a gesture
that coincides with the earlier identification of invisibility and imperceptibility as
affordances unto themselves, we might also understand the capacity for code to
interface with and translate between the electronic and textual operations of
computation as one of the critical affordances that code enables. It is fundamentally
through the execution of coded signifiers that the ‘action possibilities’ of computation,
from the level of machine language to the flickering signifiers appearing on our screens
(Hayles, 1999) and back again, are realized. This is not to say that code and its
execution, or that code and activated hardware, are synonymous; as articulated above,
code provides limited insight into the actual material operations of computer hardware.
Nor is this a repeated call to access and read code in an effort to identify the particular
affordances that are embedded within the language that drives computational systems.
Instead, at stake in this case appears to be an understanding of the affordances of
actionable signs – the affordances of executable language and representational
artefacts. Ed Finn (2018) has begun to map an account of actionable signs in his
consideration of the intersection between code and magic. He argues that the
execution of code actualizes long-held cultural beliefs concerning the “mythic power
of language and the incantatory magic of words” (196). Reflecting on computation, he
explains, code is comprised of symbols that can be manipulated and executed in a
manner that does not simply abstract, represent and produce meaning about the
physical world, but that also has a ‘real’ impact on the physical world. This magical
enactment of actionable signs not only suggests the culmination and closure of
the perceived gap between representation and reality, insofar as language is no longer
restricted exclusively to the realm of representation, but also necessitates a close
examination of the role of affordance in how computers, at the lowest levels, navigate
the bi-directional gap between electronic instantiation and abstraction.
Art and affordance
The complexities and emerging nuances of affordance theory in digital and algorithmic
contexts find expression not only in recent theoretical work, as elaborated in the
preceding sections, but also, as we identified in the introduction, in the historical and
contemporary work of media artists. Artists’ access to emerging technologies has
always informed the development of industrial, scientific, and commercial applications
of these technologies. While significant scholarship has demonstrated how early access
to computational technologies and industry shaped the foundations of many
contemporary art movements, such as Performance and Conceptual Art (e.g., Cook,
2016; Shanken, 2015), such accounts often overlook the corresponding contributions
that artists have made to the perceived affordances and discursive constitution of
emerging technologies (e.g., Noll, 2016; Patterson, 2015; Kane, 2014). Several of the
contributions featured in this issue explore and elaborate precisely such connections
and seek to emphasize the importance of affordance for current discourses on the
organisation and control of artists’ access to, use of, and experimentation with
emerging digital technologies (see for example: Guillermet, Marcinkowski, and Lotti,
this issue).
As these entries suggest, many contemporary artists continue to probe emerging
technologies in experimental work that helps recognize, expand, and ultimately rethink
the affordances linked to these technologies. Often, this kind of work also takes place
outside of institutional contexts and follows approaches that might be more aligned
with hacker ethics (see Cox, 2010; 2012), critical engineering (Oliver et al., 2019), or
other alternative attitudes towards the appropriative use of emerging technologies. As
such, many works of media art can be read as critically engaging, directly or obliquely,
with Gibson’s and Norman’s perspectives on the affordance concept, and as
significantly problematising and expanding these perspectives along some of the
conceptual and theoretical vectors outlined above.
Fig. 3: ‘Rethinking Affordance’ exhibition, Akademie Schloss Solitude, Stuttgart/GER, June
2018.
Representative examples of artworks that highlight some of the critical positions we
outline in this essay were included in the group exhibition that stood at the beginning
of the ‘Rethinking Affordance’ project (Fig. 3).8 Aside from Jol Thomson’s Deep Time
Machine Learning, discussed above, additional works shown in the exhibition included,
for example, _white paper (2018) by FRAUD (aka Francisco Gallardo and Audrey
Samson), and Ways of Sitting (2018) by Foci+Loci (aka Chris Burke and Tamara Yadao).
In Ways of Sitting, the New York City-based duo Foci+Loci place digitally rendered
Duchampian ‘readymades’ in the responsive environment of the Sony-produced video
game Little Big Planet 3, where players’ interactions trigger the emergence of new sets
of affordances of this commercial, proprietary software (see Fig. 4).9 Rather than
‘hacking’ the game, the artists take advantage of functionality that has been designed
by the game developers, but which was never meant to take shape in the form of
critical, experimental, or performative media art work. Foci+Loci, in other words,
realize algorithmic affordances that may arguably have remained imperceptible to the
game developers, even though they were purposefully built into a popular
‘participative’ game that relies heavily on the player’s provision of user-generated
content. In this and other works by Foci+Loci, it becomes apparent that even purpose-
built, rule-driven digital artefacts such as video games – which tend to strictly limit
users’ powers while offering them a simulated sense of interactive freedom – afford
wide-ranging critical, alternative, and creative uses that are not predetermined in
Norman’s sense, but which correspond to Nagy and Neff’s framework of ‘imagined
affordances’ (2015).
Fig. 4: Screen capture from Ways of Sitting (work-in-progress), Foci+Loci, 2018.
FRAUD’s _white paper (see Fig. 5), consisting of a series of seemingly white posters and
print-outs (in fact, white ink was used on white paper stock), is an extension of a
cryptocurrency art project (Indulgence Coin) which the artists developed in collaboration
with Guido Rudolphi. In preparing for launching the ICO (Initial Coin Offering) for
that project, the artists had begun to question the affordances of the white paper as a
specific type of information document, while simultaneously starting to explore the
affordances of white paper as a medium through which their critique of this particular
document type could be articulated. The white paper is today very prominently used
within the speculative domain of emerging crypto-economies – a domain which often
relies in ideologically problematic ways on the kinds of invisibility and imperceptibility
discussed above. As a document type, the white paper is here meant to emblematize
rigour, transparency, and extensive descriptions of business plans, technical platforms,
or other details related to a crypto venture. However, as has become clear in the
countless crypto-scams that continue to populate the Internet, the white paper can also
function as a facade that is meant to point to a larger and deeper ‘truth’ beyond itself,
which the reader is never granted full access to. Here, the important affordances of
invisibility, as discussed above with reference to Parikka (2015), Hoelzl & Marie (2015),
and others, come into play. The tendency to hide functional, economic, or ideological
issues of a project in a type of document that is by definition meant to fully lay bare
the system to which it speaks has gone so far that boilerplate web pages advertising
new crypto initiatives now sometimes only announce white papers, rather than actually
making them available.10 As such, the white paper can, in fact, function as a kind of
black box. Expanding on this, FRAUD rethink the affordances of the medium of the
white paper, and of written text more generally, whether in analogue or digital form.
In the form in which the work was exhibited at the ‘Rethinking Affordance’ exhibition,
it maps the affordances of the white paper across a wide range of contexts that reach
back from current crypto contexts to earlier forms such as government declarations
and public announcements of cultural, economic, and other types of official policy.
The resulting sculptural interventions call attention to the ‘invisible’ ideological
undergirding established through these seemingly innocuous documents that exist to
announce or introduce, under the guise of transparency, preliminary positions while
also projecting surety and finality. This, again, offers interesting conceptual
counterpoints to both Gibson’s and Norman’s perspectives on affordance, in
considering how the infrastructural code layers of the white paper, approached here as
both document type and medium, can be recast for critical purposes.11
After this brief consideration of how contemporary media artists engage with and
rework the concept of affordance, we will now conclude with a brief summary of the
contributions to this special issue, many of which go into considerably more detail in
their critical exploration of how artists continue to recuperate and expand the
affordances of the media substrates, technical specificities, and ideological implications
of the technological environments they inhabit.
Fig. 5: _white paper (installation view), FRAUD, 2018.
Overview of the special issue contributions
As noted, the lines of inquiry developed in this special issue seek to revisit the
discontinuities of affordance theory and to recuperate ‘affordance’ in ways that can
productively engage the dynamic malleability of the digital. Since the concept of
affordance is by definition located between design and implementation, between
environment and user, we are particularly interested in approaches that bridge or
combine theoretical and practical approaches. In developing the larger project that led
to this special issue, it was our observation that the (dis-)continuities between
established discourses on affordance and the ways in which the concept is currently
deployed are poorly understood and require critical attention. The contributions to
this special issue begin to fill this gap in contemporary media theoretical criticism.
Olia Lialina’s contribution, which is based on the author’s keynote lecture at the 2018
Rethinking Affordance symposium, offers a comprehensive survey of the tensions
between Gibson’s and Norman’s perspectives on the concept of affordance, and
formulates an incisive critique of how Norman reconfigured Gibson’s initial theory.
Triangulating her inquiry in a critical dialogue between design practitioners, affordance
theory, and a critical reading of design pedagogy, and the revisiting of her own practice
as a pioneering net artist and digital folklore researcher, Lialina’s contribution moves
from early internet design practices through human-computer interaction and user
experience design towards a speculative consideration of the affordances of human-
robotic interaction.
Leveraging the terms of critical pedagogy, Maximillian Alvarez critiques and
disassembles the supposed affordances that digital technologies lend to the learning
environment and advances – in their place – an account of ‘critical media pedagogy.’
For Alvarez, this is a project with ontological implications. Following the work of
Bernard Stiegler, Alvarez explains that the epiphylogenesis of the human is inescapably
imbricated with the technological. Contemporary neoliberal pressures treat learning
technologies as tools that constrain and compel particular behaviours based on what
developers determine appropriate learning and teaching to be. This not only limits
the potential for learning in accordance with critical pedagogy, but also obfuscates
individuals’ capacity to form a critical understanding of the grounds for contemporary
technical life and, more fundamentally, the technological conditions of possibility
through which the human comes into being. Alvarez’s critical media pedagogy works
to undo this tendency by exploring what digital media can do for – might afford –
critical pedagogy and vice versa.
In a comparative reading of the affordance concept across a wide range of critical
theorists – from Gibson to Foucault, Deleuze, Galloway, Debord, and beyond –
Torsten Andreasen rethinks key terms of media theory (the medium, the interface, the
dispositif) and applies his insights to the close analysis of an interactive media artwork.
The author’s discussion of Transmute Collective’s Intimate Transactions (2005) thus
problematizes established assumptions that “where the medium affords [certain uses]
because of its physical design, the dispositif determines and limits a set of possible
actions.”
Aline Guillermet’s contribution traces German Informel painter K. O. Götz’s efforts to
identify and implement the affordances of television in his endeavour to realize the
historical promise of ‘kinetic painting’ – which he believed would help bridge the
demands of painterly modernism with the encroaching rise of information theory and
corresponding electronic technologies. Götz did not have access to an actual television
set and was therefore left to imagine its presumed affordances. Guillermet analyzes, as
a result, both the effects that the imagined affordances of television had on Götz’s
creative activity and historical milieu, and the possibility that technological
affordances might be conceived of as offering a ‘flexible paradigm,’ grounded within the
terms of interpretation and subjective meaning.
Charting a trajectory from J.J. Gibson’s initial theoretical writing on affordance to
Manuel DeLanda’s theory of assemblages, Michael Marcinkowski explores the concept
of digital ‘ambient literature’ projects in relation to the social assemblages that can be
established by new media art installations and the interactional affordances they
project. In doing so, the author calls for a reconfiguration of ontological assumptions
regarding the function of the affordance concept in digital contexts of experimental
literary production.
Vendela Grundell undertakes an analysis of works and practices associated with the
Blind Photography movement to expose the tactical means through which visually
impaired photographers press up against and push beyond the presumed limits of
technological affordances. Grundell argues that photographers aligned with the movement
identify and implement a series of tactical affordances through their creative practice
and within their images. Not only do the photographers surveyed identify counter-
intuitive and unexpected uses for visually-oriented technologies, but their images
visualize alternative ways of seeing the world and photography, alluding to the manner
in which technologies normalize particular ways of engaging with and thinking about
reality.
In Mark Nunes’ contribution, the affordances of digital technologies – specifically the
location-awareness of mobile apps – are explored in order to account for what the
author considers as fundamentally different agencies at play in the interactions that
many mobile apps facilitate. While digital technologies often serve to facilitate a
“relational coupling” between user and device, a user’s presence and activities are
themselves indicative of the emergence of new affordances. Drawing on actor-network
theory as a main conceptual framework, Nunes argues that technologies such as GPS,
and the large scale data analysis processes carried out by always-on apps, require a new
perspective on digital affordances, one in which human users themselves become
‘interfaces’ that mediate between algorithmic processes and the physical environments
they navigate.
Özgün Eylül İşcen leverages a historical and theoretical examination of the racialized,
racializing and ultimately dehumanizing technologies of drone warfare to call for a
critical reconsideration of the affordance concept. İşcen works to expose the political
pressures and privileges that often lurk behind the professed affordances of particular
technologies, while also charting the particular ways in which this is made manifest
through drones’ affording particular players ‘the right to look.’ İşcen illustrates these
principles and points towards strategies of critique and resistance through an
introduction to the work of artist-collective Forensic Architecture.
Following a speculative philosophical approach, Shintaro Miyazaki’s essay critiques the
blackboxing of many algorithmic processes, which the author perceives as resulting in
a kind of ‘unaffordability’ of algorithms. Engaging with current theoretical debates on
‘commonism,’ Miyazaki offers a speculative formulation of commonistic affordance
and, taking into consideration issues of access and open source, explores steps towards
a ‘making affordable’ of algorithms that emphasizes commoning rather than corporate
propertization.
Exploring some new affordances of the complex algorithmic systems that form what
is now commonly described as ‘financial technologies,’ Laura Lotti focuses on the
recent phenomenon of ‘tokenization’ within the cryptosphere; i.e., the issuance of new
crypto assets to self-fund decentralized projects. Integrating ongoing critical debates
in the field with a Simondonian reading of decentralized computation, Lotti discusses
the rampant financialization of creative practices presently observed in blockchain
contexts. Through the examples of two blockchain-based art projects (terra0 and 0xΩ),
Lotti analyses new forms of value generation and distribution and argues that various
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
40
instrumentalisations of blockchain affordances open up ways of reimagining and
reprogramming financial and social relations in contexts of decentralized computation.
Acknowledgments
We would like to extend our sincerest thanks to the reviewers and contributors for the generosity of spirit and hard work that they poured into the production of this issue. We would also like to thank Akademie Schloss Solitude and Simon Dawes for their patience and support in realizing this multifaceted project.
References
Bernhard, E. et al. (2013) “Understanding the Actualization of Affordances: A Study
in the Process Modeling Context.” Proceedings for International Conference on
Information Systems (ICIS 2013), 15-18 December 2013, Università Bocconi, Milan.
Access: http://eprints.qut.edu.au/63052/
Best, K. (2009) “Invalid Command,” Information, Communication & Society. Vol. 12(7):
1015 – 1040.
Bloomfield, B. et al. (2010) “Bodies, Technologies and Action Possibilities: When is
an Affordance?” Sociology. Vol. 44(3): 415 – 433.
boyd, d. (2011) “Social Network Sites as Networked Publics: Affordances, Dynamics,
and Implications,” in A Networked Self: Identity, Community, and Culture on Social Network
Sites. Ed. Zizi Papacharissi. Routledge: 39-58.
Carah, N. & Angus, D. (2018) “Algorithmic Brand Culture: Participatory Labour,
Machine Learning and Branding on Social Media,” Media, Culture & Society. Vol.
40(2): 178 – 194.
Chamayou, G. (2015) A Theory of the Drone. New York: The New Press.
Chen, C. et al. (2015) “DeepDriving: Learning Affordance for Direct Perception in
Autonomous Driving,” Proceedings of 15th IEEE International Conference on Computer
Vision. Access: http://deepdriving.cs.princeton.edu/
Chuang, C.Y. et al. (2017) “Learning to Act Properly: Predicting and Explaining
Affordances from Images,” arXiv. Access: https://arxiv.org/abs/1712.07576
Chun, W.H.K. (2016) Updating to Remain the Same: Habitual New Media. Cambridge, MA:
MIT Press.
Chun, W.H.K. (2006) Control and Freedom: Power and Paranoia in the Age of Fiber Optics.
Cambridge, MA: MIT Press.
SCARLETT & ZEILINGER | Rethinking Affordance
41
Cook, L. (2016) Information. Cambridge: MIT Press.
Costa, E. (2018) “Affordances-in-practice: An ethnographic critique of social media
logic and context collapse,” New Media & Society. Vol. 20(10): 3641 – 3656.
Cox, G. (2010) Antithesis: The Dialectics of Software Art. Aarhus: Digital Aesthetics
Research Center, Aarhus University. http://www.anti-thesis.net/wp-
content/uploads/2010/01/antithesis.pdf
Cox, G. (2012) Speaking Code: Coding as Aesthetic and Political Expression. Cambridge, MA:
MIT Press.
Davis, J.L. & Chouinard, J.B. (2017) “Theorizing Affordances: From Request to
Refuse,” Bulletin of Science, Technology & Society. Vol. 36(4): 241 – 248.
Diver, L. (2018) “Law as a User: Design, Affordance, and the Technological Mediation
of Norms,” SCRIPTed. Vol. 15(1): 4 – 48.
Deleuze, G. (1992) “Postscript on the Societies of Control,” October. Vol. 59: 3 – 7.
Drucker, J. (2014) Graphesis: Visual Forms of Knowledge Production. Cambridge: Harvard
University Press.
Evans, S.K. et al. (2017) “Explicating Affordances: A Conceptual Framework for
Understanding Affordances in Communication Research,” Journal of Computer-Mediated
Communication. Vol. 22: 35 – 52.
Eslami, M. et al. (2015) “I always assumed that I wasn’t really that close to [her]”:
Reasoning about Invisible Algorithms in News Feeds.” Proceedings of the 2015 Annual
Conference on Human Factors in Computing Systems. Seoul, Korea: 153 – 162.
Faraj, S., & Azad, B. (2013) “The Materiality of Technology: An Affordance
Perspective,” in Materiality and Organizing. P. M. Leonardi, B. A. Nardi, & J.
Kallinikos (Eds.), Oxford University Press: 237–258.
Finn, E. (2017) What Algorithms Want: Imagination in the Age of Computing. Cambridge:
MIT Press.
Flach, J.M. & Dominguez, C.O. (1995) “USE-Centered Design: Integrating the User,
Instrument, and Goal,” Ergonomics in Design: The Quarterly of Human Factors
Applications. Vol. 3(1): 19 - 24.
Gabrys, J. (2016) Program Earth: Environmental Sensing Technology and the Making of a
Computational Planet. Minneapolis: University of Minnesota Press.
Galloway, A. (2011) “Are Some Things Unrepresentable?” Theory, Culture & Society.
Vol. 28(7-8): 85-102.
Galloway, A. (2006) Gaming: Essays on Algorithmic Culture, Minneapolis: University of
Minnesota Press.
Gaver, W. (1991) “Technology Affordances,” in Proceedings of CHI'91: 79-84.
Gauthier, D. (2018) “On Commands and Executions: Tyrants, Spectres and
Vagabonds,” in Executing Practices. Eds. H. Pritchard et al. Open Humanities Press:
69-84.
Gero, J.S. & Kannengiesser, U. (2012) “Representational Affordances in Design, with
Examples from Analogy Making and Optimization,” Research in Engineering Design.
Vol. 23: 235 – 249.
Gibson, J.J. (1966) The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin.
Gibson, J.J. (1975) “Affordances and Behavior” in E. S. Reed & R. Jones (eds.), Reasons
for Realism: Selected Essays of James J. Gibson, pp. 410-411. Hillsdale, NJ: Lawrence
Erlbaum.
Gibson, J.J. (1979) The Ecological Approach to Visual Perception. London: Routledge.
Gillespie, T. (2010) “The Politics of ‘Platforms’,” New Media & Society. Vol. 12: 347 –
364.
Graves, L. (2007) “The Affordances of Blogging: A Case Study in Culture and
Technological Effects,” Journal of Communication Inquiry. Vol.31(4): 331 – 346.
Greeno, J.G. (1994) “Gibson’s Affordances,” Psychological Review. Vol. 101(2): 336-342.
Hand, M. & Scarlett, A. (2019) “Habitual Photography: Time, Rhythm and
Temporalization in Contemporary Personal Photography,” in The Routledge
Companion to Photography Theory. Eds. Tormey, J. & Durden, M. London: Routledge.
Hansen, M. (2015) Feed-Forward: On the Future of Twenty-First Century Media. Chicago:
University of Chicago Press.
Hartson, R. (2003) “Cognitive, Physical, Sensory, and Functional Affordances in
Interaction Design,” Behaviour & Information Technology. Vol. 22(5): 315-338.
Haugeland, J. (1993) “Mind Embodied and Embedded,” Mind and Cognition: 1993
International Symposium. Eds. Yu-Houng H. Houng & J. Ho. Academia Sinica: 233
- 267.
Hayles, N.K. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature,
and Informatics. Chicago: University of Chicago Press.
Heemsbergen, L. (2019) “Killing Secrets from Panama to Paradise: Understanding the
ICIJ through Bifurcating Communicative and Political Affordances,” New Media &
Society. Vol. 21(3):693–711.
Hoelzl, I. & Marie, R. (2015) Softimage: Towards a New Theory of the Digital Image.
Chicago: University of Chicago Press.
Horton, T.E. et al. (2012) “Affordances for Robots: A Brief Survey,” AVANT. Vol.
3, No. 2:70 – 84.
Hurley, Z. (2019) “Imagined Affordances of Instagram and the Fantastical
Authenticity of Female Gulf-Arab Social Media Influencers,” Social Media + Society.
January/March: 1 – 16.
Hutchby, I. (2001) Conversation and Technology: From the Telephone to the Internet.
Cambridge: Polity Press.
Kane, C. (2014) Chromatic Algorithms: Synthetic Color, Computer Art, and Aesthetics After
Code. Chicago: University of Chicago Press.
Kirschenbaum, M. (2012) Mechanisms: New Media and the Forensic Imagination. Cambridge:
MIT Press.
Klemme, H.F. & Kuehn, M. (2016) The Bloomsbury Dictionary of Eighteenth-Century German
Philosophers. London: Bloomsbury.
Latour, B. (1999) Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge:
Harvard University Press.
Leonardi, P.M. (2011) “When Flexible Routines Meet Flexible Technologies:
Affordance, Constraints, and the Imbrication of Human and Material Agencies,”
MIS Quarterly. Vol. 35, No. 1, Pp. 147 – 167.
Letiche, H. & Lissack, M. (2009) “Making Room for Affordance,” E:CO. Vol. 11(3):
61 – 72.
Lippard, L. (1972) Six Years: The Dematerialization of the Art Object. Berkeley: University
of California Press.
Lister, M. et al. (2009) New Media: A Critical Introduction. New York: Taylor & Francis.
Lovink, G. (2014) “Hermes on the Hudson: Notes on Media Theory After Snowden,”
eflux. Vol. 54.
Manovich, L. (2001) The Language of New Media. Cambridge: MIT Press.
Massumi, B. (2015) Ontopower: War, Powers and the State of Perception. Durham: Duke
University Press.
McGrenere, J. and Ho, W. (2000) “Affordances: Clarifying and Evolving a Concept,”
in Proceedings of Graphics Interface: 179-186.
McVeigh-Schultz, J. & Baym, N.K. (2015) “Thinking of You: Vernacular Affordance
in the Context of the Microsocial Relationship App, Couple,” Social Media + Society,
https://doi.org/10.1177/2056305115604649
Mitchell, W.J.T. & Hansen, M. (2010) Critical Terms for Media Studies. Chicago:
University of Chicago Press.
Moloney, J. et al. (2018) “The Affordances of Virtual Reality to Enable the Sensory
Representation of Multi-Dimensional Data for Immersive Analytics: From
Experience to Insight,” Big Data. Vol. 5(53): 1-19.
Morineau, T. et al. (2009) “Turing Machine as Ecological Model for Task Analysis,”
Theoretical Issues in Ergonomics Science. Vol. 10(6): 511 - 529.
Nagy, P. & Neff, G. (2015) “Imagined Affordance: Reconstructing a Keyword for
Communication Theory,” Social Media + Society. July-December 2015: 1 – 9.
Neff, G., Jordan, T., McVeigh-Schultz, J. & Gillespie, T. (2012) “Affordances, Technical
Agency, and the Politics of Technologies of Cultural Production,” Journal of
Broadcasting & Electronic Media. Vol. 56: 299 – 313.
Noll, A.M. (2016) “Early Digital Computer Art at Bell Telephone Laboratories,
Incorporated,” Leonardo. Vol. 49(1): 55 – 65.
Norman, D. (1988) The Psychology of Everyday Things. New York: Basic Books.
Norman, D. (1999) “Affordance, Conventions and Design,” Interactions 6(3): 38-43.
Norman, D. (2009) The Design of Future Things. Philadelphia: Perseus Books/Basic
Books.
Norman, D. (2013) The Design of Everyday Things: Revised and Expanded Edition. New
York: Basic Books.
Oliver, J., G. Savičić and D. Vasiliev (2011-2019) Critical Engineering Manifesto.
https://criticalengineering.org/
Oliver, M. (2005) “The Problem with Affordance,” E-Learning and Digital Media. Vol.
2(4): 402-413.
Patterson, Z. (2015) Peripheral Vision: Bell Labs, the S-C 4020, and the Origins of Computer
Art. Cambridge: MIT Press.
Parisi, L. (2013) Contagious Architecture: Computation, Aesthetics and Space. Cambridge: MIT
Press.
Park, S. & Rhee, O.J. (2013) “Affordance in Interactive Media Art Exhibition,”
International Journal of Asia Digital Art Design: 93-99.
Parikka, J. (2015) “The City and the City: London 2012 Visual (Un)Commons,”
Postdigital Aesthetics: Art, Computation and Design. Eds. David M. Berry & Michael
Dieter. New York: Palgrave Macmillan: 203 – 218.
Pressman, R. & Maxim, B. (2014) Software Engineering: A Practitioner’s Approach (8th ed.).
McGraw-Hill Education.
Pozzi, G. et al. (2014) “Affordance Theory in the IS Discipline: A Review and Synthesis
of the Literature.” Proceedings for Twentieth Americas Conference on Information Systems.
Pucillo, F. & Cascini, G. (2013) “A Framework for User Experience, Needs and
Affordances,” Design Studies. Vol. 35(2): 160-179.
Ruecker et al. (2011) Visual Interface Design for Digital Cultural Heritage: A Guide to Rich
Prospect Browsing. Surrey: Ashgate Publishing.
Saponaro, G. et al. (2018) “Learning at the Ends: From Hand to Tool Affordances in
Humanoid Robots.” Proceedings: IEEE International Conference on Development and
Learning and on Epigenetic Robotics. Access: https://arxiv.org/abs/1804.03022
Scarlett, A. (2018). “Realizing Affordance in the Techno-Industrial Artist Residency,”
Schloss-Post.com. Access: https://schloss-post.com/realizing-affordance-techno-
industrial-artist-residency/
Schrock, A.R. (2015) “Communicative Affordances of Mobile Media: Portability,
Availability, Locatability, and Multimediality,” International Journal of Communication.
Vol. 9: 1229 – 1246.
Shanken, E. (2015) Systems. Cambridge: MIT Press.
Shaw, A. (2017) “Encoding and Decoding Affordances: Stuart Hall and Interactive
Media Technologies,” Media Culture & Society. Vol 39(4): 592 – 602.
Sloterdijk, P. (2010) “Das Zeug zur Macht,” in Der Welt Über die Straße Helfen.
Designstudien im Anschluss an Eine Philosophische Überlegung. Munich: Fink: 7-26.
Smith, B.C. (1996) On the Origin of Objects. Cambridge: MIT Press.
Soegaard, M. (n.d.) “Affordances,” in The Glossary of Human Computer Interaction. Access:
https://www.interaction-design.org/literature/book/the-glossary-of-human-
computer-interaction/affordances
Stone et al. (2005) User Interface Design and Evaluation. New York: Elsevier.
Striphas, T. (2015) “Algorithmic Culture,” European Journal of Cultural Studies. Vol. 18(4-
5).
Sutcliffe, A.G. et al. (2011) “Social Mediating Technologies: Social Affordances and
Functionalities,” International Journal of Human-Computer Interaction. Vol. 27(11): 1037
– 1065.
Von Borries, F. (2016) Weltentwerfen. Eine Politische Designtheorie. Berlin: Suhrkamp.
Wang, Xiaolong et al. (2017) “Binge Watching: Scaling Affordance Learning from
Sitcoms.” Proceedings of IEEE Conference on Computer Vision and Pattern Recognition
(CVPR): Access:
http://www.cs.cmu.edu/~xiaolonw/papers/CVPR_2017_VAE_affordance.pdf
Wells, A. J. (2002) “Gibson’s Affordances and Turing’s Theory of Computation,”
Ecological Psychology, Vol. 14(3): 140 - 180.
Zeilinger, M. (2018) “Plotting Critical Research Practice in Digital Art” in Golding, S.
(Ed.) Parsing Digital, London: Austrian Cultural Forum: 22-37.
Zielinski, S. (2008) Deep Time of the Media: Toward an Archaeology of Hearing and Seeing by
Technical Means. Cambridge: MIT Press.
Notes
1 Video excerpts from the piece, showing details from the apparatuses and their interaction, can be found as part of a short text the artist contributed to an online collection developed as part of the larger Rethinking Affordance project. See https://schloss-post.com/rotating-divinatory-hexagrams.
2 Beyond the conjoining of Hahn’s mechanical calculator with the Bosch robotic arm, Deep Time Machine Learning also thematizes other aspects of how to make the computational (i.e., machine vision or algorithmic operations) human-legible, and how, in turn, to make human expression computable. In doing so, the work extends its backwards-and-forwards reach into two additional directions not addressed here, which are represented, respectively, by a Faustkeil (a paleolithic hand-axe) that features in the form of an ultra-high resolution 3D print, and by the European Union’s tentative steps towards the issuing of policy and ethics directives at the intersection of humanity and AI.
3 In Chapter 8 of this latter book, Gibson outlines a kind of ‘pre-history’ of his affordance concept, which references Gestalt theory and foundational theories of the psychology of perception.
4 In addition to this theoretical oversight, Norman’s call to streamline and ensure ‘correct’ use through the communicative clarity of design also encourages a narrowing of both the real and perceived affordances that are (or might be) realizable through the use of digital devices and operations. Olia Lialina (this issue) responds critically to Norman’s approach here, reminding us that, unlike physical objects and environments, the digital is theoretically capable of modelling anything, adopting processually malleable and aesthetically unprecedented forms. Norman, she argues, fails to appropriately recognize these possibilities, and encourages designers to actively hide them for the sake of controlled usability.
5 Echoing Schrock’s (2015) conceptualization of ‘communicative affordances’, boyd defines networked publics as “publics that are restructured by networked technologies” (39). They are, as a result, “simultaneously (1) the space constructed through networked technologies and (2) the imagined collective that emerges as a result of the intersection of people, technology, and practice” (ibid.).
6 As Shaw (2017) has noted in her consideration of similarities between the recognition of the
affordances offered by interactive media technologies and Stuart Hall’s theorization of encoding and decoding, “by introducing imagination to affordances … [Nagy & Neff] also acknowledge that there are aspects of mediated experiences that are invisible to users. Algorithms, for instance, affect what users can and cannot do in online spaces, but operate out of view” (600).
7 While it is critical to parse the ideological grounds of algorithmic affordances, it is important to acknowledge that much of what unfolds algorithmically eludes understanding – even to those who author the algorithms (Finn, 2017: 35).
8 The event program, including a full list of participating artists and researchers, can be found at http://www.akademie-solitude.de/en/events/symposium-rethinking-affordance~no3926/.
9 Additional documentation of the work is available at https://vimeo.com/266917634 and https://schloss-post.com/ways-sitting-wip/.
10 This is the case, for example, for Scriptocoin, a “Crypto-Pharma Ecosystem Built on Blockchain” promising to “Permanently Revolutionize the Pharmaceutical Industry Paradigm.” At the time of writing, a ‘Token Sale Pre-ICO,’ i.e., a sale of project shares taking place before the cryptocurrency powering the project is actually deployed, is underway, but the platform’s live link to its white paper literally just leads to white paper – a PDF that simply states, “white paper Coming Soon.”
11 Additional contributions to the exhibition, not discussed in detail here, also included: an installation by Situated Systems (Sherri Wasserman/US, Georgina Voss/UK, Debbie Chachra/CAN/US, and Ingrid Burrington/US) documenting the outcome of the artists’ stay at the Pier 9 Artist-in-Residence (AiR) program, which they spent researching and analysing how the military-industrial complex has shaped technological culture and innovation emerging from the Bay Area; German artist Sebastian Schmieg’s I Will Say Whatever You Want In Front Of A Pizza (2017), a critical exploration of new types of labour exploitation afforded by digital ‘gig economy’ platforms, which takes the form of a video essay produced entirely within Prezi (a web-based presentation tool); Bryan Cera’s Prosumption and Alienation (2018), a series of ceramic tea cups created on a custom-built 3D printer; and Martin Zeilinger’s Iterative Schotter (2017), a series of plotter prints which explores how early computer art engaged with the affordances of new technologies, following an approach of iteratively re-coding reproductions of Georg Nees’ influential generative art work, Schotter (ca. 1965).
Ashley Scarlett is an Assistant Professor in Critical and Creative Studies at the Alberta University of the Arts, Canada.
Email: [email protected]
Martin Zeilinger is Senior Lecturer in Computational Arts and Technology at Abertay University, UK.
Email: [email protected]
Special Issue: Rethinking Affordance
Once Again, the Doorknob:
Affordance, Forgiveness, and
Ambiguity in Human-Computer
Interaction and Human-Robot
Interaction
OLIA LIALINA
Merz Akademie, Stuttgart, Germany
Media Theory
Vol. 3 | No. 1 | 49-72
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
Based on the author’s keynote lecture at the 2018 ‘Rethinking Affordance’ symposium (Stuttgart, Germany), this essay offers a comprehensive survey of the tensions between J.J. Gibson’s and Don Norman’s perspectives on the concept of affordance, and formulates an incisive critique of how Norman reconfigured Gibson’s initial theory. The essay’s key arguments are triangulated in a critical dialogue between design practices, affordance theory, and a critical reading of design pedagogy. Drawing on her own practice as a pioneering net artist and digital folklore researcher, the author moves from early internet design practices through human-computer interaction and user experience design towards a speculative consideration of the affordances of human-robot interaction.
Keywords
AI, Affordance, Interface Design, UX
Introduction
This essay aims to rethink the concept of affordance through a triangulated analysis
of correspondence with design practitioners, critical re-readings of canonical texts,
and reflexive engagement with my own creative and pedagogical practices. As both a
net artist and an instructor in the field of digital design, I strive to reflect critically on
the medium that I work with, in part by way of exploring and showing its underlying
properties. Furthermore, as a web archivist and digital folklore researcher, I am also
interested in examining how users deal with the worlds that they are thrown into by
designers. These areas of research and practice rely and build upon the core tenets of
human-computer interaction (HCI) and interface design – both of which provide the
conceptual frameworks within which the term ‘affordance’ is now embedded, as well
as the contexts in relation to which it is primarily discussed and interpreted. To
rethink affordance, then, it is necessary to think critically about interface design and
the contemporary status of human-computer interaction (or, as will be discussed
below – human-robot interaction).
Interface Design
In the entry on the concept of the interface in Software Studies: A Lexicon, M. Fuller
and F. Cramer define interfaces as links that connect “software and hardware to each
other and to their human users or other sources of data.” After defining five types of
interfaces, the authors note that the fifth, the “user interface” – i.e., the “symbolic
handles” that make software accessible to users – is “often mistaken in media
studies for ‘interface’ as a whole” (Fuller, 2008: 149). The following text is not an
exception. It brackets software-to-software interfaces, hardware-to-software
interfaces, as well as other types of interfaces that belong to engineering and
computer science, and deliberately discusses only the surfaces, the clues and “links”
provided to the human user by software designers.
To say that the design of user interfaces powerfully influences our daily lives is both
a commonplace observation and a strong understatement. User interfaces influence
users’ understanding of a multitude of processes, and help shape their relations with
companies that provide digital services. From this perspective, interfaces define the
roles computer users get to play in computer culture.
As a field of practice, interface design is effectively devoted to decision-making – or
rather, to the facilitation of decision-making processes. Decisions are often made
gently and silently. Often, they are made with good intentions, and more often still,
with no intention at all. The key point is that decisions are made – just like metaphors
are chosen, idioms learned, and affordances introduced. The banality, or ‘common-
sense’ orientation, of this process in no way reflects the gravity of the interface’s
effects. From this perspective, to think of the interface, particularly in relation to the
concept of ‘affordance,’ means to reflect both on the ideological stakes of the design
choices underpinning decision-making processes and on the decision-making
practices that they encourage users to undertake. Such a reflection must also include
the question of what exactly professional interface designers study in order to be able
(and to be allowed) to make these choices. In other words: what should students
who will become interface designers (or “front end developers,” or “UX designers”
– there are many different terms and each of them could be a subject of
investigation) be taught?
LIALINA | Once Again, The Doorknob
51
From a pedagogical standpoint, there are a number of important paradigms that can
be established right away, in an effort to foreground (rather than obscure) the
ideological constitution and implications of the interface: Students studying interface
design, front-end development, user experience (UX), or those seeking opportunities
to reflect critically on these fields, should not begin by making an ‘improved’
prototype of an interface that already exists. Nor should they be guided towards
‘mastering’ design functions (such as, for example, drop shadows or rounded
corners). Perhaps a less intuitive alternative approach should be followed, but what
might this be? Should they begin the work of designing interfaces by studying
philosophy, cybernetics, Marxism, dramaturgy and the arts more generally, and only
afterwards set out to create the first button or begin to complete any similarly
rudimentary interface design tasks?
As a workable compromise, interface design students might be introduced to key
texts that reveal the power that user interface designers have. It is critical that they
come to understand that there is no objective reality or reasoning, no nature of
things, no laws, no commandments that underpin this field. There is only this:
decisions that were and will be made, consciously or unconsciously, and the critical
implications of wielding the power to structure these decision-making processes.
This sentiment is advanced by Jay Bolter and Diana Gromala in Windows and Mirrors
(2003), a now canonical text in the field, when they state that “[i]t is important for
designers and builders of computer applications to understand the history of
transparency, so that they can understand that they have a choice” (Bolter and
Gromala, 2003: 35). The text is relatively well-known in the field of media theory as
one of its authors coined the concept of remediation (Bolter and Grusin, 2000);
however, it is largely ignored in interface design. This is an unfortunate example of
how a text that usefully questions mainstream practices of interface design is
acknowledged in theoretical, reflective discourse, but disregarded in more practice-
based contexts, which continue to rely on the postulate that the best interfaces are
intuitive and transparent, to the point where users might assume no interface exists
at all.
While artists working with digital technologies are more likely to choose reflexivity
over transparency in an effort to re-think, re-imagine, and problematize the working
of interfaces, designers are traditionally less likely to do so. When the artist Johannes
Osterhoff – who identifies as an “interface artist,” and who is known for witty, long-
term performances including Google, iPhone live, or Dear Jeff Bezos (Osterhoff, 2011;
2012; 2013) – was invited to teach a university course on basic interface design, he
chose to name the course after the book, Windows and Mirrors. In his teaching, he
guided students through the creation of projects that focused on looking at
interfaces, reflecting on metaphors, idioms, and, ultimately, rethinking affordances.
Soon after, Johannes took on the position of Senior UX Designer at SAP, one of the
world’s biggest enterprise software corporations, and I took over the course from
him a few years ago. Approximately a decade on, one might still begin a critical
conversation about interface design with some of the essays in Brenda Laurel’s
perennially useful edited collection, The Art of Human-Computer Interface Design (1990).
Published approximately five years after graphical user interfaces had begun to be
popularized, the book reflects on some of the issues and problems that arose during
this process. The book contains essays by practitioners, many of whom, almost three
decades after the book’s initial publication, have either turned into pop stars of the
electronic age, or have by now been forgotten (as well as some who have recently
been rediscovered). A particularly pertinent text in this regard is “Why Interfaces
Don’t Work” by Don Norman (1990). The text contains numerous statements that
are repeatedly quoted, referenced and internalized by generation after generation of
interface designers. Several of Norman’s most cited claims include:
“The problem with the interface is that there is an interface” (Norman, 1990: 217).
“Computers exist to make life easier for the user” (ibid.).
“The designer should always aim to make the task dominate, while making the tools
invisible” (ibid.).
And, “The computer of the future should be invisible” (218).
While these particular points are not typographically foregrounded or emphasized by
the author himself, they have, nevertheless, become a kind of manifesto and
mainstream paradigm for thinking about computers, human-computer interaction
and, by extension, about the affordances of the technologies under consideration. As
each of these statements suggests, in sentence after sentence, metaphor after
metaphor, Norman argues that users of computers are not interested in computers
themselves; what they desire, he claims, is to spend the least possible amount of time
with a computer as such. As a theoretician – and, more importantly, as a designer
working for Apple – Norman was thus pushing for the development of invisible or
„transparent‟ interfaces. In fact, it is through his work that the term “transparent”
started to become synonymous with the terms “invisible” and “simple” in interface
design circles. Sherry Turkle sums up this swift development in the 2004
introduction to her 1984 book, The Second Self:
“In only a few years the ‘Macintosh meaning’ of the word Transparency
had become a new lingua franca. By the mid-1990s, when people said
that something was transparent, they meant that they could immediately
make it work, not that they knew how it worked” (Turkle, 2004: 7).
The idea that users should not even notice the presence of an interface had thus
become widely accepted, and generally perceived as a blessing. Jef Raskin, initiator of
the Macintosh project, and author of many thoughtful texts on the subject, writes at
the outset of The Humane Interface (2000): “Users do not care what is inside the box,
as long as the box does what they need done. […] What users want is convenience
and results” (8). In practice, however, this perspective is contradicted by the work of
many media artists, discussed, for example, in the aforementioned Windows and
Mirrors, and likewise by many websites created by everyday users in the early 1990s.
In fact, such websites may offer the best arguments to counter the assumption that
users do not want to think about interfaces. Early DIY web design shows, very much
against the core assumptions formulated by Norman, that users were constantly busy
envisioning and developing interfaces that were not only visible, but even
foregrounded. Many examples of such sites are collected in my One Terabyte of Kilobyte
Age archive (Figures 1 and 2), and show that users indeed often work actively against
the idealized invisibility and transparency of interfaces.
Figure 1. From One Terabyte of Kilobyte Age (2009, ongoing), Olia Lialina and Dragan Espenschied
Norman, in order to support his intention of removing the interface from even the
peripheral view of the user, lifted the well-known doorknob metaphor from industrial
design and imported it into the world of HCI, quoting himself from The Psychology of
Everyday Things (1988): “A door has an interface – the doorknob and other hardware –
but we should not have to think of ourselves as using the interface to the door: we
simply think about ourselves as going through the doorway, or closing or opening
the door” (Norman, 1990: 218). There is probably no other mantra of interface
design that has been quoted more often than this statement. Given the preceding
discussion of interface design in this article, does it appear appropriate that Norman‟s
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
54
writing is almost universally assigned as core reading for budding interface design students? Perhaps it is, if one considers the sentence following the passage just quoted: “The computer really is special: it is not just another mechanical device” (ibid., 218). Here, Norman momentarily slips and acknowledges the computer as a complex, difficult system. But he quickly catches himself, and immediately following this statement he reasserts his claim that the computer’s purpose is primarily to simplify lives.
Figure 2. From One Terabyte of Kilobyte Age (2009, ongoing), Olia Lialina and Dragan Espenschied
In contrast to the trajectory Norman seeks to prescribe, his tangential observation that a computer is “not just another mechanical device” points to what is perhaps the most important idea students of interface design should take to heart: the complexity and beauty of general purpose computers. The purpose of such a device is not to simplify life (although this may sometimes be an effect of its many uses). Rather, one could think of the computer’s potential purpose as enabling a kind of human-computer symbiosis. When writing the programmatic Man-Computer Symbiosis (1960), J.C.R. Licklider appropriately quoted the French mathematician Henri Poincaré’s proclamation that “the question is not what is the answer, the question is what is the question” (75). In doing so, he indicated that if computers were to be considered collaborators or colleagues, they should also be involved in the process of formulating questions, rather than simply being put to task answering them. Similarly complex purposes of the computer have been formulated, for example that of bootstrapping (as discussed in Engelbart),1 and that of ‘realising opportunities’,2 as Vilém Flusser put it in Digitaler Schein (1997: 213) – incidentally in the same year that
Norman’s text was published. All of these observations certainly point to significantly more complex affordances of computer technology than simply that of “making life easier.”
Not only is Norman’s simplification and erasure of the computer interface at odds with critical approaches adopted by other prominent theorists of the time; one can also sense that Norman’s contemporaries were not particularly excited about his treatment of the doorknob. In a short introductory article, “What is Interface,” Brenda Laurel diplomatically notes that doors and doorknobs in fact project significant complexity with regard to issues of control and power; indeed, they necessitate difficult determinations of “who is doing what to whom” (1990: xii). “An interface is a contact surface. It reflects the physical properties of the interactors, the functions to be performed, and the balance of power and control,” Laurel continues (ibid.).
Similarly, when Bruno Latour published “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts” (1992), its reference list suggests that he was well acquainted with Norman’s writing. The text contains a highly pertinent section entitled “Description of the Door,” which canonizes the door as a “miracle of technology” that “maintains the wall hole in a reversible state.” Word by word, Latour’s analysis of a note pinned to a door (“The groom is on Strike, For God’s Sake, Keep the door Closed”) and his elaborate remarks on every mechanical detail – knobs, hinges, grooms – fully dismantle Norman’s attempt to portray the doorknob as something simple, obvious, and intuitive.
“Why Interfaces Don’t Work” does not mention the term affordance, but the doorknob symbolizes the term very well, and has accompanied the concept across most design manuals. What is important to emphasize is that it was Don Norman who first adapted ‘affordance’, originally coined by ecological psychologist J. J. Gibson, for the world of human-computer interaction. Victor Kaptelinin provides a good summary of this topic in his entry on affordances in the 2nd edition of The Encyclopedia of Human-Computer Interaction, a highly recommended resource. Here, affordance is “[…] considered a fundamental concept in HCI research and described as a basic design principle in HCI and interaction design” (Kaptelinin, 2018, author’s emphasis). “For designers of interactive technologies the concept signified the promise of exploiting the power of perception in order to make everyday things more intuitive and, in general, more usable.”
Significantly, the entry pertains to Norman’s figuration of affordance, not Gibson’s. Within the fields of HCI and interface design, it is Norman’s reconfiguration of ‘affordance’ that seems to have become the assumed source of the concept itself. A widely quoted table in Joanna McGrenere and Wayne Ho’s “Affordances: Clarifying and Evolving a Concept” demonstrates the key differences between the two theorists’ conceptualisations of the term, and summarizes the conceptual shift as
follows: “Norman [...] is specifically interested in manipulating or designing the environment so that utility can be perceived easily” (2000: 8). By contrast, Gibson’s definition does not include “Norman’s inclusion of an object’s perceived properties, or rather, the information that specifies how the object can be used,” and instead holds that an “affordance is independent of the actor’s ability to perceive it” (McGrenere and Ho, 2000: 3).
As is well known, Norman later conceded that he had misinterpreted Gibson’s term (Norman, 2008a), and corrected his general definition to pertain more specifically to “perceived affordances.”3 Elsewhere, he elaborates:
“Far too often I hear graphic designers claim that they have added an
affordance to the screen design when they have done nothing of the sort.
Usually they mean that some graphical depiction suggests to the user that
a certain action is possible. This is not affordance, either real or
perceived. Honest, it isn’t. It is a symbolic communication, one that
works only if it follows a convention understood by the user” (Norman
2008b).
Almost two decades later, the community of interface designers has grown vastly, but claims about supposed affordances have become even more ridiculous, to the point where the term is used by UX designers in extremely wide-ranging senses and has become a substitute for almost any front-end term. A recent article, “How to use affordances in UX,” published online by Tubik Studio, demonstrates this well (Tubik Studio, 2018). The title immediately indicates considerable confusion, suggesting that an ‘affordance’ is perhaps simply an element of an app that can be used alongside other design elements such as ‘menu’, ‘button’, ‘illustration’, ‘logo’, or ‘photo’. The article then goes on to reference a recent text in which a taxonomy of six rather absurd types of affordances is proposed, categorised as explicit, hidden, pattern, metaphorical, false, and negative (Borowska, 2015). Here, the designer not only moves further away from Gibson’s binary perspective (that an affordance either exists or does not exist), but also extends Norman’s notion of the “perceived” affordance to the level of the absurd. This terminological mess is nothing new for the field of design, in which varied and divergent usages of the term affordance point to many troubling issues, including, for example, the careless imprecision with which concepts such as “transparency” and “experience” are used.
Could these careless games with the term ‘affordance’ be ignored, or perhaps even be perceived positively, as a commendable attempt to bring sense into a confusing world of clicking, swiping and drag-and-dropping, as a good intention to contextualize these interactions? It certainly merits emphasizing that neither the desire to define ‘affordance’ nor the careless use of the term is quite as innocent as it may sometimes appear. As a cornerstone of the HCI paradigm of ‘User Centered Design’ –
coined and conceptualized (once again) by Don Norman in the mid-1980s – the concept of affordance is equally important to the User Experience bubble initiated (yet again!) by Norman (Merholz, 2007). The two concepts more or less collapsed into one another around 1993, when Norman became head of research at Apple. From then on, User Experience – or UX – swallowed other possible ways of imagining what an interface might be, and how it might be used. I wrote about the danger of scripting and orchestrating user experiences in “Rich User Experience, UX and Desktopization of War,” where I noted that such scripting raises the “user illusion” to a level at which users are asked to believe that there is no computer, no algorithms, no input (Lialina, 2015a). But as I noted in an earlier piece, “Turing Complete User” (Lialina, 2012), it is very difficult to criticize the concept of UX itself, because it has developed such a strong aura of doing the right thing, of “seeing more,” “seeing beyond,” etc.
Statements by many contemporary UX designers confirm this perception. For
example, when asked about his interpretation of UX, Johannes Osterhoff noted that:
“When I say UX I usually mean the processes that I set up so that a
product meets customers’ (i.e., users’) needs. [I say] ‘processes’ because
usually I deal with complicated tools that take a long time to develop and
refine – much beyond an initial mock-up and a quick subsequent
implementation. So when I say UX I mean the interplay of measures that
have to be taken to enhance a special piece of software on the long run:
this involves several disciplines such as user research, usability testing,
interaction design, information visualization, prototyping, scientific and
cultural research, and some visual design. In a big software company,
strategy and psychology is part of this, too. And also streams of
communication; which form and frequency is updated; what works in
cross-located teams and what does not” (Correspondence with the
author, June 3, 2018).
In response to the same question, Florian Dusch, principal of the Stuttgart-based
software design and research company “zigzag,” also refers to UX as “many things,”
“holistic,” and “not only pretty images” (Correspondence with the author, June 2,
2018). Golden Krishna, a designer employed at Google, in a text with the telling title
The Best Interface Is No Interface (2015), offers this list of terms to define UX: “People,
happiness, solving problems, understanding needs, love, efficiency, entertainment,
pleasure, delight, smiles, soul, warmth, […] etc. etc. etc.” (47). And, finally, the
German academic, Marc Hassenzahl, approximates a definition of UX by
introducing himself thus on his website: “He is interested in designing meaningful
moments through interactive technologies – in short: Experience Design”
(Hassenzahl, n.d.). This small sample of quotes from individuals who have been in
the design profession for a long time serves well to convey the sense that UX is
growing ever more complex, and is maturing into a very large field. The paradox is that, in practice, products of User Experience Design often contradict the image and aura of the field: UX is about nailing things down; it has no place for ambiguity or open-ended processes.
Figure 3. From One Terabyte of Kilobyte Age (2009, ongoing), Olia Lialina and Dragan Espenschied
Figure 4. From One Terabyte of Kilobyte Age (2009, ongoing), Olia Lialina and Dragan Espenschied
Marc Hassenzahl, quoted above, contributes to the field not only through poetic statements and interviews. In Experience Design: Technology for All the Right Reasons (2010), he offers “the algorithm for providing the experience” (12), in which the “why” is a crucial component, a hallmark that justifies UX’s distinguished position. In a series of video interviews that Hassenzahl recorded with the Interaction Design Foundation (Interaction Design Foundation, n.d.), the multitude of reasons that can lie behind phone calls is used to illustrate this idea: business, a goodnight kiss, checking whether a kid is at home, ordering food. Ideally, each of the “whys” behind these calls would drive, and result in the design of, specific user experiences, with regard to both the software and the hardware involved. From this perspective, an ideal UX phone would be one that adjusts to different needs, or that at least offers a different app for different types of calls. In this sense, the ‘why’ of UX is not a philosophical question but a pragmatic one – it could be substituted with “what exactly?” and “who exactly?” User Experience Design could thus form a successful attempt to overcome the historic accident that Don Norman holds responsible for the difficult-to-use interfaces of the late 1980s: “We have adapted a general purpose technology to very specialized tasks while still using general tools” (Norman, 1990: 218).
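Taken literally, this “why”-driven methodology reduces to a lookup: each reason behind a call selects a pre-scripted experience, decided by the designer in advance. The following deliberately crude sketch illustrates that reduction; the app names and descriptions are hypothetical, invented purely for illustration.

```python
# Deliberately crude sketch of "why"-driven UX: the reason behind a phone
# call selects a pre-scripted experience. App names are hypothetical.
EXPERIENCES = {
    "business": "SchedulerCall: agenda on screen, call transcribed",
    "goodnight kiss": "WarmCall: dimmed screen, video by default",
    "checking on a kid": "HomeCheck: one-tap call, location card first",
    "ordering food": "OrderLine: menu overlay, order history pre-loaded",
}

def design_for(why):
    # The 'why' is answered in advance by the designer, not left to the user.
    return EXPERIENCES.get(why, "a general purpose phone call")

print(design_for("goodnight kiss"))
print(design_for("complaining about UX"))  # falls back to general purpose
```

The fallback branch is, in effect, what the rest of this article argues for: the general purpose call that does not ask why.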
“We can design in affordances of experiences,” said Norman in 2014 (Interaction
Design Foundation, n.d.). What a poetic expression, if we allow ourselves to forget
that ‘affordance’ in HCI means an immediate, unambiguous clue, and ‘experience’ an interface scripted for a very particular, narrow scenario.
There are many such examples of tightly scoped scenarios. To name one that has recently received significant public attention (in the aftermath of the Cambridge Analytica scandal): Facebook announced an app for long-term relationships (Machkovech, 2018) – real long-term relationships, not just “hook-ups” (to quote Mark Zuckerberg). I have elaborated my position on general purpose computers and general purpose users elsewhere (see “Turing Complete User” and “Do You Believe in Users” [2009]), and, following that perspective, I believe that there should be no dating apps at all; not because dating is wrong, but because individuals can actually date using general purpose software: they can date in email, in chats, in Excel and Etherpad. If the free market demands dating software, it should be made without asking “why?” or “what exactly?”, “hook-up or long-term relationship?”, etc. – a general purpose dating app, instead of one that compartmentalises and pigeonholes. The “why” of UX should be left to the users, as should their right to change the answer and still continue to use the same software.
In One Terabyte of Kilobyte Age, I included a “before_” identifier, assigned to pages that were created with certain purposes in mind – purposes that have nowadays been taken over by industrialized, centralized tools and platforms. One such category is “before_flickr;” another is “before_googlemaps.” The last figure reminds me of ratemyprofessors.com, so I tagged it “before_ratemyprofessor” (Figures 3 and 4).
The webpages collected in my archive are dead, and none of them became
successful, but they are examples of users finding individual ways of doing what they
desire, in an environment that is not custom-designed for their specific goals: in
contrast to the visions of interface design presented above – of restrictive views on
what kinds of experiences the web affords – this is what I would call a true user
experience, even though it is completely against what has become the dominant
ideology of UX.
Apart from contradicting Don Norman’s definition and insisting that computers of the future should be visible, I also propose that the term affordance should finally be severed from Norman’s perspective. This means disconnecting ‘affordance’ from experience and from the ability to perceive directly (as described in Gibson), and, consequently, also disconnecting it from the requirements and goals of experience design. It means positioning ‘affordances’ as possibilities of action. The computer’s core ‘affordance’, then, corresponds to its conceptualization as a ‘general purpose’ device – capable of becoming anything, provided that one is given the option to program it. Ultimately, such a perspective on the concept of affordance (particularly within the fields of HCI and design) means allowing oneself and others to recognize (and, potentially, to act upon) the opportunities and risks of a world that is no longer restrained by mechanical-age conventions, assumptions, and design choices.
In the latest edition of the influential interaction design manual, About Face, the
authors observe:
“A knob can open a door because it is connected to a latch. However in
a digital world, an object does what it does because a developer imbued
it with the power to do something […] On a computer screen though,
we can see a raised three dimensional rectangle that clearly wants to be
pushed like a button, but this doesn’t necessarily mean that it should be
pushed. It could literally do almost anything” (Cooper et al., 2007: 284).
Throughout the chapter, designers are advised to resist this opportunity to design interfaces that could ‘literally do almost anything,’ and instead to consistently follow recognized conventions. Because in the world of zeroes and ones everything is, in principle, possible, the authors introduce the notion of a “contract” as a means of establishing constraints and thereby limiting users’ potential recognition of affordances: “When we render a button on the screen, we are making a contract with the user […]” (285). This notion postulates that if there is what appears to be a button on the screen, users should be able to press it – not, for example, drag-and-drop it. The designed object, in other words, should respond appropriately to the expectations of the users. However, this proposition holds only as long as the envisioned interface is limited to the horizon of preconceived uses and functions of buttons.
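The asymmetry at stake here can be caricatured in a few lines of code. The sketch below is hypothetical, from no real UI toolkit: an on-screen widget’s handler can be bound to any callable at all (it “could literally do almost anything”), and it is only a conventional contract, enforced here by an explicit check, that restricts a button to being pressed.

```python
# Hypothetical sketch (no real UI toolkit): on screen, a "button" is just
# pixels plus a handler, so a developer could wire it to any behaviour at all.
class Widget:
    def __init__(self, kind, handler):
        self.kind = kind          # e.g. "button"
        self.handler = handler    # could literally do almost anything

# The "contract": the conventional events each kind of widget may answer.
CONTRACT = {"button": {"press"}, "slider": {"drag"}}

def interact(widget, event):
    if event not in CONTRACT.get(widget.kind, set()):
        raise ValueError(f"a {widget.kind} does not respond to '{event}'")
    return widget.handler()

submit = Widget("button", lambda: "submitted")
print(interact(submit, "press"))  # a button may be pressed...
try:
    interact(submit, "drag")      # ...but not dragged: the contract forbids it
except ValueError as e:
    print(e)
```

Nothing in the machine requires the `CONTRACT` table; it exists only to keep the widget within the horizon of preconceived uses.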
When Bruno Latour wanted his readers to think about a world without doors he
wrote:
“[…] imagine people destroying walls and rebuilding them every time
they wish to enter or leave the building… or the work that would have
to be done to keep inside or outside all the things and people that left to
themselves would go the wrong way” (Freeman et al., 2008: 154).
A beautiful thought experiment, and indeed unimaginable in the material world – but not in a computer-generated world, where we do not really need doors. You can go through walls, you can have no walls at all, you can introduce rules that make walls obsolete, or simply change their ‘behaviour.’ Since rules and contracts – not the behaviors of knobs – are the future of user interfaces, the politics of how they are established deserve renewed emphasis. The need to be thoughtful and careful in structuring the education of interface designers should be obvious.
From human-computer interaction to human-robot interaction
The title of this essay announces two further concepts – forgiveness and human-robot interaction (HRI) – that have not been addressed yet. I will turn to them now by sketching answers to two questions: How does the preoccupation with strong clues and strictly bounded experiences – what might also be described as affordances and UX – affect the beautiful concept of “forgiveness” (which we often encounter as ‘undo’ functions), which should, at least in theory, be part of every interactive system? And, following on from this, how does HRI refract concepts including transparency, affordance, user experience, the above-mentioned forgiveness, and the idea that ‘form follows function’ or that ‘form follows emotion’?4
Apple’s 2006 Human Interface Guidelines give a very good indication of what exactly might be meant by forgiveness in the context of designing user interfaces (Apple Computer Inc., 2006: 45):
Forgiveness
Encourage people to explore your application by building in forgiveness
– that is, making most actions easily reversible. People need to feel that
they can try things without damaging the systems or jeopardizing their
data. Create safety nets, such as Undo and Revert to Saved commands,
so that people will feel comfortable learning and using your product.
Warn users when they initiate a task that will cause irreversible loss of
data. If alerts appear frequently, however, it may mean that the product
has some design flaws. When options are presented clearly and feedback
is timely, using an application should be relatively error-free.
Anticipate common problems and alert users to potential side effects.
Provide extensive feedback and communication at every stage so users
feel that they have enough information to make the right choices.
In essence, this recommendation intends to make actions reversible, to offer users stable perceptual cues for a sense of ‘home,’ and to always allow the ‘undoing’ of any action. Roughly a decade after these guidelines were published, Bruce Tognazzini and Don Norman noticed that the principle of forgiveness had vanished from Apple’s iOS guidelines and, in reaction, co-authored an essay expressing their irritation under the heading How Apple Is Giving Design a Bad Name (Tognazzini and Norman, 2015).5
Users of Apple, Android, and all other mobile phone hardware without keyboards noticed the disappearance of forgiveness even earlier, because there was no equivalent to the standard keyboard shortcut for undoing actions, well known from virtually all contemporary operating systems.
Figure 5. External Undo Button, Teja Metez; part of the author's Undo-Reloaded project (2015)
In my view of the world of HCI, ‘undo’ should be a constitutional right. (It is, accordingly, the top demand in my project User Rights [Lialina, 2013].) First of all, ‘undo’ has a historical importance: it marks the beginning of the period when computers started to be used by people who didn’t program them. Secondly, ‘undo’ is one of very few generic (“stupid”) commands. It follows a convention without sticking its nose into the user’s business, and never asks “why” a user decided to undo an action. In the present context it should be foregrounded that the hype around the affordance concept and around UX developed in parallel with the disappearance of the ‘undo’ function. This is not a coincidence: single-purpose applications with one button per screen are designed to guide users through life without a need for ‘undo’.
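The “stupidity” of undo – its refusal to ask why – is easy to make concrete. The following is a minimal illustrative sketch (not any particular toolkit’s implementation) of a generic command stack: every action records how to reverse itself, and undo simply pops the most recent reversal, whatever it was.

```python
# Minimal sketch of a generic undo stack: each action stores its own reversal,
# and 'undo' pops the most recent one without ever asking why.
class UndoStack:
    def __init__(self):
        self._stack = []

    def do(self, action, inverse):
        action()                      # perform the action...
        self._stack.append(inverse)   # ...and remember how to reverse it

    def undo(self):
        if self._stack:               # forgiving: undoing nothing is a no-op
            self._stack.pop()()

doc, history = [], UndoStack()
history.do(lambda: doc.append("hello"), lambda: doc.pop())
history.do(lambda: doc.append("world"), lambda: doc.pop())
history.undo()
print(doc)  # ['hello']
```

The stack is entirely indifferent to what the actions mean; that indifference is exactly what makes ‘undo’ generic rather than scenario-bound.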
As part of more general new media dynamics, the field of HCI is considered vibrant and ‘pluralistic.’ Tasks for interface designers, therefore, are to be found far beyond ‘Submit’ buttons and the screens of personal computers. There are new challenges, such as Virtual Reality and Augmented Reality, Conversational and Voice User Interfaces, even Brain-Computer Interaction. These areas are not new in and of themselves – they are contemporary with the emergence of graphical user interfaces – but they could accurately be described as “trending right now” (or “trending right now again”) in HCI papers and in the culture industry more generally. The current moment (in movies, literature, and consumer products) is all about artificial
intelligence, neural networks, and anthropomorphic robots. Allowing this development to infect my curriculum as well, I introduced the rewriting of an ELIZA script (see Landsteiner, 2005) as a task in my interface design course. This allows students to prepare themselves for designing interfaces that talk to users, and that pretend to understand them. I personally have a bot (see Lialina, 2015b), and this talk will be fed into its algorithm and become a part of the bot’s performance. In a few more years this bot might be injected into a manufactured body that looks something like me, and will give lectures or write essays in my place.
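The ELIZA exercise rests on a very simple mechanism: pattern matching plus canned reflection. The toy sketch below is not Weizenbaum’s original script, only a minimal illustration of the principle my students rework – the program echoes a fragment of the input back inside a template, and thereby pretends to understand.

```python
import re

# Toy ELIZA-style responder: match a pattern, echo part of the input back
# inside a canned template, so the program *pretends* to understand.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)",   re.I), "How long have you been {0}?"),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please tell me more."  # fallback keeps the conversation going

print(respond("I need a doorknob"))  # Why do you need a doorknob?
print(respond("The weather is nice"))
```

Even at this scale, the design question the exercise raises is visible: the interface’s apparent understanding is nothing but a convention the user agrees to play along with.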
Considering the slew of films and TV series in which robots are the main
protagonists, and considering popular media coverage of the adventures of human-
looking robots such as Sophia, it requires less and less specialization to dive into
complex contemporary issues concerning robots that were exotic not too long ago;
relevant examples include the difference between symbolic and strong AI, the ethics of robotics, and trans-humanism. This being said, the omnipresence of robots, even if
merely in mediated forms, provokes delusions: “We expect our intelligent machines
to love us, to be unselfish. By the same measure we consider their rising against us to
be the ultimate treason” (Zarkadakis, 2017: 51). Delusions lead to paradoxes:
“Robots which enchant us into increasingly intense relationships with the inanimate,
are here proposed as a cure for our too-intense immersion in digital connectivity.
Robots, the Japanese hope, will pull us back toward the physical real and thus each
other” (Turkle, 2012: 147). Paradoxes then lead on to more questions: “Do we really
want to be in the business of manufacturing friends that will never be friends?”
(ibid., 101). Should robots have rights? Should robots and bots be required to reveal
themselves as what they are?
This last question suddenly entered the discourse after Google’s recent demo of the Duplex AI assistant (Grubb, 2018), when Internet users began to debate whether the tool should be allowed to say “hmmm,” “oh,” “errr,” or to use interjections at all.
Figure 6. Sophia, First Robot Citizen at the AI for Good Global Summit 2018. (Image credit: CC BY 2.0, AI for Good Global Summit)
Perhaps without even noticing, the general public is now engaging in discussions of
difficult ethical as well as interface design questions and decisions. By extension, this
is also a debate building on the evolving recognition of the potentially much less
restrictive affordances of emerging technologies such as AI assistants. And I hope it will stay like this for some time. “Why Is Sophia’s (Robot) Head Transparent?” (Quora, n.d.), users ask. Is it just to look like the lead character from Ex Machina, or is it for better maintenance? Does it perhaps mark a comeback of transparency in the initial, pre-Macintosh meaning of the word? Curiously, when scientists and interaction designers talk about transparency at the moment, they oscillate between the desire to convey meaning and explain algorithms, on the one hand, and the desire to simplify communication with a robot, on the other. The following series of recent publication titles is indicative of this trend:
“Designing and implementing transparency for real time inspection of autonomous
robots” (Theodorou et al., 2017); “Robot Transparency: Improving Understanding
of Intelligent Behaviour for Designers and Users” (Wortham et al., 2017a);
“Improving robot transparency: real-time visualisation of robot AI substantially
improves understanding in naive observers” (Wortham et al., 2017b).
Joanna J. Bryson, who co-authored these papers, holds a very clear position on ethics. “Should robots have rights?” is not a question for her. Instead, she asks why we should wish to design machines that raise such questions in the first place (Theodorou et al., 2017). There are, however, enough studies proving that humanoids (anthropomorphic robots) that perform morality are the right approach for situations in which robots work with, and not instead of, people. This
could be described as the social robot scenario, in which the “social robot is a metaphor that allows human-like communication patterns between humans and machines,” as Frank Hegel wrote (Hegel, 2016: 104). Hegel’s essay doesn’t announce paradigm-shifting insights, but rather states quite obvious things, such as that “human-likeness in robots correlates highly with anthropomorphism” (ibid. 111), or that “aesthetically pleasing robots are thought to possess more social capabilities” (ibid. 112). Calmly and subtly, he introduces his principle for fair robot design: the “fulfilling anthropomorphic form” (ibid. 106), which should immediately lead humans to understand a robot’s purpose and capabilities. Such principles indicate a consideration of affordances for a new age.
Robots are here – no longer industrial machines, but social or even “lovable” entities. Their main purpose is not to replace people, but to be among people. They are anthropomorphic; they look more and more realistic. They have ‘eyes’ – not, however, because they need them to see, but because their eyes inform us that ‘seeing’ is among the robot’s functions. If a robot has a ‘nose,’ it is, likewise, to inform the user that it can ‘smell,’ perhaps detect gas and pollution; if it has ‘arms,’ it can obviously carry heavy items; if it has ‘hands,’ it will be designed to grasp smaller items; and if these hands have ‘fingers,’ you might expect that the robot can play a musical instrument. Robots’ eyes beam usability; their bodies express affordances. Faces literally become an interface.
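The logic of the fulfilling anthropomorphic form can be condensed into a lookup from visible feature to signalled capability. The sketch below is schematic only, a restatement of the mapping just described, not an implementation from any robotics framework.

```python
# Schematic only: under the "fulfilling anthropomorphic form", every visible
# feature of a robot's body signals a capability, and nothing else should.
SIGNALS = {
    "eyes": "seeing",
    "nose": "smelling (e.g. detecting gas or pollution)",
    "arms": "carrying heavy items",
    "hands": "grasping smaller items",
    "fingers": "fine manipulation, perhaps playing an instrument",
}

def read_body(features):
    """What a human may expect a robot to do, just by looking at it."""
    return [SIGNALS[f] for f in features if f in SIGNALS]

print(read_body(["eyes", "arms"]))
```

The dictionary makes the principle’s strictness visible: any feature absent from the mapping signals nothing, and under the principle it should therefore not exist on the robot’s body at all.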
How can this be contextualised with Norman’s wisdom?
“Affordances provide strong clues to the operations of things. Plates are
for pushing. Knobs are for turning. Slots are for inserting things into.
Balls are for throwing or bouncing. When affordances are taken
advantage of, the user knows what to do just by looking: no picture,
label, or instruction needed” (Norman, 1988: 9).
Manual affordances (“strong clues”) are easy to comprehend and to accept when they are part of a GUI (graphical user interface): they are graphically represented and located somewhere on a screen. Things became considerably more complex, for designers and users alike, when we entered the so-called “post-GUI” realm, in which gestures in virtual, augmented, and invisible space figure importantly. Yet all of this cannot be compared with the astonishing level of complexity that is reached when our thoughts move from human-computer interaction to human-robot interaction.
Figure 7. Video still image from Concept for Swimming Lifesaver Robot (2018), Andreas Eisenhut.
The figure above is from a selection of sketches in which students were tasked with embracing the principle of the fulfilling anthropomorphic form and taking it to the limit. What could an anthropomorphic design be if everything that does not signal a function is removed? If the robot cannot smell, there must be no nose. And why should there be a pair of hands if you only need one? What could this unambiguity mean for interaction and product design? Is there a chance for robots not to manifest “what?”, and for humans not to answer “why?”
This leads us to the concluding question regarding the coexistence of affordance and forgiveness in anthropomorphic scenarios: How does the human-computer interaction principle of ‘undo’ appear in human-robot interaction?
In contrast to the current situation in graphical and touch-based user interfaces, forgiveness is doing very well in the realms of robots and AI. It is built in: “[t]he external observer of an intelligent system can’t be separated from the system” (Zarkadakis, 2017: 71). Robot companions are here “[n]ot because we have built robots worthy of our company but because we are ready for theirs,” and “[t]he robots are shaping us as well, teaching us how to behave so they can flourish” (Turkle, 2012: 55). These statements remind us once more of Licklider’s man-computer symbiosis, Engelbart’s concept of bootstrapping, and other advanced projections for the coexistence of man and computer – except that this time, what is concerned is human and robot, not human and computer-on-the-table. Forgiveness is built in, but in HRI it is always already built into the human part. It is all ours to give. Here, we are witnessing how the most valuable concept of HCI – ‘undo’ – meets a fundamental principle of symbolic AI: scripting the human interactor.6 It remains to be seen what affordances will further emerge, and who will undo whom once symbolic AI is replaced by strong AI or, as scientists and mass media now refer to it, “Real” and “Full” AI.
References
Bardini, T. (2000) Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal
Computing, 1st ed. Stanford, CA: Stanford University Press.
Bolter, D. & R. Grusin (2000) Remediation: Understanding New Media. Cambridge: MIT
Press.
Bolter, D. & D. Gromala (2003) Windows and Mirrors: Interaction Design, Digital Art, and
the Myth of Transparency. Cambridge: MIT Press.
Borowska, P. (2015) “6 Types of Digital Affordance That Impact Your
UX,” Webdesigner Depot. https://www.webdesignerdepot.com/2015/04/6-types-
of-digital-affordance-that-impact-your-ux/
Cooper, A., R. Reimann, & D. Cronin (2007) About Face 3: The Essentials of Interaction
Design, 3rd edition. Indianapolis, In.: Wiley.
Eisenhut, A. (2018) Concept for Swimming Lifesaver Robot (video).
Flusser, V. (1997) Medienkultur, 5th ed. Frankfurt am Main: Fischer Taschenbuch.
Frogdesign. “About Us.” https://www.frogdesign.com/about Accessed August 18,
2018.
Grubb, J. “Google Duplex: A.I. Assistant Calls Local Businesses to Make
Appointments,” YouTube.
https://www.youtube.com/watch?v=D5VN56jQMWM Accessed July 28, 2018.
Hassenzahl, M. & J. Carroll (2010) Experience Design: Technology for All the Right Reasons.
San Rafael, Ca.: Morgan and Claypool Publishers.
Hegel, F. (2016) “Social Robots: Interface Design between Man and Machine,”
in Hadler, F. & J. Haupt (Eds.) Interface Critique. Berlin: Kulturverlag Kadmos.
Interaction Design Foundation (n.d.) “„User Experience and Experience Design,‟ by
Marc Hassenzahl.” Accessed July 28, 2018. https://www.interaction-
design.org/literature/book/the-encyclopedia-of-human-computer-interaction-
2nd-ed/user-experience-and-experience-design
Kaptelinin, V. (n.d.) “Affordances,” in The Encyclopedia of Human-Computer Interaction,
2nd ed. (Interaction Design Foundation). Accessed July 28, 2018.
https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-
computer-interaction-2nd-ed/affordances
Kay, A. (1990) "User Interface: A Personal View," in Laurel, B. (Ed.) The Art of Human-
Computer Interface Design. Reading, MA: Addison-Wesley, pp. 191–207.
Krishna, G. (2015) The Best Interface Is No Interface: The Simple Path to Brilliant Technology.
Berkeley, Ca.: New Riders.
Landsteiner, N. (2005) “Eliza (Elizabot.Js),” https://www.masswerk.at/elizabot/.
Latour, B. (1994) “Where Are the Missing Masses?,” in Bijker, W. et al. (Eds.) Shaping
Technology / Building Society: Studies in Sociotechnical Change, Reissue edition.
Cambridge, Mass.: MIT Press, pp.225–59.
Laurel, B., Ed. (1990) The Art of Human-Computer Interface Design, 1st ed. Reading,
Mass.: Addison Wesley Publishing Corporation.
Lialina, O. (2012) “Turing Complete User.” http://contemporary-home-
computing.org/turing-complete-user/
Lialina, O. (2013) “User Rights,” http://userrights.contemporary-home-
computing.org/
Lialina, O. (2015a) “Rich User Experience, UX and Desktopization of War.”
http://contemporary-home-computing.org/RUE/
Lialina, O. (2015b) “GIFmodel_ebooks,” Twitter Bot,
https://twitter.com/GIFmodel_ebooks.
Lialina, O., & D. Espenscheid (2009) “Do You Believe in Users?” in Digital Folklore,
Stuttgart: Merz und Solitude.
Lialina, O., & D. Espenscheid (2009, ongoing) One Terabyte of Kilobyte Age.
http://blog.geocities.institute/
Licklider, J. (2003) “Man-Computer Symbiosis,” in Wardrip-Fruin, N. & N. Montfort
(Eds.) The New Media Reader. Cambridge: MIT Press.
Machkovech, S. (2018) “Mark Zuckerberg Announces Facebook Dating,” Ars
Technica, https://arstechnica.com/information-technology/2018/05/mark-
zuckerberg-announces-facebook-dating/
McGrenere, J. & W. Ho (2000) “Affordances: Clarifying and Evolving a Concept,”
Proceedings of Graphics Interface.
http://teaching.polishedsolid.com/spring2006/iti/read/affordances.pdf
Merholz, P. (2007) “Peter in Conversation with Don Norman About UX &
Innovation,” Adaptive Path. Accessed July 29, 2018.
https://www.adaptivepath.com/ideas/e000862/
Metez, T. (2015) External Undo Button. https://newmedia.merz-
akademie.de/~teja.metez/undo_reloaded/#undo-keyboard
Murray, J. (1997). Hamlet on the Holodeck: The Future of Narrative in Cyberspace, 1st
edition. New York: Free Press.
Norman, D. (1988) Psychology of Everyday Things. New York: Basic Books.
Norman, D. (1990) "Why Interfaces Don't Work," in Laurel, B. (Ed.) The Art of Human-
Computer Interface Design. Reading, Mass.: Addison Wesley Publishing Corporation.
Norman, D. (2008a) “Affordances and Design,” accessed July 28, 2018.
https://jnd.org/affordances_and_design/
Norman, D. (2008b) “Affordance, Conventions and Design (Part 2),” accessed
August 20, 2018. https://jnd.org/affordance_conventions_and_design_part_2/
Osterhoff, J. (2001) Google (performance) http://google.johannes-p-osterhoff.com/
Osterhoff, J. (2012) iPhone live (online performance) http://iphone-live.net/
Osterhoff, J. (2013) Dear Jeff Bezos (online performance) http://www.bezos.cc/
Raskin, J. (2000) The Humane Interface. New Directions for Designing Interactive Systems.
Reading, Mass: Pearson Education.
Theodorou, A., R. Wortham, & J. Bryson (2017) "Designing and Implementing
Transparency for Real Time Inspection of Autonomous Robots," Connection
Science 29(3): 230–41.
Tognazzini, B. (2012) “About Tog,” https://asktog.com/atc/about-bruce-
tognazzini/.
Tognazzini, B. & D. Norman (2015) "How Apple Is Giving Design A Bad Name," Fast
Company, November 10, 2015. https://www.fastcompany.com/3053406/how-
apple-is-giving-design-a-bad-name
Tubik Studio (2018) “UX Design Glossary: How to Use Affordances in User
Interfaces,” UX Planet, May 8, 2018. https://uxplanet.org/ux-design-glossary-
how-to-use-affordances-in-user-interfaces-393c8e9686e4
Turkle, S. (2004) The Second Self: Computers and the Human Spirit. Cambridge: MIT
Press.
Turkle, S. (2012) Alone Together: Why We Expect More from Technology and Less from Each
Other. New York, NY: Basic Books.
Wortham, R., A. Theodorou, & J. Bryson (2017) "Robot Transparency: Improving
Understanding of Intelligent Behaviour for Designers and Users," in Gao, Y.,
S. Fallah, Y. Jin, & C. Lekakou (Eds.) Towards Autonomous Robotic Systems (TAROS
2017), Lecture Notes in Computer Science Vol. 10454.
Quora.com (n.d.) "Why Is Sophia's (Robot) Head Transparent?"
https://www.quora.com/Why-is-Sophias-robot-head-transparent.
Zarkadakis, G. (2017) In Our Own Image: Savior or Destroyer? The History and Future of
Artificial Intelligence, 1st ed. Pegasus Books.
Notes
1 See Bardini's discussion of this issue: "Engelbart took what he called 'a bootstrapping approach,' considered as an iterative and coadaptive learning experience" (Bardini, 2000: 24).
2 "Verwirklichen von Möglichkeiten" ("realizing possibilities").
3 This should remind us of another term that has existed in HCI since 1970, at least at the Xerox PARC lab: User Illusion, which at the end of the day is the same principle, and also a foundation of interfaces as we know them. "At PARC we coined the phrase user illusion to describe what we were about when designing user interfaces" (see Kay, 1990: 191–207).
4 Form Follows Emotion is a credo of German industrial designer Hartmut Esslinger, which became a slogan for frog, the company he founded in 1969. See "About Us," frogdesign.com, https://www.frogdesign.com/about, accessed August 18, 2018; "Form Follows Emotion," Forbes.com, https://www.forbes.com/asap/1999/1112/237.html, accessed August 18, 2018.
5 Bruce Tognazzini has himself authored eight editions of Apple's Human Interface Design Guidelines, starting in 1978, and is known for conceptualizing interface design in the context of illusion and stage magic (see Tognazzini, 2012).
6 “A successful chatterbot author must therefore script the interactor as well as the program, must establish a dramatic framework in which the human interactor knows what kinds of things to say […]” (Murray, 1997: 202).
Born in Moscow in 1971 and now based in Germany, Olia Lialina is a pioneer of network-based art and among the best-known participants of the 1990s net.art scene. Her early work had a great impact on the recognition of the Internet as a medium for artistic expression and storytelling. This century, her continuous and close attention to Internet architecture, "net.language" and the vernacular web – in both artistic and publishing projects – has made her an important voice in contemporary art and new media theory. Over the past two decades Lialina has produced many influential works of network-based art: My Boyfriend Came Back from the War (1996), Agatha Appears (1997), First Real Net Art Gallery (1998), Last Real Net Art Museum (2000), Online Newspapers (2004–2018), Summer (2013), Self-Portrait (2018). She is also known for using herself as a GIF model, and is credited with founding one of the earliest web galleries, Art Teleportacia. She is cofounder and keeper of the One Terabyte of Kilobyte Age archive and a professor of New Media Design at Merz Akademie in Stuttgart, Germany. Email: [email protected]
Special Issue: Rethinking Affordance
(Digital) Media as
Critical Pedagogy
MAXIMILLIAN ALVAREZ
University of Michigan, USA
Media Theory
Vol. 3 | No. 1 | 73-102
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
From chalkboard sites to social media, from smartphones to interactive grading software, there is an overabundance of digital learning tools at our fingertips, many of which float into our classrooms on airy praise from university administrators, politicians, and corporate technicians alike who tout the incorporation of these technologies into our teaching as an undeniably positive step toward the “enhancement” of student learning. Rather than promoting a critical model of learning by which students and teachers can explore the matrix of possibilities “afforded” by their relationship to new media, the techno-fetishist instrumentality of “technology-enhanced learning” functions as an efficient means of materializing neoliberal market ideology and adjusting us to accepting our positions as self-contained users of discrete tools that define for us what the goals and processes of learning will be. It is imperative, then, that we engage ourselves and our students in the critical pedagogical process of learning to learn in conversation with – not at the behest of – media. To do so gets to the very heart of critical pedagogy itself, because, as I argue, the ontological assumptions underwriting the very hope and possibility of critical pedagogy as a political project are nothing if not the essential coordinates for a media theory of being. If we are to determine how to develop a sufficiently critical pedagogy in the age of digital media, we must first re-locate the learning process in the exploration of the open, dialectical circuits between human and world through which life itself is mediated, and from which political change is made possible.
Keywords
Media Theory, Critical Pedagogy, Digital, Ontology
“Within history, in concrete, objective contexts, both humanization and dehumanization are
possibilities for a person as an uncompleted being conscious of their incompletion.”
– Paulo Freire, Pedagogy of the Oppressed
“Since in reality there is nothing to which growth is relative save more growth, there is nothing to
which education is subordinate save more education.”
– John Dewey, Democracy and Education
Over the past three decades, opining about the educational applications of digital
technologies has become a cottage industry unto itself. “Indeed,” Neil Selwyn writes,
“most recently a fresh set of educational discourses has accompanied the emergence
of ‘new’ technologies such as social media, wireless connectivity and cloud data
storage, and not least the seemingly unassailable rise of personalized and portable
computing devices such as smartphones and tablets” (2013: 3). From chalkboard sites
to social media, from smartphones to Prezi, from in-class polling apps to interactive
grading software, there is an almost suffocating overabundance of digital tools at our
fingertips, many of which float into our classrooms on airy praise from university
administrators, politicians, and corporate technicians alike who tout the incorporation
of these technologies into our teaching as an undeniably positive step toward the
“enhancement” of student learning (Ahalt & Fecho, 2015). As a result, “Public debate,
commercial marketing, education policy texts and academic research are now replete
with sets of phrases and slogans such as ‘twenty-first century skills’, ‘flipped
classrooms’, ‘self-organised learning environments’, ‘unschooling’, an ‘iPad for every
child’, ‘massively online open courses’ [MOOCs] and so on” (Selwyn, 2013: 3). As our
educational discourse continues to be pumped full of such slogans, the conclusion that
the future of learning is – and must be – digital seems to have already been made for
us.
That we and our students are living in a digitalized world is a blunt fact. And it seems
futile, and perhaps even slightly irresponsible, not to actively engage students in the
process of learning about (and learning on) the digital terrains that they have grown
up navigating – and will continue to navigate once they leave our classrooms. And
there is, indeed, much to be gained from doing so, for students and teachers alike. As
Ernest Morrell, Rudy Dueñas, Veronica Garcia, and Jorge López note, “Today’s youth
spend the majority of their waking lives as consumers and producers of media […]
[They] blog, pin, post, comment, and share links with social networks on a scale that,
a generation ago, would have been possible only for professional media personnel”
(2013: 2). In their daily consumption and production of media, along with their flexible
negotiation of ever-evolving media-worlds, students today are developing skills outside
of the classroom that have tremendous capacities to inform what and how they learn
inside the classroom. Moreover, on the flip side, what forms the learning process takes
in the digitally connected classroom, and how students’ own subjectivities are shaped
and mediated through it, can have significant bearing on the kinds of “digital citizens”
(Talib, 2018: 56) students will become.
This is precisely why, even for those of us who try not to be total Luddites, there is
something deeply unnerving in the spoken and unspoken presumptions that are being
made about students and learning and technology throughout much of the professional,
corporate, and governmental discourses of digital education. Such presumptions are
routinely reinforced by the instrumentalist manner in which we deploy digital
technologies in the classroom; that is, by the way we assume and accept our positions
as users of tools whose uses themselves have been prescribed – and whose
functionality has been programmed and hidden behind a black box (Goffey & Fuller,
2012) – by opaque commercial, governmental, and administrative forces beyond the
classroom, all of which have their own incentives and agendas calibrated to the
positions they occupy in our political economy. It is crucial to remember that there is
nothing predestined about the sort of digital technologies we incorporate into our
teaching, the specific shapes they take, the functions they perform, the skills they test,
their methods for measuring success, the data they collect, the people they put out of
work, etc. But there is nothing neutral about these things either. As Kristin Smith and
Donna Jeffery write, “The widespread acceptance of online [and other digital]
educational technologies is not simply the product of pure technological evolution.
They are deeply embedded in the social, economic, and political contexts governed by
neoliberal discourses and practices” (2013: 378). The top-down rush to “enhance” the
learning process and “streamline” teaching duties through the adoption of new digital
technologies has been part of an institutional realignment that is both “deeply
embedded” in the historical contexts of neoliberalism and consonant with the aims of
the generalized, but unevenly executed, neoliberalization of education as such
(Newfield, 2008; Bousquet, 2008; Schrecker, 2010; Giroux, 2015; Hall, 2016).
Neoliberalism, as Wendy Brown writes:
is most commonly understood as enacting an ensemble of economic
policies in accord with its root principle of affirming free markets. These
include deregulation of industries and capital flows; radical reduction in
welfare state provisions and protections for the vulnerable; privatized and
outsourced public goods, ranging from education, parks, postal services,
roads, and social welfare to prisons and militaries; […] the conversion of
every human need or desire into a profitable enterprise, from college
admissions preparation to human organ transplants, from baby adoptions
to pollution rights, from avoiding lines to securing legroom on an airplane;
and, most recently, the financialization of everything and the increasing
dominance of finance capital over productive capital in the dynamics of
the economy of everyday life (2015: 28).
Under the rank shadow of neoliberalism, more and more public goods and personal
desires are broken down and rewired to accommodate the total and seamless
penetration of market values into every facet of “the economy of everyday life.” As
critical sites for the accumulation of capital and the reproduction of neoliberal
ideology, educational institutions are unmoored from the public good and restructured
to ease the infiltration of money, personnel, and directives from the private sector
(Weiner, 2004; Newfield, 2016; Cervone, 2018). This structural overhaul is
accompanied by formal (and often strictly enforced) changes to curricula, teaching
practices, learning outcomes, methods of assessment, etc. – changes designed to
complement these retrofitted neoliberal prerogatives while (re)producing in students
and teachers alike the sort of self-policing “responsible subjects” (Clarke, 2004: 33)
neoliberalism requires. “As a result, educators are increasingly expected to enact cost
containment measures, cooperate with the demands of efficiency-driven management
styles, and work under expectations of labor flexibility and adaptability” (Smith &
Jeffery, 2013: 375), all while being charged with the task of enacting and enforcing “an
idea of education as content delivery and absorption, with students designated as
recipients and clients rather than partners in an exploratory enterprise” (Mullen, 2002:
19).
These are the hard, practical contexts in which the push for integrating more digital
technologies into the learning process is taking place. And it is precisely in this vein
that we must critically appraise the ideological functions and subjective outcomes of
said technological integration as well as the equally utopian and fatalistic narrative “that
technology is inevitable, that technology is wrapped up in our notions of progress, and
that somehow progress is inevitable itself and is positive” (Young & Watters, 2016).
Because, at the same time that educational institutions have transformed into
“administrative [apparatuses] whose morality is outsourced to the market” (Alvarez,
2017), the instrumentalist, techno-fetishist embrace of learning with and through
digital tools is part and parcel of the essential reproduction of neoliberal market
ideology. “Many elements of online education exemplify the core beliefs of the private,
commercial sector in that they necessarily concern themselves with trying to measure
and count narrow outcomes rather than with the complexities of learning […]
challenging subject matter […]. If education is to be efficient, then it simply must be
capable of being measured” (Smith & Jeffery, 2013: 377). That corporate,
administrative, and governmental efforts to accelerate the incorporation of digital
technologies into the learning process have surged in tandem with the thorough
neoliberalization of education institutions is not a coincidence. These technologies are
less designed and deployed to expand the horizons of critical student learning than to
narrowly redefine the very shape and scope of formal learning in accordance with the
prerogatives of the neoliberal power structure, which prizes, above all else, that which
(and those who) can be standardized, quantified, managed, and monetized. Thus, as
Jesse Stommel and Sean Michael Morris write in their open-access e-book, An Urgency
of Teachers, “educators and students alike have found themselves more and more
flummoxed by a system that values assessment over engagement, learning
management over discovery, content over community, outcomes over epiphanies”
(2018). And to uncritically approach the integration and use of digital technologies into
the learning process is to make ourselves and our students vulnerable to being used by
them – to being adjusted, programmed, and made comfortable with the very worldly
conditions that we, as critical educators, are ostensibly trying to challenge. We must,
therefore, be wary of the professional discourses that herald this process of
technological integration as both inevitable and objectively positive.
In her contribution to the edited volume Critical Learning in Digital Networks, for
instance, Sarah Hayes examines trends in these educational discourses from the U.S.,
E.U., and Australia, and picks up on a relatively recent and rather telling terminological
shift. Hayes notes that the ubiquity of terms like “e-Learning” and “online learning,”
which, in more-or-less neutral ways, primarily served to describe the digital context in
which learning (however it was defined) took place, has been largely usurped by the
more explicitly value-judgment-laden discourse of “technology-enhanced learning.” In
this positivist discourse, it is not only taken as a given that to infuse education with
newer technological elements is, by definition, to enhance the learning process; it is
also presumed that the learning process itself is straightforward enough that its
technology-induced “enhancement” can be so confidently assured. As Hayes writes,
“The verb ‘enhanced’ is selected and placed in between ‘technology’ and ‘learning’ to
imply (through a value judgment) that technology has now enhanced learning, and will
continue to do so” (2015: 15). Ideologically, epistemologically, politically, the implicit
value judgment that is buried in (and enforced by) the discourse of “technology-
enhanced learning” is doing a lot of heavy lifting here. How the learning process will
be defined, what will be learned, and to what ends – these and other vital questions
are subsumed under the narrow purview of a formal education apparatus that, as
mentioned above, is designed to clear the way for market forces to penetrate every
level of daily life while also shaping and pumping out the kind of responsible subjects
neoliberalism needs to reproduce and maintain its hegemony.
What must be noted here – especially given the theme of this issue of Media Theory –
is that the positivist assertion embedded in the professional discourse of “technology-
enhanced learning” explicitly (and even violently) forecloses the epistemological,
subjective, and political possibilities that are otherwise expressed in the discourse of
technological “affordance.” “Technology-enhanced learning” bears out a self-
affirming promise that the technology in question will not “afford” teachers and
students the means to explore new learning possibilities so much as it will efficiently
compel them to perform what the programmers of said technology have determined
learning to be (and that said technology, with exacting precision, will evaluate teaching
and learning on the strict basis of this performance). In fact, we could say that the
political epistemology represented by the assertion of “technology-enhanced learning”
is roundly antagonistic to the understanding of technology that is conveyed by the very
notion of affordance. Because where there is affordance there is openness, uncertainty,
a chance for thinking or doing something that is made possible – but is by no means
guaranteed – by that which affords. Such openness is antithetical to the neoliberal
prerogatives and parameters of “technology-enhanced learning.”
Of course, as an analytical concept that can help us better understand the range and
scope of technological functionality, “affordance” is equally a question of the
possibilities that are opened up and foreclosed by the structural specificities of a
particular tool, program, environment, etc. “Affordances are functional in the sense
that they are enabling, as well as constraining, factors in a given organism’s attempt to
engage in some activity,” Ian Hutchby notes (2001: 448). “Certain objects,
environments or artefacts have affordances which enable the particular activity while
others do not. But at the same time the affordances can shape the conditions of
possibility associated with an action: it may be possible to do it one way, but not
another” (2001: 448). Thus, while it is certainly true that the functional specificity of
certain digital technologies can afford students and teachers the “conditions of
possibility” for developing new forms of critical, collaborative, and exploratory
learning, it is equally true that engaging with these – or any – technologies will
inevitably limit the horizons of what is doable and thinkable to what their functional
specificity allows (i.e. affords). For the purposes of this discussion, however, what is
especially noteworthy is the fact that affordance names a context in which the horizon
of possibilities is limited (and opened) by the relation between a human organism and
the functional specificity of a distinct technology. The relation itself forms the
generative matrix of possibility: “Affordances are thereby focused on the relationship
between people and object, their creative and adaptive interaction with the environment rather
than any compliant response to any designed features of that environment” (Conole
& Dyke, 2004: 302, emphases added). Indeed, this is why the neoliberal instrumentality
denoted by “technology-enhanced learning” steers clear of any serious reference to
affordance. The former, which does seek to elicit (if not compel) a “compliant response
to […] designed features,” is not content with the relational limiting of possibilities
named in the discourse of technological affordance; it is deliberately designed and
deployed, rather, to foreclose (as much as possible) the contingency of possibility itself.
Rather than opening a learning space in which teachers, students and digital
technologies can explore one another in a matrix of relational possibility, “technology-
enhanced learning” inflates the neoliberal illusion of possibility with increasingly
personalized, choice-adaptive programs and multi-modal functionalities that
nevertheless reduce the user’s say in what and how they learn to nil. “The embedding
of the idea of ‘enhancing learning through the use of technology,’” Hayes continues,
“firmly structures educational technology within a framework of exchange value. It
places emphasis on what technology is doing to yield a profit rather than how learning
takes place as a human process” (2015: 16). There is no real acknowledgment of, let
alone appreciation for, relational agency in the idea of “technology-enhanced learning”
– at least not on the part of the learner. More than anything or anyone else, it is the
technology itself that is granted a kind of coercive agency to convey learning subjects
to their final destination; it alone maintains a sense of agential singularity that everyone
else is denied. And, in so doing, it functions quite effectively as a medium for the
reproduction of neoliberal subjecthood and authoritative social control shrouded in
the illusion of personal choice. “If we discuss technology as detached from the humans
who perform tasks with it, then it simply becomes an external force acting on our
behalf. This objective approach disempowers the human subject to undertake any
critique, as it effectively removes them from the equation, closing down possibilities
for more varied conversations across diverse networks” (Hayes, 2015: 17).
As one illustrative example, we could look to the page on the U.S. Department of
Education’s website that is dedicated to “Use of Technology in Teaching and
Learning.” The opening passage on the website reads:
Technology ushers in fundamental structural changes that can be integral
to achieving significant improvements in productivity. Used to support
both teaching and learning, technology infuses classrooms with digital
learning tools, such as computers and hand held devices; expands course
offerings, experiences, and learning materials; supports learning 24 hours
a day, 7 days a week; builds 21st-century skills; increases student engagement
and motivation; and accelerates learning. Technology also has the power
to transform teaching by ushering in a new model of connected teaching.
This model links teachers to their students and to professional content,
resources, and systems to help them improve their own instruction and
personalize learning. Online learning opportunities and the use of open
educational resources and other technologies can increase educational
productivity by accelerating the rate of learning; reducing costs associated
with instructional materials or program delivery; and better utilizing
teacher time (U.S. Department of Education).
Notice that, unlike the examples analyzed by Hayes, this passage omits any specific
mention of “technology-enhanced learning”; in fact, this particular page on the
Department of Education website does not mention the words “enhance” or
“enhancement” even once. Far from representing a deviation from the positivist
fatalism embodied in the discourse of “technology-enhanced learning,” however, we
could argue that this passage represents its apotheosis. More than anything else, this
description of educational technology reads like a company promo, a matter-of-fact
discursive fusion of government and industry confidence that said technology will
make good on these promises to “increase educational productivity by accelerating the
rate of learning” while also forcing educators to adopt more of the qualities prized by
the neoliberal model of (cheap) labor: hyper-productivity, 24-7 accessibility, flexibility,
etc. Once again, that these are the given (and celebrated) parameters for “successful”
teaching, and that learning as such is explicitly measured in terms of speed, quantity,
and productivity, is not an accident. “The commodity form and its administrative
simulacra are now able to penetrate hitherto protected zones,” philosopher Andrew
Feenberg notes, in conversation with Petar Jandrić (2015: 143). “This is the essence of
neo-liberalism, the extension of commercial relations and criteria into every area of life
[…] Deskilling education and bringing it under central management is now on the
agenda. Money would be saved and the ‘product’ standardized. Technology is hyped
as the key to this neo-liberal transformation of education. Computer companies,
governments, university administrations have formed an alliance around this utopian,
or rather dystopian, promise” (2015: 143).
“The more our tools are naturalized, invisible, or inscrutable,” as Morris and Stommel
write, “the less likely we are to interrogate them” (2018). Likewise, the more intimately
our professional responsibilities, and students’ scholastic success, are bound to
carrying out these instrumentalist directives, the more relentlessly the forces of
neoliberal administration convert our learning environments into “dystopian”
assemblages of “technology-enhanced learning,” the harder it becomes to imagine a
narrative of “new media encounter” whose arc has not already been determined for
us. Because, as Alan Liu writes, “Good accounts of new media encounter imagine affordances and
configurations of potentiality. We don’t want a good story of new media with a punch line
giving somebody the last word. We want a good world of new media that gives
everyone at least one first word […] We want a way of imagining our encounter with
new media that surprises us out of the ‘us’ we thought we knew” (2013: 16, emphases
added). Under the market-calibrated aegis of “technology-enhanced learning,”
accounts of new media encountered in and outside the classroom have, for the most
part, already been written for us – accounts that take it as a given that learning with
and through digital technologies will be a process defined and measured by those
technologies themselves. When it comes to imagining the “configurations of
potentiality” that may exist for us and our students in our potential encounter with new
media, we are, once again, presented with the illusion of agency in a plot that has been
scripted by the very authors of our own continued exploitation and domination. It is,
thus, all the more incumbent upon us, as critical educators, to imagine – and engage
our students in the vital process of imagining for themselves – a narrative of new media
encounter in which “The future of learning will not be determined by tools but by the
re-organization of power relationships and institutional protocols” (Scholz, 2011: IX).
Such an imperative necessarily involves engaging ourselves and our students in the
critical pedagogical process of learning to learn in conversation with – not at the behest
of – media. To do so gets to the very heart of critical pedagogy itself, because the
project of critical pedagogy is ultimately a media project. And if we are to determine
how to develop a sufficiently critical pedagogy in the age of digital media, critical
pedagogy and/as media theory first enjoins us to re-examine (and intervene in) the
sites where learning as such actually takes place. Because, I argue, the core political and
ontological premises upon which critical pedagogy is based – and from which it
maintains a sense of hope that we and our worlds can change – breathe life into an
understanding of the learning process as a process of becoming in which we must
explore, analyze, and praxically engage the open, dialectical circuits between human
and world that mediate life itself.
Perhaps at no other point, then, has the need for a critical media pedagogy been so
urgent at the same time that the institutional and technological conditions of formal
learning have become so structurally hostile to the spirit of critical pedagogy itself. The
more seamlessly digital technologies are integrated into the learning process, the more
crucial it is for students and teachers alike to develop their capacities for critically
analyzing – and intervening in – the broader, overlapping forces of social control that
are mediated through them. It is imperative that we critically (re)examine our own
pedagogies, and that we ask what it will mean to work with our students to hash out a
vulnerable, critical, and creative learning praxis that not only resists the coercive
interpellation of neoliberal subjectivation, but that also affirms and expands their
humanity in the digitalized world while bolstering their capacities to interrogate, attack,
and dismantle the conditions that dehumanize them by stifling their learning.
_________________________
Critical pedagogy doesn’t necessarily start with Paulo Freire, but it certainly doesn’t
exist without him. “To separate Paulo from critical pedagogy is not possible,” Shirley
Steinberg writes (2015: ix). “We know our own positionality within critical pedagogy
by how we first came to know Paulo Freire” (2015: ix). A world-renowned educator
and philosopher, Freire developed revolutionary and widely successful methods for
teaching poor, illiterate populations in Brazil before the 1964 military coup (Golpe de
64), after which he was imprisoned for 70 days and forced to live in exile for 15 years.
It was during the first decade of his exile that Freire wrote and published his first book,
Education, the Practice of Freedom (1967). This was followed by his most famous book,
Pedagogy of the Oppressed (1970), which has served as the lodestar of critical pedagogues
ever since. Half-a-century’s worth of independent studies, internal debates, critical
reappraisals, practical experimentations, and theoretical variations have unfolded in the
wake of the publication of Freire’s seminal work, but everything in the ever-exploding-
and-rearranging field of critical pedagogy still orbits around the core, radical concept
that is articulated in it. (By no means do I wish to suggest that practitioners have
followed a singular, prescribed path in developing their own critical pedagogies, nor
do I mean to imply that the “field” of critical pedagogy as such is not riven with
necessary critiques and departures on practical and theoretical issues regarding, for
instance, race, disability, the mind/body distinction, etc. [Brock & Orelus, 2015;
Ellsworth, 1989; Erevelles, 2000; S. Shapiro, 1999]. However, I argue that the
coherence of critical pedagogy as an expressly political project rests on a set of
ontological assumptions about the mediated relationship between human and world –
assumptions that fundamentally challenge the reductive, dehumanizing treatment of
student and teacher subjecthood that is materially reinforced by the neoliberal
apparatus of “technology-enhanced learning.”) At base, the project of critical
pedagogy, as Henry Giroux puts it, remains fixated on “[drawing] attention to the ways
in which knowledge, power, desire, and experience are produced under specific basic
conditions of learning and [illuminating] the role that pedagogy plays as part of a
struggle over assigned meanings, modes of expression, and directions of desire,
particularly as these bear on the formation of the multiple and ever-contradictory
versions of the ‘self’ and its relationship to the larger society” (2011: 4).
It was through Freire’s distinct voice that the project of critical pedagogy as we
understand it today found its first real articulation. That being said, Freire’s was an
articulation of something that has always been latent in the “struggle to be more fully
human” (Freire, 2005: 47), a calling-forth of something that is always calling out, always
reaching from somewhere just below the surface of what is, like fingers stretching the
outer membrane of the possible in the endless, groping “struggle for a fuller humanity”
(Freire, 2005: 47). It was an articulation that contained within it traces and echoes of
those who came before Freire, and those who came after, those who sense, have
sensed, or will sense – without Freire to hard boil their sensation into something
tangible and familiar – that the reality roiling under the austere lid of what we call
education is much more complex and consequential than we are compelled to think, that
the process of teaching is neither straightforward nor unilateral, that the subjects and
objects of learning are never set, self-contained things, and that the contexts for learning
are never neutral.
Whether known to Freire or not, his work condensed and soldered together various
insights that had manifested in bits and pieces across the scattered works of earlier
critical thinkers and traditions – from Karl Marx and G.W.F. Hegel to John Dewey
and Anísio Teixeira, from W.E.B. Du Bois and Lev Vygotsky to the Frankfurt School
and Frantz Fanon.1 What emerged in Freire’s work, and has since taken shape in the
radical project of critical pedagogy, has always been rooted in that nagging,
discomfiting sense that the societal and individual stakes of education are incredibly
high and that the means and ends of learning will vary significantly depending on how
“education” is defined. Moreover, as discussed in relation to the neoliberal apparatus
of “technology-enhanced learning,” the types of subjects we are trained to become,
and the ways we are compelled to fit and function inside the hegemonic power
structure, are likewise made contingent upon decisions about who (and what) gets to
define education as such and determine where it will take place, what its goals will be,
how those goals will be set and measured, etc. Critical pedagogy “picks up on the idea
that educational processes, practices, and modes of engagement play an active role in
the production and reproduction of social relations and systems. [It] seeks to
understand and is concerned with the ways that schools and the educational process
sustain and reproduce systems and relations of oppression” (Porfilio & Ford, 2015:
xvi).
Whether in public schools, private schools, charter schools, officially approved
independent programs, etc., we spend the better part of (at least) our first two decades
of life being formally “educated” in the customs of social life along with all the other
“necessary” practices and forms of knowledge that will presumably equip us, as
independent agents, to successfully navigate the world “out there” that we are
preparing to enter. But the critical pedagogical project understands that educational
institutions themselves are not worlds apart. At every step of the way, our formalized
processes of education are thoroughly integrated into and reflective of the broader,
given power arrangement in our society; they are a critical node in “the machinery by
which […] power relations give rise to a possible corpus of knowledge [and by which
said] knowledge extends and reinforces the effects of this power” (Foucault, 1995: 29).
Thus, these processes of formal education serve as a vital technology of subjectivation,
training students and teachers to become the kind of responsible subjects who are
well-adjusted to – and who will go forth to reproduce – the conditions of their own
domination. “A central tenet of [critical] pedagogy maintains that the classroom,
curricular, and school structures teachers enter are not neutral sites waiting to be
shaped by educational professionals,” Joe Kincheloe writes (2004: 2). Thus,
“proponents of critical pedagogy understand that every dimension of schooling and
every form of educational practice are politically contested spaces” (2004: 2). That
“every dimension of schooling and every form of educational practice” are political is
a given; that they are “politically contested spaces,” however, is not. The dimensions
of formal learning are political inasmuch as they are imbricated in an educational
apparatus that is built to, at worst, functionally replicate the historico-specific
conditions that bolster the dominant power arrangement or, at best, leave those
conditions uncontested. The naturalness of the conditions that maintain and enforce
the given power arrangement in the world “out there” is inscribed in the minds and
bodies (mind-bodies) of students and teachers. Thus, by the time students are ready to
take what they’ve learned in school and “make their way” in the world, the world has
already made its way through them.
Schools and official education systems are by no means the only sites where the
political forces of social reproduction come to a head, but they do serve as critical
conductors of possibility for what is, at base, Freire’s primary concern: the oscillating
movements, electrical currents, and stubborn blood clots of the macro- and micro-
dialectics playing out in the mutual shaping of individual and world. “World and
human beings do not exist apart from each other,” Freire writes, “they exist in constant
interaction” (2005: 50). The struggle for “humanization” unfolds in the dynamic and
slowed-down spaces of life where this “constant interaction” mediates the flow,
distribution, capture, and dispersion of energies that shape and re-shape the world …
which shapes and re-shapes the human … who shapes and re-shapes the world …
which shapes and re-shapes the human … who shapes and re-shapes … ad infinitum.
As a point of departure from any sort of vulgar economic or material determinism, it
follows that the project of critical pedagogy is imbued with a sense of undying hope that
things can change, and that pedagogy can play a vital role in that change. “Hope is a
natural, possible, and necessary impetus in the context of our unfinishedness. Hope is
an indispensable seasoning in our human, historical experience. Without it, instead of
history we would have pure determinism” (Freire, 1998: 69). This hope derives from
the essential belief in the multidirectionality of energy flows in the dialectical struggles
of everyday life, in the mutually constitutive, back-and-forth circuit between the world
that inscribes itself upon us and our subjective resistance to inscription (Garoian &
Gaudelius, 2001: 334). It is a belief in the fundamental capacity for “always-unfinished”
individuals to break far enough away from the grip of the material, cognitive, embodied
contexts of their domination that they can learn and develop a critical consciousness
(conscientização) of the fact that this isn’t the only way things can or should be. On top
of this, it is a belief that said individuals can and must turn around and direct their
liveliness at attacking the structural supports behind these contexts. At the very core
of critical pedagogy is an essential presumption of breakable worlds and unfinished
people in motion:
Reality which becomes oppressive results in the contradistinction of men
as oppressors and oppressed. The latter, whose task it is to struggle for
their liberation together with those who show true solidarity, must acquire
a critical awareness of oppression through the praxis of this struggle. One
of the gravest obstacles to the achievement of liberation is that oppressive
reality absorbs those within it and thereby acts to submerge human beings’
consciousness. Functionally, oppression is domesticating. To no longer be
prey to its force, one must emerge from it and turn upon it. This can be
done only by means of the praxis: reflection and action upon the world in
order to transform it (Freire, 2005: 51).
What Freire brings to the surface here is a conceptualization of education as a
contestable site of vulnerable and volatile encounter. Such encounters are strategically
contained and policed within the contexts of schooling systems (but also in realms like
popular culture, government, etc.) which, in turn, serve to reproduce the conditions of
pacification (or “domestication”) of the oppressed many and the corresponding
conditions of societal domination by the oppressive few. Freire’s conceptualization of
education also positions it as an encounter that trembles, always, with the potential for
something more, something radical, something else.
The critical pedagogue understands that education, more or less, names the formalized,
teleologized containment of the humanizing processes of learning, the generative power
of which is recognized by the oppressive few as an inherent threat to the preservation
and maintenance of their domination. It is, thus, among the most vital charges of the
project of critical pedagogy to locate and interrogate the ways that, materially,
symbolically, and practically, a society’s existing educational apparatus functions to
sustain an “oppressive reality” that works the oppressed over, submerging human
beings’ consciousness of their oppression and of the contingent, pliable, and breakable
nature of the worldly conditions that oppress and dehumanize them. Such a charge,
moreover, carries with it a critically conscious recognition that who one is is also
contingent, pliable, and dependent upon a world in motion that is as well. “It
approaches individual growth as active, cooperative, and social process, because the
self and society create each other” (Shor, 1992: 15). And one must take that recognition
and follow through with praxis to break the world that subjugates them: “To no longer
be prey to its force, one must emerge from it and turn upon it” (Freire, 2005: 51).
It is of insurmountable importance for Freire and for critical pedagogy writ large – as
it is for media theorists – that concern for the mutual making, un-making, and re-
making of human and world in the dialectical meatgrinder of history holds fast an
ontological understanding of the human as a fundamentally open-ended thing whose
being is always, necessarily, a being-in-process, mediated by changing worlds in and
through which it can become what it will be. “Education as the practice of freedom –
as opposed to education as the practice of domination – denies that man is abstract,
isolated, independent, and unattached to the world; it also denies that the world exists
as a reality apart from people” (Freire, 2005: 81). The human, that is, figures as a kind
of circuit between “inside” and “outside,” between the biological organism and the
world, without which it could not be(come) itself. Whether tacitly or explicitly, critical
pedagogy, “as the practice of freedom,” presupposes a process of being wherein life is
mediated by “external” worlds that make the human what it is, and critical pedagogy
itself names a consciously praxical intervention in this process, a harnessing of the fact
that the human, consciously or not, must and always does have a hand in making,
reproducing, and altering the worlds in which it can be(come) itself.
Perhaps nowhere else is this point made more clearly than in the oft-stated contempt
Freire and other critical pedagogues have for the “banking” concept of learning in
which students are understood as “‘containers’ to be ‘filled’ by the teacher” with
demonstrably replicable forms of knowledge whose retention by student-receptacles
can be easily tested. In a lengthy passage from Pedagogy of the Oppressed, Freire writes:
Implicit in the banking concept is the assumption of a dichotomy between
human beings and the world: a person is merely in the world, not with the
world or with others; the individual is spectator, not re-creator. In this
view, the person is not a conscious being (corpo consciente); he or she is rather
the possessor of a consciousness: an empty “mind” passively open to the
reception of deposits of reality from the world outside. For example, my
desk, my books, my coffee cup, all the objects before me – as bits of the
world which surround me – would be “inside” me, exactly as I am inside
my study right now. This view makes no distinction between being
accessible to consciousness and entering consciousness. The distinction,
however, is essential: the objects which surround me are simply accessible
to my consciousness, not located within it. I am aware of them, but they
are not inside me. It follows logically from the banking notion of
consciousness that the educator’s role is to regulate the way the world
“enters into” the students. The teacher’s task is to organise a process
which already occurs spontaneously, to “fill” the students by making
deposits of information which he or she considers to constitute true
knowledge. And since people “receive” the world as passive entities,
education should make them more passive still, and adapt them to the
world. The educated individual is the adapted person, because she or he is
better “fit” for the world. Translated into practice, this concept is well
suited to the purposes of the oppressors, whose tranquility rests on how
well people fit the world the oppressors have created, and how little they
question it (2005: 75-76).
At issue here is nothing less than the ontological presumption of the human being as
either a self-contained being in and of itself that merely exists in the world, or a being
that cannot be itself “with[out] the world or with[out] others.” The banking concept
of education obviously rests on the former presumption, which further presumes that
the process of learning is a matter of representation; that is, a matter of translating the
world into a data stream that can be “poured” into and re-presented in the isolated
consciousness of students. Such a process “already occurs spontaneously” in daily life
as we, isolated receptacles that we are, absorb, process, and retain data from the world
around us, but it is the teacher’s job to “organize” this process as a functionary of an
educational apparatus, which is itself a functionary of the oppressive power
arrangement in our given world. Education’s functional service to this power
arrangement, as Freire notes, involves “[regulating] the way the world ‘enters into’ the
students,” deputizing teachers (but also other operators in the educational apparatus,
from principals and superintendents to legislators and textbook makers) as
authoritative arbiters of what sort of knowledge does and doesn’t get passed on.
However, from lessons and activities to course materials and evaluations, the specific
content of this organized learning, while having much potential for exerting a
“domesticating” influence on the (a)critical consciousness of students, is perhaps less
consequential than the routinized form of the learning process itself as modeled on
the banking concept. “Education can socialize students into critical thought or into
dependence on authority, that is, into autonomous habits of mind or into passive
habits of following authorities, waiting to be told what to do and what things mean”
(Shor, 1992: 13). Day in, day out, this process continually fortifies and enforces the
ontological fiction that people are static, self-contained, “passive entities” who
“‘receive’ the world” in discrete representational forms, thus adapting them to a world
that secures its existing power arrangement by ensuring the passivity of the oppressed
and the accomplices of the oppressors.
In its varied iterations, and throughout its necessary critical reevaluations, the project
of critical pedagogy has maintained a consistent and vital antagonism to this
ontological fiction itself, which undergirds the banking concept of education. In the
harried and high-stakes race to determine what learning will be in the digital age,
however, this ontological fiction has found ever more sophisticated means of
universalizing and enforcing itself. That the neoliberal apparatus of “technology-
enhanced learning” has materialized a political epistemology that is founded upon this
fiction is a case in point. And a critical pedagogy that is up to the task of contesting it
must work to relocate the process of learning in the open spaces and soft tissue
through which the dialectical negotiation of self and world is eternally mediated. To
do so requires that, rather than eliciting a “compliant response to [specific] designed
features” (Conole & Dyke, 2004: 302), the task of critically learning with and through
(digital) media will necessarily entail exploring the contexts of our own
“unfinishedness,” and doing so within the generative matrix of possibility that is
afforded by a relation to media that is not prescribed beforehand.
_________________________
The goal here, of course, is not to give a complete and thorough accounting of the
admittedly broad field of critical pedagogy and its many practical and theoretical
variations, critiques, divergences, etc., but to tease out the underlying ontological
assumptions (we might even say “ontological affordances”) that make the radical
project of critical pedagogy conceivable, let alone possible. Doing this work is
especially crucial for critical pedagogues as we attempt to find and cultivate spaces
where we and our students can develop a critical consciousness of – and the praxical
means for intervening in – the diffuse operations of power in our twenty-first-century
media-worlds. Because without interrogating the medial conditions that make us who
we are, without feeling out and analyzing the dialectical circuits that open us and our
world up to one another, and without grasping that the hope of liberatory learning is
not inherent to the educational media we use but, rather, to the mediation of being as
such, then we cannot hope to develop a sufficiently critical pedagogy for the digital
age. Once again, Morris and Stommel’s arguments in An Urgency of Teachers are
instructive here:
The tools we use for learning, the ones that have become so ubiquitous,
each influence what, where, and how we learn – and, even more, how we
think about learning. Books. Pixels. Trackpads. Keyboards. E-books.
Databases. Digital archives. Learning management systems. New
platforms and interfaces are developed every week, popping up like daisies
(or wildfires). None of these tools have what we value most about
education coded into them in advance. The best digital tools inspire us,
often to use them in ways the designer couldn’t anticipate. The worst
digital tools attempt to dictate our pedagogies, determining what we can
do with them and for whom. The digital pedagogue teaches her tools,
doesn’t let them teach her (2018).
This is why our focus has not necessarily been on the critical pedagogical affordances
of specific digital learning technologies but, rather, on the critical pedagogical
importance of openly exploring the matrix of possibility afforded by the very (and
varying) ways we relate to technology. As noted earlier, the practical, epistemological,
and even ontological violence of the cold neoliberal apparatus of “technology-
enhanced learning” is enforced by the deployment of digital learning tools that leave
as little room as possible for learning by way of exploring and expanding the
potentialities of how we relate to media – and that, instead, dictate, limit, monitor,
quantify, and monetize learning for us. And it would be a grave mistake to believe that
these barriers to critical learning can be overcome through the incorporation of newer,
“better” media into the learning process. It is incumbent upon us, rather, to develop
and practice a critical pedagogy that directly challenges the ontological fiction
embodied in such techno-fetishist instrumentality. “Digital pedagogy is not equivalent
to teachers using digital tools. Rather, digital pedagogy demands that we think critically
about our tools, demands that we reflect actively upon our own practice […] Good
digital pedagogy is just good pedagogy” (Morris & Stommel, 2018).
In the increasingly digitalized classroom, how one practically develops their own
critical pedagogy in conversation with students will, of course, vary widely depending
on the institutional contexts, the life experiences and literacies collected in said
classroom, and so on. But this does not mean that the introduction of digital
technologies has somehow rewritten critical pedagogy’s core concern for the “struggle
to be more fully human” (Freire, 2005: 47) or its defining ontological assumptions
about the mediation of being through the dialectical circuit between self and world.
We must be wary if we start to believe otherwise, lest we submit to the same repressive
logic by which the neoliberal apparatus of “technology-enhanced learning” reduces the
scope of how we define ourselves, our media, and how they relate to one another. The
more that our place in twenty-first-century media-worlds is dictated by such
apparatuses, which boil our potential relations to new media down to a slate of
prescribed uses, the more easily we are compelled to accept and abide by the
ontological fiction by which they operate; that is, by the notion that we and the media
through which we “learn” are discrete, closed-off, self-contained entities that do not
need each other to be what they are. This is all the more reason to appreciate how
necessary the project of critical pedagogy is for helping us and our students navigate
the contemporary media-worlds we inhabit. Because the project of critical pedagogy
is, at base, a media project: a struggle, that is, to find, feel, interrogate, attack, and
rework the inextricable, mutually constituting medial connections between human and
world. The ontological assumptions underwriting the very hope and possibility of
critical pedagogy are nothing if not the essential coordinates for a media theory of
being.
Before we can even begin to ask what digital media can do for the project of critical
pedagogy, critical pedagogy enjoins us to confront the medial conditions of life itself.
As a project of “humanization” that is, from the beginning, a technical praxis of
negotiating the enlivened circuitry mediating human and world as they make, un-make,
and re-make each other, critical pedagogy drills into the bedrock of media theory from
its own distinct angle. The project of critical pedagogy is ultimately based on critically
interrogating, working with, and challenging the medial conditions that give historical
shape to the “transductive”2 relationship between human and world. As such, critical
pedagogy eschews the ontological conceptualization of the medium in the same
instrumentalist register of a tool whose relation to the human upholds the chauvinistic
fiction of a self-contained, isolated subject. Instead, it embraces a conceptualization of
the medium, as Mark B.N. Hansen puts it, “as an environment for life” (2006: 299). The
project of critical pedagogy, that is, strives for a process of humanization that unfolds
through (not apart from) the circuitry of the world that mediates our lives, because it is
that mediation of life through the “external” that makes us human in the first place.
“Before it becomes available to designate any given, technically-specific form of
conversion or mediation,” Hansen notes, “medium names an ontological condition of
humanization – the constitutive dimension of exteriorization that is part and parcel of
the transduction of technics and life” (2006: 300). Media theorists like Hansen and
Bernard Stiegler take critical pedagogy’s ontological assumptions to their roots; that is,
to the “originary” constitution of the human, as such, as a technically mediated being,
as a being (a distinct species) co-originated with and through technical mediation.
Building on the work of paleontologist André Leroi-Gourhan, Stiegler asserts that
human beings have evolved in ways that cannot be explained in purely
zoological/biological terms. Our evolution inheres in the passing on of knowledge
through externalized cultural worlds, the construction and maintenance of which is
made possible through technics. The technical worlds we create, the worlds in which
we can live and be, are the very medial support for a non-biological, “epiphylogenetic”
memory; thus, the evolution that constitutes us as human is, from the beginning,
technical:
The problem arising here is that the evolution of this essentially technical
being that the human is exceeds the biological, although this dimension is
an essential part of the technical phenomenon itself, something like its
enigma. The evolution of the “prosthesis,” not itself living, by which the
human is nonetheless defined as a living being, constitutes the reality of
the human’s evolution, as if, with it, the history of life were to continue by
means other than life: this is the paradox of a living being characterized in
its forms of life by the nonliving – or by the traces that its life leaves in the
nonliving (Stiegler, 1998: 50).
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
Stiegler’s description thus presents human evolution as irreducibly biological and
technical, occurring as a process of what he terms “epiphylogenesis” (evolution of
human life “by means other than life”). The human becomes itself through technical
mediation, and human evolution is, necessarily, the “evolution of the ‘prosthesis,’”
which is, from the beginning, an exteriorization of the living organism in its pursuit of
life by means other than life. “From this perspective,” Hansen argues, “the medium is,
from the very onset, a concept that is irrevocably implicated in life, in the
epiphylogenesis of the human, and in the history to which it gives rise qua history of
concrete effects” (2006: 299-300). By the same token, human life is irrevocably
implicated in the process of mediation:
Thus, long before the appearance of the term ‘medium’ in the English
language, and also long before the appearance of its root, the Latin term
medium (meaning middle, center, midst, intermediate course, thus
something implying mediation or an intermediary), the medium existed as
an operation fundamentally bound up with the living, but also with the
technical. The medium, we might say, is implicated in the living as essentially technical,
in what I elsewhere call ‘technical life’; it is the operation of mediation – and perhaps
also the support for the always concrete mediation – between a living being and the
environment. In this sense, the medium perhaps names the very transduction between
the organism and the environment that constitutes life as essentially technical; thus it
is nothing less than a medium for the exteriorization of the living, and
correlatively, for the selective actualization of the environment, for the
creation of what Francisco Varela calls a ‘surplus significance’, a
demarcation of a world, of an existential domain, from the unmarked
environment as such (Hansen, 2006: 300, emphases added).
From the vantage point of critical pedagogy, as noted previously, the human is
necessarily understood as an open-ended being-in-process. It is, in fact, only upon such
an understanding of the human that any sort of substance can be found in critical
pedagogy’s dialectical assertion that the oppressive historical contexts of students’ lived
experience and learning dig into and shape the content of their humanity. And it is only
upon such an understanding that any sort of hope can be found in the promise that
things can be different. From the vantage point of media theory, the processuality of
our humanity is necessarily understood as being-in-media. Thus, mirroring Freire’s
assertion that critical pedagogy “denies that man is abstract, isolated, independent, and
unattached to the world” and that “it also denies that the world exists as a reality apart
from people” (2005: 81), Stiegler argues that “[t]he paradox [of being-in-media] is to have
to speak of an exteriorization without a preceding interior: the interior is constituted
in exteriorization ... the appearance of the human is the appearance of the technical”
(1998: 141). For Stiegler, the aporetic relationship between “inside” and “outside,”
“interior” and “exterior,” “subject” and “object,” can only be understood as différance
– a movement of differing and deferral without origin, a transductive synthesis
mutually constituting the who and the what while giving the illusion of their opposition.
Media are the passageways of being, the transductive circuitry by which human and
world constitute each other as essentially inseparable in “technical life.” Through
technical mediation, we “selectively actualize” our environments that actualize us,
creating worlds in and through which we become ourselves. “Making worlds is
something humans do in order to be human. Our species came to define itself by our
need to live in worlds we’ve had a hand in building” (Alvarez, 2018). Just as critical
pedagogy posits the open-ended, mutual construction of human and world on its way
to deconstructing the ontological fiction of the human as a passive, self-enclosed being
underwriting the banking concept of education, so media theory posits life itself as
technical mediation on its way to deconstructing the ontological fiction of the human
as independent singularity whose humanity is not defined in communion with the
world but by instrumental dominion over it. “Humans simply don’t want to give up
their self-assigned precious place in the modern cosmological hierarchy,” Dominic
Pettman writes (2006: 163). “Those definitions of technology which expel this
phenomenon outside of the human sphere, quarantining it in ‘objects’ and ‘machines’
and ‘artificial entities,’ do so according to the logic of apartheid” (2006: 164). And there
are consequences. Inasmuch as the banking concept of education traps us in pacified
submission to oppressive power arrangements that anesthetize our critical capacities,
“ignoring the function, genealogy, and history of those sociotechnical imbroglios […]
that construct our political life and our fragile humanity” (Latour, 1994: 42),
hubristically maintaining the illusion that we are always “in the driver’s seat” – that we
are always, only, beings in and not with and through the world – blinds us to the ways
that the fragility of ourselves and our worlds is harnessed, exploited, and “enframed”
in ways that point to the eventual destruction of both. “Quite simply, then, we are
slaves to the notion that we are masters” (Pettman, 2006: 171).
As mentioned previously, the stakes here are quite high. Without closely and critically
working through how the mediation of life itself operates as the ontological condition
of possibility for the radical project of critical pedagogy as such, we run the perpetual
risk of accepting and abiding by the ontological fictions of techno-political apparatuses
that have an explicitly vested interest in foreclosing that possibility. “For the most
part,” as Paulo Blikstein writes, “schools have adopted computers as tools to empower
extant curricular subtexts – i.e., as information devices or teaching machines” (2008:
209). And one can see how, nearly fifty years after Freire published his seminal work,
the deployment of digital technologies in the classroom offers new opportunities for
re-inscribing the conditions of students’ subjective passivity that Freire linked to the
banking concept:
… the traditional use of technology in schools contains its own hidden
curriculum. It surreptitiously fosters students who are consumers of
software and not constructors; adapt to the machine and not reinvent it;
and accept the computer as a black box which only specialists can
understand, program, or repair. For the most part, these passive uses of
technologies include unidirectional access to information (the computer
as an electronic library), communicate with other people (the computer as
a telephone), and propagate information to others (the computer as a
blackboard or newspaper). Not surprisingly, therefore, the new digital
technologies are commonly called ICT (Information and Communication
Technologies). In sum, a [critical digital pedagogy] – injecting into a
critique of education a subversive political agenda – might position
computers, for the most, as commonly recruited by “the system” to
inculcate in future consumers the learned passivity that supports
capitalism by perpetuating its inherent inequities. Yet, the most
revolutionary aspect of the computer […] is not to use it as an information
machine, but as a universal construction environment (Blikstein, 2008:
209).
When it comes to learning as the vital process of humanization, digital
technologies only “afford” as much as our critical pedagogical relation to them
makes possible. As Blikstein notes, students’ capacities to learn with and through
these technologies depends on the contexts in which “learning” is defined as
either “passive use” or as a matter of creativity and construction that enjoins
students to directly engage and explore the medial points where their humanity
can be felt in the circuital flow between “inside” and “outside,” between self and
world. From the analog to the digital, education without an active, critical,
probing concern for the medial conditions of being-in-process, for the human
as an open-ended thing whose being is mediated in and through the world, will
further expose the vulnerable humanity of students and teachers to the
oppressive forces that aim to pacify and subjugate them, which, in the age of
global neoliberal dominance, is “part of [the] broader goal of creating new
subjects wedded to the logic of privatization, efficiency, flexibility, the
accumulation of capital, and the destruction of the social state” (Giroux, 2011: 9-10).
The techno-fetishist conceit that digital media will “enhance” learning on their own
rests on the very same ontological assumptions that critical pedagogy and/as media
theory aim(s) to deconstruct. In this context, then, to “think critically about our tools,”
as Morris and Stommel encourage us to do, is to eschew thinking that presumes tools
to be simply “ours” to “use”; it is, rather, to embrace a praxical understanding of such
tools, and ourselves, as being situated within the medial networks through which life
and self and world become in – and as – flux. Likewise, it is to see that integrating
digital media into the learning process ultimately serves to bolster our contemporary
conditions of neoliberal domination insofar as they continue to sediment and enforce
the ontological fiction of clear distinctions between subject and object, inside and
outside, user and tool, human and world. However, as Mark Deuze writes, “If we let
go of this deception – this dualistic fallacy of domination of man over machine (or
vice versa) – it may be possible to come to terms with the world we are a part of in
ways that are less about effects, things and what happens, more about process [and]
practice” (2012: xiii). What might it look like, then, to practice a critical digital pedagogy
that – as all critical pedagogy inevitably must – fosters and bears witness to learning as
the struggle of beings-in-process to become “more fully human,” to learning not as a
matter of “banking,” “using,” “quantifying,” or “testing,” but as “a way of living that
fuses life with material and mediated conditions of living in ways that bypass the real
or perceived dichotomy between such constituent elements of human existence”
(Deuze, 2012: 3)? This, again, is the core of critical pedagogy as such. In any of its
multitudinous variations and iterations, the radical project of critical pedagogy is, at
base, “a matter of studying reality that is alive, reality that we are living inside of, reality
as history being made and also making us” (Freire, 1985: 18). As an extension of the
actuated environment in which the technical mediation of life itself takes place, what
might it mean to learn to become human in a digitally connected reality that is, itself,
“alive”? What might it mean, and what practical forms might it take, if we approach
the process of learning with digital technologies as a matter of aiding – of midwifing3 –
students’ development of their own critical capacities to not only read the world as a
concept or text, but to intervene in it as the vibrant contexts of their being – not just
as an objective “outside” environment in which they live, but as the porous, moveable
circuitry mediating life itself, shaping who they are at any given time as they struggle
to shape it?
References
Ahalt S. & Fecho, K. (2015) Ten Emerging Technologies for Higher Education [White Paper].
viewed 1 April 2019, from RENCI.org: <https://renci.org/wp-
content/uploads/2015/02/EmergingTechforHigherEd.pdf>.
Alvarez, M. (2017) ‘Administering Evil’, The Baffler, 29 November, viewed 8 April 2019,
<https://thebaffler.com/the-poverty-of-theory/administering-evil>.
Alvarez, M. (2018) ‘The Death of Media’, The Baffler, 20 June 2018, viewed 10
September 2018, <https://thebaffler.com/the-poverty-of-theory/the-death-of-
media>.
Blikstein, P. (2008) ‘Travels in Troy with Freire: Technology as an Agent of
Emancipation’, in P. Noguera & C. Torres (eds.), Social Justice Education for Teachers:
Paulo Freire and the Possible Dream, Rotterdam: Sense Publishers, pp. 205-244.
Bousquet, M. (2008) How the University Works: Higher Education and the Low-Wage Nation.
New York: NYU Press.
Brock, R. & Orelus, P. (eds.) (2015) Interrogating Critical Pedagogy: The Voices of Educators
of Color in the Movement. New York: Routledge.
Brown, W. (2015) Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: ZONE
Books.
Cervone, J. (2018) Corporatizing Rural Education: Neoliberal Globalization and Reaction in the
United States. Cham: Palgrave Macmillan.
Clarke, J. (2004) ‘Dissolving the Public Realm? The Logics and Limits of
Neoliberalism’, Journal of Social Policy, 33(1), pp. 27-48.
Conole, G. & Dyke, M. (2004) ‘Understanding and Using Technological Affordances:
A Response to Boyle and Cook’, Research in Learning Technology, 12(3), pp. 301-308.
Deuze, M. (2012) Media Life. Cambridge, UK: Polity Press.
Dewey, J. (1916) Democracy and Education, in J. Boydston (ed.) (2008) The Middle Works
of John Dewey, 1899-1924 - Volume 9, Carbondale: Southern Illinois University Press.
Ellsworth, E. (1989) ‘Why Doesn’t this Feel Empowering? Working Through the
Repressive Myths of Critical Pedagogy’, Harvard Educational Review, 59(3), pp. 297-
324.
Erevelles, N. (2000) ‘Educating Unruly Bodies: Critical Pedagogy, Disability Studies,
and the Politics of Schooling’, Educational Theory, 50(1), pp. 25-47.
Feenberg, A. & Jandrić, P. (2015) ‘The Bursting Boiler of Digital Education: Critical
Pedagogy and Philosophy of Technology’, Knowledge Cultures, 3(5), pp. 132-148.
Foucault, M. (1995) Discipline and Punish (trans. A. Sheridan). New York: Vintage
Books.
Freire, P. (1985) ‘Reading the World and Reading the Word: An Interview with Paulo
Freire’, Language Arts, 62(1), pp. 15-21.
Freire, P. (1998) Pedagogy of Freedom: Ethics, Democracy, & Civic Courage (trans. P. Clarke).
Lanham: Rowman & Littlefield.
Freire, P. (2005) Pedagogy of the Oppressed (30th Anniversary Edition) (trans. M. Ramos).
New York: Continuum.
Fuller, M. & Goffey, A. (2012) Evil Media. Cambridge: The MIT Press.
Giroux, H. (2011) On Critical Pedagogy. New York: Continuum.
Giroux, H. (2014) Neoliberalism’s War on Higher Education. Chicago: Haymarket Books.
Hall, G. (2016) The Uberfication of the University. Minneapolis: University of Minnesota
Press.
Hansen, M.B.N. (2006) ‘Media Theory’, Theory, Culture & Society, 23(2-3), pp. 297-306.
Hayes, S. (2015) ‘Counting on Use of Technology to Enhance Learning’, in P. Jandrić
& D. Boras (eds.) Critical Learning in Digital Networks, New York: Springer, pp. 15-
36.
Hutchby, I. (2001) ‘Technologies, Texts and Affordances’, Sociology, 35(2), pp. 441-456.
Jeffery, D. & Smith, K. (2013) ‘Critical Pedagogies in the Neoliberal University: What
Happens When They Go Digital?’, The Canadian Geographer / Le Géographe Canadien,
57(3), pp. 372-380.
Kincheloe, J. (2004) Critical Pedagogy. New York: Peter Lang.
Latour, B. (1994) ‘On Technical Mediation – Philosophy, Sociology, Genealogy’,
Common Knowledge, 3(2), pp. 29–64.
Liu, A. (2013) ‘Imagining the New Media Encounter’, in R. Siemens & S. Schreibman
(eds.) A Companion to Digital Literary Studies, Malden: Wiley-Blackwell, pp. 3-25.
Morris, S. & Stommel, J. (2018) An Urgency of Teachers: The Work of Critical Digital
Pedagogy. [Creative Commons ebook] Pressbooks, accessed 10 August 2018,
<https://criticaldigitalpedagogy.pressbooks.com/>.
Mullen, M. (2002) ‘“If You’re Not Mark Mullen, Click Here”: Web-Based Courseware
and the Pedagogy of Suspicion’, The Radical Teacher, 63, pp. 14-20.
Newfield, C. (2008) Unmaking the Public University: The Forty-Year Assault on the Middle
Class. Cambridge: Harvard University Press.
Newfield, C. (2016) The Great Mistake: How We Wrecked Public Universities and How We
Can Fix Them. Baltimore: Johns Hopkins University Press.
Pettman, D. (2006) Love and Other Technologies: Retrofitting Eros for the Information Age.
New York: Fordham University Press.
Porfilio, B. & Ford, D. (2015) ‘Schools and/as Barricades: An Introduction’, in B.
Porfilio & D. Ford (eds.) Leaders in Critical Pedagogy: Narratives for Understanding and
Solidarity, Rotterdam: Sense Publishers, pp. xv-xxv.
Rancière, J. (1991) The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation (trans.
K. Ross). Stanford: Stanford University Press.
Scholz, T. (2011) ‘Introduction: Learning through Digital Media’, in T. Scholz (ed.)
Learning Through Digital Media: Experiments in Technology and Pedagogy, New York:
Institute for Distributed Creativity, pp. XIII-XIII.
Schrecker, E. (2010) The Lost Soul of Higher Education: Corporatization, the Assault on
Academic Freedom, and the End of the American University. New York: The New Press.
Shapiro, S. (1999) Pedagogy and the Politics of the Body: A Critical Praxis. New York:
Garland.
Shor, I. (1992) Empowering Education: Critical Teaching for Social Change. Chicago: The
University of Chicago Press.
Steinberg, S. (2015) ‘Preface’, in B. Porfilio & D. Ford (eds.) Leaders in Critical Pedagogy:
Narratives for Understanding and Solidarity, Rotterdam: Sense Publishers, pp. ix-xi.
Stiegler, B. (1998) Technics and Time, 1: The Fault of Epimetheus (trans. R. Beardsworth &
G. Collins). Stanford: Stanford University Press.
Talib, S. (2018) ‘Social Media Pedagogy: Applying an Interdisciplinary Approach to
Teach Multimodal Critical Digital Literacy’, E-Learning and Digital Media, 15(2), pp.
55-66.
U.S. Department of Education (2019) ‘Use of Technology in Teaching and Learning’,
viewed 1 April 2019, <https://www.ed.gov/oii-news/use-technology-teaching-
and-learning>.
Watters, A. & Young, J. (2016) ‘Why Audrey Watters Thinks Tech is a Trojan Horse
Set to “Dismantle” the Academy’, The Chronicle of Higher Education, 18 May, viewed
5 April 2019, <https://www.chronicle.com/article/Why-Audrey-Watters-Thinks-
Tech/236525>.
Weiner, E. (2004) Private Learning, Public Needs: The Neoliberal Assault on Democratic
Education. New York: Peter Lang.
Notes
1 For more on critical pedagogy’s antecedents and on Freire’s intellectual precursors and influences, see: Allen, R.L (2013) ‘Whiteness and Critical Pedagogy’, Educational Philosophy and Theory 36(2), pp. 121-136; Deans, T. (1999) ‘Service-Learning in Two Keys: Paulo Freire’s Critical Pedagogy in Relation to John Dewey’s Pragmatism’, Michigan Journal of Community Service Learning 6(1), pp. 15-29; Fischman, G.E. & McLaren, P. (2005) ‘Rethinking Critical Pedagogy and the Gramscian and Freirian Legacies’, Cultural Studies ↔ Critical Methodologies 5(4), pp. 425-447; Giroux, H.A. (2011) On Critical Pedagogy. New York: Continuum; Gottesman, I. (2010) ‘Sitting in the Waiting Room: Paulo Freire and the Critical Turn in the Field of Education’, Educational Studies 46(4), pp. 376-399; Kincheloe, J.L. (2004) Critical Pedagogy. New York: Peter Lang; Kress, T. & Lake, R. (eds.) (2013) Paulo Freire’s Intellectual Roots: Toward Historicity in Praxis. London: Bloomsbury.
2 “Transduction, following Gilbert Simondon’s conceptualization, is a relation in which the relation itself holds primacy over the terms related” (Hansen, 2006: 299).
3 It is especially helpful to think of the teaching side of the vulnerable educational encounter, as I’ve described it here, in the terms laid out by Jacques Rancière in his (in)famous analysis of The Ignorant
Schoolmaster. For Rancière, this encounter will only re-inscribe the inequalities and un-democratic hierarchies in the given aesthetic arrangement of our world if it begins from the presumption of inequality, with the teacher occupying the privileged position of the one who knows more than her pupils and who tries, however genuinely, to reach a state of equal knowledge between her and her pupils through teaching. The educational encounter, instead, must begin from the (democratic) presumption of equality in the capacity to learn with different forms of knowledge and expertise signaling different “manifestations” of common intelligence, which must be used by the teacher to pose questions and to try to help draw out (“midwife”) and bear witness to students’ exercise of their capacity to learn: “Here is everything that is in Calypso: The power of intelligence that is in any human manifestation. The same intelligence makes nouns and mathematical signs. What’s more, it also makes signs and reasonings. There aren’t two sorts of minds. There is inequality in the manifestations of intelligence, according to the greater or lesser energy communicated to the intelligence by the will for discovering and combining new relations; but there is no hierarchy of intellectual capacity” (1991: 27).
Maximillian Alvarez is a dual-PhD candidate in Comparative Literature and History at the University of Michigan.
Email: [email protected]
Special Issue: Rethinking Affordance
Destituting the Interface:
Beyond Affordance and
Determination
TORSTEN ANDREASEN
University of Copenhagen, Denmark
Media Theory
Vol. 3 | No. 1 | 103-126
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
This article proposes the affordance of the medium and the determination of the dispositif as two distinct approaches to media or technology in general. Taking the dialectical tension between affordance and determination, between medium and dispositif, as its point of departure, the article explores Transmute Collective’s Intimate Transactions (2005) as a problematic fusion of the two approaches. A historicising re-reading of Guy Debord’s Society of the Spectacle with regard to current forms of digital control and modes of production then argues that contemporary alienation takes place within the digital interface as the zone of indistinction between affordance and determination, and that instead of designing liberating machines or inventing subjective evasions of the dispositif, emancipatory engagement requires a destitution of the interface.
Keywords
interface, dispositif, affordance, destitution
In his book The Interface Effect (2012), Alexander Galloway proposed two alternative
readings of the Greek word techne: media as hypomnesic inscriptions on a substrate and
modes of mediation as the ethos of lived practice (16-18). The first (exemplified by
McLuhan, Kittler, and Manovich) is coherent with a reading of affordance as the
inherent functionality of objects springing from their material constitution or design.
The second is explicitly a reference to the analysis of the dispositif as developed in
Deleuze’s reading of Foucault.
Taking this distinction a bit further, we can say that where the medium affords a certain
number of possibilities or use cases because of its physical design, the dispositif determines
and limits a set of possible actions. The affordances of the medium afford even the
limits of use, even the finitude of possibility is somehow a gift of the medium. The
dispositif, on the other hand, is an operation of power, and as Foucault stated, power
is an action upon an action, a limitation of possible behaviour, and, thus, even the
opening of possibility is a restriction of potential by predetermination.
The present article questions the critical or emancipatory potential of these two
fundamental approaches to media. Thinker of the dispositif par excellence, Foucault
had little faith in the emancipatory aspirations of what I am here describing as a theory
of affordance, the aim of which is to design an apparatus or medium that, when used
correctly, would necessarily lead to a better world: “Men have dreamed of liberating
machines. But there are no machines of freedom, by definition” (Foucault, 2002: 356).
Galloway’s faith in the critical potential of the Deleuzian refashioning of Foucault’s
theory of the dispositif, on the other hand, hinges on the invention of new subjective
forms to escape the determination of dispositival control.
In his famous essay on dispositival determination in the age of cybernetics, “Postscript
on the Societies of Control”, Deleuze stated: “There is no need to fear or hope, but
only to look for new weapons” (Deleuze, 1992: 4), and Galloway clearly chooses the
inventive exploit of the determinations of computational protocols as the best weapon
at his disposal:
The goal, then, is not to destroy technology in some neo-Luddite delusion,
but to push it into a state of hypertrophy, further than it is meant to go.
Then, in its injured, sore, and unguarded condition, technology may be
sculpted anew into something better, something in closer agreement with
the real wants and desires of its users (Galloway, 2005: 30).
Marx criticised the failure to distinguish between “machinery itself” and “the capitalist
application of machinery”, a failure that led to the “stupidity of contending” against the first
instead of the second (Marx, 1976: 569). If we take Marx at his word, what is needed,
then, is an analysis that – in addition to the history of technology itself (affordance of
the medium) and the history of its utility within power formations (determination of
the dispositif) – takes the history of capital into account. Only via such a perspective
can technology and its power be properly periodised and thus critically understood.
The means of production are no longer limited to the factory but now include
everything from smartphones to urban infrastructures, and algorithmic alienation now
takes place beyond human perception and cognition. Any critique must take into
account this development within the three-fold structure of the history of technology,
the history of power formations and the history of capital.
Taking the dialectical tension between affordance and determination, between medium
and dispositif, as its point of departure, the following explores Transmute Collective’s
Intimate Transactions (2005) as a problematic fusion of the two approaches. A
historicising re-reading of Guy Debord’s Society of the Spectacle with regard to current
forms of digital control and modes of production then argues that contemporary
alienation takes place within the digital interface as the zone of indistinction between
affordance and determination, and that instead of designing liberating machines or
inventing subjective evasions of the dispositif, emancipatory engagement requires a
destitution of the interface.
Affordance of the medium – A reciprocal relation
The media theoretical approach to the material world as a set of affordances is derived
from the term invented by James Gibson, not to describe media but a specific
complementarity between animal and environment:
The affordances of the environment are what it offers the animal, what it
provides or furnishes, either for good or ill. The verb to afford is found in the
dictionary, but the noun affordance is not. I have made it up. I mean by it
something that refers to both the environment and the animal in a way
that no existing term does. It implies the complementarity of the animal
and the environment (Gibson, 2015: 119).
The analytical reach of the concept is meant to go beyond the mere phenomenal
environment of a given species and instead designate an interrelation of subjective and
objective, psychical and physical, environment and behaviour. Nonetheless, the focal
point of the analysis of these interrelations remains the physical constitution of the
object at hand: “The object offers what it does because it is what it is” (130). It does
not offer an essence, however, but a number of possible relations afforded those
inherently able to (mis-)perceive the “exteroceptible information” of the world in
relation to a “coperceptible self” (133). The physical constitution of the object signals
possible outcomes of interaction with a perceiving agent able to conceive of what is to
be gained or lost from what is afforded. This complementary relation between
objective information and perceiving self with regard to the affordances of the
environment is what Gibson describes as an “ecological” approach.
It is no surprise that such a theory of the perception of conceivable use has been of
importance for certain approaches to design, Donald Norman being the name usually
mentioned. Gibson acknowledged that the “information pickup” of the perceiver, i.e.
the perceiver’s ability to assess affordances, was open to error, to misperception.
Affordances spring from objective complementarity and are thus not dependent on
actual perception – the fall of a tree in the forest affords both pain and accessible
lumber regardless of whether anyone is there to take the hit or gather the bounty.
According to Norman, the task of design, then, is to render visible the clues to the
operations of things: “Perceived affordances help people figure out what actions are
possible without the need for labels or instructions. I call the signalling component of
affordances signifiers” (Norman, 2013: 13). Good design provides enough visual cues
for the information pickup of affordances to run smoothly without explanation
beyond these signifiers.
It is from this perspective on the affordances of design that we can go beyond Gibson
and consult the first of Galloway’s two approaches to the Greek techne: the medium as
“substrate and only substrate,” as “hypomnesis,” as “the externalisation of man into
objects” (Galloway, 2012: 16).1 The material world can be transformed by humans so
as to afford other affordances, and the medium is just such a changed surface full of
signifiers that change the affordances of the environment.
In Galloway’s three personifications of an approach to techne as medium – McLuhan,
Kittler, and Manovich – we clearly see the transformation of the material world in
order to ameliorate its affordances and its signifiers. McLuhan presented media not as
“externalisations” but as “extensions of man” that “massage” society: “Societies have
always been shaped more by the nature of the media by which men communicate than
by the content of the communication. […] It is impossible to understand social and
ANDREASEN | Destituting the Interface
107
cultural changes without a knowledge of the workings of media” (McLuhan and Fiore,
1967: 8). The message was the affordances of the medium, not the content transmitted
nor the actual human use of either content or medium. For Kittler, different media
afford different Lacanian modes: by virtue of their discrete encoding of the world,
block letters convey the symbolic register; as cinema fuses discrete images into one
flowing movement that affords the recognition of the self in motion it creates the
Lacanian imaginary; and the phonograph’s registration of sound prior to and beyond all
meaning affords a rare mediated glimpse at the Lacanian real as that which never ceases
not to write itself (Kittler, 1999: 15-16). In The Language of New Media, Manovich asked
the questions “How does the shift to computer-based media redefine the nature of
static and moving images? What is the effect of computerization on the visual
languages used by our culture? What new aesthetic possibilities become available to
us?” (Manovich, 2001: 9). Manovich is basically asking: what are the affordances of
new media and what are their signifiers?
To my knowledge, neither McLuhan, Kittler, nor Manovich references Gibson. Evoking
them in the description of an affordance approach to media is thus not to shed new
light on Gibson but to characterise the function of his terminological contribution, so
frequently used in contemporary media theory, as a specific focus – from Gibson’s
ecology of perception to Kittler’s media archaeology – on the formal characteristics of
media and what they may afford the perceiver or agent. I find the inclusion of
affordance a useful modification of Galloway’s critique of the media approach because
of the resulting possible tension with the determination of the dispositif. Where
Galloway distinguished medium from mode, object from action, I propose the
distinction between a medium that affords and a dispositif that determines in order to
evaluate their respective capacities for emancipatory engagement with the status quo.
What is mostly absent from Gibson’s concept of affordance and Galloway’s thinkers
of the medium are relations of power, subjugation or exploitation:
What the male affords the female is reciprocal to what the female affords
the male; what the infant affords the mother is reciprocal to what the
mother affords the infant; what the prey affords the predator goes along
with what the predator affords the prey (Gibson, 2015: 127).
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
108
From a political point of view, such a reciprocity requires an exceedingly formalist and
potentially deeply disturbing abstraction of the relations involved. In the cases
mentioned, affordance involves a relation between a giver and a receiver, the prey
giving itself to the receiving predator. According to Gibson, “Behaviour affords
behaviour” (127) and even violent domination constitutes a reciprocal relation.
Although it in no way follows that Gibson considers this reciprocity necessarily
equitable, symmetrical, or just, the affordance analysis does not afford a view of the
structural dissymmetries that spring from material conditions and collective formations:
Why has man changed the shapes and substances of his environment? To
change what it affords him. He has made more available what benefits him
and less pressing what injures him. In making life easier for himself, of
course, he has made life harder for most of the other animals (122).
Man changes the material substrate for his own benefit and the detriment of other
species just as those in power change systems of government to favour their position.
But what of structural changes to the human environment that fall along divides of
class, gender and race – the enclosure of the commons, questions of suffrage,
reproductive rights, equal pay, racialised credit forms, and biased algorithms
determining who should be hired or fired, and who is eligible for parole? What happens
when binary reciprocity proves insufficient to adequately capture the structural power
operations of the given medium?
In view of the obvious political limitations of any approach based on reciprocity,
Matthew Fuller correctly problematises Gibson’s reliance on what is basically a
homeostatic worldview which, although suggestive as “a materialist formulation of the
micropolitics of detail that also escapes the form-content dichotomy” (Fuller, 2005:
46), should be enhanced by further engagement with what Foucault, in his description
of the prison as a microphysics of power (i.e., as dispositif), called “the attentive
malevolence that turns everything to account” (Foucault, 1995: 139 quoted in Fuller,
2005: 47). Although the analysis of affordances provides useful insight into the
reciprocal basis of possible binary relations of perception and action, it seems
exceedingly difficult to wring from it a critical analysis of the role of media in more
complex power structures.
Determination of the dispositif – Lines of fracture
When it comes to the structural dissymmetries of what is afforded to whom, or rather,
whose actions are determined by what, the dispositif provides a useful analytical tool.
Although the term “dispositif” has a less clear origin than “affordance,”2 Foucault’s
usage in the mid- to late seventies is no doubt fundamental:3
1. “a thoroughly heterogeneous ensemble consisting of discourses, institutions,
architectural forms […]”
2. “the nature of the connection that can exist between these heterogeneous
elements”
3. a “formation which has as its major function at a given historical moment
that of responding to an urgent need. The apparatus thus has a dominant
strategic function” (Foucault, 1980: 194-195).
From the point of view of the dispositif, the material world is itself a product of power
dynamics beyond the reciprocal relations of perceiver/perceived, giver/receiver,
agent/acted upon. Where the affordance approach dissolves the subject/object
relation by way of the reciprocal constitution of afforded relations, the dispositif posits
a structural power that precludes any reciprocity. In this perspective, the object
does not offer what it does simply because it is what it is: according to the logic of
the dispositif, what is on offer is determined by the strategic function of the ordering
of the heterogeneous elements at hand. What the object is, what it offers, and to
whom, are all determined by structural operations beyond the heterogeneous elements
of the ensemble.
Where, according to Gibson, behaviour affords behaviour in the afforded relation
between object and agent, giver and receiver, Foucault insists that “To govern, in this
sense, is to structure the possible field of action of others […]” and that “the exercise
of power [is] a mode of action upon the actions of others […]” (Foucault, 1982a: 790).
While Gibson focused on how actions open specific possibilities of further action,
Foucault analysed action as either that which limits or is limited by other actions or
that which seeks emancipation by refusing the determinations of power through the
invention of other forms of action.
Just as Gibson inventively transformed a verb into a noun, Foucault defined the
infinitive verb “to govern” in the form of a noun he used on many occasions –
governmentality: “This contact between the technologies of domination of others and
those of the self I call governmentality” (Foucault, 1982b: 19). And these two aspects
of governmentality, which we can call the technologies of domination and the
techniques of self,4 correspond to the two meanings of the word “subject” in
Foucault’s thought: “There are two meanings of the word ‘subject’: subject to
someone else by control and dependence; and tied to his own identity by a conscience
or self-knowledge” (Foucault, 1982a: 781).
In Foucault, we thus find a relation between the operations of power and the possible
field of actions of the subject, “governmentality” being the point of contact between
subject and power and the “subject” being both the subjected individual and the agent
whose fundamental freedom allows the possibility of action beyond the determinations
of power.5 It is this double relation – subject/power, agency/subjection – that is the
focal point of the operations of the dispositif.
In the lecture “What is a dispositif?”, which served as a main reference for Galloway’s
second reading of the word techne, Deleuze formulated this focal point of the
Foucauldian dispositif as the complex relation between lines of visibility, enunciation,
force, and subjectification (Deleuze, 1992b) that should be disentangled by
cartographical analysis. Visibility and enunciation encompass the question of
knowledge – what can be recognised and what can be expressed with any hope of
comprehension – while the lines of force, of course, designate power relations. The
lines of subjectification are related to the so-called “lines of fracture” where “the
productions of subjectivity escape from the powers and the forms of knowledge
[savoirs] of one social apparatus [dispositif] in order to be reinserted in another, in forms
which are yet to come into being” (Deleuze, 1992b: 162).
While the first three lines – visibility, enunciation, force – execute the technologies of
domination or determination to which the subject is subjected, the techniques of self
and thus the self-knowledge of the subject open the possibility of forms of life that
escape the determinations of the dispositif to such an extent that it may force the
“movement of one apparatus to another”:
This bypassing of the line of forces is what happens when it turns on itself,
meanders, grows obscure and goes underground or rather when the force,
instead of entering into a linear relationship with another force, turns back
on itself, works on itself or affects itself. This dimension of the Self is by
no means a pre-existing determination which one finds ready-made
(Deleuze, 1992b: 161).
This escape from “pre-existing determination” of a subject which, “tied to his own
identity by a conscience or self-knowledge”, “goes underground” is, of course, what
Galloway was referencing when he proposed that the emancipation from the
determinations of protocol, a term with which Galloway designates the dominant
dispositival form of our contemporary digital condition, requires not technological
destruction but pushing it “into a state of hypertrophy, further than it is meant to go.”
Actions beyond the determination of the dispositif would force a restructuring of its
power operations in the attempt to re-establish a stable order, one that may prove
better for the subjects dominated by it.
We can, here, contrast the lines of subjectification that turn back on themselves in the
Deleuzian dispositif with the linear reciprocity of the affordances of the medium. The
reciprocity of affordances can be described as the rectilinear relationship between giver
and receiver, agent and environment, power and subject. The agent either correctly
assesses the affordances at hand or not; the afforded relation is there whether it is
realised or not. While the reciprocity of affordance exists simply because of a specific
possible compatibility of agent and object, the approach of the dispositif, on the other
hand, insists on the operational dissymmetries between participants. The operations
of power determine the subject by acting on its actions, but the subject always retains
a certain amount of freedom with regard to this determination, lest power turn to
violence. The approach of the dispositif, then, insists on locating the fracture, the point
where the reciprocity of domination and subjugation stops or even slightly diverges
from a rectilinear relation and the subjugated subject becomes something else.
Becoming something else is crucial for the Foucauldian theory of resistance: “Maybe
the target nowadays is not to discover what we are but to refuse what we are”
(Foucault, 1982a: 785).
Intimate Transactions – Design for engagement
It is thus possible to sketch two different approaches to the Greek word techne: one
considering the affordances of the medium, the other focusing on the determinations
of the dispositif – one having to do with the design of possibility, the other with the
possibility of creating a new form of life, what Foucault called ethos, which surpasses
the determinations of power.
Foucault sometimes described the creation of ethos, ethopoiesis, as “the arts of oneself”
or “the aesthetics of existence.” He argued that in antiquity, the search for an ethical
practice was a question of giving one’s life a specific form in which one could recognise
oneself, be recognised by others and perhaps even serve as an example for posterity
(Foucault, 1988). With this aspect of aesthetics, what Deleuze termed “Life as a work
of art” (Deleuze, 1995), it is no wonder that the subjective refusal of determination via
lines of fracture in the engagement with the dispositif has become something of an
inspiration for politically engaged art.
However, the artwork’s good intentions of affording lines of fracture somehow risk
constituting a zone of indistinction between the media design of affordances and the
ethos of dispositival fracture. One such artwork is Transmute Collective’s Intimate
Transactions (2005), which clearly expresses the attempt to appropriate design
affordances as a means of producing lines of fracture. The work is an installation
involving two separate physical locations, each containing a large “screen-space” and
a so-called “Bodyshelf” that serve as the media for an interfacial connection between
the twin sites. The screens open onto a shared virtual world with which the
installation’s two participants can interact via sensors in the shelf. Participants use full-
body movements to control the movements of their avatars, to navigate the virtual
world, and to interact with its creatures. In turn, the interactions in screen-space are
accompanied by haptic and sonic feedback in the Bodyshelf as well as a haptic pendant
on the participants’ abdomens.
The virtual world is inhabited by creatures from whom the participants can collect
assets for their own avatars, but this impoverishment of the virtual environment results
in a slower and more wizened world that can only be reinvigorated by the collaboration
of the two participants:
They must conjoin their avatars and work in unison to return assets to the
creatures. Again this interaction relies upon movements on the Bodyshelf,
which navigate the conjoined avatars. And again, the Bodyshelf provides
a conduit for feedback. When their avatars are interlocked, the users can
feel each other’s push and pull through the Bodyshelf. As their motion is
relayed back and forth, they become part of a remote, embodied
collaboration (Hamilton, 2008: 180).
The installation thus provides an immersive experience where the entire body is
engaged in a collaboration for the continued vitality of the virtual world. Pia Ednie-
Brown, who was one of the designers of the work’s haptic feedback, calls this designed
collaboration a “relational design ethics,” “striving for a balance between affecting and
being affected” (Ednie-Brown, 2007: 329 quoted in Bertelsen, 2012: 33).
The relation between affecting and being affected evokes the Deleuzian reading of
Spinoza’s concept of affect (affectus) as the ability to affect and be affected, i.e., as an
increased or diminished ability to live and act (vis existendi and potentia agendi). Although
Deleuze is not preoccupied with equilibrium or a balance between affecting and being
affected, the ability to affect and be affected plays an important role, for instance in
his just quoted reading of the dispositif, where the line of subjectification “turns back
on itself, works on itself or affects itself.” This affecting of oneself involves a careful
dialectics of affecting and being affected. It requires a turning away, a being disaffected
by power, as well as the twin abilities of affecting and being affected by oneself.
Drawing on Ednie-Brown and Transmute Collective’s artistic director Keith
Armstrong’s description that although “there are many ways to approach the work, it
ultimately rewards participants with a willingness to collaborate” (Armstrong, 2005),
Lone Bertelsen’s analysis describes the work as the “Rigorous attempt to design for
engagement within the ‘logic of affects’ that makes Intimate Transaction a matter of
‘transitivity,’ more than interactivity” (Bertelsen, 2012: 40, my emphasis). Bertelsen is
here clearly in agreement with the work’s creators in her celebration of their “design
for engagement” as a means of going from the interaction of individual subjects to a
logic of affective trans-subjectivity. The interface becomes the zone of indistinction
between the two individuals who enter into new trans-subjective formations because
of the feedback between each participant and the virtual world.
The work seems to take as its primary concern the environmental consequences of
individually constituted subjects whose actions are determined by self-interest: “We
now live under the enduring mantle of a global crisis, a self-imposed act of unparalleled
and seemingly irrational self-destruction, which we misname as ecological – WE are
the crisis” (Armstrong, 2005). This self-destructive “WE” is not ecological. To the
contrary, the crisis is caused by our inability to act ecologically, i.e., in accordance with
what is afforded by the world around us. According to Armstrong, ecology – as a way
of “striving for a balance between affecting and being affected”, one could say – is the
explicit goal of the work: “the way we approach design can have an enormous impact
upon the way that we interact with the world. It can potentially change the way that
we approach, and therefore understand, ecology” (Armstrong quoted in Bertelsen,
2012: 41).
The design affordances of the work should not only bring us to understand ecology,
they should make us engage in an ecological equilibrium. And such equilibrium is, with
good reason, claimed to depend on collective participation instead of individual
appropriation of the assets of the world at hand. Bertelsen draws on Brian Massumi
to speak of a “caring for belonging as such” (Bertelsen, 2012: 42) and, referencing Erin
Manning, she celebrates a “participation” beyond the individual, a participation in a
“relational movement” (Bertelsen, 2012: 44). It is quite clear that the artists, as well as
Bertelsen’s sympathetic reception of their work, claim that the very design of the work
affords a new line of fracture – “an ethical and reparative turn toward a restoring of
ecological balance” (Bertelsen, 2012: 41) – as a means to avoid the current destructive
crisis: “This deliberately designed possibility for (networked) transsituational collaboration
can deterritorialise the more destructive habits of the individual […]” (Bertelsen, 2012:
54, my emphasis).
Society of the interface
Boris Groys’ article “On Art Activism” clearly homes in on what I am trying to address
here:
Art activists want to be useful, to change the world, to make the world a
better place – but at the same time, they do not want to cease to be artists.
And that is the point where theoretical, political and even purely practical
problems arise (Groys, 2016: 43).
Intimate Transactions draws on the theoretical tradition of the dispositif (Deleuze,
Guattari, Massumi) and its focus on ethos – what Galloway called “modes of mediation”
– as a way of pushing the current technologies of domination out of shape so as to
better suit the needs of our contemporary condition. To paraphrase Armstrong, the
liberal subject and its dispositively determined inability to engage in trans-subjective
collaboration is the basis of the current ecological crisis and its reparation depends on
an ethical surpassing of this subjective form. But these reparative aspirations of new
ethos are produced by design affordances as signalled by what Donald Norman called
signifiers. The world is supposedly liberated from the self-destructive tendencies of
individual appropriation of assets because the design rewards participatory
collaboration and punishes both individual appropriation and disengagement. The new
and reparatory ethos thus hinges on a belief in the emancipatory affordances of the
design of the medium. What should be questioned with regard to Intimate Transactions
is, then, the political viability of “designing for engagement” as a zone of indistinction
between affordance and dispositif.
Galloway referenced the ambiguity of the Greek techne in order to argue the political
necessity of going beyond the affordances of the medium via an analysis of the
determinations of the dispositif and its possible exploits. Groys does something
similar, when he evokes the ambiguity of techne as the indistinction between art and
technology, between art and design (46), and proposes the radical perspective of art as
seeing “the present status quo as already dead, already abolished,” while arguing that
the aspirations of design towards “the stabilisation of the status quo will ultimately
show itself as ineffective” (60).
In view of this article’s conceptual trajectory, the problem of the stated intention and
sympathetic reception of Intimate Transactions can be characterised as a result of the
worst possible reading of Deleuze’s rendition of the dispositif. It should be noted here
that while the dispositif is always a matter of the predetermination of power and the
possibilities of indetermination in spite of and in resistance to this predetermination,
Foucault locates the fracture outside of the dispositif (often in the form of asceticism),
whereas Deleuze situates it as a line within the dispositif. The problem is thus precisely
the belief that resistance is determined/given by the dispositif, that the dispositif can
afford or allow for specific modes of (self-transformative) action beyond itself.
Deleuze includes the fracture of resistance within the dispositif as a way of
schematising Foucault’s claim that the analysis of power should always take resistance
as its point of departure (Foucault, 1982a: 780). He in no way implied that fracture
springs from the proper design of the three other lines.
Intimate Transactions thus perfectly incorporates the zone of indistinction between
medium and dispositif, between designed affordances and the lines of fracture of
ethical life as a work of art. Resistance is presented as an affordance of the dispositif
and thus forecloses any hope of actual fracture. The work, therefore, should not be
seen as a “reparative turn toward a restoring of ecological balance” but, rather, as a
clear expression of contemporary technologies of domination. Instead of an activation
of mind and body in a caring ecological collaboration, the positioning of the engaged
body on the Bodyshelf in front of the virtual world of the screen-space should be seen
as a contemporary digital counterpart to the mechanical device of punishment in
Kafka’s penal colony. Intimate Transactions clearly holds a certain amount of truth with
regard to a diagnosis of the present, but it is the opposite of that intended by its
creators. The truth of the work should be found in the perfect allegory of the
read/write operations of contemporary technologies of power, where engagement is
rewarded and disengagement punished, and where the physical and psychic minutiae
of the subject are inscribed in the database as well as onto our very bodily fates.
In order to understand this allegory of our contemporary condition, I propose to
examine Intimate Transactions’ interfacial participation design in the perspective of Guy
Debord’s concept of the spectacle, which served as a periodising characterisation of its
day: “The whole life of those societies in which modern conditions of production
prevail presents itself as an immense accumulation of spectacles. All that once was
directly lived has become mere representation” (Debord, 1995: 12). Debord’s
periodisation is performed simply by paraphrasing the famous opening of the first
chapter of Capital volume 1: Marx’s accumulation of commodities has become an
accumulation of spectacles. The perceptible world itself has been replaced by images,
not because of mass media but because of the increasing intensity of what Marx
described as commodity fetishism which had reigned unchallenged since the fading of
the German and Russian revolutionary momentums in 1923 and the economic boom
in manufacturing and consumption that followed World War II.
According to Debord, the spectacle is the extension of the domain of the economy to
cover all aspects of life: “[…] the autocratic reign of the market economy which had acceded
to an irresponsible sovereignty, and the totality of new techniques of government which
accompanied this reign” (Debord, 1990: 2, my emphasis). Autocratic reign of the market
and new techniques of government – one springs from the other as the images of the
spectacle circulate and operate as governmental techniques. The reign of the market
generates its own modes of subjugation where action is replaced by passive contemplation:
The spectator is simply supposed to know nothing, and deserve nothing.
Those who are always watching to see what happens next will never act:
such must be the spectator’s condition (Debord, 1990: 22).
While this is still, in a certain sense, an excellent description of contemporary binge
watching on the abundantly available streaming services, media no longer let their
images fall upon passive consumers who dare add nothing. Whereas the description of
the passive spectator seems an apt description of the 1980s culture of television, when
the critique of the channel-surfing couch potato was predominant, it now appears
inadequate.
The passive consumption of broadcast media has clearly been replaced by media that
invite the active participation/valuable contribution of the consumer, only for these
media to consume the consumer in turn. The prosumer has been technologically produced only
to be technologically consumed.6 While the spectator watches the image, the image
watches the spectator (Paglen, 2016); while the reader reads an e-book, the “e-book
reader” reads the reader of the e-book (Alter, 2012); and while you gain information
from Google, news services or social media, they all gain a terrible amount of
information from you (Stalder and
2010). When we read the computer, the computer reads us, and when the computer
reads, it writes elsewhere, so when the digital interface reads our participation, our
destinies are written to the database where algorithms determine who is hired or fired,
who is convicted of a crime and who is let out on parole, and who pays how much for
health insurance.
The passivity described by Debord was a matter of the absence of historical political
agency, however, and not just the passivity of the media consumer. As stated by the
Situationists, the spectator engages in a specific form of frantic participation: “The
internal defect of the system is that it cannot totally reify people; it also needs to make
them act and participate, without which the production and consumption of reification
would come to a stop” (Situationist International, 2006: 106). But this is a participation
in the neatly separated spheres of production and consumption. As Debord stated:
“alienated consumption is added to alienated production as an inescapable duty of the
masses” (Debord, 1995: 29).
The alienated consumer described by the Situationists had its proper place on either
side of the show-window or the factory wall. The admiring consumer of commodities
and the commodified subject of conspicuous consumption were subjective modes
distinct from the labour process’s commodification of time. The show-window
distinguished the desiring consumer from the enviable consumer on display, just as the
walls of the factory operated a distinction between consumer and producer, the
punching in and punching out, and the clear imperative that the wages earned on one
side were transformed into consumption on the other.
Now, contemporary alienation takes place within the digital interface as the zone of
indistinction between production and consumption. You produce data by consuming
data and the more you produce data the more you are consumed as data. While Debord
and the Situationists tried to escape the imperative of participation by breaking free of
the commodifying circuits of the art institution and taking to the streets, it is
increasingly difficult to find a way out of the zone of indistinction of the interface.
This – that there is no way out because resistance has been included in the interfacial
dispositif as an affordance of its design – is the truth of Intimate Transactions.
Destituting the interface
Instead of a training ground for a new political utopia where our ability to decipher
and respond to the signifiers of the visual and haptic interface allows us to transcend
ourselves in a reciprocal ecological equilibrium, Intimate Transactions should thus be seen
as an allegory for the present imperative of interfacial participation, i.e. for the society of
the interface. Whereas the virtual world in Intimate Transactions is impoverished by
participant appropriation of interfacial assets, participants in the contemporary society
of the interface are alienated by collaborative engagement as they are continually coded
as avatars, profiles, data sets. The affordances of a trans-subjective affective
equilibrium should be seen as the current dispositival dividualisation of the subject,
which Deleuze described as the transition of collective form from the mass to the
databank (Deleuze 1992a). When any and all participation is indexed and priced as
data, the affective equilibrium of the interface becomes the ability to affect only insofar
as the interface has affected and only in accordance with the feedback loop required
to maintain homeostasis within the system. In short, it is the imperative to always
participate but never act.
Just as Debord radicalised Marx by replacing the accumulation of commodities with
the accumulation of spectacles, we should radicalise the spectacle in the contemporary
“subordination of production to the conditions of circulation” (Bernes, 2013: 180),
where the show-window and the factory wall have dissolved in the indistinction of the
interface. This dissolution of the subject in the dividual of the interface, which we find
thematised by Intimate Transactions, was already described as an essential part of the
spectacle:
The spectacle erases the dividing line between self and world, in that the
self, under siege by the presence/absence of the world, is eventually
overwhelmed […]. The individual, though condemned to the passive
acceptance of an alien everyday reality, is thus driven into a form of
madness in which, by resorting to magical devices, he entertains the
illusion that he is reacting to this fate (Debord, 1995: 153).
In Intimate Transactions, the interface is designed to erase the dividing line between self
and world in the ecological reciprocity of participants and interface. The participatory
ethos afforded by this design constitutes precisely an illusory reaction to one’s fate by
“magical devices” and its critical potential can thus best be described as what Debord
called the “spectacular critique of the spectacle”, i.e. the indistinction between “the
fake despair of a non dialectical critique on the one hand and the fake optimism of a
plain and simple boosting of the system on the other” (Debord, 1995: 138-139). The
despair of ecological crisis finds hope in interfacial media affordances that only affirm
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
120
the technologies of domination. In short, when faced with the interface, there is no
way out.
If taken as intended, the work fails as an analysis of contemporary media and power
on two counts, one theoretical and the other historical:
First, the work enters into a zone of indistinction between what I have characterised
as the affordances of the medium and the determinations of the dispositif. Here, the
contribution of the second approach with regard to the first – i.e., its ability to critically
analyse dissymmetrical power operations beyond the reciprocal relation of affordances
between agent and environment in order to locate lines of fracture or a way out of
dispositival predetermination – is suspended, while its emancipatory promise of well-
designed affordances remains in force – an empty promise leading nowhere but the
interface.
Second, the work disregards the historical developments of capital where the spheres
of production and consumption as demarcated by the show-window and the factory
wall have collapsed into the zone of indistinction of the interface from which the only
way out seems to be utter immiseration.7 As production and markets have globalised
and profits have moved from production to financialised circulation, ever greater parts
of the workforce are excluded from the possibility of being exploited by wage labour.
Participation is thus not only an imperative; it is increasingly presented as a privilege.
Not being able to disengage from the interface, to prefer not to, without hurting the
operations of the system, which is the lesson of Intimate Transactions, constitutes the
interfacial dispositif of the preservation of the status quo. Continued participation in
the peaceful but strict protocols of the interface is required for the world to thrive.
The protocols of the interface constitute “a technique for achieving voluntary
regulation within a contingent environment” (Galloway, 2005: 22), where “the
behaviour is emergent, not imposed” (24). Within the regime of the interface, you are
free to do whatever the interface protocols allow; you can even try to circumvent or
hack them and thus participate in their further development, as long as you participate;
for as long as you participate, nothing happens.
Armstrong claimed that “WE are the crisis” and hoped for the emancipatory
affordances of a design for ethical engagement. Walter Benjamin had a different
diagnosis: the fact that it continues this way is the catastrophe.8 The interfacial
circulation of images incites participation without thought or action. When we see the
social media images of Donald Trump, it is far too easy to get caught up in the meme,
in the satisfactory laughter at the narcissist baby, the haughty moron. In the spectacle
of the interface, swift judgment is welcomed so that historical analysis of the
conditions of the present is forever postponed. It is far too joyful to engage in what
Jodi Dean (2010) called “affective networks” where the rapid movement through the
interface affords us enjoyment rather than understanding, participation rather than
action.
If the status quo of the interface and its imperative of participation is the catastrophe,
then what reparation is available to us? Benjamin’s response was to pull the emergency
brake. Instead of a vulgar faith in automatic historical progress, Benjamin indicated
that now is the time to bring the operations of oppression to an end. This is the hidden
reference in Agamben’s reading of Bartleby’s “I would prefer not to”:
‘I would prefer not to’ is the restitutio in integrum of possibility, which keeps
possibility suspended between occurrence and nonoccurrence, between
the capacity to be and the capacity not to be (Agamben, 1999: 267).
“Restitutio in integrum” is used by Benjamin on several occasions to indicate not
reparation of the system, but restitution of the fallen and the exploited, not a balance
or an equilibrium of reciprocal relations, but an end to all relations of domination.
Such restitution springs neither from media affordances nor dispositival lines of
fracture, but from potentiality beyond the capacity to be and not to be, beyond the
capacity to affect and be affected, beyond affordance and determination. It is a
rendering inoperative of the operations of techne – what, in his later writings, Agamben
has called a “destituent potential” (cf. e.g. Agamben, 2016).
Galloway rejected the destruction of technology in favour of the effort to “push it into
a state of hypertrophy.” In his own essay on the dispositif, Agamben rejects both its
destruction and its correct use in favour of a rendering inoperative of its power
(Agamben, 2009). Destitution is neither destruction of what is nor the constitution of
the new; it is the rendering inoperative of both affordance and determination, both
medium and dispositif. The truth to be found in Intimate Transactions is the catastrophe
of the interfacial status quo and the necessity of its destitution. Destitution of the
interface – the digital dividuation of the participating subject in the age of a
financialised indistinction between production and consumption – is the restitution of
the possibility of new forms of life, not as afforded lines of fracture within the dispositif,
but as the simple possibility to be whatever. Such is the hope that can never be fulfilled
by art as other than the acknowledgement that the present status quo is already dead,
or, as was eloquently stated on Twitter: “Revolutionary art is not a mirror held up to
society but a feral peacock attacking its own reflection in the high-gloss paint of a
Ferrari” (Bernes, 2018).
References
Agamben, G. (1999) “Bartleby, or On Contingency,” in Potentialities. Stanford, Ca.:
Stanford University Press, pp.243-271.
Agamben, G. (2009) “What is an Apparatus?,” in What is an Apparatus? and Other Essays.
Stanford, Ca.: Stanford University Press, pp.1-24.
Agamben, G. (2016) “Epilogue: Toward a Theory of Destituent Potential,” in The Use
of Bodies. Stanford, Ca.: Stanford University Press, pp.263-279.
Alter, A. (2012) “Your E-Book Is Reading You,” The Wall Street Journal, July 19, 2012.
http://www.wsj.com/articles/SB10001424052702304870304577490950051438304.
Armstrong, K. (2005) “Intimate Transactions: The Evolution of an Ecosophical
Networked Practice,” Fibreculture 7, online:
http://seven.fibreculturejournal.org/fcj-047-intimate-transactions-the-evolution-of-an-ecosophical-networked-practice/.
Benjamin, W. (1991) “Zentralpark,” in Gesammelte Schriften, Band I. Frankfurt am Main:
Suhrkamp Verlag, pp.655–90.
Benjamin, W. (1991) Das Passagen-Werk 1 – Gesammelte Schriften Band V. Frankfurt am
Main: Suhrkamp Verlag.
Bernes, J. (2013) “Logistics, Counterlogistics and the Communist Prospect,” in
Endnotes 3:172-201.
Bernes, J. (2018) Tweet, online:
https://twitter.com/outsidadgitator/status/1004570607063728128.
Bertelsen, L. (2012) “Affect and Care in Intimate Transactions,” Fibreculture 21:31-71.
Dean, J. (2010) “Affective Networks,” MediaTropes 2(2):19-44.
Debord, G. (1990) Comments on the Society of the Spectacle. London & New York: Verso.
Debord, G. (1995) The Society of the Spectacle. New York: Zone Books.
Deleuze, G. (1992a) “Postscript on the Societies of Control,” October 59:3-7.
Deleuze, G. (1992b) “What is a dispositif?,” in Armstrong, T. (ed.) Michel Foucault -
Philosopher. New York: Routledge, pp.159-168.
Deleuze, G. (1995) “Life as a work of art,” in Negotiations. New York: Columbia
University Press, pp.94-101.
Endnotes (2010) “Misery and Debt,” Endnotes 2:20-51.
Ednie-Brown, P. (2007) The Aesthetics of Emergence: Processual Architecture and an Ethics-
Aesthetics of Composition. PhD thesis, RMIT University, Melbourne.
Foucault, M. (1980) “The Confession of the Flesh,” in Power/Knowledge - Selected
Interviews and Other Writings 1972-1977. New York: Pantheon Books.
Foucault, M. (1982a) “The Subject and Power,” Critical Inquiry 8(4):777-795.
Foucault, M. (1982b) “Technologies of the Self,” in Technologies of the Self - A Seminar
with Michel Foucault. London: Tavistock Publications, pp.16-49.
Foucault, M. (1988) “An Aesthetics of Existence,” in Michel Foucault – Politics,
Philosophy, Culture – Interviews and Other Writings 1977-1984. New York: Routledge, pp.47-56.
Foucault, M. (1994) “Self Writing,” in The Essential Works of Foucault, 1954-1984,
volume 1: Ethics. New York: The New Press, pp.207-222.
Foucault, M. (1995) Discipline and Punish – The Birth of the Prison. New York: Vintage
Books.
Foucault, M. (2002) “Space, Knowledge, and Power,” in The Essential Works of
Foucault, 1954-1984, volume 3: Power. New York: The New Press, pp.349-364.
Foucault, M. (2007) “What is Critique?,” in The Politics of Truth. Los Angeles:
Semiotext(e), pp.41-82.
Fuchs, C. (2013) “Social Media and Capitalism,” in Producing the Internet: Critical
Perspectives of Social Media. Göteborg: Nordicom, pp.25-45.
Fuller, M. (2005) Media Ecologies – Materialist Energies in Art and Technoculture.
Cambridge, Ma. & London: MIT Press.
Galloway, A. (2005) “Global networks and the effects on culture,” Annals of the
American Academy of Political and Social Science 597:19-31.
Galloway, A. (2012) The Interface Effect. Cambridge: Polity Press.
Gibson, J. (2015) The Ecological Approach to Visual Perception. New York & London:
Psychology Press.
Groys, B. (2016) “On Art Activism,” in In the Flow. London: Verso, pp.43-60.
Hamilton, J. (2008) “Embodied Communication in the Distributed Network,” in
Wyeld, T.G.; Kenderdine S.; Docherty M. (Eds.): VSMM 2007, LNCS 4820.
Berlin: Springer Verlag, pp. 179–190.
Kittler, F. (1999) Gramophone, Film, Typewriter. Stanford, Ca.: Stanford University
Press.
Manovich, L. (2001) The Language of New Media. Cambridge, Ma. & London: MIT
Press.
Marx, K. (1976) Capital – A Critique of Political Economy, volume 1. London: Penguin
Books.
Marx, L. (2010) “Technology: The Emergence of a Hazardous Concept,” Technology
and Culture 51(3):561-577.
McLuhan, M. & Fiore, Q. (1967) The Medium is the Massage. New York: Bantam Books.
Nestler, G. (2018) “The Derivative Condition, an Aesthetics of Resolution, and the
Figure of the Renegade: A Conversation,” Finance and Society 4(1):126-143.
Norman, D. (2013) The Design of Everyday Things. New York: Basic Books.
Paglen, T. (2016) “Invisible Images (Your Pictures Are Looking at You),” The New
Inquiry, online: https://thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/.
Salomon, J. (1984) “What is Technology? The Issue of its Origins and Definitions,”
History and Technology 1(2):113-156.
Schatzberg, E. (2006) “Technik Comes to America: Changing Meanings of
Technology before 1930,” Technology and Culture 47(3):486-512.
Situationist International (2006) “Geopolitics of Hibernation,” in Knabb, K. (ed.)
Situationist International Anthology. Berkeley, Ca.: Bureau of Public Secrets, pp.100-
106.
Stalder, F. & Mayer, K. (2010) “The Second Index: Search Engines, Personalization
and Surveillance,” online: http://felix.openflows.com/node/113.
Toffler, A. (1980) The Third Wave. New York: Bantam.
Notes
1 It should be noted that Gibson himself had a very different concept of the medium, which for him was a purely environmental factor such as air or water that “affords respiration or breathing; it permits locomotion; it can be filled with illumination so as to permit vision […]” (Gibson, 2012: 14). For Gibson, Galloway’s first conception of techne would be both a tool, which is quite explicitly “a sort of extension of the hand” (36) and a surface “so treated as to make available an arrested [or progressive] optic array, of limited scope, with information about other things than the surface itself” (279).
2 Agamben points to the Greek concept of “oikonomia” in early Christian theology as the primordial separation of substance and practice, being and doing, which he considers the fundamental characteristic of the dispositif, the Latin translation of “oikonomia” being “dispositio.” With a quick reference to Heidegger’s “Gestell” or “enframing,” Agamben also traces the genealogy of the Foucaultian usage via Jean Hyppolite’s reading of Hegelian “positivity” (cf. Agamben, 2009).
3 Someone quantitatively inclined would be able to demonstrate by normalised frequency that among the three periods of Foucault’s thought – the analysis of knowledge, of power and of the subject – the dispositif belongs to the second, and that it is replaced by a focus on “ethos” during the last years of his life.
4 There is a long tradition of problematic English translations of the French “technique” and the German “Technik” as “technology” – a confusion which has been well described (L. Marx, 2010; Schatzberg, 2006; Salomon, 1984). The distinction between “technologie” and “technique” in Foucault is not completely consistent either, but there is a general tendency in which “technology” designates the operations of power and knowledge. An obvious example is the analysis in Discipline and Punish of a “microphysics of power,” which he found in the “political technology of the body” as characteristic of the disciplinary society. On the other hand, “technique” has a tendency to designate a praxis, as in the case of Greek ethopoiesis (cf. Foucault, 1994).
5 For Foucault, freedom is a prerequisite of power: “Power is exercised only over free subjects, and only insofar as they are free” (Foucault, 1982a: 790). If freedom is excluded by power operating on the body instead of on the possible field of actions, power becomes violence.
6 The term “prosumer” was described by its originator Alvin Toffler as the “progressive blurring of the line that separates producer from consumer” (Toffler, 1980: 267). While Toffler saw this as affording “[…] a new form of economic and political democracy, self-determined work, labour autonomy, local production, and autonomous self-production […],” Christian Fuchs rightly states that “[d]ue to the permanent activity of the recipients and their status as prosumers, we can say that in the case of corporate social media the audience commodity is an Internet prosumer commodity” (Fuchs, 2013: 33).
7 Cf. Marx (1976: 798) and “Capital may not need these workers, but they still need to work. They are thus forced to offer themselves up for the most abject forms of wage slavery in the form of petty production and services – identified with informal and often illegal markets of direct exchange arising alongside failures of capitalist production” (Endnotes, 2010: 30).
8 “Daß es »so weiter« geht, ist die Katastrophe.” The passage appears in both “Zentralpark,” 673 and Das Passagen-Werk 1, 592.
Torsten Andreasen is a postdoctoral fellow affiliated with the collective research project Finance Fiction - Financialization and Culture in the Early 21st Century at the Department of Arts and Cultural Studies, University of Copenhagen. His work currently focuses on the periodisation of the correlation between literature and financial capital since 1970. He wrote his Ph.D. dissertation on the imaginaries invested
in the potential of digital cultural heritage archives and has published broadly on archives, the digital, the interface, and cultural theory.
Email: [email protected]
Special Issue: Rethinking Affordance
K.O. Götz’s Kinetic Electronic
Painting and the Imagined
Affordances of Television
ALINE GUILLERMET
University of Cambridge, UK
Media Theory
Vol. 3 | No. 1 | 127-156
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
Between 1959 and 1963, the German Informel painter K.O. Götz produced a series of works inspired by what he perceived to be television’s potential to initiate a new form of “kinetic electronic painting” (Götz, 1961: 14). His corresponding production of the Rasterbilder (Raster Pictures) and the film Density 10:2:2:1 were mapped on the technical and formal possibilities of analogue electronics in general, and of television in particular; however, these works were made without any direct use of the new medium, to which Götz had failed to gain access. This article argues that the concept of “imagined affordance” (Nagy and Neff, 2015) enables a critical reassessment of Götz’s elusive relation to television. Rather than focusing on the lack (of cognition or access) that this concept implies, I argue that “television” functioned as a flexible paradigm through which the artist was able to combine Modernism with the emergent field of information aesthetics. Inspired by the discretized aesthetic of the electronically-produced image, the Raster Pictures and Density 10:2:2:1 predate – albeit in an analogue manner – the first works of computer-generated art by several years.
Keywords
Affordance, K.O. Götz, television, kinetic painting, information aesthetics, Informel
Between 1959 and 1963, the German Informel painter Karl Otto Götz (1914–2017),
then professor of painting at the Düsseldorf Art Academy in West Germany, produced
a series of works that led him to be discussed as an emerging “television artist.”1 The
first corpus, known as the Rasterbilder (Raster Pictures, 1959–61), is a series of black
and white geometric abstractions composed of small squares, arranged in a gridded
canvas according to statistical rules derived from information theory. The second of
these works is the animated film Density 10:2:2:1 (1962–63). While Götz considered
the former to be the “preliminary stage” to his overarching attempt to generate
“electronically programmed” pictures (Götz, 1995: 29), the latter goes some way
toward realizing this ambition. Shot between 1962 and 1963, Density 10:2:2:1 consists
of stills of hand-drawn raster permutations animated to produce moving sequences of
flickering patterns. Both works were inspired by what Götz perceived to be the
affordances of electronic technology in general, and of television in particular, namely
their capacity to realize a new form of kinetic abstraction. To some extent, Götz’s
project therefore sought to update the modernist exploration of kinetics, most notably
Hans Richter and Viking Eggeling’s experimental films of the 1920s, which developed
the idea of abstraction as a universal language of perception. The film Density 10:2:2:1,
which uses animation techniques, also shares affinities with the work of Oskar
Fischinger and Norman McLaren, both of whom furthered the development of
abstract animation from the 1930s onwards. However, while Götz’s raster works, as
this article argues, originate in a modernist framework, his project of “kinetic electronic
painting” (1961: 14), with its reliance on a new techno-aesthetic framework, also
markedly differs from these earlier approaches.
The idea of using electronic technology to create moving images began with radar
experiments that Götz conducted in Norway during World War II. Struck by the
aesthetic potential of the “Braun tube,” as the early cathode-ray tubes were known, he
believed that analogue electronics had the potential to transform abstract painting: “A
representation of forms of all kinds is possible with the help of the directed electron
ray” (Götz quoted in Mehring, 2008: 33).2 After the war, Götz intuited that the new
medium of television would offer a more sophisticated means of realizing his vision
of “kinetic electronic painting.” As Christine Mehring reminds us, “television” in
postwar Europe was understood not primarily as a mass medium, but rather in terms
of “its purely technical and formal possibilities” (2008: 32). Indeed, Götz’s interest in
television was largely theoretical, resting on his (sometimes misconstrued)
understanding of analogue electronics, and the discretized image that they enabled.
Yet, for reasons that I shall develop, the works that Götz made as a so-called
“television artist” were produced without his ever having access to television
technology, complicating the matter further. Consequently, Mehring has argued that
Götz’s raster production has less to do with the television itself than with “a yet
undetermined new medium that most closely resembled television” (2008: 36). Given
these circumstances, it becomes apparent that Götz’s “television works,” as we might
call them, are only loosely connected to the actual technological affordances of the
medium that supposedly defined them.
Peter Nagy and Gina Neff’s concept of “imagined affordance” (2015) sheds light on
the specific dynamic that arises when an artist projects ambitions, informed by their
own background of expertise, onto a technology to which they do not have access.
Nagy and Neff aim to redefine the concept of “affordance” through an examination
of three intersecting factors: the material features of the given technology; the users’
perceptions or expectations of those features; and the specific ends for which these
features are designed. In particular, the authors argue that a contemporary theory of
affordance needs to take into account the beliefs and affects of users in their
interaction with “the blackboxed muck of socio-technical systems” (2015: 4).
“Affordances can and should be defined,” they write, “to include properties of
technologies that are ‘imagined’ by users, by their fears, their expectations, and their
uses […]” (2015: 4). One specific aspect of Nagy and Neff’s definition of “imagined
affordance” will prove particularly useful for Götz’s work. Building on J. J. Gibson’s
definition of “imagery” as “an extension of perceptual knowledge, which is ‘not so
continuously connected with seeing here-and-now as perceiving is,’” they state: “The
point is not solely what people think technology can do or what designers say
technology can do, but what people imagine a tool is for” (2015: 4-5).
In the 1960s, Götz published a number of articles indicating that he possessed a solid
theoretical understanding of television’s technical affordances (Götz, 1959; 1960;
1961). Yet he had already begun to project aesthetic possibilities onto television at a
time – World War II – when it was for him still a medium “merely imagined” (Mehring,
2008: 35). Moreover, having failed to gain access to electronic technology after the
war, he remained unable to test his hypotheses. His professed ambition to use a
television to produce “kinetic electronic painting” remained, therefore, in the realm of
the imaginary. Such a realm, however, ought not to be defined in terms of the lack of
cognition or access that the psychic dynamic of desire implies; on the contrary, Nagy
and Neff’s conceptual innovation invites a deeper scrutiny of television’s imagined
affordances, as they might have pertained to the artist. In this article, I argue that the
affordances of television, which enabled Götz to imagine the medium as a tool for
painting, indicate a larger discursive field that reconciles the theoretical underpinnings
of Modernism with the statistical principles of information aesthetics. In so doing, I
aim to show that Götz’s raster works are a continuation of the intellectual framework
that shaped his broader painterly practice. In the first part of this article, I argue that
Götz’s ambition to use technology to make kinetic painting can be traced to his
interest, in the mid-1930s, in Richter and Eggeling’s experiments with kinetics. In the
second part, I focus on the role that information aesthetics played in defining the
newly-quantified image field that enabled the Raster Pictures. In the third and final
part, I discuss the imagined affordances of television as exemplified by Density 10:2:2:1.
Fig. 1: Karl Otto Götz, Karant 5.7.1957, mixed media on canvas, 1957. 100 x 120 cm. Private collection,
Munich. © DACS 2019.
Early kinetic painting
By the late 1950s, Götz had become a leading figure of a European style of gestural,
abstract painting, known as Informel [fig. 1]. In 1959, he was appointed professor of
painting at the Düsseldorf Art Academy and, in parallel to his exploration of Informel,
began producing the first Raster Pictures. These works, he insisted, had “nothing to
do with [his] paintings” (1995: 23, 31); they merely (and apparently entirely
coincidentally) “resembled Informel pictures because there were no clearly defined
forms” (1995: 24). Art historians have taken issue with this assessment, arguing that
the Raster Pictures and the Informel paintings were closely connected through a shared
interest in developing a new abstract visual language (Beckstette, 2009; Mehring, 2008).
Mehring goes further, asserting that “[t]he Informel painting Götz became best known
for, in fact, seems saturated with the ambitions of an electron painter” (2008: 36). But,
as I now argue, the reverse is also true. The dream of electronic painting, at the origin
of both the Raster Pictures and the later Density 10:2:2:1 film, predates Götz’s
awareness of either radar or television, going back instead to the early moments of his
career as a painter. In the 1930s, Götz’s fascination with Richter and Eggeling’s
abstract films, and the works he produced as a result, created the conditions for the
advent of both his Informel style in the early 1950s, and the Raster Pictures from 1959
onwards. Consequently, both corpora of works are embedded in a similar modernist
framework, which emphasizes the importance, in developing an abstract language, of
medial autonomy. This would prove crucial to the way in which Götz conceived of a
new kinetic painting informed by electronics.
Götz writes that his mid-1930s discovery of Richter’s book Filmgegner von heute –
Filmfreunde von morgen (Film Enemies of Today, Film Friends of Tomorrow) (1929),
prompted his experimentation with film (Götz, 1994: 143). It is likely that he had also
already seen Richter’s abstract film Rhythmus 21 (1921);3 he would in any case certainly
have been aware of it, given that one illustration in Richter’s book, captioned “Here
the rapid growth of a square,” directly references the opening sequence of the film
(Richter, 1929: 10). As a result, in the summer of 1936, Götz used two series of his
own works made between 1935–36 as source material for his first filmic experiments:
the Photomalereien (Photo-paintings), composed of over-painted photograms;4 and the
Spritzbilder (Spray-paintings), realized by overlaying several stencils on a blank surface,
and applying paint by means of a mouth atomizer that allowed the artist to diffuse it
evenly. The resulting abstract shapes were later reworked by painting onto some of the
areas, or drawing figurative patterns over them (Oellers, 2004: 9). The layered aesthetic
that characterized both series of works lent itself to animation: with the help of a
9.5mm Pathé camera and three projectors, Götz filmed the paintings so as to develop
sequences of complex shapes that morphed into one another when projected (Götz,
1993: 154). He produced three short films, which were all destroyed in the Dresden
aerial raids at the end of the war.
While the Photo-paintings and the Spray-paintings remain figurative, they anticipate
the artist’s evolution toward abstraction in two distinct ways. Firstly, they were
produced using a range of techniques that encouraged the automatic, as opposed to
the mimetic, trace. Thus, Götz writes that they were his “first Surrealist works” and
even if, by his own admission, he knew very little about Surrealism at the time (Götz,
1993: 153), it is clear that the manner in which both the Photo-paintings and the Spray-
paintings rely on existing shapes for inspiration and image transformation parallels the
Surrealist techniques of collage and frottage.5 Secondly, in painting and drawing over
automatically-obtained shapes, Götz already favored an abstract aesthetic: “In some
of the photo-paintings, I went so far as to work figments of the imagination into the
image that had absolutely no resemblance with known objects or beings” (1993: 153).
Although we know little of the destroyed films, we can imagine that the animation,
duplication, and layering of the Photo-paintings and the Spray-paintings into new
configurations and sequences, which the three projectors enabled, would have further
blurred the lines between figuration and abstraction, and produced a result where, to
repurpose Götz’s description of the Raster Pictures, “clearly defined forms” are
absent. In this, the works conformed to the agenda set by Richter in his book: film, he
argued, ought to follow in the footsteps of painting, and emancipate itself from the
representation of natural forms, because “what has long been proven in other art
forms is also valid for film: being bound by nature is limiting” (Richter, 1929: 33). It
was not only Richter’s book that gave Götz the idea of using his own paintings to
make his first films; both Richter and Eggeling’s broader experiments with kinetics
also provided the theoretical background for bridging the gap between Informel and
kinetics in his work. Götz’s conception of kinetics as a modernist pursuit, however,
rests on a slight misunderstanding of Richter and Eggeling’s
artistic process.
In 1919, the pair produced several Rollenbilder (Picture Scrolls): drawing studies realized
on long strips of paper, which depicted the development and transformation of a given
shape, based on the model of the musical variation (Hoffmann, 1998b: 76). In
Erinnerungen, his artistic autobiography, Götz mistakenly connects the series of ten
drawings that form the Präludium scroll (1919) to Richter’s filmmaking, asserting that
the drawings, which developed a visual theme over an approximate length of six
meters, “necessarily led to [Richter’s] first abstract film Rhythmus 21” (Götz, 1994: 144). While Richter and
Eggeling had originally hoped to set these drawings in motion, this proved more
difficult than they had foreseen, and Richter, in fact, finally gave up on the idea. Thus,
according to Justin Hoffmann:
They did produce a number of test film strips of which one, the filming
of part of the Präludium roll, was later used in Richter’s film Rhythmus 23.
All told, however, [Richter and Eggeling] were badly disappointed by the
results of their work in the UFA [Universum Film A.G.] studios [in Berlin]
when they saw the developed films. Richter came to the conclusion “that
these rolls could not be used, as we actually had thought, as scores for
films” (Hoffmann, 1998b: 78).
In fact, Richter has specified that Rhythmus 21 was made out of “rows of paper
rectangles and squares of all sizes,” rather than based on the scroll format (quoted in
Hoffmann, 1998b: 79). This, in itself, is significant in a manner that was lost on Götz.
While the direct connection that he had perceived between Richter’s scrolls and kinetic
painting was a mistake, the way Richter actually made Rhythmus 21 to some extent
already anticipates a quantified approach to the screen, which would prove central to
the Raster Pictures. Thus, as Hoffmann argues, Richter had already begun to see the
screen as “a precisely calculable form in its own right” (1998b: 79); this comes to light
in Richter’s own description of the process: “In the rectangle and the square I had a
simple form, an element, that was easy to control in relation to the rectangular shape
of the screen" (quoted in Hoffmann, 1998b: 79).
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
134
For Götz, the Picture Scrolls also anticipated a problem common to both his Informel
and raster production: how to visualize the evolution of what he termed “pictorial
schemes” (Bildschemata). Indeed, the Picture Scrolls, although static, already afforded a
kinetic perceptual experience: as the eye wanders from one shape to the next, the
viewer, in Richter’s words, “experiences the representation in a single stream, which
easel painting could not offer” (quoted in Hoffmann, 1998b: 83). For Richter and
Eggeling, the scrolls belonged to a broader endeavour to develop a “universal
language,” which developed the idea that “abstract form offers the possibility of a
language above and beyond all national language frontiers” (Richter, quoted in
Hoffmann, 1998b: 76). According to Richter, by the time he first met Eggeling in 1918,
the latter had already developed “a whole syntax of form elements, when I was just
starting with the ABC" (quoted in Hoffmann, 1998b: 75).
Götz’s Fakturenfibel (Facture Primer), an artistic diary that he compiled during the war,
and which analyzed the development of anthropomorphic and biomorphic shapes
over a series of variations, apparently closely parallels this project [fig. 2]. Indeed, in
German the term Fibel refers, precisely, to children's alphabet primers.6 Thus, in
retrospect, Götz argued that Richter's scrolls tackled the same
problem as his own formal experiments: namely, “how one could best achieve
sequences of formal transformations in painting” (Götz, 1994: 143). However, what
Götz perceived as a focus on formal variations had more to do, for Richter and
Eggeling, with the avant-garde pursuit of establishing a “new visual system of
communication … for the new society" (Hoffmann, 1998a: 65). Whereas, in Richter
and Eggeling’s case, these formal experiments led to a thorough exploration of the
relation between film and music, in Götz’s case, the Facture Primer gave rise to
paintings in their own right, such as the 24 Variationen mit einer Faktur (24 Variations
on a Facture) (1948) [fig. 3]. In the end, this second misunderstanding proved crucial
to the development of Götz’s Informel style, which sought to combine the “formal
transformations” of the Facture Primer with the speed of the monotype (Götz, 1994:
113). The result was, in 1952, the development of a new technique that would establish
Götz’s painterly style for the rest of his career: the fast application of glue and gouache
on paper, whose product is then dynamically reworked with a rake, in turn followed
by a fresh application of paint (Götz, 1994: 113).
Fig. 2: Karl Otto Götz, Variationen über 3 Themen / Variations on 3 Themes (Pages from the “Facture Primer”).
Woodcuts on laid paper, 1945. Each page, 23 x 20.5 cm. © DACS 2019.
After the war Götz never again tried to “animate” his traditional painterly production
(i.e., the Informel paintings); rather, he entirely transferred his interest in kinetics to the
Raster Pictures. Two related factors account for this decision. Firstly, Götz held a
strong belief in the historical progression of modernist painting (epitomized by what
he calls the “dissolution of the classical concept of form”), from Malevich’s Black
Square of 1913, through Informel, to “serial painting, raster painting, statistical painting,
and electronic painting” (Götz, 1963: 31). For him, new theoretical and technological
innovations both continued and refined the progression of the painterly medium.
Indeed, formal “dissolution,” so central to abstract painting, takes on a new meaning
in the era of information theory and Gestalt psychology: “Dissolution does not mean
disappearance,” asserts Götz (1963: 31); rather, it is possible to see the same form,
presented against a similarly patterned background, no longer as super-structure (i.e.,
according to the figure-ground relationship), but rather in terms of the varying degrees
of density of its elements. In the latter case, writes Götz, “we have dissolved [the form],
in that we have described it at the level of its microstructure” (1963: 31).
Fig. 3: Karl Otto Götz, part of the series 24 Variationen mit einer Faktur / 24 Variations on a Facture, oil
and sand on hardboard, 1948. 57.5 x 46 cm. Photo: Joachim Lissmann © DACS 2019.
Secondly, Götz, like the Richter of Filmgegner von heute – Filmfreunde von morgen, firmly
believed that modernist painting’s evolution toward abstraction provided a template
for other art forms, and insisted that such progress be media-specific. For instance,
Götz perceived, in analogy with painting, a “dissolution of the classical concept of
form” in music, judging György Ligeti’s Atmosphères (1961) – a work characterized by
the density of its sound texture – to be “Informel music,” and an equivalent to his own
Raster Pictures (Götz, 1995: 67).7 Götz judged, however, that the filmic development
of kinetics since the late 1930s – he mentions Oskar and Hans Fischinger, Norman
McLaren, and Len Lye (1959: 46; 1960: 155) – had relied excessively on existing
painting styles and had therefore failed to develop its own formal language:
Mostly, some elements are directly lifted from abstract painting, and are
set in motion with the help of animation techniques or of another process,
in a way which simply degrades the cinematograph to the rank of
reproduction mechanism. […] While the dialectical evolution of the
dissolution, into abstract painting, of the old concept of form created its
own autonomous means of expression, abstract film has not yet
emancipated itself from existing material (Götz, 1960: 155-56).
By contrast, Götz envisioned developing a form of kinetic abstraction that would rely
entirely on the technological affordances of electronic media. The new electronic
forms of representation that emerged in the wake of World War II – from the radar
screen to the television image – promised to facilitate the autonomous development
of kinetic painting. In what follows, I argue that information aesthetics played a crucial
role in shaping a new form of abstraction specific to the electronic age.
From Informel painting to electronic painting
Götz’s war-time radar experiments led him to coin the term “electron painting,” based
on the aesthetically-pleasing images he had obtained by manipulating the “arbitrary
deflection of the electron ray” (Götz, 1961: 14). At the time, “electron painting” simply
consisted in the rudimentary line patterns that resulted from “applying electrical
current to the radar instrument” (Mehring, 2008: 33): “These lines were horizontal,
vertical, or diagonal, depending on the place where one connected anode (+ pole) or
cathode (– pole),” says Götz; “[t]he straight lines ran in all directions” (quoted in
Mehring, 2008: 33). While he would, between 1960 and 1961, begin to speak of
“electronic painting” instead (Götz, 1961), the motif of the electron had established a
techno-aesthetic framework that would redefine this new form of kinetic painting
against earlier instances of the medium.
Fig. 4: Karl Otto Götz, Statistisch-metrische Modulation 11:5 / Statistical-metrical Modulation 11:5, pencil and felt-tip pen on paper, 1959-60. 50 x 65 cm. Collection Etzold, Städtisches Museum Abteiberg. Photo: Achim Kukulies. © DACS 2019.
Until the second half of the twentieth century, various modes of artistic representation
– from painting, to photography, to film – shared a reliance upon the materiality of
their medium. By contrast, television brings about what Götz calls the first
“dematerialized image”: “the electronic picture,” he asserts, “is solely made of flashing
electrons” (1959: 47). Götz’s emphasis on the substructure of the electronic image – a
configuration of discrete and chaotic elements that exist just below our perceptual
threshold – has important consequences for the way that he conceived of, and
attempted to produce, his version of kinetic painting.8 Originally, Götz had envisioned
programming his new form of painting directly “within the microsphere of electronic
impulses and superimposed frequencies” (1960: 191). While he never found the
technical means to realize this project, he drew upon the micro-aesthetic model that
the electron inaugurated to produce a series of Raster Pictures.
Fig. 5: Karl Otto Götz, Statistisch-metrische Modulation 1:15 / 4:12 / 12:4 / 15:1 / Statistical-metrical Modulation, pencil and felt-tip pen on paper, 1959-60. 50 x 65 cm. Collection Etzold, Städtisches Museum Abteiberg. Photo: Achim Kukulies. © DACS 2019.
Individually entitled Statistisch-metrische Modulation (Statistical-metrical Modulation), the
Raster Pictures take the form of pencil and felt-tip pen drawings on paper or
cardboard, based on various combinations of 2 x 2 cm black and white squares [fig. 4
and 5]. While Götz drew the first Raster Pictures himself (Götz, 1995: 23), the
subsequent, larger works were realized with the help of his students at the Academy.
Karin Martin (now Karin Götz) painted a couple of the Raster Pictures directly on
canvas, such as Statistische Verteilung (Statistical Distribution, 1961), which was made by
using a paint brush and tempera (Götz, 1995: 45).9 Due to its imposing size, the picture
Density 10:3:2:1 (200 x 260 cm, 1959-61) [fig. 6] was divided among several students,
who were given individual Bristol boards to take home; the final picture is made up of
eight separate pieces of cardboard mounted on canvas. In each case, the arrangement
of the black and white squares followed a specific “program” developed by Götz
(Götz, 1995: 44). Programming does not here designate a computerized process, but
involves the statistical analysis of the image field, which was conceived as an aggregate
of discrete and modular elements, in a striking anticipation of the digital image.
Fig. 6: Karl Otto Götz, Statistisch-metrische Modulation "Density 10:3:2:1" / Statistical-metrical Modulation "Density 10:3:2:1", felt-tip pen on cardboard on canvas, 1959-61. 200 x 260 cm. Private
collection. © DACS 2019.
In painting, such a focus on the quantification of abstraction at the micro-level did not
develop until the late 1950s, with François Morellet’s random distribution systems of
colored squares and triangles on canvas.10 While Götz’s and Morellet’s experiments
were exactly contemporary, it is uncertain whether the former had any awareness of
the latter. However, Götz’s writings indicate that when he produced the first Raster
Pictures in 1959, his frame of reference did not include painting, but rather the
intersecting fields of information theory and aesthetics. Indeed, the way that Götz
describes his programming process evokes ideas that were central to the then-emerging
field of information aesthetics, in particular those of the German philosopher Max
Bense. Before considering the role that television has played in defining the Raster
Pictures, it is therefore important to outline what these works owe to information
theory and the corresponding development of information aesthetics in the latter half
of the 1950s in Europe.
In 1958, two important books were published in this regard: in France, the physicist
and philosopher Abraham Moles’ Théorie de l’information et perception esthétique (translated
into English in 1966 as Information Theory and Esthetic Perception); and, in Germany,
Bense’s Ästhetik und Zivilisation: Theorie der ästhetischen Kommunikation (Aesthetics and
Civilization: Theory of Aesthetic Communication), the third volume in his Aesthetica
series.11 The Belgian-born German physicist Werner Meyer-Eppler's technical
introduction to information theory, entitled Grundlagen und Anwendungen der
Informationstheorie (Basic Principles and Applications of Information Theory), published in 1959,
proved equally instrumental to the development of Götz’s early statistically-
determined paintings (Götz, 1995: 23). During the 1960s, Moles and Bense would each
pioneer his own version of information aesthetics, with the common ambition to use
communication theory as a model to quantify aesthetic perception and artistic
production.
Bense’s notion that works of art can be objectively assessed according to their
“aesthetic information,” – a statistical measure of the work’s information content
based on an order to complexity ratio – would prove to be central to the development
of computer-generated art from 1963 onwards.12 Bense’s collaboration with scientists
and artists at the University of Stuttgart culminated, in 1965, in the first exhibition of
computer-generated graphics worldwide.13 Götz read the first three volumes of
Bense’s Aesthetica during the 1950s. For the artist, the idea that the aesthetic structure
of an artwork could be measured – which Bense appropriated from the mathematician
George David Birkhoff – and that, in turn, such measures could be used to produce
new aesthetic objects, provided fertile ground for the “programming” of the Raster
Pictures, several years before engineers began to use computers for artistic purposes.14
“I calculated the information content with the help of information theory,” recounts
Götz in Erinnerungen. “The configuration of the small and big units was not
determined, but rather resulted from statistical rules that I established” (1995: 24).
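Birkhoff's measure, which Bense reinterpreted statistically, can be stated compactly. The formula below is the standard formulation from Birkhoff's Aesthetic Measure (1933), given here only as background, not as a reconstruction of Götz's own calculations:

```latex
M = \frac{O}{C}
```

where $M$ is the aesthetic measure of an object, $O$ its order (symmetry, repetition), and $C$ its complexity; Bense's "aesthetic information" recasts this order-to-complexity ratio in information-theoretic terms.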
Moles’ appropriation of information theory, which aimed at developing a scientific
theory of aesthetic perception, was equally central to Götz. He met Moles in Paris
shortly after having read his 1958 book (Götz, 1994: 271); thereafter, they remained
in touch throughout the 1960s.15 In Moles’ reading of information theory, a message
is organized according to a hierarchy of repertoires, or levels.16 He defined the dynamic
between the different levels of signs in an image, from its smaller units or individual
“signs,” to their organization into what the viewer perceives as broader patterns, which
he referred to as “super-signs”: “A super-sign is a normalized and routinized
assemblage of signs from the inferior level" (Moles, 1971: 26).17 Accordingly, the
Raster Pictures are composed of rectangular “building blocks” (Baustein), each made
of six 2 x 2 cm squares or “elements” (Elemente), which represent the smallest units
used (Götz, 1995: 24). Aggregates of four or eight “building blocks” constitute a small
“field”; in turn, small fields can be combined to create bigger fields, or “super-fields”
(Superfelder) (Götz, 1995: 24). This application of information theory to the visual arts
introduces permutability to every level: not merely to the traditional level of the
macrostructure, but also to the microstructural level, down to the smallest chips within
individual “building blocks.” In order to differentiate these “quantified pictures” from
the rest of his painterly production, Götz insisted that the Raster Pictures were mere
“objects of visual demonstration” for the application of information theory (Götz,
1995: 25). However, the works also had a deeper purpose: to provide the preliminary
steps towards a form of kinetic painting inspired by the aesthetic and technological
affordances that Götz imagined of television. This comes to light in the Raster Picture
Density 10:3:2:1, whose design relies on the application of Moles’ principles to the
discretized field of the television image.
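As a rough illustration, this modular hierarchy can be sketched in a few lines of code. This is emphatically not Götz's "program," whose exact rules are not preserved in executable form; the 2 × 3 arrangement of the six elements within a block, the probability parameter, and all names are assumptions of the sketch:

```python
import random

def make_block(p_black, rng):
    """One "building block": six square elements, each black (1)
    with probability p_black, in an assumed 2 x 3 arrangement."""
    return [[1 if rng.random() < p_black else 0 for _ in range(3)]
            for _ in range(2)]

def make_field(p_black, rng, n_blocks=4):
    """A small field: an aggregate of four (or eight) building blocks."""
    return [make_block(p_black, rng) for _ in range(n_blocks)]

def make_super_field(p_black, rng, n_fields=4):
    """A super-field: a combination of smaller fields."""
    return [make_field(p_black, rng) for _ in range(n_fields)]

# Build one super-field with equal chances of black and white elements.
rng = random.Random(0)
super_field = make_super_field(0.5, rng)
```

The point of the sketch is structural: permutability operates at every level of the hierarchy, from super-field down to the individual element.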
In his 1961 article, “Elektronische Malerei und ihre Programmierung” (Electronic
Painting and its Programming), Götz describes how Density 10:3:2:1 was very precisely
modelled on the grid structure that underpinned the television image. While the use of
the grid as a structuring principle is hardly unique to Götz – Rosalind Krauss famously
demonstrated the centrality of grids to modernist painting, from Malevich and
Mondrian, to Jasper Johns and Agnes Martin (Krauss, 1979) – the Raster Pictures
literally replicate the pixelated structure of the television screen (in German, raster means
both “grid” and “screen”):18 “It is well-known,” writes Götz, “that the television image
is constituted of approximately 450,000 tonal points (Bildpunkten),” and by “some 40
levels of brightness”:
In the model picture Density 10:3:2:1, approx. 400,000 tonal points
(elements) were ordered and drawn; we proceeded with only two degrees
of brightness, realized with black and white elements, but with four
different degrees of density (Götz, 1961: 14).
These four degrees of density – dark, medium, light, and very light – emerged from
different combinations of the black and white chips within the “building blocks”
themselves [fig. 7]. The distribution of the different densities relied upon a “numerical
system,” namely the arbitrarily chosen series of numbers 10:3:2:1, where the highest
density level is allocated to the value (10), and the lowest to the value (1) (Götz, 1961:
23). In pictorial terms, this means that out of the sixteen super-fields, ten would be
assigned the darker level of density, three the medium level of density, two the low
level of density, and one the lowest. Once this was established, Götz worked his way
down from the super-fields to ever smaller field units, programming all permutations
down to the smallest “building block” and its six square components (Götz, 1961: 23).
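The allocation step described above can be restated as a short sketch. It is not Götz's actual procedure: the shuffle merely stands in for the "statistical rules" he devised, and the function name is an assumption:

```python
import random

# The arbitrarily chosen series 10:3:2:1 sums to 16, one value per super-field.
RATIO = [("dark", 10), ("medium", 3), ("light", 2), ("very light", 1)]

def allocate_super_fields(ratio=RATIO, seed=0):
    """Assign one of four density levels to each of the sixteen
    super-fields, in the proportions 10:3:2:1."""
    levels = [level for level, count in ratio for _ in range(count)]
    random.Random(seed).shuffle(levels)
    return levels
```

The same allocation is then repeated recursively, as the text notes, down through fields and building blocks to the individual squares.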
Fig. 7: Four levels of density, illustration in “Elektronische Malerei und ihre Programmierung,” p. 23.
© DACS 2019.
Despite the centrality of the television’s affordances to the production of the Raster
Pictures, Götz did not own a television set until 1965; it is unclear whether he had had
first-hand experience of the medium prior to this date.19 His understanding of
television was arguably derived therefore from two very different sources: on the one
hand, his wartime radar experiments, and on the other, his later wide-ranging reading
on the topic of information theory. Even though Götz was familiar with the potential
uses of the cathode-ray tube to generate visual representations, after the war he no
longer had access to electronics. Therefore, shortly after he began producing the first
Raster Pictures, the artist resorted to a “thought-experiment” in order to describe the
production of electronic painting (Götz, 1960: 156).
In an article published in 1960, titled “Vom abstrakten Film zur Elektronenmalerei”
(From Non-Objective Film to Electron Painting), Götz speculates on the respective
capacities of film and television to achieve his aim (Götz, 1960).20 He postulates a
surface of three by four meters, gridded into 120,000 fields of one square-centimeter
each. The empty fields would be filled with black, white and grey squares, according
to a specific plan, to create changing patterns. Each permutation, filmed in turn, would
correspond to a frame of 1/24th of a second. Therefore, in order to generate the
impression of seamless movement between each permutation in a 10-minute film,
14,400 frames – and thus 14,400 distinct permutations – would be needed (Götz, 1960). Götz estimates
that it would take two people forty years to complete such a film: “the cinematographic
method,” he concludes, “proves to be a highly inefficient procedure” (1960: 157). By
contrast, the artist anticipates that the electronic modulations of television frequencies
could produce a similar result in a drastically reduced time of 133 seconds (1960: 158).
In this new, imagined scenario, it would no longer prove necessary to draw each
individual permutation of the black, white and grey chips manually, as the changes
would be electronically generated. The television, now more than a mere receiver of
transmitted images, would be used to “experimentally produc[e] composite picture
signals” (1960: 157), analogous to the grid patterns of the Raster Pictures.
How clearly Götz understood television’s technical affordances at the time remains a
matter of speculation. He does not specify how the “signals” would be produced: he
merely notes in passing that his article cannot get into the matter (1960: 157), casting
doubt on the feasibility of the project. Moreover, a further episode demonstrates that
what Götz termed “experimental” (1960: 158) might have been better described as
wholly speculative. In 1960, he had, through the intercession of Meyer-Eppler,
obtained an appointment at Siemens in Munich.21 He hoped to convince Siemens that
television technology could be used to make electronic painting. The Siemens
directors, however, were chiefly interested in the potential commercial output of
Götz’s idea: they wanted to know whether it would lead to a new appliance that the
company could mass-produce and sell. In Erinnerungen, Götz writes in response: “How
could I have known, when the technical implementation of the moving electronic
raster picture was not yet at all clear?” (Götz, 1995: 31).
In other words, Götz’s proposal would have required Siemens to invest time and
money into a project whose output neither they, nor Götz, could precisely anticipate.
It is likely, therefore, that what the artist had envisioned as the technical affordances
of television was, at least in part, wishful thinking. In this respect, it is telling that his
vision of “kinetic electronic painting” never materialized, even when the technology
had become more readily available, as he retrospectively acknowledged;22 instead,
Götz’s experiments paved the way for new artistic practices that developed away from
painting. This comes to light, as early as 1963, in the video art of the Fluxus artist Nam
June Paik, who wrote that “[his own] interest in television has been fundamentally
inspired by [Götz].”23 As Siemens declined to provide Götz with the financial and
technical support he needed to realize his project, the artist concluded: “For the time
being, I restricted the further development of my ideas to the production of new Raster
Pictures and new programs to find out which image structures could be created with
which programs” (Götz, 1995: 31).
Within the framework of affordance theory, Götz’s unsuccessful attempt to gain
access to television technology exemplifies what Jenny Davis and James Chouinard
identified as a “discouraging” context (Davis and Chouinard, 2016: 245). The authors
argue that in order to be actualized, a specific affordance needs to coincide with a
certain number of material conditions, of which they list three: firstly, a knowledge
sufficient to perceive an object’s use (“perception”); secondly, the skill to use it
(“dexterity”); and lastly, the ability to access, or engage with, the object (“cultural and
institutional legitimacy”) (2016: 245-246). Those circumstances determine whether
agents are “allowed,” “encouraged,” or “discouraged,” in their use of a given artifact
or technology (2016: 246). The failure to gain support from Siemens resulted in a
characteristically discouraging context due, in this case, to the artist’s lack of
institutional – and commercial – legitimacy. In this, Götz’s experience contrasted
sharply with the interdisciplinary collaborations that took place in scientific
laboratories, such as Bell Labs, in North America in the early 1960s.24 However, these
circumstances did not wholly discourage Götz, lending an interesting twist to Davis
and Chouinard’s framework: the agent, in this case, actualized the affordances of
television insofar as he used the medium as a screen onto which to project his own
artistic ambitions. Götz would go on to realize these affordances by other means, as
the film Density 10:2:2:1, produced between 1962 and 1963 in collaboration with Karin
Martin, demonstrates.25
The imagined affordances of television
Two years after his failed attempt to gain access to television technology at Siemens,
Götz reverted to the medium of film to produce Density 10:2:2:1.26 As noted above,
Götz had previously written off film as unsuitable for his purpose. By 1962, however,
he had built a rostrum camera that enabled him to film and animate stills of hand-
drawn rasterized images with more ease than he had anticipated in the 1960 thought
experiment. The film – silent, black and white – is fifteen minutes long, and divided
into three parts: an opening sequence of approximately two minutes that displays a
short preview of kinetic painting; a middle section that documents Götz and Martin
making the film; and a final section, entitled Ein Rasterfilm von K.O. Götz 1962–63, that
contains a longer sequence of kinetic painting. In the middle part, Martin, who hand-
drew most of the panels, sits at a desk completing a raster pattern with a felt-tip pen,
all the while referring to the “program” – a wad of instruction sheets that compile
sketches of the micro-level permutations. Götz hovers behind her, pipe in hand,
sometimes pointing to a detail here or there on the unfinished image. Later, the two
artists are filmed sitting by the rostrum camera, whose tall metallic structure is barely
visible against the dark background.
Framing this middle section, the opening and concluding sections provide two
different insights into how the imagined affordances of television rendered a new form
of kinetic painting possible. Save the occasional flicker that occurs in isolated places
on the image, the opening sequence shows what appears to be a static Raster Picture,
maintaining the ambivalence between canvas and screen. The brief flashes of light that
correspond to a filmed modification of the microstructure are not perceived as a
change in the image structure, but rather appear as pure spontaneous movement of
light particles on the surface of the canvas. Every few seconds, a more noticeable
permutation affects the broader structure of the image, but the transition between the
macrostructure permutations is less smooth than at the micro-level. Rather than
evoking the effortless movement of the image on the television screen, they betray the
frame-by-frame filming process, and the subsequent animation into an imperfect
illusion of “kinetic electronic painting.”
In the final section of the film, however, Götz experiments further with the
permutation levels and the speeds of display, until the image appears as an evenly
flickering surface, while the macrostructure of the canvas simultaneously shifts at a
slightly slower speed. This third and final section of the film most clearly demonstrates
that what Götz, in Erinnerungen, termed the “statistical movement of raster pictures”
(1995: 77) has little to do with previous filmic attempts to animate pre-existing abstract
shapes across the screen, as was the case from Richter’s Rhythmus 21 (1921) to Oskar
Fischinger’s An Optical Poem (1938). Rather, the micro-level movement of the black
and white elements in Density 10:2:2:1 powerfully evokes the pixelated surface of the
television screen and the barely perceptible flicker of its tonal points.
The pixelated appearance of the television image originates in its discrete structure: in
order to be transmitted point-by-point, the image needs to be reduced to raster
elements, before being recombined on the surface of the screen (Hölling, 2017: 117).
For the most part, the human eye perceives the television image, however pixelated, in a
continuous fashion: our perception naturally tends toward Gestalt. According to Friedrich
Kittler, Paul Nipkow, the inventor of the television circuit in 1883, counted on this
natural tendency, namely: “the inertia of the eye and its unconscious ability to filter out
the image flicker either physiologically through the after-image effect already employed
by film, or more generally or mathematically through the integration of individual
pixels” (2010: 209). In the final section of the film, this comes to light when the micro-
level permutations seem to unify the macrostructural changes, enabling a perceptual
seamlessness that had been lacking in the earlier passages of the film. But at times, the
eye may hesitate between focusing on the micro-level of the pixels, and the macro-
level of the Gestalt. For instance, when the otherwise-imperceptible flicker of the
television screen tires the eye, it concurrently draws attention to the quality of its
surface. At the moment when it is perceived as a discrete and discontinuous surface,
the television image offers a new aesthetic model for painting in the electronic age.
Discontinuity is by no means specific to the electronic image. Painting, it may be
argued, is also a discrete practice that combines separate marks into a broader picture.27
Moreover, painting’s perceptual dynamic resembles that of television: the viewer may
see the brushstrokes alternately as meaningful Gestalt or as discontinuous marks.
Pointillism, to take an obvious example, stretches to its limit the viewer’s capacity to
perceive distinct marks as a continuous whole. With Seurat, to borrow Richard Shiff’s
analysis, the representational system of points eventually turns upon itself, revealing,
through the material mark, the artist’s hand, instead of the image it was intended to
depict: “Seurat’s dot – a dab of viscous paint – suddenly becomes ‘noise,’ the antithesis
of what is usually called ‘information’” (Shiff, 2001: 142). But to describe the dot as
“noise” would miss the major feature of Götz’s painterly aesthetic of television. In the
Raster Pictures and film, the micro-field of the point is valued not because it reveals
something that either subtends or exceeds representation, but rather for its own sake.
Therefore, an aesthetic of discontinuity, as inaugurated by television and transposed into
painting by Götz, is specific to an historical moment that had begun to perceive images
in quantitative terms, as “discrete quantities of data, like telegrams” (Kittler, 2010: 208).
Density 10:2:2:1 is the most developed of Götz’s “television works.” As such, the film
is both an admission of (technological) failure, and a success. The various
misconceptions and practical impediments that separated Götz from television also
permitted his imaginative construction of the medium, leading him to produce a
corpus of works that anticipated the artistic appropriation of electronics in the years
to come. By 1965, the engineers Georg Nees and Frieder Nake, working closely with
Bense, had successfully applied the philosopher’s principles in order to program one
of the first series of digital pictures. Götz had been correct in intuiting that the aesthetic
affordances of television’s discretized screen would lead to a new form of art; what he
narrowly missed, in order to fully deploy his artistic ambition, was the advent of the
digital computer.28
Conclusion
This article has argued that Nagy and Neff's concept of "imagined affordance"
productively modifies Gibson’s inaugural definition of what an environment “offers”
(Gibson, 1979: 127). Indeed, this reformulation renders the concept of affordance
particularly suitable for reflecting upon the imperfect artistic appropriation of pre-
digital technologies. Despite the discouraging context that prevented Götz from
GUILLERMET | K.O. Götz’s Kinetic Electronic Painting
149
gaining access to electronic technology, television nevertheless afforded a techno-
aesthetic model for the Raster Pictures and the film Density 10:2:2:1. This model,
however, differed from any actual affordances of the medium. Therefore, the concept
of “imagined affordance” prompted a deeper investigation of the role that “television”
and its broader associations played for the raster works.
Götz’s dream of electronic painting was closely connected to the development of his
own painterly practice from the 1930s onwards. Inspired by Richter and Eggeling’s
explorations of kinetics, these early works – the filmic experiments of 1936, and the
Facture Primer – were embedded in what Götz perceived (sometimes mistakenly) to
be a modernist agenda. Modernism, I argued, provided a common framework for the
development of his Informel signature style in the early 1950s and for the later
production of the Raster Pictures. As a result, Götz believed that the new form of
kinetic painting, which the medium of television enabled, differed markedly from
previous efforts in the genre. Indeed, his emphasis was on developing an autonomous
language, which would correspond to the new historical situation. The film Density
10:2:2:1 best exemplifies how “television,” in the end, functioned as an aesthetic
paradigm that enabled the artist to update modernist abstraction for a moment defined
by analogue electronics and the emergence of early digital technologies.
As early as 1963, Götz evoked the new possibilities that computers, especially those
fitted with a cathode-ray tube output – i.e., a screen – would afford for electronic
painting: “When one thinks how much ‘easier,’ or less taxing, it would be to realize
such [raster] pictures electronically, that is to say, that they would appear on the screen,”
writes Götz, “it is obvious that these technical means will be used” in the future (1963:
62, my emphasis). 29 It is finally the computer that promises to realize the as-yet
unachieved goal of “kinetic electronic painting,” despite its limitations at the time.30
“Television” merely signified the latest technological incarnation of the electronic
image available in the early 1960s. Yet, as the raster works demonstrate in retrospect,
it afforded a pathway towards the digital.
Acknowledgements
I wish to thank Karin Götz for allowing me to reproduce K.O. Götz’s paintings; Ina Hesselmann at Stiftung Informelle Kunst for her assistance; and Katrin Thomschke
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
150
for pointing me in the right direction. My thanks also go to Ashley Scarlett, Martin Zeilinger, and the two anonymous reviewers for their helpful suggestions.
References
Beckstette, S. (2009) ‘Das Informel als Geburtshelfer der Medienkunst - Produktive
Bildstörung’, artnet magazine. Available as a PDF on K.O. Götz’s website at:
http://www.xn--ko-gtz-zxa.de/pages/texte_filme/texte.html (last accessed 23
January 2019).
Bunge, M. (2004) ‘Die postkinematographische Malerei von K.O. Götz im Kontext
seiner schriftstellerischen Arbeit’, in: R. Melcher, ed., K.O. Götz - Impuls und
Intention. Werke aus dem Saarland Museum und aus Saarbrücker Privatbesitz (exh. cat.).
Worms: Wernersche Verlagsgesellschaft, pp.23-36.
Burbano, A. and García Bravo, E. (2016) ‘Konrad Zuse: enabler of computational
arts?’, in E. Reyes-García, P. Châtel-Innocenti, K. Zreik, eds., Archiving and
Questioning Immateriality, Proceedings of the 5th Computer Art Congress. Paris: europia
Productions, pp.190-203.
Davis, J. L. and Chouinard, J. B. (2016) ‘Theorizing Affordances: From Request to
Refuse’, Bulletin of Science, Technology & Society 36(4): 241-248.
Edwards, B. (2013) ‘The Never-Before-Told Story of the World’s First Computer
Art (It’s a Sexy Dame)’, The Atlantic, 24 January.
https://www.theatlantic.com/technology/archive/2013/01/the-never-before-
told-story-of-the-worlds-first-computer-art-its-a-sexy-dame/267439/ (last
accessed 23 January 2019).
Ernst, M. (1948) Beyond Painting and Other Writings. New York: Wittenborn.
Friedberg, A. (2009) The Virtual Window: From Alberti to Microsoft. Cambridge, MA:
MIT Press.
Gibson, J. J. (1979) The Ecological Approach to Visual Perception. Boston, MA: Houghton
Mifflin.
Götz, K.O. (1959) ‘Abstrakter Film und kinetische Malerei’ (subsection of ‘Gemaltes
Bild - Kinetisches Bild’), in: H. Bienek and H. Platschek, eds., blätter + bilder,
Zeitschrift für Dichtung Musik und Malerei (vol. 5). Wurzburg/Wien: Andreas Zettner,
pp.45-47.
Götz, K.O. (1960) ‘Vom abstrakten Film zur Elektronenmalerei’, in: F. Mon, ed.,
movens. Dokumente und Analysen zur Dichtung, bildenden Kunst, Musik, Architektur.
Wiesbaden: Limes Verlag, pp.151-158; English summary p.191.
Götz, K.O. (1961) ‘Elektronische Malerei und ihre Programmierung’, Das Kunstwerk
12: 14-23.
Götz, K.O. (1963) ‘Das manipulierte Bild’, Magnum. Die Zeitschrift für das moderne Leben
47: 28-31, 62.
Götz, K.O. (1968) ‘Möglichkeiten und Grenzen der Informationstheorie bei der
exakten Bildbeschreibung’, in: H. Ronge, ed., Kunst und Kybernetik. Cologne: M.
DuMont Schauberg, pp.183-192.
Götz, K.O. (1993) Erinnerungen 1914–1945, vol. 1. Aachen: Rimbaud.
Götz, K.O. (1994) Erinnerungen 1945–1959, vol. 2. Aachen: Rimbaud.
Götz, K.O. (1995) Erinnerungen 1959–1975, vol. 3. Aachen: Rimbaud.
Higgins, H. and Kahn, D., eds. (2012) Mainframe Experimentalism: Early Computing and
the Foundations of the Digital Arts. Berkeley and London: University of California
Press.
Hoffmann, J. (1998a) ‘Hans Richter, Munich Dada, and the Munich Republic of
Workers’ Councils’ (trans. T. Slater), in: S. Foster, Hans Richter: Activism, Modernism
and the Avant-Garde. Cambridge M.A. and London: The MIT Press, pp.48-71.
Hoffmann, J. (1998b) ‘Hans Richter: Constructivist Filmmaker’ (trans. M. Nierhaus),
in: S. Foster, Hans Richter: Activism, Modernism and the Avant-Garde. Cambridge M.A.
and London: The MIT Press, pp.72-91.
Hölling, H. (2017) Paik’s Virtual Archive: Time, Change, and Materiality in Media Art.
Oakland: University of California Press.
Kittler, F. (2010) Optical Media (trans. A. Enns). Cambridge: Polity.
Krauss, R. (1979) ‘Grids’, October 9: 50-64.
Leisberg, A. (1961) ‘Neue Tendenzen’, Das Kunstwerk 10: 3-34.
Mehring, C. (2008) ‘Television Art’s Abstract Starts: Europe circa 1944–1969’, October
125: 29-64.
Meyer-Eppler, W. (1960) ‘Optische Transformationen’, in: F. Mon, ed., movens.
Dokumente und Analysen zur Dichtung, bildenden Kunst, Musik, Architektur. Wiesbaden:
Limes Verlag, pp.159-160.
Moles, A. (1966) Information Theory and Esthetic Perception (trans. J. E. Cohen). Urbana
and London: University of Illinois Press.
Moles, A. (1971) Art et Ordinateur. Paris: Casterman.
Mon, F. ed. (1960) movens. Dokumente und Analysen zur Dichtung, bildenden Kunst, Musik,
Architektur. Wiesbaden: Limes Verlag.
Nagy, P. and Neff, G. (2015) ‘Imagined Affordance: Reconstructing a Keyword for
Communication Theory’, Social Media + Society: 1-9.
Nake, F. (2009) ‘The Semiotic Engine: Notes on the History of Algorithmic Images
in Europe’, Art Journal 68(1): 76-89.
Oellers, A. (2004) ‘Zwischen Konzept und Imagination: Ein Rückblick auf 70 Jahre
Malerei’, in J. Linden et al., K.O. Götz: Ein Rückblick. Aktuelle Arbeiten (exh. cat.).
Aachen: Suermondt-Ludwig Museum and Ludwig Forum für Internationale
Kunst, pp.8-17.
Rosen, M. ed., in coll. with P. Weibel et al. (2011) A Little-Known Story about a
Movement, a Magazine, and the Computer’s Arrival in Art: New Tendencies and Bit
International, 1961–1973. Karlsruhe and Cambridge MA: ZKM and the MIT Press.
Rottmann, K. (2014) ‘Polke in Context: A Chronology’, in K. Halbreich, ed., Alibis:
Sigmar Polke 1963–2010 (exh. cat.). London: Tate, pp.20-63.
Richter, H. (1929) Filmgegner von Heute - Filmfreunde von Morgen. Berlin: Verlag
Hermann Reckendorf.
Seitter, W. (2003) ‘Painting has Always been a Digital Affair’, in: A. Lütgens and G.
van Tuyl, eds., Painting Pictures: Painting and Media in the Digital Age. Bielefeld:
Kerber Verlag, pp.30-35.
Shiff, R. (2001) ‘Realism of low resolution: digitisation and modern painting’, in: T.
Smith, ed., Impossible Presence: Surface and Screen in the Photogenic Era. Chicago:
University of Chicago Press, pp.125-156.
Steller, E. (1992) Computer und Kunst. Mannheim: B.I. Wissenschaftsverlag.
Notes
1 The expression is taken from Christine Mehring’s “Television Art’s Abstract Starts: Europe circa 1944–1969” (2008: 35). The present discussion owes a lot to this important contribution to scholarship on Götz. In asserting that “[b]y 1961, Götz was discussed as a television artist without, strictly speaking, ever having worked with a television” (2008: 35), Mehring is referring to an article by Alexander Leisberg entitled “Neue Tendenzen,” published in Das Kunstwerk in November 1961,
which was later “taken to task for praising works such as Götz’s that did not yet exist” (Mehring, 2008: 35, fn). The phrasing in Leisberg’s article, in fact, is more ambiguous: it merely mentions the “attempts of K.O. Götz – for the time being occupied with working with pre-calculation and model images – to develop an electronic painting using the means of television” (Leisberg, 1961: 34). Unless otherwise stated, all translations are my own.
2 The quote originates from the Fakturenfibel (Facture Primer), an artistic diary of forms that Götz compiled during the war. “Facture Primer” is Mehring’s translation. On the etymology of Fakturenfibel, see below, note 6.
3 In his artistic autobiography, Erinnerungen, Götz writes that Hans Richter had been “a famous person for him since his youth,” immediately before mentioning Rhythmus 21 (1994: 143).
4 The photograms were realized in collaboration with Anneliese Hager (then Brauckmeyer). Hager had worked as a microphotography technical assistant during the 1920s, and would later become known for her Surrealist photograms and poetry. Hager became Götz’s first wife after the war.
5 Max Ernst gave an account of this method in his text “Beyond Painting,” published in 1936 (Ernst, 1948). Given that Götz produced these two series of work between 1935–36, this is unlikely to have been a direct influence. However, Götz read Herbert Read’s What is Surrealism? in 1936, and subsequently corresponded with Read, sending him some of his photograms and photo-paintings from 1935–37 (Götz, 1993: 153-154).
6 The term Fakturenfibel comes from the Latin factura: creation, by extension: form, style; and from the German Fibel: alphabet book, a term commonly explained as a children’s mispronunciation of Bibel (Das grosse Kunstlexicon von P. W. Hartmann, http://www.beyars.com/kunstlexikon/lexikon_2878.html (last accessed 23 January 2019)).
7 For a detailed account of Götz’s relation to other artistic media, see Bunge, 2004.
8 Nam June Paik’s Magnet TV (1965), where the artist encouraged the audience to “manipulate the cathode-ray tube with a horseshoe magnet and a degausser, both of which interfere with the flow of electrons in the tube and create baffling forms on-screen” (Hölling, 2017: 82), goes some way towards visualizing the chaotic substructure of the television image. The transitory abstract images thus created are a quasi-literal enactment of what one might imagine “electron painting” to be.
9 Karin Martin married Götz in December 1965.
10 Morellet used the telephone directory as a ready-made random-numbers table, anticipating computer art’s use of random-number generators in the late 1960s. Morellet’s paintings are almost exactly contemporary with Götz’s first Raster Pictures of 1959, e.g. Répartition aléatoire de triangles suivant les chiffres pairs et impairs d’un annuaire de téléphone (Random Distribution of Triangles Using the Even and Odd Numbers of a Telephone Directory) (1958), and Répartition aléatoire de 40 000 carrés suivant les chiffres pairs et impairs d’un annuaire de téléphone (Random Distribution of 40,000 Squares Using the Even and Odd Numbers of a Telephone Directory) (1960). Notable earlier exceptions are Ellsworth Kelly’s Spectrum Colors Arranged by Chance paintings, produced between 1951 and 1953. However, unlike Morellet or Götz, who used a systematic numerical approach, Kelly’s paintings were made by drawing lots.
11 The five volumes of Bense’s Aesthetica series were published between 1954 and 1965.
12 According to Erwin Steller, the first computer-generated work to be made at the University of Stuttgart was a 10 x 10 cm plotter drawing, generated using the plotting device known as “Zuse’s Automat” (after Konrad Zuse, its inventor) or “Graphomat Z64,” following a program designed by Frieder Nake, in 1963 (Steller, 1992: 57). Nake and Georg Nees, another pivotal figure in this respect, were closely connected to Bense; Nees exhibited his computer drawings at the first display of computer-generated art worldwide, organized by Bense, also at the University of Stuttgart, in 1965 (see note 13). On the Graphomat Z64 see Burbano and García Bravo, 2016.
13 On this event, see Nake, 2009.
14 See above, note 12. It is worth noting that recent scholarship has uncovered a few notable exceptions. Kurt Alsleben had already produced plotter drawings, together with the physicist Cord Passow, on an analogue computer in Hamburg in December 1960 (see Rosen, 2011: 9). In North America, A. Michael Noll is generally credited as the first person to have made “computer-generated art,” with his 1962 series of “Patterns”; but Benj Edwards showed that an earlier piece of computer graphics was made by an IBM employee on an AN/FSQ-7 computer, part of the SAGE (Semi-Automatic Ground Environment) military system, as early as 1956 (Edwards, 2013).
15 On Götz’s invitation, Moles gave two lectures at the Academy in December 1965.
16 For instance, in reading a text, we might focus on the spelling of each individual word, as when
proofreading; or we might approach the words more globally, paying attention to their meaning (Moles, 1966: 125).
17 Information theory’s statistical approach enables the deconstruction and analysis of an image’s structure at the micro-level, as a series of images that Götz produced in the late 1960s demonstrates. The images originally illustrated a talk that Götz gave in 1967, and were subsequently published in Götz, 1968: 185. They were later reproduced in Moles, 1971: 29. On this occasion, Moles writes that Götz produced the image on a computer at the University of Bonn, an assertion that Karin Götz categorically denied (email to the author, 28 May 2018). Götz himself makes no mention of using a computer to generate the image in the 1967 talk.
18 More recently Anne Friedberg has written on the intersection between the grid and the electronic screen (Friedberg, 2009).
19 The Götz couple received a television from Karin’s parents as a wedding present. Karin Götz believes that Götz did not have first-hand experience of television at the time when he was making the Raster Pictures (Karin Götz, email to the author, 21 January 2019).
20 In the English summary appended to the German publication, the title is translated as “From non-objective film to electronic painting,” rather than “electron painting.” While it is unclear whether the difference in terms was of great significance to Götz at that time, he had used Elektronenmalerei to describe the early radar experiments and retains the term in the 1960 article; by contrast, he uses Elektronische Malerei to discuss the later television-inspired works from 1961 onwards.
21 Götz had met Meyer-Eppler in 1957, at the time when he lived in Frankfurt (Götz, 1995: 23). In the late 1950s, Meyer-Eppler had experimented with the aesthetic possibilities of the oscilloscope, recording the “optical transformations” (Meyer-Eppler, 1960: 159) that various combinations of electronic current produced – a fact that may explain his support for Götz’s project. Selected results of these experiments were published in 1960 in the journal movens, which also included Götz’s article “Vom abstrakten Film zur Elektronenmalerei” (Mon, 1960).
22 See a note from 2010, appended to the subsection entitled “Abstrakter Film und Elektronische Malerei,” in Götz, 1959. The note was added to the PDF version of the article, available on the artist’s website: http://www.xn--ko-gtz-zxa.de/pages/texte_filme/texte.html (last accessed 23 January 2019).
23 Nam June Paik, untitled text, in pamphlet “Exposition of Music–Electronic Television,” published on the occasion of his 1963 exhibition Exposition of Music–Electronic Television at Galerie Parnass, Wuppertal, quoted in Mehring, 2008: 30.
24 Even such institutional collaborations were often of a precarious nature. As Hannah B. Higgins and Douglas Kahn note of the early 1960s context: “These institutions inhered to geopolitical, military, corporate, and scientific priorities that were not immediately or obviously amenable to the arts. For those artists lucky enough to find access to these computers, technical requirements mandated the expertise of engineers, so the process was always collaborative, yet rarely sustainable over any great length of time” (2012: 1, my emphasis).
25 While Karin Martin was responsible for the bulk of the drawing work, according to Mehring other students of the Düsseldorf academy also helped (2008: 36).
26 Accessible on K.O. Götz’s website: http://www.xn--ko-gtz-zxa.de/pages/texte_filme/filme/film.html (last accessed 23 January 2019).
27 On painting as a discrete practice, see Seitter, 2003.
28 Götz illustrates his article “Das manipulierte Bild,” which was published in 1963, with an electronically-generated image made at Bell Labs on an IBM 7090 (Götz, 1963: 31). This suggests that his awareness of the artistic possibilities of the computer roughly coincided with the making of the film Density 10:2:2:1.
29 Götz first mentions the computer in relation to his project of electronic painting in “Vom abstrakten Film zur Elektronenmalerei” (1960: 155), but only as a means to generate statistical analyses (i.e. to help with “programming” the pictures).
30 Götz writes: “However, the storage capacity and speed of our newest computers are not yet sufficient to program satisfactory kinetic pictures” (1963: 62).
Aline Guillermet is a Junior Research Fellow in Visual Studies at King’s College, University of Cambridge. Her postdoctoral research considers the impact of
technology on painting since the 1960s. She is the author of several articles on postwar German art, including “‘Painting like nature’: Chance and the Landscape in Gerhard Richter’s Overpainted Photographs”, Art History, 40: 1 (February 2017). Aline co-convenes the Digital Art Research Network at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), University of Cambridge. Email: [email protected]
Special Issue: Rethinking Affordance
Reframing the
Networked Capacities
of Ubiquitous Media
MICHAEL MARCINKOWSKI
University of Bristol, UK
Media Theory
Vol. 3 | No. 1 | 157-184
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
James J. Gibson’s concept of perceptual affordances has a long history, particularly within the field of human-computer interaction (HCI), where the concept has been used in various ways to address both the material and cultural requirements of interactive systems. New modes of digital media that engage the range of affordances present in contemporary smartphone platforms offer an opportunity to rethink this critical divide within the use of the concept of affordances. By defining a concordance between Gibson’s use of the term and Manuel DeLanda’s theory of assemblages, it becomes possible to chart the networks of affordances present in the interaction with and function of these new media forms. Through an analysis of Kate Pullinger’s Breathe, a redefined understanding of the possibilities of affordances is developed, one that is concerned with both the materiality of the system itself and the speculative frame that is developed.
Keywords
Affordances, Assemblage Theory, Electronic Literature, Ambient Literature
Introduction
There has long been contention running across a number of disciplines regarding the
nature and reach of James J. Gibson’s (1986) concept of “affordances.” First
introduced by Gibson in order to provide an account of visual perception in the field
of environmental psychology, the concept was quickly taken up in a number of areas,
particularly in areas related to human-computer interaction (Norman, 1988). There, it
was used to provide an explanation for the ways that computer systems made
themselves available to users. While Gibson (1986) described the affordances of the
environment as “what it offers the animal, what it provides or furnishes,” saying that
affordances “are in a sense objective, real, and physical unlike values and meanings,
which are often supposed to be subjective, phenomenal, and mental” (129), the
concept was subsequently used in an expanded fashion to have cognitive, cultural, and
conventional implications (Norman, 2008). As the term came to take on this expanded
meaning, it came to be subject to charges of the cultural relativism implicit in the
identification of any affordance (Costall and Still, 1989). In this, questions were raised
about the viability of the application of Gibson’s initial “objective, real, and physical”
formulation of affordances within more complex cultural settings (Greeno, 1994;
Turner, 2005; Costall, 2012). Such critique came to include a consideration of the
implications that an information processing model of psychology has for Gibson’s
theory (Jenkins, 2008).
The importance of this long-running consideration of the conceptual power and
usefulness of Gibson’s term is put into sharp relief by the work of the Ambient
Literature project (ambientlit.com). Focusing on the design, implementation, and study
of new modes of pervasive and literary media, the Ambient Literature project
simultaneously engaged the material, functional, and semiotic affordances of
interactive systems as they were utilized toward cultural and literary effect. As a form
of situated media, the case of ambient literature provides a helpful example in
addressing the question of the contemporary status of the term affordance. This comes
as works of ambient literature engage physical location, literary meanings, and
contemporary networks of information technology. In this article, the work Breathe by
Kate Pullinger will be used as an example in order to draw out how Gibson’s term
“affordance” can be understood today.
In tracing out the various networks of affordances present in Breathe, what becomes
evident is that for complex works of interactive and pervasive media it is not possible
to disentangle material affordances from cultural ones. That is, a work like Breathe takes
advantage of affordances that make perception physically possible in general, as well
as affordances that rely on a learned familiarity with semiotic systems. Instead of
distinguishing between “affordances in general” and “canonical affordances”, as does
Alan Costall (2012), or between "simple" and "complex" affordances, as does Phil
MARCINKOWSKI | Ubiquitous Media
159
Turner (2005), it becomes necessary to consider a more deeply-set ontological
reconfiguration of the idea of how affordances can function.
With this, the aim is to both contribute to the continued development of the term as
well as to restore some of Gibson’s (1986) original meaning:
It is a mistake to separate the natural from the artificial as if there were
two environments; artifacts have to be manufactured from natural
substances. It is also a mistake to separate the cultural environment from
the natural environment, as if there were a world of mental products
distinct from the world of material products (130).
Expanding on this thematic within Gibson’s account of his concept, what this paper
proposes is a re-consideration of the ontological terrain of affordances as they are
considered within the field of digital media, particularly as works of ambient literature
bridge the human reception of works with their material occurrence. Using Manuel
DeLanda’s (2006; 2016) Deleuzian consideration of social assemblages and their
interactions as a theoretical starting point, the case of the ambient literature project
(and one work in particular) will be used to rework the idea of affordances in the study
of interactive digital media along a materialist and flattened ontology.
The paper will proceed as follows: following an introduction to ambient literature in
general, the specific work which is to be examined, Breathe, will be described. Focusing
on the foundation that such a work has in traditions of HCI, developments in the
field’s use of the concept of affordances will be analyzed, highlighting the divide
between physical and cultural uses of the term. As an answer to this problematic,
Manuel DeLanda’s conception of social assemblages will be introduced, with particular
attention paid to the way that these assemblages engage uses of language. Finally,
Breathe will be reconsidered through this newly-developed theoretical lens and the
implications of this consideration of affordances/assemblages will be discussed.
Ambient Literature
The ambient literature project was an ambitious, multi-university project focused on
the conceptualization and development of new forms of literary media which “produce
encounters between humans and the complex systems to which they are subject”
(Dovey, 2016: 140). It brought together academics, authors, designers, developers,
media producers, coordinators, and support staff in order to create smartphone-based
works of literature that took advantage of the modalities and networked connections
afforded by contemporary information communication technology (ICT). The
concept of ambient literature was developed with particular attention to the way that ICT
has been understood in the wake of Mark Weiser’s (1991) initial conceptualization of
ubiquitous computing (ubicomp) in the late 1980s.
The publication of Weiser's article on the development of the idea of ubicomp, “The
Computer for the 21st Century,” laid out a vision for computing in which computers,
as media, were relegated to a background, supporting role. Working with researchers
at Xerox’s Palo Alto Research Center, Weiser detailed a vision for the future of
computing in which computers acted silently, in the background, taking care
of the more mundane tasks of life, leaving users free to engage in creative and fulfilling
activities. Instead of spending time and effort to schedule meetings, ubicomp systems
would silently arrange meetings according to the various participants’ schedules,
allowing them to focus on the important matters at hand. Coffee would be brewed in
the morning, weather reports would be presented right when they were needed, the
right document would be on your desk just in time to get to work. Computing would
swirl all around us, always on, supporting us in our daily lives without the need for us
to worry about engaging or maintaining the systems that made this possible.
Computers would adapt to us, not us to them.
A key aspect of the idea of ubicomp (or as it also came to be known under different
branding, “pervasive computing” or “ambient intelligence” [Ronzani, 2009]) was that
computing could be integrated seamlessly and quietly into the world around us (Weiser
and Brown, 1997). Building on a vast array of data sources, from personal histories to
city-wide sensor networks, ubicomp would embed computing into the fabric of our
daily lives while at the same time ensuring that we never had to give it another thought.
It would become ingrained in our environment and yet remain invisible.
Of course, like most good visions of the future, the reality that followed Weiser’s
account of the development of computing was more complicated than initially envisioned (Chalmers
and Galani, 2004; Rogers, 2006; Abowd, 2012). Political systems, national cultures,
new technologies, and existing social configurations all served to moderate the
development of the ideas first laid out in ubiquitous computing (Bell and Dourish,
2007). Importantly, however, even as ubicomp’s early vision of a world run on
computational rails hit a number of roadblocks and false starts, it did inform the
development of the modern smartphone, a device which relies on always-on
computing, vast troves of user data, and arrays of sensor and information
communication networks. As our lives have become enveloped by computing and
networks of data in the wake of the smartphone, the question asked by ambient
literature is “what might happen when data aspires to literary form” (Dovey, 2016:
140)? As ubicomp – or at least one version of ubicomp – becomes mundane and part
of our everyday lives, how can it be transformed into a resource for aesthetic and
specifically literary experiences which tap into our common cultural heritages?
The proposition of Ambient Literature is to take ubicomp’s model of a data-enabled
world and to turn it around: instead of using techniques developed to push mundane
interactions to the periphery of our attention, how can these same techniques be used
to surface literary experiences as they exist around us? How can literary experiences be
blended in with the world around the reader in an immersive way? In this, works of
ambient literature look to build upon Weiser’s vision in order to integrate creative
works of literature and culture into the world around readers. As works which are
embedded within a wider world through the use of mobile devices, networks, and the
vast arrays of data available, how do these works come to afford certain interactions
and implications?
Kate Pullinger’s Breathe
As a research project, Ambient Literature was structured around the commissioning
of three new works of digital media, each of which was focused on the idea of
developing experiences which connected textual literature to the situation of their
experience. More than just a program of creative practice research (Smith and Dean,
2009; Nelson, 2013), the project was also surrounded by a program of empirical
participant research. While some of the outcomes of this empirical work will be
included here, the methodology and analysis will not be discussed. A more thorough
account of the methodological approach can be found elsewhere (Marcinkowski,
2018). A complete record of the research data collected around the project has also
been made available through an open access data repository (Marcinkowski and
Spencer, 2018).
Of the three works commissioned and studied by the Ambient Literature project, I
want to focus here on just one that illustrates the particular thematic that is to be
developed. Breathe, by Kate Pullinger (2018), is a work of ambient literature designed
to be read through a smartphone’s web browser. Made in partnership with the
London-based publisher Visual Editions and Google Creative Labs, it is a short story-
sized text, ideally meant to be read in one sitting at home.
A ghost story set in the present day, Breathe haunts the reader through a text that is
altered by the conditions of its reading. Reading the time and location from the system,
the piece draws local conditions into the work: time of day, weather, nearby streets,
cafes, the season, and so on are all adapted based on the conditions of reading and
woven into the text. Drawing from Google’s own place-based APIs (Application
Programming Interfaces), the world of Breathe is populated by a continually developing
account of the reader’s surroundings. If a new shop opens nearby, Google’s databases
are updated and the set of resources available to Breathe expands to include it.
This is not a choose-your-own-adventure or branching narrative – the narrative of the
piece remains the same for all readers – but the experience is customized for each
reader depending on their situation.
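The adaptive mechanics described above can be sketched, in broad outline, as a text-substitution pass over a fixed narrative: the story never branches, but situational placeholders are filled in from the reader's context. The placeholder names and context values below are illustrative assumptions, not drawn from Breathe's actual implementation.

```python
def weave_context(template: str, context: dict) -> str:
    """Fill situational placeholders in a fixed narrative.

    The narrative itself never branches; only surface details
    (time of day, weather, a nearby street) are substituted, so
    every reader receives the same story rendered through their
    own situation.
    """
    return template.format(**context)

# Hypothetical context, as it might be assembled from the device
# clock, geolocation, and place-based or weather APIs.
context = {
    "time_of_day": "evening",
    "weather": "light rain",
    "nearby_street": "Mill Lane",
}

template = (
    "It was {time_of_day}, and {weather} fell over {nearby_street} "
    "as she opened the door."
)

print(weave_context(template, context))
```

The point of the sketch is only that the "customization" is a rendering of one shared text against a continually updated environmental context, not a branching of the text itself.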
At the same time that it relies on the networked and locational affordances of the
smartphone in order to create an uncanny sense of familiarity with the reader’s setting,
Breathe also engages readers’ learned habits of interaction as part of the work itself.
While presenting an initial interactive paradigm which mimics that of a traditional
ebook, allowing readers to flip from page to virtual page, the experience unexpectedly
shifts as the reader finds swiping to be no longer consistently effective. Without
explanation, text starts running backwards, “unwriting” itself on the screen; readers’
swipes only leave black smudges across the white background of the page; the page of
text is covered by shifting clouds; while in other instances text is only visible as the
reader tilts their phone at an angle, as if trying to read through an obfuscating glare on
the glass.
In all of these ways, the work announces its non-traditional nature to the reader, taking
advantage of learned habits of interaction in order to surprise the reader and force
them to reflect on their normal, seamless engagement with their device. Paired with an
uncanny knowing of their whereabouts and situation, the readers’ experience comes
to focus on the affordances of the device, what it is capable of doing, and how. Breathe
emphasizes the specific interactive modalities of the smartphone and uses those
modalities as part of the ambiance of the experience.
As will be developed, what Breathe presents is a unique confluence of textual,
contextual, learned, and physically material conditions which, when taken together,
complicate an easy account of the affordances of which the work takes advantage. It
relies on both the immediate and local conditions of its reading, while at the same time
relying on far-flung and network-enabled determinations.
What is an affordance in computing?
I’ve been coy about it so far, and hopefully the term has just slipped by as you’ve been
reading, but for an article that attempts to re-work the idea of what “affordance” can
mean in the space of digital media, I’ve used various forms of “affordance” in a very
casual, but hopefully nevertheless intelligible manner. After all, it’s been more than 50
years since Gibson first used the term “affordance” in the field of environmental
psychology to describe what an environment offers to an animal, and it’s been at least
30 years since it has filtered into common usage, especially in the field of human-
computer interaction (HCI) research.
“Affordances” became a central concept in HCI largely through the work of
Don Norman (1988) and his book The Psychology of Everyday Things.1 In it, Norman
described how various objects – doors, coffee pots, washing machines, telephones,
computer interfaces – all played into people’s existing cognitive models about how the
world worked. By providing a link to users’ existing models, such objects gave
individuals some clue as to how these objects might “afford” some kind of interaction.
As a doorknob provides a surface able to support a human hand gripping, turning, and
pulling, it provides an indication of its purpose. Norman’s logic was that in the work
of design, it would be beneficial to tap into users’ existing cognitive models in
order to help them understand unfamiliar interfaces. As such, if the aim was to
design something to be pulled or turned, it might be helpful to have it resemble
some familiar aspect of a doorknob.

1 The book was later re-released under the title The Design of Everyday Things (Norman, 2013).
This link between affordances and cognitive models led to affordances in HCI being
largely seen as a concept linked to cognitive psychology. This was distinct from
Gibson’s original environmental formulation of the concept. Instead of focusing on
the relationship between an organism (in Norman’s case, human users) and their
environment (the computer interface), Norman cast affordances as depending on a
sense of familiarity from the perspective of the organism. For Gibson, the concept
was not related to what went on inside the head of the animal or the way in which they
recognized objects in their environment but was concerned with the fundamental and
really-existing relationship between animals and their environment. Already here, the
tension between the situational nature of a work like Breathe – as it brings the reader’s
environment into contact with the work – and the learned and culturally-meaningful
text becomes apparent.
What became problematic in Norman’s human-centered account of affordances was
the possibility that any discussion of the affordances of a system relied on a culturally-
relativistic position (Costall and Still, 1989). That is, instead of describing a
physical relationship between animals and their environment, the concept came to
rely on a learned cultural accumulation of habit or knowledge. For computing, of course, the
default position of such cultural learning came to be based on a largely North
American, white, and male perspective. In this, there was a fundamental blindness to
the culturally imperialistic aspects of computer interfaces as they were exported around
the world (Philip et al., 2010).
This early dislocation (and the resulting problems) of the meaning of the term
“affordances” as it was used in HCI was not lost on Norman, who later sought to
clarify the meaning of the term for an HCI audience. As a corrective, he attempted to
draw a distinction between Gibson’s “real affordances” and the version of the concept
that he employed, which pointed to those that are perceived by users. In providing an
account of the kind of “perceived affordances” described in The Psychology of Everyday
Things, Norman (1999) highlighted the distinction between the two uses of the term:
Please don’t confuse affordance with perceived affordances. Don’t
confuse affordances with conventions. Affordances reflect the possible
relationships among actors and objects: they are properties of the world
(42).
In clarifying the role of affordances in thinking about user-centered design, Norman
highlights an important issue for understanding Gibson’s concept of affordances: that
they are distinct from the kinds of cultural conventions that make up much of our
interactive lives. This highlights Gibson’s (1986) assertion that affordances should be
thought of as “objective, real, and physical” and that they are “unlike values and
meanings” (129). This, of course, clarifies one perspective of the affordances of a work
like Breathe: that it relies upon the objective affordances of ICT, sensor networks, and
smartphones to deliver the media experience to readers.
This tidy picture, however, is complicated by the nevertheless relational nature of
affordances and the “possible relationships among actors and objects.” As Gibson put
it, describing a kind of relativism distinct from the kinds of cultural relativism
Norman’s account was accused of:
They are not just abstract physical properties. They have unity relative to
the posture and behavior of the animal being considered. So an affordance
cannot be measured as we measure in physics (127-128).
That is, in its original use, the idea of affordances was linked to the particular
configuration of the given animal to which it was applied as it exists within a particular
environment. For Gibson, affordances offered “surfaces” which support the actions
of actors within their environment. Pools of water of different depths offer
different affordances to a fish than to a cat. A leaf affords a surface upon which an insect
might walk, but not a human. In this, the term lays out a murky space that exists
between a physical reality that can be measured and a kind of phenomenological and
existential being. Linked to the situation and context of their encounter, affordances
are tied to abilities and actions.
This terrain of considering affordances at once physical, separate from values, and
relative to the animal considered opens up a particular space for the consideration of
interactions with digital media. As the idea of affordances has been previously seen to
resist an accounting of the kinds of cultural relativism built into its use in fields beyond
psychology (Costall and Still, 1989), in what ways is it possible to account for
affordances in human interactions, particularly as they are laden with meanings and
implications? For thinking about affordances in the light of digital media, how is it
possible to divorce meanings and values from the “objective, real, and physical”
properties of the media?
Where Costall (2012) and Turner (2005) make claims for a bifurcated sense of
affordances, with canonical or complex affordances on one side and general or simple
affordances on the other, Norman’s reconsideration (1999) of his use of the concept
takes a more nuanced view. In Norman’s revised account, perceived affordances sit
atop real affordances, with real affordances making the perception of affordances
possible. Unlike Costall’s and Turner’s approach of distinguishing classes of
affordances by type, Norman opts for a progressive distinction, with “real”
affordances underlying their perception. Perceived affordances only come about
because of their relation to real ones. Turner and Costall, on the other hand, create
new categories of affordances that, following Norman’s initial mischaracterization,
offer a phenomenological rendering of affordances. They rely on the development of
a specialized vocabulary to distinguish what is afforded culturally or by convention
from what is offered by the physical environment itself.
Even as he draws a connection between real and perceived affordances, Norman
(1999) nevertheless maintains a firm distinction between affordances and symbolic
communication:
Far too often I hear graphical designers claim that they have added an
affordance to the screen design when they have done nothing of the sort.
Usually they mean that some graphical depiction suggests to the user that
a certain action is possible. This is not affordance, neither real nor
perceived. Honest, it isn’t. It is a symbolic communication, one that works
only if it follows a convention understood by the user (40).
What I want to put forward in the coming sections is specifically that symbolic
communication can be understood as an affordance. This is to be based largely on the
idea that symbolic communication (and cultural action more generally) can be
understood as a kind of physical system which is “relative to the posture and behavior
of the animal being considered” (Gibson, 1986: 127-128). That is, systems of
inscription and interpretation can be viewed as physical components of the animal and
their environment. While there isn’t space in this paper for a full rendering of the
background of this consideration, it is relevant to point to Simon and Newell’s account
of physical symbol systems in the field of artificial intelligence (Newell and Simon,
1976), particularly as it collides with Lucy Suchman’s account of situated action (Vera
and Simon, 1993; Suchman, 2006). Put directly, the interpretation of symbolic systems
is an activity that an animal takes part in given a certain posture of the body (the
physical capacity to read and the established physical cognitive structures for reading,
for instance) and the affordance of the environment (the inscription of text on a page,
for example). Air affords a surface to birds that have learned to fly; to the person
who can read, an exit sign affords a means of escaping from a fire.
Maintaining any distinction between cultural and physical affordances becomes
difficult, if not untenable, as interactive modalities, such as those present in a work like
Breathe, explicitly engage with physical forms which rely on culturally symbolic
interactions for their function. For Breathe, it is not possible to simply bifurcate
affordances into two distinct varieties. One avenue for the theoretical re-consideration
of affordances comes via Manuel DeLanda’s account of the material capacities of
assemblages. By rendering a reading of DeLanda along a trajectory laid out by Gibson,
it becomes possible to develop a provisional picture of how a work like Breathe might
function. In this, it becomes possible to understand Breathe’s various component
interactions (with the text, the environment, the smartphone interface) along a single
conception of “affordances.”
Affordances, capacities
DeLanda’s account of social assemblages takes ideas developed in the work of Gilles
Deleuze (and others) and gives them an immediate illustration in the formations of
our contemporary social world. Providing a framing for the ways that social forms
come into being, DeLanda’s work is reminiscent of Harold Garfinkel’s (1967)
ethnomethodology and the more closely related work of Bruno Latour (2005) and
others (Law and Hassard, 1999) in the area of Actor Network Theory. In these
approaches, despite their sometimes-strong differences, the idea of a cohesive and a
priori social form is rejected in favor of an emergent and localized occurrence of
social interactions.
In laying out a theory of social assemblage, DeLanda presents an ontological account
of the fundamental organization of social forms. This relies on a realist consideration
of how the various structures (assemblages) that make up our social world come in
and out of being, and interact with each other, as well as the role that human beings
play as part of this process. For DeLanda (2006), these assemblages are characterized
by “relations of exteriority.” This means that “a component part of an assemblage may
be detached from it and plugged into a different assemblage in which its interactions
are different” (10).
For DeLanda (2006), the exteriority of the identity of social entities is
demonstrated “not only by their properties but also by their capacities, that is, by what
they are capable of doing when they interact with other social entities” (7). Here,
“capacities” becomes a central term, particularly because of its relative homology with
Gibson’s affordances. The extrinsic relationships favored in DeLanda’s ontology,
built up between disparate things beyond any internal affiliation, provide a
first description of Gibson’s affordances: in deducing affordances, the animal is
considered relative to the environment within which it exists. The affordance appears
only in the conjunction of the animal against some surface. For an assemblage, this
conjunction takes place based on the capacities of each assemblage involved. Just as
Gibson’s affordances are relative to the animal and expressed only as a sense of
possibility given the particular arrangement within the environment, so too is
DeLanda’s sense of capacities:
We can distinguish, for example, the properties defining a given entity
from its capacities to interact with other entities. While its properties are
given and may be denumerable as a closed list, its capacities are not given
– they may go unexercised if no entity suitable for interaction is around –
and form a potentially open list, since there is no way to tell in advance in
what way a given entity may affect or be affected by innumerable other
entities (10).
The user, in this case, is not a user in and of itself, but only in their interactions with
some system and vice versa. The reader of Breathe cannot be understood separately
from their reading of the work, even as they may nevertheless be readers of other
works.
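DeLanda's distinction between properties and capacities can be put in schematic terms: a property belongs to the entity as part of a closed, enumerable list, while a capacity appears only in the presence of a suitable partner for interaction. The following toy sketch is an invented illustration of that distinction, not a formalization drawn from DeLanda.

```python
class Entity:
    """A toy entity whose properties form a closed, enumerable list."""

    def __init__(self, name, **properties):
        self.name = name
        self.properties = properties  # given and denumerable

def exercised_capacities(entity, other):
    """Capacities are not stored on the entity: they are computed only
    when a suitable entity for interaction is present, and the list of
    possible pairings remains open-ended."""
    capacities = []
    if entity.properties.get("sharp") and other.properties.get("soft"):
        capacities.append("cutting")
    if entity.properties.get("flat") and other.properties.get("needs_support"):
        capacities.append("supporting")
    return capacities

knife = Entity("knife", sharp=True)
bread = Entity("bread", soft=True)

# The capacity to cut is exercised only against a suitable partner;
# against anything else it simply goes unexercised.
print(exercised_capacities(knife, bread))
print(exercised_capacities(bread, knife))
```

The open-endedness matters: new pairings (new entities, new rules) can always be added without changing any entity's stored properties, which is the schematic point of DeLanda's "potentially open list."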
As DeLanda (2016: 1) describes it, quoting Deleuze (2007): “the assemblage’s only
unity is that of co-functioning” (69). He goes on to say that in Deleuze’s definition,
“two aspects of the concept are emphasized: that the parts that are fitted together are
not uniform either in nature or in origin, and that the assemblage actively links these
parts together by establishing relations between them” (2). This is essential here, since
this concept of assemblages is applicable to not only thinking about the assemblage of
a user and a system, or the reader and a work like Breathe, but also the ways that the
system or the reader themselves are constituted according to their relations of
exteriority.
Just as Gibson’s affordances “cannot be measured as we measure in physics,” the
interactive capacities of an assemblage are likewise beyond the purview
of a positivist accounting:
But in an assemblage these relations may be only contingently obligatory.
While logically necessary relations may be investigated by thought alone,
contingently obligatory ones involve a consideration of empirical
questions, such as the coevolutionary history of two species (11).
In both cases, the actions of any user, the manner in which they configure the movements
of their body and position themselves in relation to some environment, depend on
their own intentions and aims as they go about engaging with a system. There is no
logical necessity that links users’ actions and the affordances/capacities of a system.
This has been demonstrated in any number of examples of the re-purposing of systems
for novel uses (the evolution of Twitter provides a good example of this [Siles, 2013]).
Even as the definition of capacities remains contingent and dependent on the
conditions of their relations, how they are perceived is immaterial to their causes. In
this, assemblages share in the conceptually independent nature of affordances,
particularly considering Norman’s stated reconsideration and explication of the
distinction between real and perceived affordances. Like Gibson’s affordances,
DeLanda’s (2006) theory of assemblages is wholly realist, with “the very fact that it
cuts across the nature-culture divide” being “evidence of its realist credentials” (3).
This is consistent with Gibson’s (1986) own proposition that it is a “mistake to
separate the cultural environment from the natural environment” (128).
This is illustrated in Breathe as it comingles jarring interactive paradigms, the activation
of global data networks, and a semiotic reading of text toward a singular meaningful
affordance of the work. The perceived affordances of the alternately intuitive and
counter-intuitive interface work in tandem with the real affordances of the networked systems.
These are in turn surfaced to the reader in their reading of the text as it takes advantage
of them as part of a semiotic system.
What begins to develop in bringing these two systems into contact with one another
is a slow dissolve of the boundaries established separating real affordances from any
sense of cultural or conditioned connection. While Gibson explicitly denies the
immediate connection between “values and meanings” and affordances in his initial
formulation, it is not difficult to see how there may nevertheless be a connection:
Behavior affords behavior, and the whole subject matter of psychology
and of the social sciences can be thought of as an elaboration of this basic
fact. Sexual behavior, nurturing behavior, fighting behavior, cooperative
behavior, economic behavior, political behavior – all depend on the
perceiving of what another person or other persons afford, or sometimes
the mis-perceiving of it (135).
While affordances themselves might have no intrinsic value, they nevertheless serve to
impart some communication of value. This is the case as they are the medium by which
it becomes possible to interpret some relational cue. Just as they provide the solid
substance of surfaces to stand on in Gibson’s account, so do affordances and capacities
offer a more dynamic sense of relation. What can be seen in each, both in Gibson’s
affordances and in DeLanda’s assemblages, is that they can play a variety of roles, as
DeLanda (2006) puts it: “from a purely material role at one extreme of the axis, to a
purely expressive role at the other extreme” (12). The assemblage of a work like Breathe,
as it establishes and motivates relationships between distinct parts (the text, the
technologies at play, the situation of the reader, the readers themselves, and so on),
cuts across all these types of roles. For Breathe, as a literary work, what comes to matter
is the particular way that this re-figured sense of affordances might engage questions
relating to a linguistic text.
Codes, coding, coded surfaces
In examining the relationship between affordances and language, it is important to
approach the issue of “coding” as it is present in DeLanda’s work. While a
consideration of the affordances of Breathe in light of DeLanda’s assemblages is
relatively straightforward, the question of language provides an opportunity to delve
deeper into the connection between assemblages and affordances. In Breathe, this
question of language is driven by the interplay among the interface of the work, the
wider situation within which the work is read, and the text itself.
Following Deleuze and Guattari (1988), DeLanda’s account of assemblages includes a
consideration of “territorializing” and “deterritorializing” movements. Such
movements are put into motion by the stabilization and destabilization of the identity
of assemblages. Put briefly, as assemblages are subject to processes of territorialization,
they sharpen their boundaries and increase their homogeneity. As they are subject to
deterritorialization, their boundaries become less defined and they become more
diverse.
Here, what is most important is the way that processes of territorialization lead to
processes of “coding” or the formalization of communicative assemblages. This can
be seen in the case of language, genetics, and other forms of reproducible patterns of
communication. That is, through the various capacities of an assemblage, as they come
to be stabilized, specific reproducible formations arise which allow for the systematic
coding and decoding of an assemblage’s capacities. In this, coding is not expressive
in the sense above, since expression is considered an in-formal process. Coding,
however, represents a kind of reification of expression into a formal system. Where
the territorialization of an assemblage represents a first articulation of this kind of
expression, coding represents a second, formalizing system in which the definition of
rigid rules develops protocols for reproduction. Decoding, on the other hand,
represents that moment in which these rigid rules are broken down, as in the case of
informal conversations in which more formal protocols of conversation might not be
met.
What this consideration of coding as a unique feature of assemblages puts forward is
not that different from considerations of canonical or complex affordances. In each,
there is some division between the sheer material aspects of affordances and those that
serve some higher, information-processing ends. Just as Gibson saw affordances in
the composure or posture of animals to each other while also allowing for the idea that
affordances are not accorded with values and meanings, so too can it be seen that the
expressive capacities of assemblages are distinct from codes such as language. For
DeLanda, however, even as this coding might be of an order above the regular
relations of the capacities of assemblages, they are still cut from the same cloth and are
part of an overarching continuum of capacities.
This is illustrated in Breathe as the canonical affordances of interaction with an ebook
(swiping from one page of text to another) are disrupted. The disruption caused by
readers not being able to swipe smoothly from one page to the next comes to be linked
with a semantic shift in the work from that of a straightforward narrative to one that
speaks directly to the reader. The expressive deterritorializing move of the interactive
shift cascades into a deterritorializing of the terms of the coded languages at work.
In building from this account of the common ground between affordances and
capacities toward this consideration of expressive surfaces and the territorialized
process of coding, the aim is to be able to set language and culture on an even terrain
with other kinds of affordances present in a digital system. As hinted at above, this is
not the first such attempt, with the work of Herbert Simon and Allen
Newell (1976) working on the problem from the other end, so to speak: building up
from computational models out toward a synthesis of human intelligence and
understanding. At the foundation of this, for them, lay an assertion of the physical
nature of cognition; not in an embodied sense, but in a sense which saw cognition
taking place through the logical manipulation of physical symbol systems.
By positing the existence of an assemblage below that of the human – the physical
symbol system – Simon and Newell saw the possibility for the construction of artificial
means of human intelligence that would be independent of a human identity. Viewed
in this light, the coding of human culture, of language, and of basic thought, is not
intrinsic to the human, but an extrinsic system of relations that has simply been
subsumed into the human organism as part of a wider assemblage.
In this, the assemblage of a language faculty (of coding and decoding meaning into
transmittable forms) functions simply as another material affordance for the human.
This follows from both Gibson’s and DeLanda’s accounts: This is seen in Gibson’s
(1986) relativistic account of affordances as having a “unity relative to the posture and
behavior of the animal being considered” (127-128). Similarly, it can be found in
DeLanda’s (2006) description of the way that an assemblage’s capacities “may go
unexercised if no entity suitable for interaction is around – and form a potentially open
list, since there is no way to tell in advance in what way a given entity may affect or be
affected by innumerable other entities” (10). The capacity of a language act functions
as an affordance to the human organism to whom it matters.
Breathe, like works of ambient literature more broadly, takes this link between
traditional, physical affordances and those given in language as a matter of course. In
these works, the physical affordances of place and a reader’s embeddedness within a
situation are aligned along a common engagement of affordances. This common base
runs from the capacities of the component assemblages of the work through coded
systems of language.
Building from Gibson’s (1986) earlier quoted assertion that “[b]ehavior affords
behavior” (135), it is possible to say that the linguistic posture put forward as a
meaningful expression on the part of one party of a conversation is taken up by the
listener, who, in reading the message as it is materially transmitted, has some activation
of the awareness of potential affordances and future capacities that are to be made
available. In this, language and a specific sense of functional cultural relativism
(knowing and being affected by a certain language) come to be illustrative of what
might be termed a “speculative affordance.” This notion of a speculative affordance is
one founded on the possibility of the proper set of capacities being present in the
environment (the environment in this case coming to include other people). This
speculative nature of affordances was something for which the use of Gibson’s
concept was critiqued by Martin Oliver (2005). In responding to Gibson’s account of
the possibility of affordances, Oliver stated that “all that could be said then is that a
thing afforded something to someone in a specific circumstance” (403). While Oliver’s
critique appears to be valid along a traditional reading of affordances, it is just this kind
of speculative configuration that has been noted in relation to audiences’ engagement
with works of ambient literature (Marcinkowski, 2019).
Instead of partitioning affordances into two regions, those that are cultural affordances
and those that are not, what is given here is a more general consideration of
affordances. This comes as the affordance of the possibility of affordances, one that
houses both material affordances and their relative potential, as well as a
phenomenological type of affordance as it has been heretofore known. While this kind
of speculative affordance is undergirded by the theory of assemblages and their
capacities, as given by DeLanda, in its application to questions of digital media it retains
Gibson’s utility in the analysis of interactions. In this, it merges two distinct aspects of
experience: first, the double articulation of a system of coding as it is read along a
realist trajectory of the existence of physical systems; second, the relational rendering
of the surfaces which afford this kind of articulation to take place.
The mechanics of a work of ambient literature display this in a double way. First, and
most evidently, this comes in the way that the text of the work is supported by the
environment. In Breathe, the reading of the text is linked to the situation of its reading.
Second, this interpretation of the code of the text is possible only because of the
networked affordances of the platform that deliver the conditional text. That is, the
physical affordances linking technological systems (as will be
discussed in the next section below) set up the conditions for the interpretive reading
of the text.
As concepts, Gibson’s affordances and DeLanda’s account of the capacities of
assemblages are unique, but in their consonance, they begin to develop a picture of the
material interactions that contribute to human engagements with technological and
cultural systems. The basic premise of their concomitance here is to put forward a view
of material affordances or the capacities of assemblages that is able to account for
systems of values and meaning. This is given not as a separate layer or special type of
affordance or capacity, but as part of a continuum occurring across a flattened
ontological space. Consonant with affordances, capacities describe the possible
conjunction of surfaces, whether these surfaces are such as those concerning an animal
in its environment or any other type of assemblage. At bottom, DeLanda’s account of
the capacities of assemblages opens the way for a thinking of a sense of affordances in
which there is a more general system. Such a system provides for both people and the
physical systems with which they interact, from culture to the materials of
communication themselves.
Affording the assembled interface
In merging these veins of thought toward a rethinking of the idea of affordances, it becomes possible to think of affordances beyond just human or animal engagement with the world, and to include language and culture
alongside the material of their enaction. In examining digital media, this importantly comes to include the functioning of ICT itself. That is, the
smartphone not only affords human grasping, being of a shape and size such that it can be held in the hand, but also affords layers of network protocols, stacks of code,
and APIs. Our telecommunication networks present vast assemblages. In these, each
component is not determined by its interior relations, but instead by what it makes
possible within the broader network. The capacities of each of the assemblages provide
a coded and territorializing linkage within the network. That is, each segment of the
ICT network presents an affordance to the other, a configuration that is both material
and coded, allowing for further territorialization and growth of the assemblage.
This suggests a connection between the idea of the interface of a digital system offering
affordances to users, as is commonly discussed, and a more remote or hidden set of
interfaces, as discussed by Christian Ulrik Andersen and Søren Bro Pold (2018) with
their concept of the “metainterface.” With this, they set up an aesthetic and analytic
argument for the consideration of the interface in digital cultural works that extends
beyond the immediate interface and begins to look at the layers of interfaces that exist
invisibly within systems. The interface of a system, the traditional moment in which
the affordances of a system are made evident to users, is pushed back to
include the platform itself. For Andersen and Pold, this becomes worthy of
consideration as these systems play a central part in contemporary society.
For them, the hidden layers of the interface have an aesthetic proposition of their own,
both because these layers of interfaces display a specific kind of aesthetic, and because
they are pervasive and enveloping. Here, as an accounting of the affordances of a
digital system is expanded to include physical capacities and cultural codings, this
thinking of the metainterface provides some guiding clarity toward the political
ramifications of our present systems. As the conception of Gibson’s affordances is reoriented toward being thought of in terms of assemblages, what does it mean for
thinking about the possibilities of systematic control?
The idea of the metainterface contributes a view of the entire digital system as representing the system’s capacities or concordances, and highlights the
cultural force that the hidden backend of digital systems can exert upon the experience
of the user. This understanding of how the metainterface itself becomes a surface for
interaction will be critical for thinking about how Pullinger’s Breathe can be
conceptualized under this re-framed conception of the affordance.
Material capacities of networked affordances
An initial consideration of the affordances of Breathe is straightforward enough. As a
smartphone-based webapp, Breathe relies on an interface of affordances that initially
coincides with the uses of a smartphone. Readers touch the screen, swipe, and engage
with the work almost exactly as they have learned to interact with an ebook on a
smartphone: they read each page, sliding their finger across it from right to left to flip
over to the next page when they are done with the first. In this, it relies on an interactive
paradigm which is easy to describe in the kinds of terms that Norman might use to
describe the perceived affordances of an interaction which builds on the groundwork
of possibility laid by “real” affordances. By having a cognitive model of the way that
books and ebooks work, readers are able to easily engage with the piece. Beyond this
sense of perceived affordances, the size, texture, and proximity of the glass screen of
the phone affords a surface for touching.
From this, the work begins to engage the cognitive model of the reader directly in the
narrative itself. As the reader is introduced to the supernatural elements of the story,
their normal cognitive models for interaction begin to falter: instead of being able to
easily swipe from page to page, the path of their finger across the screen leaves a black
trail, leaving them to puzzle over how to move on. Depending on how they are holding
their phone, pages become obscured by opaque shadows revealing new text; texts
become automatically covered over by new text; instead of pages swiping away, the
text runs backwards. In this, the fact of the affordance, both real and perceived, comes
to play an expressive role in the work.
In this, affordances – like capacities – are used in an expressive way. They become
intermingled with the text which is itself dependent on the possibility of gathering
situational information from the phone’s sensors and networked connections. In this,
the experience of the work is not defined by the totality of the system itself, but by the
relations of exteriority of the components which, through their own systems of
capacities, come together to form the assemblage of the work. The narrative, the
smartphone, the web browser, the network stack, the cellular network, global systems
of geolocation, the vast databases of local information compiled by Google, networks
of weather data – all of these exist in an independent fashion while being “held
together” by the work. As a work, Breathe represents a global assemblage combining
the material affordances of machine code, human language, and all the various
modalities of communication that allow them to be brought together. In this, at each
turn, the capacities of these various material assemblages afford the possibility of the
work, and vice versa. The touch screen, cellular networks, databases, GPS satellites,
mobile processors, smartphone software, and the socio-technical system of information gathering utilized by Google come into conjunction with other assemblages of the
neighborhood streets, shop names, weather systems, seasons, and, importantly, the
reader’s knowledge and recognition of these conditions. As in DeLanda’s account of
the capacities at play in assemblages, all of these various components have both
material and expressive roles which are not easily disentangled. As Breathe makes
conscious use of the affordances of digital technology as a tool for narrative
development, it is easy to recognize a reconfigured account of affordances at work.
The expressive capacities at work have functional implications and vice versa.
With Breathe, it is possible to trace out the assemblages at work up and down the
networked stack of affordances and the linguistic text of the work. It displays codings
which range from the wholly programmatic computer code that supports the work all
the way through the fuzzy algorithms employed for location detection to the human
readable language of the text itself. At each of these junctions, there are various
capacities at work that allow for the interaction of assemblages, from the backend
systems of the work which know how to appropriately call APIs to the moments in
which literate readers are presented with an intelligible text. As such, it is necessary to
attend to two levels simultaneously: the first comes with a consideration of the
mechanisms of ICT by which symbolic interactions enact the worldwide networks
making up the technical function of the work itself; the second comes at the moment
of the interaction with the reader, as they, situated within the capacities of these
networks, are offered opportunities for some recognition of the material capacities of
the setting that they are in.
As the reader engages with the work, they partake in processes of territorialization and
deterritorialization as they alternately coalesce and scatter the assemblage of the work
through their reading. On the one hand, their reading establishes a homogeneity of the
component parts of the work, bringing them together under the banner of a single
work. On the other, their specific reading under their own specific conditions leads to
an ultimately heterogeneous identity of the work.
In this, the meanings of the work are the result of the physical affordances and
capacities of their environment. In the sense-based experience of the work, the
surfaces with which readers engage are not just those that are immediate to them, but
those that they can also sense from a distance, without seeing them directly. The
networked capacities provide meaning in only this speculative way. The affordances
of the work are not simply divided between physical or perceived affordances. Instead,
they may be traced along the various material networks of affordances that make both
the technology and the meaning of the work possible. As a piece of digital media,
Breathe works across these networks of material affordances as it engages the
movement and comportment of the reader toward their phone. At the same time,
these material networks of assemblages also provide the material codes for the language and computer code that make the networked actions possible.
The cultural affordance of speculation
From all that has been said, it is clear that Breathe functions as an assemblage, and not merely at the theoretical and ontological level established by DeLanda.
It relies on readers, the text, smartphones, the situation of the reader, their
geographical surroundings, sensor networks, remote databases, global information
networks, and the various individual and shared histories that make these parts fit
together. Across all of these are networks of capacities that serve to interlink and
provide these separate components with the identity of the work called Breathe. But
how does this appear to the reader? If we are to follow along the trajectory of the use
of the term “affordance” from Gibson to Norman and beyond, continuing to think of
the interface and the question of human-centered design, what should we think of the
moment when this interlinked assemblage of capacities comes to matter to the user?
For readers of Breathe, what comes to be the common locus for their engagement with
the work is a persistent concern with what the work affords, and how it affords it.
What gives the work its force, as a contemporary ghost story about lost mothers and
the refugee crisis in Europe, is that readers are left adrift wondering how – and along
what kinds of affordances – the work might play out. This sentiment was on display
in interviews with readers of Breathe (Marcinkowski, 2019).
This unsteadiness in the possibilities of the work is something that has already been
discussed in terms of the shifting interaction paradigms. As the conventions of
interactions with ebooks are consciously undermined by the work, readers are left
without a stable footing at the level of their perceived affordances. More importantly,
however, it is through the speculative capacities at work in the networked assemblage
of the work that the unique formulation of affordances described here can be seen
most strongly.
For readers of Breathe, what came to matter was not just what happened in the work
itself, but what might be happening in the work. As readers’ situations (locations, local weather, time of day, etc.) come to be incorporated within the piece, readers are left not knowing the exact mechanisms by which these variations within the text are introduced. Some readers questioned whether the entirety of the text was unique to their experience. Through the various affordances at play in the work –
some of which might be completely obfuscated from the reader’s view – readers could
be left wondering if the application was tracking their movement for days before
reading. Through algorithmic sleights of hand, the application is able to use a sliver of
geolocational data to spin out a web of implications for the reader.
With this, readers’ expectations for what the piece had to offer not only leaned in to
supply what amounted to folk theories regarding the backend function of the
application – at its most suspicious, readers thought of Google’s involvement with the
piece and suspected persistent tracking of their movements – but also led readers to
wonder what else might be possible. In coming to recognize the global assemblage at
work, readers engage not just with the affordances of the interface present before them
on the screen, but with the interface of the global system of technology as it is writ
small into a short story-sized narrative. For Breathe, the key capacity of the assemblage
comes to be its link to the larger and longer history and cultural paradigm of
technology and the idea of technological progress.
What this points to is the sense that readers’ engagement with the work is driven by a
sense that they are engaged not just with the work as it is, but as it could come to be:
in tapping into global networks of ICT and making these material pathways germane
to the specific conditions of a meaningful narrative, the edges of the affordances as
they are immediately present blur with what lies just beyond their reach. As this is
linked to readers’ understandings and expectations of contemporary technology, the
affordances at play are acted out at the level of the metainterface. As both Gibson and DeLanda suggest, the affordance or capacity of a system is not just what is present, but also what might yet be possible while remaining unknown.
This develops a sense of affordances that, as described in the situational rendering by James Greeno (1994), are composed of attunements and constraints.
Readers are, on the one hand, attuned to what is going on, in a cultural sense. On the
other, they are also cognizant of the constraints that are placed on them in any
interactive setting. However, what Breathe and the other works of ambient literature
present is an engagement with a cultural attunement in which the very idea of any kind of constraint on technological possibility is significantly tempered. In this, new
media works like Breathe take as part of their interactive paradigm the idea that readers’
cognitive models allow the door to speculation to be left open. The capacities of these
works, by virtue of the diffuse assemblage which makes them possible, become
ambient, distributed across the assemblage of the work.
What is possible
The central provocation put forward here is a wider and more fully material and realist account of the idea of affordances, even as it points toward an ambient sense of affordances linked to the possibilities of an assemblage.
Instead of becoming bogged down with a worry over the difference between physical
affordances and those that might be considered to be culturally rendered, what is
proposed is a fully material rendering of the entire spectrum along a flattened ontology
in which the facts of human culture are no different than the facts of the curve of a
jug handle. Each presents real systems that offer some capacity to be drawn into larger
assemblages. The complexities of the human hand as it is afforded the opportunity to
grasp are not so different from the complexities of the human mind as it recognizes
not just affordances across an ICT network, but also language. In this, the idea of the perceptual affordance is not so distinct from a rendering of physical symbol systems; affordances, when considered as capacities, can be thought of as existing between inanimate things, as in the case of computer codes and electrical switches.
For a work like Breathe, this re-formulated approach to affordances in which the
capacity for meaning is set equal to the capacity for physical interaction opens a way
toward understanding audiences’ engagement with the work in new ways. For works
of interactive media, affordances come to refer not only to those aspects that are
immediately present in the interface, but also to those aspects that are only
speculatively encountered or expected by audiences. These networked capacities for
the elicitation of meaning raise questions in regard to how to think about chains of
affordances in media and how the context of the affordance becomes part of the
affordance itself.
Affordances are simple things that, by their attendant capacities, are able to create
complex webs of potential interactions. By re-framing affordances in this way, future
links can be drawn between the classical understanding of affordances as they exist at
the moment of interaction and the wider socio-technical setting in which our systems
today function.
Acknowledgements
Thanks to Jon Dovey, Kate Pullinger, Amy Spencer, Matt Hayler, and Nick Triggs, who all made contributions to this research. This work is funded by a grant from the UK’s Arts and Humanities Research Council.
References
Abowd, G. (2012) ‘What Next, Ubicomp?: Celebrating an Intellectual Disappearing
Act.’ In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, 31–40.
UbiComp 2012. https://doi.org/10.1145/2370216.2370222.
Andersen, C., & S. Pold (2018) The Metainterface: The Art of Platforms, Cities, and Clouds.
Cambridge, MA: MIT Press.
Bell, G., & P. Dourish (2007) ‘Yesterday’s Tomorrows: Notes on Ubiquitous
Computing’s Dominant Vision.’ Personal and Ubiquitous Computing 11(2): 133–43.
https://doi.org/10.1007/s00779-006-0071-x.
Chalmers, M., & A. Galani (2004) ‘Seamful Interweaving: Heterogeneity in the
Theory and Design of Interactive Systems,’ In Proceedings of the 5th Conference on
Designing Interactive Systems: Processes, Practices, Methods, and Techniques, 243–52. DIS
2004. https://doi.org/10.1145/1013115.1013149.
Costall, A. (2012) ‘Canonical Affordances in Context,’ Avant 3(2): 85–93.
Costall, A., & A. Still (1989) ‘Gibson’s Theory of Direct Perception and the Problem
of Cultural Relativism,’ Journal for the Theory of Social Behaviour 19(4): 433–41.
https://doi.org/10.1111/j.1468-5914.1989.tb00159.x.
DeLanda, M. (2006) A New Philosophy of Society: Assemblage Theory and Social Complexity.
London: Bloomsbury.
———. (2016) Assemblage Theory. Edinburgh: Edinburgh University Press.
Deleuze, G., & F. Guattari (1988) A Thousand Plateaus: Capitalism and Schizophrenia.
London: Bloomsbury.
Deleuze, G., & C. Parnet (2007) Dialogues II. New York: Columbia University Press.
Dovey, J. (2016) ‘Ambient Literature: Writing Probability,’ In Ubiquitous Computing,
Complexity and Culture, edited by U. Ekman, J. Bolter, L. Diaz, M. Søndergaard, &
M. Engberg, 141–54. New York, NY: Routledge.
Garfinkel, H. (1967) Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Gibson, J. J. (1986) The Ecological Approach to Visual Perception. Hillsdale, NJ: Lawrence
Erlbaum Associates.
Greeno, J. G. (1994) ‘Gibson’s Affordances,’ Psychological Review 101(2): 336–42.
Jenkins, H. (2008) ‘Gibson’s “Affordances”: Evolution of a Pivotal Concept,’ Journal
of Scientific Psychology 12: 34–45.
Latour, B. (2005) Reassembling the Social. Oxford, UK: Oxford University Press.
Law, J., & J. Hassard (1999) Actor Network Theory and After. Oxford, UK: Wiley-
Blackwell.
Marcinkowski, M. (2018) ‘Methodological Nearness and the Question of
Computational Literature,’ Digital Humanities Quarterly 12(2).
http://www.digitalhumanities.org/dhq/vol/12/2/000378/000378.html.
———. (2019) 'Reading Electronic Literature; Reading Readers,' Participations:
International Journal of Audience Research (Special Issue on Readers, Reading and
Digital Media) 16(1).
Marcinkowski, M., & A. Spencer (2018). ‘Ambient Literature Participant Research,’
https://doi.org/10.17870/bathspa.6286508.v1.
Nelson, R., Ed. (2013) Practice as Research in the Arts: Principles, Protocols, Pedagogies,
Resistances. Basingstoke and New York: Palgrave Macmillan.
Newell, A., & H. Simon (1976) ‘Computer Science as Empirical Inquiry: Symbols
and Search,’ Communications of the ACM 19(3): 113–26.
https://doi.org/10.1145/360018.360022.
Norman, D. (1988) The Psychology of Everyday Things. New York: Basic Books.
———. (1999) ‘Affordance, Conventions, and Design,’ Interactions 6(3): 38–43.
https://doi.org/10.1145/301153.301168.
———. (2008) ‘Affordances and Design,’ JND.org. Accessed September 6, 2018. https://jnd.org/affordances_and_design/.
———. (2013) The Design of Everyday Things: Revised and Expanded Edition. London: Hachette UK.
Oliver, M. (2005) ‘The Problem with Affordance,’ E-Learning and Digital Media 2(4):
402–13. https://doi.org/10.2304/elea.2005.2.4.402.
Philip, K., L. Irani, & P. Dourish (2010) ‘Postcolonial Computing: A Tactical Survey,’
Science, Technology & Human Values 37(1): 3–29.
https://doi.org/10.1177/0162243910389594.
Pullinger, K. (2018) Breathe. London: Visual Editions. https://breathe-story.com/.
Rogers, Y. (2006) ‘Moving on from Weiser’s Vision of Calm Computing: Engaging
Ubicomp Experiences,’ In UbiComp 2006: Proceedings of the 8th International Conference
on Ubiquitous Computing, eds. P. Dourish & A. Friday, 4206:404–21. Lecture Notes
in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg.
https://doi.org/10.1007/11853565_24.
Ronzani, D. (2009) ‘The Battle of Concepts: Ubiquitous Computing, Pervasive
Computing and Ambient Intelligence in Mass Media,’ Ubiquitous Computing and
Communication Journal 4(2): 9–19.
Siles, I. (2013) ‘Inventing Twitter: An Iterative Approach to New Media
Development,’ International Journal of Communication 7: 2105–27.
Smith, H., & R. Dean, eds. (2009) Practice-Led Research, Research-Led Practice in the
Creative Arts. Edinburgh: Edinburgh University Press.
Suchman, L. (2006) Human-Machine Reconfigurations: Plans and Situated Actions. 2nd ed.
Cambridge, UK: Cambridge University Press.
Turner, P. (2005) ‘Affordance as Context.’ Interacting with Computers 17(6): 787–800.
https://doi.org/10.1016/j.intcom.2005.04.003.
Vera, A., & H. Simon (1993) ‘Situated Action: A Symbolic Interpretation,’ Cognitive
Science 17(1): 7–48. https://doi.org/10.1207/s15516709cog1701_2.
Weiser, M. (1991) ‘The Computer for the 21st Century,’ Scientific American 265(3): 94–
104. https://doi.org/10.1038/scientificamerican0991-94.
Weiser, M., & J. Brown (1997) ‘The Coming Age of Calm Technology,’ In Beyond
Calculation, 75–85. Springer New York. https://doi.org/10.1007/978-1-4612-
0685-9_6.
Michael Marcinkowski is a learning technologist at the University of Bristol. He is
co-author of the forthcoming book Ambient Literature (Palgrave).
Email: [email protected]
Special Issue: Rethinking Affordance
Rethinking while Redoing:
Tactical Affordances of Assistive
Technologies in Photography by
the Visually Impaired
VENDELA GRUNDELL
Stockholm University, Sweden
Goldsmiths, University of London, UK
Media Theory
Vol. 3 | No. 1 | 185-214
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
This article addresses ableism in 21st century network society by analysing affordances in the practices of visually impaired photographers. The case study details how these photographers use assistive devices, tweaking affordances of both these devices and the photographic apparatus: its technical materialities, cultural conceptualizations and creative expressions. The main argument is that affordances operate in exchanges where sharing differences is key; visually impaired photographers make differences sharable through images, revealing vulnerabilities that emerge within a socio-digital condition that affects users across a spectrum of abilities. The argument unfolds through a rare combination of affordance theory about imaginative and diverse human-technology relations, media theory about technological dependence and disruption, disability studies on normativity and variation, and art historical readings informed by semiotics and phenomenology. The article contributes to cross-disciplinary research by demonstrating that affordances can be tactical, intervening in pervasive socio-digital systems that limit who counts as a normal user.
Keywords
Affordance, tactics, assistive technology, photography, disability.
Situating affordance: Assistance in following and breaking
norms
Here I am at Advanced Prosthetics / Please, please can you / change my settings /
THIS IS NOT POETRY, they said / Be happy with what we give you / We got you
Jillian Weise in Biohack Manifesto (Davis, 2016: 520)
This article addresses ableism in the 21st century network society through an analysis
of the tactical affordances that are realized by visually impaired photographers. More
specifically, it explores how the practices of Pete Eckert, Kurt Weston, and the Seeing
with Photography Collective address prejudices levied against disability by revealing
and reconfiguring the ways in which photographic technology facilitates and enables
use. This discussion unfolds at the interdisciplinary intersection between media studies
on technological dependence and disruption (e.g. Galloway, 2004; Betancourt, 2016),
disability studies on normativity and diversity (e.g. McRuer, 2006; Ellis & Goggin,
2015), and art historical image readings using semiotics and phenomenology (e.g.
Andrews, 2011; Schneider, 2011). The additional application of affordance theory will
serve a cross-disciplinary purpose, offering insight into interactions of disability,
materiality and art in a digital context. These interactions are vital to the article’s three-
part argument. Firstly, that affordances are realized through exchanges in which the
sharing of difference is key. Secondly, that the sharing of difference reveals how users,
defined as both able and disabled, are vulnerable in current configurations of the network society. And thirdly, that the visually impaired photographers discussed within
the context of this paper provide valuable examples of this sharing by using a visual
medium to address norms about visuality; they make difference sharable through their
images.
In Biohack Manifesto, Jillian Weise poetically captures how the act of sharing differences
is a foundational yet precarious experience that unfolds through environments and
devices, many of which are shaped by mainstream definitions of normality. Like Weise,
visually impaired photographers may need assistance to make art and live life. Yet, they
debunk any default notion of need when they develop individual responses to generic
assistive devices. Weise’s use of personal pronouns – I, you, we – turns the subject
position into a mode of embodying possibilities (Butler, 1988: 521; Iversen, 2007: 91).
As her poetic hacking extends from body to society, the poem connects possibilities
embodied in users with possibilities embodied in the devices that they use. Mainstream
normality shapes technical devices that are built to universal standards as well as
assistive ones intended to approximate them. If affordance theorist Donald Norman
is reassuring in his notion that assistive devices keep errors from repeating (2013: 216),
Weise repositions the error such that it is seen to alert users to settings that shape their
agency. Correspondingly, through grounded examinations of contemporary
photographic practices undertaken by people living with visual impairment, this article
aims to show how their resulting photographs alert users across a diverse and dynamic
spectrum of abilities. From this perspective, the capacity of these photographs to alert
users to the settings that shape agency may develop into a particular kind of affordance.
In an effort to support this aim, the analysis revisits both classic definitions of
affordance associated with 1970s ecological psychology, in which the “affordances of
the environment are what it offers the animal […] for good or ill” (Gibson, 2015: 119),
and 1980s design, where “affordance refers to the perceived and actual properties of the
thing […] that determine just how the thing could possibly be used” (Norman, 2013:
9). Much of the recent emphasis on imagination and variation in communication and sociology research is grounded within these definitions, while also developing them
further. A process-oriented and socio-technical focus on imagined affordances, for
example, “incorporates the material, the mediated, and the emotional aspects of
human–technology interaction” (Nagy & Neff, 2015: 2) in an effort to free affordances
from direct experience by stressing its inherently mediated character. A focus on mechanisms and conditions, by contrast, pinpoints “how artifacts request, demand, allow, encourage, discourage, and refuse” and how the user, in turn, perceives function, their physical
and cognitive ability to use the artifact, and the cultural and institutional validation of
this use (Davis & Chouinard, 2016: 2, 5). Reflecting the theoretical approaches and
frameworks developed within these texts, the following study homes in on
relationality, variability, and dynamism in the distinction between affordances, features
and outcomes (Evans et al., 2016).
The article applies this understanding of affordance in order to investigate the capacity
of the selected photographs to alert users to the settings that shape agency and the
ways in which this capacity may develop into a particular kind of affordance: a tactical
one. A tactical affordance is a possibility for intervention into a limiting system (de
Certeau, 1984: xviii-xxiv, 29-39, 68-72). Tactics become crucial in a network society
where users engage with tools and environments in increasingly digital systems that
situate sighted users as the norm (Castells, 1996-1998; 2013; Garcia, 2013: n.p.;
McRuer, 2018: 90). Tactical affordances recognize and expand how law and policy
define assistive technology, enabling individuals with disabilities to engage more fully
in valued activities (e.g. AGE-WELL, 2017: 8). Across today’s networked platforms,
images often serve to promote and provoke a mainstream stance. By contrast, the
Flickr group, Blind Photographers, subverts sighted ideals by claiming that everybody
needs assistive technologies (Ellcessor, 2016: 81-83). Furthermore, the affiliated
photographers engage in valued activities by using devices whose protocols favour
sighted users as well as devices defined as being of assistance only to persons with
disabilities. They thereby challenge narrow definitions of both ability and disability as
they create images for an audience differently sighted than them – perhaps for
everybody.
The photographers featured here – Eckert, Weston, and the Seeing with Photography
Collective – have spearheaded the Blind Photography movement over the last fifteen
years, participating in public statements such as the first major museum exhibition,
Sight Unseen: International Photography by Blind Artists (touring worldwide since 2009), and
the publication of the Collective’s iconic book, Shooting Blind: Photography by the Visually
Impaired (2002). These achievements signal a momentous shift in how the work of
impaired photographers is understood; it is gaining increased acceptance as art rather
than being seen primarily as therapeutic disability art. The move between margins and
mainstreams helps to provide context for this article’s argument as it captures how
disability and photography connect as a discursive formation in which images reflect,
perpetuate and generate discourse (McRuer, 2006: 6, 20-21; Siebers, 2008: 30;
Foucault, 2010: 38, 74, 116). The featured images capture and render explicit the
discursive formations that situate them while also expressing critique. They point to a
technologically driven society, especially a digital one that is so markedly visual and
geared for augmentation that it becomes ableist, i.e., prejudiced against disability
(Siebers, 2008: 7-9; Norman, 2013: 42-43, 283-286; OED). Pervasive yet unperceivable
computational structures characterize this “socio-digital” condition, where inaccessibility
to data is akin to disability – shaping the user with “fits and starts, accommodations
and innovations, learned skills and puzzling interfaces” (Ellis & Goggin, 2015:
39; Ellcessor, 2016: 9, also 63-65, 74-75, 187).
To show how tactical affordances evolve in socio-digital conditions, this analysis
evokes the “unruly body” as a position from which to address ableist
conceptualizations of normality by detailing its “ragged edges” (Siebers, 2008: 65, also
67; McRuer, 2006: 6-10, 31; 2018: 20-23; Davis, 2016: 1-3). This position links three
means of disrupting normality: to queer, to crip, and to glitch. From this perspective,
the glitching of technical protocols resembles the cripping of ableist restraints, which
evolved from the queering of social scripts that control markers of identity (Butler,
1988: 525-526; McRuer, 2006: 19; 2018: 20-24; Siebers, 2008: 55; Norman, 2013: 128-
129; Ellis & Goggin, 2015: 116-117; Hirschmann & Smith, 2016: 273-274). These
disruptions become tactics as they affect systems that require a certain kind of body to
pass as normal. Both able and unruly users embody sighted norms that are embedded
in technologies and that afford vision – such as the photographic apparatus.
Photography facilitates unruliness when observers begin to question their means of
observation (Iversen, 2007: 91-94; Schneider, 2011: 138-144). The photographers
discussed here use their visual impairment to question visuality: a multisensory mix of
sight, seeing, visibility, and visualization that points to the ties between embodied
experience and social power.
Like Weise’s poem, the photographers address normality by sharing their differences
in the media landscape, one of the avenues through which disability is defined,
governed, and encountered (Ellis & Goggin, 2015: 20, 113-117; Ellcessor, 2016: 4; Kleege,
2016: 448). As art is vital to this landscape, the analogy between unruly bodies and
unruly images connects this study to art historical traditions – like Dada and Surrealism
– concerned with how breaking aesthetic norms through errors sparks critical
reflection. The analysis shows how technical and sensory errors reveal norms, yet avoids
tropes like automatically linking errors in bodies and images or assuming that errors
are always critical. The theme of disability and technology thus brings the socio-digital
condition to bear on art’s capacity to test limits. Art offers insight into societal changes
by revealing conditions that stay hidden within everyday routine (Noë, 2015: 15-17,
145, 166-167).
The article enters into dialogue with both artists and scholars, offering close qualitative
interpretations that enrich the understanding of how affordances work in practice
(Flyvbjerg, 2011). It does so by detailing how acts of sharing differences matter for the
operation of affordances, grounded in empirical examples of photography to which
we now turn.
Operating affordance: Visually impaired photographers at
work
Where I’m going is so different that I have to have a plan […] I visualize and then I adapt. I assume it
will be about three-quarters the way I planned, and a quarter what happens.
Pete Eckert in Sight Unseen: International Photography by Blind Artists (McCulloh, 2009: 28)
The following section will provide an analysis of three illustrative case studies in an
effort to chart how visually impaired photographers activate affordances that enable
and articulate both themselves, as users, and the devices that they use. As Pete Eckert
captures in the preceding quote, this interaction reveals how a dynamic between
chance and control supports a reconceptualization of the technological apparatus.
Pete Eckert
Pete Eckert calls himself a visual person, turning to photography after becoming legally
blind several decades ago (ibid: 2-3, 28). Avoiding digital cameras as they do not “click
into place,” Eckert uses “all the tools of blindness to build photos” including a dog
and cane; a talking computer and timer; an iPhone; a Braille camera and light meter;
and various windup gadgets (2018, email). These tools serve both tactile and auditory
purposes – and Eckert ensures the “click into place” by carving steps in the focus rail
with a jewelry file (ibid). Using these tools, Eckert constructs scenes with homemade
props and friends as actors; he composes a “one shot cinema” capable of conveying
open-ended narratives (McCulloh, 2009: 28). A filmic mode evolves in the darkened
space illuminated with lasers, flashlights, lighters, candles, and gunpowder before the
open shutter of a large-format, composite body view camera. To him, as to other
impaired photographers, the camera is an assistive device for seeing beyond the visual
(ibid: 2-3, 28).
Fig. 1: Pete Eckert: Bone Light No. 94119-10 (2016). Used with permission by the artist.
The Bone Light series (Fig. 1) represents a biofeedback loop that emerged as Eckert
worked to rewire his visual cortex; he sought to counter vision loss through the
triangulation of touch, echolocation, and memory: “In the world I depict I can see,
albeit via my other senses [---] I can see light coming from my skeletal structure” (2018,
email, web). In image No. 94119-10, Eckert models light and dark to visualize the
biofeedback loop with elements that signify mixed emotions. Outstretched fingers
signal both caution and curiosity together with the feet planted steady on the floor.
Eyes peek through the dissolving head with human fortitude. Hemlines of shirt and
trousers add familiar contours to the distorted body. The mixed effect comes about
through Eckert’s bodily investment in visualizing his environment, honed with a
degree in sculpture that extends to photography as he sculpts the materials of his
tableaux with tactile movements. These movements blend and sharpen the contrasts
that form the basis of vision. His response to visual impairment dethrones seeing as
the best route to visualization: “maybe especially with no input, the brain keeps
creating images” (ibid: 3, 28).
According to Douglas McCulloh, curator of Sight Unseen, practice and condition are
collapsed in the series: “[t]he roving light is an uncanny substitute for the artist’s
missing sight” (ibid: 28). Here, disability comes across as an advantage, as Eckert’s
deteriorating physical sight has given way to a form of inner vision (ibid: 2-7, 28). The
photographers in this case study offer nuance to this binary understanding of
inner and outer vision as opposites, as the concluding section
will clarify. Eckert’s effort to visualize “a nonvisible wavelength” is one example (ibid,
also 42). His first photographic experiments in response to losing his sight were to shoot
at night with a small, fast camera that allowed for easy movement. To venture out like
this became a way for him to reclaim an altered experience of personal space while
also expanding his physical range in an environment that was no longer visually
accessible to him. While later works such as Bone Light appear more staged, his interaction
with the environment still reveals a deep interest in photographing the nonvisible. This
reclaiming seems like a feature or an outcome of using the camera, rather than a typical
affordance. However, the camera affords an engagement that is not only visual, but
also haptic and kinetic as it connects visual and tactile aspects of experience with bodily
movement. By harnessing and implementing the affordances of the camera, Eckert
was able to add sensory data rather than reducing it, emphasizing a visceral corporeality
rather than a more cerebral inner vision. This activity would enable his later
explorations, bringing about new possibilities for action.
The Seeing with Photography Collective
Although sighted, Mark Andres initiated the Seeing with Photography Collective in
1980s New York in an effort to develop photography as a mental and physical process
while confronting issues around disability (Hoagland, 2002: 19). The group, which he
calls an “ensemble,” undertakes collective experiments in an effort to re-evaluate the
perceived intersection between photography and vision (ibid, 2002: 19-20). A key
example of this re-evaluation is that the collaborations include photographers that
range from fully sighted to fully blind, and from amateur to professional. By creating
a space where individuals can share a wide spectrum of visual abilities, the Collective
counteracts an ableist notion that photography is only for the fully sighted. In Portrait
in Paper (Fig. 2), for instance, Andres assisted Sonia Soberats, who had no professional
background in photography when she joined the Collective, to use photography as a
means of processing the experience of going blind after losing her family.
The collaborations involve articulating ideas, setting scenes, posing people, pointing
cameras, directing flashlights, and focusing the enlarger to make a print that carries the
bright distorted layers characteristic of chronophotography (ibid: 19-20). Photography
comes across as multi-sensory, as the collaborators use their voices and bodies to gauge
the sizes and scales of sitters and scenes. The image renders these relations as a process
unfolding between individuals, objects and environments rather than as the frozen
framed instant often associated with photography: “Nobody sees the whole image
until the Polaroid is opened” (ibid: 19, also 21). The quote signals inclusion as it points
out that nobody, regardless of visual ability, has complete control over the
photographic process and its resulting image. Furthermore, this lack is a source of
creativity for all photographers rather than an obstacle to creativity for photographers
with a visual impairment.
Yet, the narrative about the Collective in Shooting Blind sometimes emphasizes
obstacles. Disability seems overpowering in portraits presented as “plaintive bones”
that show the “strain and resignation” of a “pared and harrowed” life (ibid: cover, 5,
Such wording dramatizes disability in a similar way as Sight Unseen does with regard
to the work of Eckert (discussed above) and the work of Weston (discussed below).
However, the interpretations put forth in these publications also convey a more
enriching complexity that corresponds with the interpretation in this analysis:
“Stamina, tension, imprisonment, humour, and hallucination are frequent themes, yet
the element of mourning is often playful, and the collective enterprise is more than
therapy” (ibid: 5, also 6, 21). This complexity is evident in the image by Andres and
Soberats (Fig. 2). The sitter’s face appears through thin sheets of wet paper, modulated
by the rapid swirls of the moving flashlight during an exposure long enough to capture
movements between profile and frontal view. The aesthetic renders the body’s
boundaries unfinished and vibrant, as if in an emergent state in which the eyes are
about to form a gaze that meets the viewer from within their deeply shaded sockets.
With and without its accompanying disability narrative, the image conveys both the
tension and the play noted above. In this analysis, the image conveys the emergent
state of all bodies – thus exemplifying a state in which we share differences and make
differences sharable.
Fig. 2: Sonia Soberats and Mark Andres: Portrait in Paper (2009). Used with permission by SWP.
While a sighted photographer, like Andres, may handle the flashlight during the image-
making process, it gains an assistive quality through Soberats’ use as it further enables
her to be active in the creation of the image. The flashlight in this case affords both a
controlling of light that is prevalent in mainstream sighted photography while also
facilitating the aestheticization and inclusion of alternative perspectives, namely the
haptic and embodied perspectives of blind and visually impaired photographers. The
resulting image in this case captures and collapses the diverse bodily and
spatio-temporal dynamics of a collaboration that includes variously sighted participants.
These dynamics are readable in the image as traces of light, aligning the Collective with
mainstream traditions while providing alternatives to ableism: “It is very different from
a normal photographic method where you see what you are going to take” (Andres in
Hoagland, 2002: 19). Andres’s statement confirms that these photographers move
between mainstream and margin, sharing characteristics with both common and
uncommon photographic practices. This analysis confirms that their in-between
position facilitates the re-evaluation of the perceived intersection of photography and
vision that Andres seeks, by inviting viewers with diverse abilities to reflect on what
counts as normal both within and beyond photography.
Kurt Weston
Kurt Weston stresses that blindness is a common yet contested part of being human
(Grundell, 2018). Weston’s practice changed from fashion to art photography after he
lost his sight in the mid-1990s because of complications associated with HIV/AIDS.
He describes being gay, ill, and blind as “a journey into otherness” that is stigmatizing,
but that also calls attention to the fact that “we are all headed toward decay and
disability” (Weston in McCulloh, 2009: 100, also 2-3). Despite identifying the universality
of this experience, he engages critically with the term ‘disability’. Assistive devices
enable his life and work: magnifying loupes, monoculars, handheld LED-lights to
illuminate camera controls, glasses for low vision optometry, and large monitors with
enlarging software. Not only does Weston explicitly advance the claim that everybody
needs assistive technology, identifying another universality, but these tools also figure
into his art, revealing affordances that help and hinder his engagement with the image
(ibid).
Weston’s engagement with disability revolves around levelling his own impairment
with those of others, creating viewer positions that share his situation (2018, email).
He creates these positions through both his images and their display. One example of
this is the video installation Paper Doll, which forms part of the series Visual Assist that
explores assistive devices as both blessing and curse (ibid). The video shows a person
using an assistive device to see a doll moving to a recording. The audience mirrors the
situation, forced to peer through holes in a partition. These positions – doll, user,
audience, artist – bring the viewer of the artwork closer to the viewer in and behind
the artwork, sharing diverse and challenging views. A similar theme and a similar effect
characterize Outside Looking In (Fig. 3) from the series Blind Vision (2000 – ongoing).
This series comprises a collection of self-portraits produced with the use of a scanner
– an imaging technology that Weston began incorporating into his practice after
experiencing sight loss. While the display of this series does not involve the viewer
spatially and physically as in Paper Doll, it does exemplify how the image invites the
viewer to share the photographer’s situation through aesthetic means.
Fig. 3: Kurt Weston: Outside Looking In (2015). Used with permission by the artist.
In order to create the images in Blind Vision, Weston presses his body against the
scanner glass and is illuminated by light coming from inside the machine rather than
from an external source, as is usually the case in photography. As Outside Looking In
(2015) illustrates, the process results in a shallow depth-of-field, rendering the scanned
objects through sharp contrasts that take on semiotic importance. Minute details of
skin are articulated yet blurred as the tips of the nose and fingers touch the glass. Face
and hand fill the visual space with a human presence destabilized by the flat expanses
where the scanner has failed to register, challenging the representation of a unified
body. Glasses and camera visually mirror each other’s lenses, underlining their assistive
quality yet also becoming dysfunctional as they exclude the human user: the glasses are
opaque and placed rather than worn, and the grip on the camera only permits one to
“shoot blind.”
This analysis of the interaction between visual elements suggests that Weston’s work,
like the work of Eckert and the Collective, engages with disability discourse and
beyond. For instance, the images’ emphasis on visual apparatuses calls attention to the
coinciding terms of vision and visual impairment in a manner that persists regardless
of whether or not the viewer knows about the photographer’s condition. The image
points out that visual apparatuses integrate human and nonhuman eyes in both
enabling and disabling ways, exemplified by the glasses placed over the eyes yet
blocking the view. Like the earlier examples, Weston thus conveys the body in a way
that invites reflection on what a normal body is or what it could be. This happens in
part through his creative negotiation of what counts as a normal performance of both
photographers and their devices – for instance, what you can and should do with a
scanner depending on how you perceive its affordances. In his self-portraits, Weston
expresses himself as “an abnormal, anti-conventional, and culturally marginalized
body” (ibid.). This statement addresses ableist notions that limit definitions of
normality and yet it does so in a way that underlines the important role that shared
spaces play in linking experiences across and beyond abilities. By drawing on
photography as well as medical visual culture – the Blind Vision series combines optical
devices with syringes or, as in Outside Looking In, echoes the aesthetic of a botched
medical scan – he points out affinities between technologies that manage and mediate
shared instances of vulnerability. In this vein, his work demonstrates how these
imaging technologies can counteract vulnerability by assisting both disabled bodies and
the idealized abled body, while also facilitating an interrogation of discourses that
define the terms of vulnerability, assistance and normality. In doing so, they open up
a space for viewers with varying abilities to share their experiences.
Eckert, Weston, and the Seeing with Photography Collective: Diverse
responses to disabling experiences
This section brings out connections between the three cases as they have unfolded in
the discussion of individual practices and particular works. The connections link the
work of these specific artists to more general questions about disability and user
agency, discussed further in the following sections.
Eckert carves steps in the focus rail, Weston pushes his face against the scanner bed
and Soberats puts wet paper on her sitter’s face. Their hands-on and head-on
approaches to photography may be practices developed in response to disability yet,
beyond any specifically disabled positions, they may reflect the ways that all users
necessarily “gesture and dance to interact with […] devices” (Norman, 2013: 283).
These photographers incorporate the so-called ‘tools of blindness’ into their
photographic practices, the affordances of which are intended to neutralize disability
by enabling the approximation of normal sight. At the same time, the photographers’
need for assistance also calls attention to disability, occasioning an opportunity to
address the terms and limits of normality.
Eckert, Soberats and Weston all incorporate devices designed for disabled individuals
into the photographic apparatus, while simultaneously identifying the assistive qualities
of devices designed for able-bodied users. They thereby expand both the possibilities
of visualizing their environment and the functions of their devices. These devices assist
the visually impaired in managing light and optics in both normative and experimental
ways. Management of light and optics is fundamental to photography while also
connecting the medium to the 19th century Impressionist practice of painting-with-
light. Within the Blind Photography movement, references to such culturally validated
experiments in visual perception recur in descriptions of the sensory particularities of
photographs and photographers as well as in claims to a historical link with canonized
avant-gardes; both of these tendencies are seen to add legitimacy to works emerging
from the movement (Hoagland, 2002: cover, 5-6, 8; Eckert, 2018, email).
While this connection plays an important role in grounding the work of photographers
who live with disability, it may result in reductive interpretations of their work as
disability art, or of themselves not only as crips but as supercrips. A supercrip not
only reclaims the pejorative label cripple by identifying as a crip but turns cripness into
a superpower. This figure is ascribed a unique expertise in a struggle for normality that
involves everybody crippled by injury, illness or age (McRuer, 2006: 30, 35-37, 2018:
13, 19-22; Siebers, 2008: 63, 68; Ellis & Goggin, 2015: 114). The refiguration of artists
living with disability as supercrips appears in artistic and institutional framings of
visually impaired photographers; this is apparent in McCulloh’s emphasis on inner
vision and Eckert’s command of his visual cortex. This is perhaps unsurprising as the
artistic avant-garde is often construed as a social position with augmentative tendencies
in both ableist and disability discourses. This being said, while a blind person may have
the advantages that blindness affords, such as potentially moving with greater
confidence in the dark, it is risky to frame disability as either an augmentative advantage
or disadvantage. An emphasis on advantage can be essentializing as it often treats
advantage as an essential quality of a particular disability; from this perspective,
advantage is construed as a potential (though perhaps unrealized) enhancement
regardless of the unique reality of individual experience and actions. Advantage should
instead be recognized as a matter of practice – ongoing labour – rather than being
bound up with a conceptualization of identity as “a publicly regulated and sanctioned
form of essence fabrication” (Butler, 1988: 528). The discursively encouraged identity
of the supercrip recalls the societal support needed to validate particular perceptions
and dexterities (Davis & Chouinard, 2016: 4-6). However, this analysis shows that the
images reveal a more complex position than any simplified dichotomy between ability
and disability: they question all kinds of settings as well as their accompanying labels.
The interplay of light and dark serves as more than a metaphor for the presence or
absence of sight, as the blurs and edges that articulate the bodies in these images also
connote diverse responses to multifaceted disabling experiences.
These observations support a reframing of narrow definitions of disability and the
assistive technologies that are intended to simplify the work of visually impaired
photographers. Instead of signifying a lack within the photographer, or turning lack
into asset for the sake of the supercrip, this analysis suggests that the images do not
passively carry disability as a marker of identity. They rather mediate an agency of
expressing experience, as they stress that asking questions about how to do disability
is more important than illustrating how to be disabled. This shift from being to doing
becomes apparent through a consideration of the dynamics of light and dark, notable
in all three examples. Their aesthetic similarities, though differently expressed,
contradict a default uniqueness assigned to inner vision. Instead, a common ground emerges
from which to engage with the discursive pressures that define us all. The analysis
affirms that these images shape such a common ground, facilitating an understanding
of difference beyond dichotomy. The visual realm thus encompasses blindness as a
part of the sensory and social relations that shape notions of visuality in its deepest
sense: sight and seeing, visualization and visibility.
Disability brings a “visual friction” that invites the impaired to develop “social hacks”
against stereotypical behaviours – a blending-in that masks impairment so that it ceases
to impair (Lehmuskallio, 2015: 100, 102). This social hack resembles Weise’s poetic
biohack as the invocation to “change my settings” expresses a desire to pass as normal
while simultaneously claiming space for disabled bodies by collapsing the experiences
inside and outside the poem: “the metaphor of walking and poetry assumes a certain
functionality that fails in reality” (Davis, 2016: 519). Both hacks expose a tension
between abled and disabled, pointing to the need for a shared space where for instance
variously sighted individuals can explore and perhaps resolve that tension. This analysis
suggests that creative practices like poetry and photography provide such a space by
drawing out and subverting stereotypes.
While narratives that chart the overcoming of disability pervade the network society,
digital augmentations seem primarily available to able-bodied users who, for example,
may not need devices to click. Though disabled users are often early adopters of new
technologies, many devices remain inaccessible because average users perceive that
adapted designs affect the average user experience – a problematic effect, negative or
not (Ellis & Goggin, 2015: 41-44). Differing experiences of access, as detailed here,
point to how the socio-digital condition regulates technologies in ways that exclude
certain users on both material and affective levels (Ellcessor, 2016: 158-164). The
material and affective dimensions of technologies and their corresponding affordances
are thus increasingly important within mediated environments (Nagy & Neff, 2015).
Building on the preceding analyses of how several visually impaired photographers
activate the photographic apparatus to produce meditations on vision, the following
sections will advance the article’s two main arguments: namely, that affordances are
realized through exchanges where the sharing of differences is key; and,
correspondingly, that visually impaired photographers make difference sharable
through images that reveal users as vulnerable across a spectrum of abilities. In an
effort to accomplish this, the next section puts these examples in dialogue with con-
ceptualizations of affordance that define which actions become possible depending on
how – and how much – we can see.
Troubling classical theories of affordance: With and against blindness
Without a good model, we operate by rote, blindly; we do operations as we were told to do them; we
can’t fully appreciate why, what effects to expect, or what to do if things go wrong.
Donald Norman in The Design of Everyday Things (Norman, 2013: 28)
[A] boundary that is unique to the observer’s particular anatomy. It is called the blind region in
physiological optics. [---] It is altered when a person puts on eyeglasses […] Thus, whenever a point of
observation is occupied, the occupier is uniquely specified…
James J. Gibson in The Ecological Approach to Visual Perception (Gibson, 2015: 197)
James J. Gibson and Donald Norman, key figures within canonical accounts of
affordance, situate blindness as both lack and excess. Whether as underperformance
or over-presence, blindness is cast as a kind of dysfunction: an obstacle to being in
the world. In doing so, they offer an entry-point through which to reflect on how
visually impaired photographers expand the concept of affordance by engaging the
presumed obstacle: their eyes.
Blindness appears in Norman’s discussion of ‘conceptual models’ as the mental maps
that enable us to predict the effects of actions performed by objects and by ourselves
(2013: 25-28, also 98-99). In this model, prediction is the basis for understanding. Since
predicting depends on recognizing visual patterns – i.e., on seeing – a bad model makes
this recognition harder. In other words, a bad model is bad because it does not attain
a fully sighted ideal. For Norman, individuals thus become dependent upon their visual
capacities and corresponding apparatuses. Considered in relation to technologies, users
may suffer not only because of conventional visual impairments, but also if their age,
height or language hinders them from recognizing the visual patterns that enable use
– all of which are obstacles to achieving an able-bodied ideal. While Norman supports
designing for diversity, in a manner that might help to overcome these barriers, he
claims that assistive devices may remain unused because they advertise infirmity or are
ugly. “Most people do not wish to advertise their infirmities […] to admit having
infirmities, even to themselves. [---] Most of these devices are ugly. They cry out
‘Disability here’” (ibid: 243-245, also 285). To advertise the wrong thing or the right thing
in the wrong – ugly – way is an expected concern in design.
Norman’s conceptual model positions disabled people as special whether they fail or
surpass a standard; this is similar to the narratives of overcoming associated with the
supercrip. This contradiction exposes the difficulty in handling specialness when
discourses that determine normality can ascribe ableist functions to both norms and
deviations (Davis, 2010; Cryle & Stephens, 2017). Specialness here draws on a
flexibility lauded in design for affording a universal inclusivity, which paradoxically shapes
a subject whose striving for normal abilities is necessary in order to fulfil societal logics
that perpetuate exclusion (McRuer, 2006: 12-13, 16-17, 41; Norman, 2013: 246-247;
Davis, 2016: 2; Ellcessor, 2016: 112-116, 158, 187-188). Flexible users adapt more
easily to universal standards than unruly users do. This process recalls how institutions
codify normality in Weise’s poem: “Insurance: You are allowed ten socks/year /
Insurance: You are not allowed to walk in oceans” (Davis, 2016: 520). An emphasis on
hiding infirmities – the opposite of advertising as a public token of social acceptance
– confers the ugliness of the mediation on a user who, like Weise, cannot avoid stating:
“Disability here.”
James J. Gibson’s ecological optics, from which the theory of affordance develops,
offers an opening towards diversity. A blind spot appears with every position:
wherever I look, I see my own nose too (Gibson, 2015: 197). My body blocks an
entirely free access to my surroundings. The environment changes in the presence of
my unique anatomy as that anatomy perceives places and movements. The body thereby specifies
the occupied position and the individual who occupies it. Since the body becomes an
excessive presence, blindness becomes an impairment. Following Gibson, this
impairment seems easily remedied with glasses despite being an inescapable part of
human embodiment. This perspective points to a wish for pure seeing similar to the
notions of inner vision earlier, and a simplified notion of assistive devices. Yet, it also
implies that all observers with noses, and bodies more generally, share a similar ex-
perience as a result of their differences and not despite them. This shared experience
is fundamental to meaningful relations between individuals and environments;
significantly, it does not exclude blindness from the exchange that shapes the terms of
relation and therefore the realization of affordances (Evans et al., 2016: 36, 46-47). Acts
of sharing, as a result, help to afford understanding between variously abled
individuals.
This discussion brings out a recurring theme of universality and difference in classical
affordance theory. This theme causes a lingering problem for the visually impaired.
The problem occurs as these theorizations posit a normative kind of visuality: seeing,
and seeing in a particular way, becomes fundamental since it shapes relational activities
like insight, attention and empathy – turning blindness into a negative metaphor
(Kleege, 2016: 440-441, 448; McRuer, 2018: 191). This limited understanding of
visuality limits the affordances of assistive devices within medical, social and cultural
models of disability if unchecked. Meanwhile, these models develop in ways that
challenge such limitation, for instance by shifting the issue of assistance. If a medical
model focuses on the individual defined as disabled, the social model focuses on which
environments produce definitions like disabled, and the cultural model combines them
with an emphasis on critical creative expression (Siebers, 2008: 3-5, 25-27, 63; Ellis &
Goggin, 2015: 21-35; Ellcessor, 2016: 3-4, 10; Hirschmann & Smith, 2016: 263-274).
Blindness and Photography in the Network Society
This analysis recognizes how non-normative users make the terms of a normative
visuality explicit, and therefore sharable, as their position as other-than-able-bodied is
well suited to demonstrate the inevitability of all human corporeality (Butler, 1988:
522-523; Siebers, 2008: 193). The featured photographers accomplish this by
confronting various models of disability through their own body. As Weston puts it,
“these images confound restrictive conventional discourses and defy oppressive norms
for bodily appearance and behaviour” (2018, email).
However, conceptualizations of blindness in classical affordance theory are premised
on and emerge from an able-bodied experience of sight. Impaired photographers
intensify this tension since their use of technologies to make art and live life recalls
that an able-bodied ideal underpins a social identity that is encouraged and even
expected but unattainable (Siebers, 2008: 15-16). Their circumstances make their
choice of photography as existential as it is pragmatic, pointing out that our activities
shape our identities. The mode of vulnerability aestheticized in their works is not en-
demic to a marginal group but affects user agency in a world defined by visually
navigated technologies. The acknowledgement of shared vulnerability supports the
notion that affordances operate in exchanges defined by the sharing of differences –
for instance, when observers begin to question their own means of observation, like
their eyes. As the case studies show, to share experiences of vulnerability through
images affords such self-reflection both in those who create them and those who view
them. We become aware of the ableist norms that make us vulnerable: less a
characteristic of our specific identity than a characteristic of the process through which
identity is continuously constructed. The remainder of the article delves into this
process to clarify how this affordance may become tactical – starting with the
integration of the social, the technical and the bodily that pervades network society.
The effects of visually oriented vulnerabilities are made particularly apparent through
photography, which has become a key feature of contemporary digital culture; the
constellation of technologies and practices that comprises photography works to attract,
interpellate, steer, track, and target users within the digital flows of the 21st century
network society (Lister, 2013; Kuc & Zylinska, 2016; Lagerkvist, 2018). The impact of
these functions raises the issue of whether vulnerability may be an affordance, a feature
or an outcome of digital technology – or perhaps all three (Evans et al., 2016: 39-41).
Over time, certain visualizations circulating through the network society may take
precedence over others as more accurate depictions of reality. Conceptualized as
diverse yet designed to neutralize disruption, the photographic apparatus prescribes a
bodily investment that pertains to all but disables some. If photography primarily
serves a user who embodies an imagined consensus on normality (Nagy & Neff, 2015:
2-7), it may also afford resistance since it calls the universality of the reality that it
depicts into question. This performative quality reveals the hidden structures that are
mediated by the apparatus (Iversen, 2007: 94, 97, 100-101; Schneider, 2011: 135, 144).
One structure revealed here is the ableism that produces disability by excluding some
bodies from participation and feeding insecurities about all bodies (Butler, 1988: 522,
528; McRuer, 2006: 20; Ellcessor, 2016: 2-3, 77; Hirschmann & Smith, 2016: 269-271).
Impaired photographers, like those discussed above, develop tactics against the norm-
ative limitations that are mediated through such structures by changing the affordances
of assistive devices: using them to question them. Their visualizations describe ability
and disability together, intervening in the systems that validate depiction. This tactic
gains ground if it makes the system visible to itself, facilitating a direct address of
hidden structures (de Certeau, 1984: xvii-xxiv, 34-39). Users can thus reposition
disability as an “othering other” that recognizes the otherness of the able body too
(Siebers, 2008: 6, also 60). The images here visualize an impairment that awaits all
bodies to some degree, someday, as nobody is able enough for long enough.
This being said, assistive technologies complicate the assumption that tactics can be
seamlessly equated with the breaking of norms. Technology conditions the statements
that it enables. For visually impaired photographers, technological assistance thus
supports the vulnerability that drives them to create images with and about impair-
ment. They may follow a norm by balancing out the disability while also breaking the
norm by exposing it in the image. The image turns the error into a tactic against
standardization, a cultural constraint resulting from a push towards universal usability
where “everyone learns the system only once” (Norman, 2013: 252, also 248). None-
theless, human erring is due to the system’s requirements overriding the requirements
of a user who is “forced to serve machines [and] punished […] for deviating from the
tightly prescribed routines” (ibid: 168).
Errors become useful when users accept that our devices and our selves are vulnerable:
systems and individuals are always already broken (McRuer, 2006: 30; 2018: 23; Siebers,
2008: 67; Hirschmann & Smith, 2016: 280). The undesignable gains value when the
system cannot fix an error and the uninterpretable causes a time-out for reflection: a
temporary suspension of dependence (Norman, 2013: 184-185, 231). The photo-
graphers’ interactions involve both the known and the unknowable. Eckert states, “I
use any light source I can understand” and then uses the light he perceives as radiating
from his bones (McCulloh, 2009: 28). As the analysis shows, the inaccessibility of a
prescribed use alerts users to their own access and affords other uses. In the process,
the recognition of patterns that are not exclusively visual challenges the primacy of
vision in the conceptual model of the world. For instance, the Collective’s use of flash-
lights reveals scratch-like patterns (Hoagland, 2002: 6) that trace kinetic and haptic
actions in a photographic space that is also a social space. The images generate know-
ledge through a “repeated corporeal project” with stylized gestures that yield
unexpected outcomes (Butler, 1988: 522, 519).
The analysis in this section shows how vulnerability characterizes users positioned by
both assistive and other technologies, and how disruptive practices reveal and reclaim
positions of vulnerability. The argument that the sharing of differences is key to the
operation of affordances, and that this exchange rests on an acknowledgement of
shared vulnerability, finds support as the photographers here make vulnerability pro-
ductive without neutralizing disruption and reinforcing normality. Rather, disruption
affords a kind of repositioning: “[i]t is only when we come across something new or
reach some impasse, some problem that disrupts the normal flow of activity, that con-
scious attention is required” (Norman, 2013: 42). The next section analyses this
repositioning of the vulnerable user – and thereby of the affordances of the devices
that they use – in further detail to bring out its tactical potential: alerting users to the
conditions of their use.
Repositioning affordance: Unsmooth operations and tactical
coalitions
All of us are the other.
Kurt Weston in Sight Unseen: International Photography by Blind Artists (McCulloh, 2009: 100)
Weston’s words signal that the other is intrinsic to a socio-digital condition. While this
sense that we might all be the other within one context or another has a universalizing
effect, within digital contexts, the other is often associated with that which falls outside
of the normalized parameters of computability, namely the disruptive error or glitch.
To harness such disruptions is an incentive in glitch art, which explores technical errors
to question a system by making it “injured, sore, and unguarded” (quoted in Galloway,
2004: 206; Kelly, 2009: 285-295; Krapp, 2011: 53-54, 67-68; Manon & Temkin, 2011:
§15, 33, 46, 55; Betancourt, 2014: 10-12, 2016; Grundell, 2016, 2018). The
photographers here share this approach to vulnerability as that which poses a con-
tingent risk to the normalized operations of technological systems. While they do not
identify as glitch artists, their concern for risks around normality connects their work
to glitch art. In this analysis, glitches do not mark a moment of failure as much as a
moment of disrupting expectations of technical operations (ibid). Both glitch art and
disability aesthetics reveal the socio-digital conditions of the medium by calling
attention to the structures and processes of mediation – and to how the technical is
always at once social and bodily. In the following, the glitch thus serves as an analytical
tool to deepen the discussion of the featured photographers.
The risk for technologically situated bodies evokes the roots of the word glitch: losing
balance in a slippery place (OED). This snagging slipperiness contrasts with smooth
operation. Smoothness rests on protocol: instructions that govern material and sym-
bolic conditions of network society (Galloway, 2004: 74-75, 122, 241-246). Protocols
shape affordances by shaping how humans and devices interact. While tactical uses
like hacking may support a particular protocol, users can also “resculpt it for people’s
real desires” (ibid: 175-176, 241-242; Garcia, 2013: n.p.). Weise satirizes how the
system feeds and denies desire: “be happy with what we give you / we got you” (Weise,
2016: 520).
Assistive devices keep us from slipping and steady us if we do: they facilitate an able-
bodied form of control that is positioned as normal (Norman, 2013: 243-248; AGE-
WELL, 2017: 8). For instance, failure causes a “taught helplessness” when things break
down (Norman, 2013: 62-63, 113). Established definitions of assistive technologies
target those deemed helpless: the ones that Weise’s system “gets”. Disability and glitch
cultures game such systems: activism through and against prescriptive mediation (Ell-
cessor, 2016: 136-137). In this analysis, a glitched body – not as an ontological essence,
but as an experience of disrupting normative systems – points to a shared glitchability.
The photographers here perform photographic protocols, using cameras and bodies
to manage light and optics. Yet, they break protocol by turning a scanner into a camera
or treating phantom sensations as a light source. They defy a standard integration of
the sensory and technological apparatuses that determine which users pass as normal
in systems where normality is key (Schneider, 2011: 137, 156, 160; McRuer, 2018: 14-
16, 22, 29, 190-191). A preferred user position emerges through an imagined consensus
about the meaning of default structures and the positioning of user bodies within them
(Nagy & Neff, 2015: 2-7; Ellcessor, 2016: 76-77). A digital designer may smooth out
Eckert’s clicks and notches if they perceive the uses, or affordances, that they enable
as negative. Disability reveals such ordinary design processes as hegemonic ableism
and, yet, individuals adapt to such cultural decisions: from eyes to fingertips to posture,
and from attention span to typing pace. Eckert modifies and replaces his devices.
These instances of adaptation are disruptive and ultimately reveal, and therefore afford,
the development of more diverse devices (Ellcessor, 2016: 76-77). While both users
and devices typically perform protocols by repeating norms, disability factors in re-
imagining them – and, in turn, calling for validation (Ellis & Goggin, 2015: 116-117;
Davis & Chouinard, 2016: 2-6; Ellcessor, 2016: 63-65). This study details a creative
attention and physical grit that empowers individuals to transform painful experiences
by sharing them (Butler, 1988: 522; Siebers, 2008: 60-61, 188-189, 193; McRuer, 2018:
24).
Tactical transformation starts with noticing the systems on which you depend. The
glitch extends beyond technology to the affective realm where haptic and episte-
mological levels of use meet: where I learn from my experience. Inclusive design that
invites disruption without isolating the disrupter as ‘too special’ avoids enforcing a
difference that only benefits the mainstream – especially design for mediation that
constitutes and corrects identities (McRuer, 2006: 12-13, 41; Siebers, 2008: 17, 30, 56,
189-190; Ellis & Goggin, 2015: 1-2, 113-115; Ellcessor, 2016: 187; Hirschmann &
Smith, 2016: 278; IDRC, 2018). Disruptions ease the burden of acting in concert and
accord (Butler, 1988: 525-526). Creating images without seeing as the manual
prescribes thus offers a non-normative way of learning. By modifying devices to ex-
plore boundaries around normality, the featured photographers set examples for
everybody who feels anxious about these boundaries. Such explorations invite an ack-
nowledgement of the brokenness that shapes processes of seeing and making, being
and becoming (Siebers, 2006: 68; Hirschmann & Smith, 2016: 279-284). The analysis
supports the claim that everybody needs assistive technologies, insofar as variously
abled users need assistance to approximate current norms of visuality that prioritize
control. Technology cannot avoid “the injured, sore and unguarded” – the unruly.
The photographers here take a position of mutuality: they are in control and in need.
A choice emerges between the mainstreaming of difference and the subversion of the
mainstream in an effort to accommodate difference. The images address this choice
by either hiding or stressing their conditions of production. To display assistive
elements stresses disability yet makes it transparent and therefore negotiable. As
exemplified in all three images included here, fragmented layers of assembled bits
break up the unified image to signal the impossibility of a unified body (Siebers, 2008:
27). A first step to repositioning this unruly body is to invite viewers to acknowledge
vulnerability, by anchoring all participants in the intimate interactions of an environ-
ment that allows for the uncontrollable. These interactions happen in everyday life but
require further attention from users – including the artists and scholars that this article
connects. The visceral strength of these photographic practices amplifies everyday
experiences rather than deviating from them. For instance, technology sensitizes users
as they adapt to the conditions of the interaction on a subconscious muscular level,
while responding to unexpected events with an affective startle not unlike a glitch
(Norman, 2013: 50-51).
From this perspective, Weston’s legal blindness is different from my near-sightedness
by degree rather than type. The opposition between ability and disability is a cultural
decision. Weston’s lenses on display remind me of my glasses, and of how the auto-
focus on my camera stands in for them to adjust my sight. The triviality of this
observation is relevant from a tactical viewpoint since intervention happens from
within a system.
Visually impaired photographers engage with the mediation of the image, the image-
maker and the image discourse. In doing so, they spark a seeing that reshapes the
imagined affordances of the eyes: what eyes let us do and be (Nagy & Neff, 2015: 5).
Experiences of sensory and technological integration are grounded in a process of
embodiment that “resists universalizing claims and uses the multiple particularities as
a source of knowledge” (Ellcessor, 2016: 160, 163). Particularities put forth in the case
studies exemplify the sense that tactics are both spatial and temporal. Time invested in
creation – moving flashlights, waiting for a scan – becomes time to experience,
generating “leaky, syncopated, and errant moments […] that play with time as
malleable political material” (Schneider, 2011: 180; original italics). It is tactically
important to assert the presence of disabled users in a network society with socio-
digital conditions that place them “outside the normal range of civic and cultural ex-
periences” (Ellcessor, 2016: 25, 81). The interactions of these photographers invite
coalitions between users, affording the acknowledgment that questions directed to the
blind apply to us all: “how do you orient yourselves, bear the loneliness, stand the
streets?” (Hoagland, 2002: 8). The media environment yields manifold positions when
a focus on disability invites a “wrestling with the margins” – margins presumed within
a socio-digital hegemony (ibid: 196). Such a margin cuts through Weston’s work as he
incorporates assistive devices that afford both support and discomfort. In this vein,
these devices are prosthetic both in the sense of extending the body and of othering
the body in need of assistance. Otherness becomes a shared condition with an
acknowledgment of the experiential as inextricable from the discursive: necessarily
social and political. The physical investment in making these photographs thereby ex-
tends to include the viewer, whose experience of the image is equally inextricable from
the discursive.
These creative practices do not glitch technology – only slightly modify it. Still, they
replicate a glitched mediation to capture a disabling moment: to transform it and share
it with a variously sighted viewer. In this analysis, this results in a glitching of our
habitual expectations on both users and use: who could or should be doing what with
which devices. Such expectations form part of how we perceive and actualize afford-
ances. When their photographic work exposes and challenges expectations, it thus
develops a tactical affordance.
Like the excerpt of Weise’s poem cited earlier, their images both mirror and generate
the structures that shape them – that shape the definition of the bodies in which the
seeing resides and that make the images possible. Weise points out that you notice
your settings only when they need to be changed. These settings are technical and
sensory, the two ever more intertwined. The hacking that occurs in the poem – like
the queering, cripping, and glitching in the images – reaches into the settings so that
users can identify the conditions that define their position as able or disabled. This
alert may contribute to visualizing a more accessible future (Ellcessor, 2016: 97, 199-
200).
Conclusion
This article shows how the photographic practices of the visually impaired can facilitate
a self-reflective alert through a disruption that activates a tactical affordance. The
tactical quality is not an object or a feature of an object they use, since these enable
mainstream uses too, nor is it an outcome of how they use them, since the
interpretation of the resulting image may repeat mainstream tropes – its range of
appearances and interpretations indicates variability. Within these parameters, the
analysis does identify an affordance (Evans et al., 2016: 39-41). Moreover, this
affordance is specifically tactical since it enables interventions into a socio-digital
condition that is at once pervasive and limiting.
Tactical affordance is pertinent since it is inclusive: it alerts users across a diverse and
dynamic spectrum of abilities. Acknowledging the tactical affordances in photography
by the visually impaired thus contributes towards this article’s aim to address ableism
in network society. The analysis meets this aim by working through the main argument,
detailing how the photographers make differences sharable through images that reveal
how users defined as both able and disabled become vulnerable under the network
society’s socio-digital condition, defined largely through terms of visuality and
visualization emerging from an able-bodied perspective. The case study demonstrates
that digital affordances affect their life and work in conflicting ways. While digital
devices and platforms are intrinsic to the photographers’ photographic production and
circulation, digitality also excludes them by generating and upholding a sighted user
position.
The act of sharing emerges as key to the operation of affordances. The analysis shows
how this operation actualizes classic and contemporary interpretations as it connects
environmental factors, object properties, and human agency in technologically
mediated relations. The photographs reveal mechanisms and conditions of affordance,
as the photographers reconfigure given functions of both assistive and mainstream
technologies as well as their own dexterity to use these technologies. Furthermore,
they reclaim societal validation for this reconfiguration. Their images thus provide
tactical examples for users to react to and act upon.
References
Andrews, J. (2011) ‘The Photographic Stare’, Philosophy of Photography 2(1), pp.41-56.
Betancourt, M. (2014) ‘Critical Glitches and Glitch Art’, HZ Journal 19: pp.1-12.
--- (2016) Glitch Art in Theory and Practice: Critical Failures and Post-Digital Aesthetics.
London and New York: Routledge.
Butler, J. (1988) ‘Performative Acts and Gender Constitution: An Essay in Phenomen-
ology and Feminist Theory’, Theatre Journal 40(4): pp.519-531.
Castells, M. (1996-1998) The Information Age I – III. Oxford: Wiley Blackwell.
--- (2013) Communication Power. Oxford: Oxford University Press.
Certeau, M. de (1984) The Practice of Everyday Life. Berkeley, Los Angeles, and London:
University of California Press.
Cryle, P., and E. Stephens (2017) Normality: A Critical Genealogy. Chicago and London:
University of Chicago Press.
Davis, J.L., and J.B. Chouinard (2016) ‘Theorizing Affordances: From Request to
Refuse’, Bulletin of Science, Technology & Society 36(4): pp.241-248.
Davis, L.J. (2010) ‘Constructing Normalcy’, in: L.J. Davis, ed., The Disability Studies
Reader (third edition). London and New York: Routledge, pp.3-16.
--- (2016) ‘Introduction: Disability, Normality, and Power’, in: L.J. Davis, ed., The
Disability Studies Reader (fifth edition). London and New York: Routledge, pp.1-14.
Eckert, P., Blind Photographers Guild, blindphotographersguild.org
Ellcessor, E. (2016) Restricted Access: Media, Disability, and the Politics of Participation. New
York and London: New York University Press.
Ellis, K., and G. Goggin (2015) Disability and the Media. London and New York:
Palgrave.
Evans, S.K., K.E. Pearce, J. Vitak, and J. Treem (2016) ‘Explicating Affordances: A
Conceptual Framework for Understanding Affordances in Communication
Research’, Journal of Computer-Mediated Communication 22: pp.35-52.
Flyvbjerg, B. (2011) ‘Case Study’, in: N.K. Denzin and Y.S. Lincoln (eds.), The SAGE
Handbook of Qualitative Research. Thousand Oaks: Sage, pp.301-316.
Foucault, M. (2010) The Archaeology of Knowledge. New York: Vintage Books. [1972].
Galloway, A. R. (2004) Protocol: How Control Exists after Decentralization. Cambridge and
London: MIT Press.
Garcia, D. (2013) ‘Re-reading de Certeau: Invisible Tactics’, Tactical Media Files,
blog.tacticalmediafiles.net, May 20: n.p.
Gibson, J.J. (2015) The Ecological Approach to Visual Perception. New York and London:
Psychology Press. [1979].
Grundell, V. (2016) Flow and Friction: On the Tactical Potential of Interfacing with Glitch Art.
Stockholm: Art & Theory.
--- (2018) ‘Navigating Darkness: A Photographic Response to Visual Impairment’,
Liminalities: A Journal of Performance Studies 14(3): pp.193-210.
Hirschmann, N.J., and R.M. Smith (2016) ‘Rethinking “Cure” and “Accommodation”’,
in: B. Arneil and N.J. Hirschmann, eds., Disability and Political Theory. Cambridge:
Cambridge University Press, pp.263-284.
Hoagland, E. (2002) ‘Introduction’ and interviews, in: Seeing with Photography
Collective (2002) Shooting Blind: Photographs by the Visually Impaired. New York:
Aperture, pp.5-8, 19-21, 82.
IDRC, Inclusive Design Research Centre (2018) ‘What Is Inclusive Design?’ OCAD
University, no author stated.
Iversen, M. (2007) ‘Following Pieces: On Performative Photography’, in: J. Elkins, ed.,
Photography Theory. London and New York: Routledge, pp.91-108.
Kelly, C. (2009) Cracked Media: The Sound of Malfunction. Cambridge and London: MIT
Press.
Kleege, G. (2016) ‘Blindness and Visual Culture: An Eyewitness Account’, in: L.J.
Davis, ed., The Disability Studies Reader (fifth edition). London and New York:
Routledge, pp.440-449.
Krapp, P. (2011) Noise Channels: Glitch and Error in Digital Culture. Minneapolis and
London: University of Minnesota Press.
Kuc, K., and J. Zylinska, eds. (2016) Photomediations: A Reader. London: Open
Humanities Press.
Lagerkvist, A. (2018) Digital Existence: Ontology, Ethics and Transcendence in Digital Culture.
Lehmuskallio, A. (2015) ‘Seeing with Special Requirements: Visual Frictions during the
Everyday’, Journal of Aesthetics and Culture 7: pp.100-106.
Lister, M., ed. (2013) The Photographic Image in Digital Culture (second edition). London:
Routledge.
Manon, H.S., and D. Temkin (2011) ‘Notes on Glitch’, World Picture 6: n.p.
McCulloh, D. (2009) Sight Unseen: International Photography by Blind Artists. Riverside:
University of California.
McRuer, R. (2006) Crip Theory: Cultural Signs of Queerness and Disability. New York: New
York University Press.
--- (2018) Crip Times: Disability, Globalization, and Resistance. New York: New York
University Press.
Nagy, P., and G. Neff (2015) ‘Imagined Affordance: Reconstructing a Keyword for
Communication Theory’, Social Media + Society 1(2): pp.1-9.
Noë, A. (2015) Strange Tools: Art and Human Nature. New York: Hill and Wang.
Norman, D. A. (2013) The Design of Everyday Things. Cambridge: MIT Press. [1988].
OED, Oxford English Dictionary, search terms: ableism, disability, glitch.
Schneider, R. (2011) Performing Remains: Art and War in Times of Theatrical Reenactment.
London and New York: Routledge.
Schreiber, D., and R. H. Wang, primary authors (2017) Access to Assistive Technology in
Canada: A Jurisdictional Scan of Programs. AGE-WELL NCE (Aging Gracefully across
Environments using Technology to Support Wellness, Engagement and Long Life
Network of Centres of Excellence, Incorporated).
Seeing with Photography Collective, seeingwithphotography.com
Siebers, T. (2006) ‘Disability Aesthetics’, Journal for Cultural and Religious Theory 7(2):
pp.63-73.
--- (2008) Disability Theory. Ann Arbor: The University of Michigan Press.
Weise, J. (2016) ‘Biohack Manifesto’, in: L.J. Davis, ed., The Disability Studies Reader
(fifth edition). London and New York: Routledge, pp.519-521.
Weston, K., kurtweston.com
Vendela Grundell is an art historian, photographer, teacher, and postdoctoral researcher with a postdoctoral project on photography and visual impairment (Ahlström & Terserus Foundation) at Stockholm University and Goldsmiths, University of London. Publications include “Navigating Darkness: A Photographic Response to Visual Impairment” in Liminalities (2018), Flow and Friction: On the Tactical Potential of Interfacing with Glitch Art (Art & Theory, 2016) and a chapter in Art and Photography in Media Environments (Lusófona University, 2016). Email: [email protected]
Special Issue: Rethinking Affordance
The Affordances of Place:
Digital Agency and the
Lived Spaces of Information
MARK NUNES
Appalachian State University, USA
Media Theory
Vol. 3 | No. 1 | 215-238
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
Affordances provide a useful frame for understanding how users interact with devices, but applications of the term to digital devices must take into account that human agents operate within a material environment that is distinct from the digital environment in which these devices operate. A more restrictive approach to affordances would focus on the agency of digital devices distinct from the agency of human users. Location-aware mobile devices provide a particularly compelling example of the complex interplay of agents and agencies, and how “augmented affordances” give rise to a lived space of information for human users.
Keywords
actor-network theory, affordances, digital agents, mobile computing, sense of place, smartphones
The near-ubiquity of ubiquitous computing and mobile devices foregrounds the degree
to which our everyday lives now involve data. This foregrounding shifts our
relationship to our lived environment by embedding experience within a framework
of digital interaction – not only making the world “clickable,” as Wise (2014) suggests,
but likewise making information space inhabitable. We no longer need to imagine our
computers as vehicles designed to transport us to and through cyberspace; today,
cyberspace is all around us – an information overlay mapped onto our everyday
experience of place. As William Gibson (2011) himself has noted, reflecting on the
term he coined in 1982: “cyberspace is everywhere now, having everted and colonized
the world. It starts to sound kind of ridiculous to speak of cyberspace as being
somewhere else.” As Jones (2014) notes, “the timeline of eversion” may well begin
with the first appearance of smartphones in 2006 and 2007, which took computing off
of our desktops and placed it into our hands (22). 1 Once these devices become
location-aware, the process of eversion is complete; cyberspace is indeed everywhere.
The potential for location-aware mobile devices to bring the computational and
networking powers of computers off the desktop and on the move simultaneously
alters our sense of place and our experience of the place of information in everyday
life. As such, these devices provide a particularly literal instance of understanding our
environmental relation to both information and information technologies, to the
extent that these devices locate us in a space that is simultaneously embodied and
informatic. From McLuhan (1994) onward, an ecological approach to media has
asserted that the medium mattered – and more so, that its materiality mattered. This
ecological approach to media acknowledges that “the overall human environment
includes and incorporates technological extensions, and these are never merely add-
ons. They alter our sensibilities and capacities, our notions of self and other, our
notions of privacy and propriety, and our orientations in space and time” (Anton,
2016: 131). It is within this ecological context that an examination of digital
affordances can help us better understand the impact of mobile devices on our lived
relation with information, as well as our embodied experience of space and place in an
age of mobile computing.
J.J. Gibson (1979) developed the concept of affordance to help explain an organism’s
embeddedness within its environment, arguing that what an animal perceives depends
upon a kind of coupling between organism and environment based upon that
particular animal’s potential for action within that particular environment. This
ecological approach to perception offers an understanding of how agents make use of
their environments, and the sorts of interactions that give rise to ways of not only using
the environment, but also embodying space through use. As such, affordances are
“equally a fact of the environment and a fact of behavior” (Gibson, 1979: 129).
Affordances are invariant to the extent that it is not the will or desire of the organism
that brings forth the affordance, but rather this structural coupling between an
organism’s potential for action and its environment (Gibson, 1979: 138-139). These
invariants communicate a specific set of relations between actor and world such that,
in Gibson’s (1979) words, “to perceive the world is to coperceive oneself” (141).
Gibson’s concept of affordances has been applied in a wide range of contexts; some
recent reviews of the literature have attempted to reframe the concept within a
narrower band of applications, for example by specifying exactly which affordances
these applications attempt to address (Parchoma, 2014; Osiurak, Rossetti and Badets,
2017; Bucher and Helmond, 2017). In their comprehensive discussion of social media
affordances, Bucher and Helmond note: “While all conceptualizations of affordances
take Gibson’s original framing of the term as a starting point, they differ in terms of
where and when they see affordances materializing (i.e. features, artefacts, social
structures) and what affordances are supposed to activate or limit (i.e. particular
communicative practices, sociality, publics, perception)” (240). In thinking through
human interactions with location-aware devices, which in turn interact with and
determine a user’s sense of place, we might want to begin by following Bucher and
Helmond in their distinction between “high-level affordances” which “locat[e]
affordances in the relation between a body and its environment,” and “low-level
affordances” that “locat[e] affordances in the technical features of the user interface”
(240).
According to this perspective, Norman’s (2013) discussion of affordances would have
to be categorized as “low-level,” and, at least from the standpoint of object design, it
is certainly accurate to observe that Norman’s definition offers an adequate
explanatory framework for describing users’ bodily interactions with location-aware
mobile devices. However, it is worth noting the degree to which Norman maintains
an ecological understanding of human interaction with objects, insisting on a
“relational definition” of affordances as “jointly determined by the qualities of the
object and the abilities of the agent that is interacting” (11). For Norman, an affordance
is “a relationship between the properties of an object and the capabilities of the agent
that determine just how the object could possibly be used” (2013: 11). This relational
definition sets up a kind of “discoverable,” communicative system between object and
agent, a “mapping” that first must be realized before it can be actualized (20-23). And
while Bucher and Helmond are correct in noting Norman’s greater attention to user
interfaces, Norman also introduces an important distinction between affordances,
which “determine what actions are possible,” and signifiers,2 which “communicate
where the action should take place” (14). A lever affords pushing. Visual cues in the
form of arrows signify that the agent can push the lever in two directions. Mapping
discovers and communicates a relation between the spatial orientation of those two
directions and the motion of the object it controls – for example, a projection screen
in a lecture hall.3 Or, to apply this idea to a familiar interaction on a location-aware
mobile device: the screen on my smartphone affords touch, and each app icon serves
as a signifier in both Norman’s sense and in a semiotic sense, identifying each
application while at the same time indicating where on the screen to touch. When I
tap on the Google Maps icon, for example, the app ‘zooms’ out from its particular
location on my home screen to fill the entire screen, providing me with a map of my
current location – but also providing me with a conceptual mapping of my interaction
with the device: touching = opening.
But as we move from describing my interactions with the object in my hand to my
experience of the information that this object provides, use of the term ‘affordance’
becomes more complicated, to the extent that embodied interaction with these devices
occurs at a level that is fundamentally distinct from the level at which algorithmic
and computational actions occur. As Bucher and Helmond suggest, our interactions
with digital devices, software platforms, and networks of information require a “multi-
layered approach” that can adequately address our material and social interactions with
technology, and the relation between the two (2017: 242). As Bucher and Helmond
note: “the term ‘technology affordances’ establishes material qualities of technology as
(partly) constitutive of sociality and communicative actions” (237). Along similar lines,
Hutchby (2001) highlights the importance of focusing on “the material substratum
which underpins the very possibility of different courses of action in relation to an
artefact” (450). How, though, do we account for a material substratum that is, in fact,
digital – and with which human agents do not have any direct interaction?
While mobile devices have properties (as objects) that afford human users a range of
potential actions, they likewise possess a range of digital capabilities (as agents) that act
upon a digital environment. In contrast to user interaction, which remains embedded
within a material environment, the device itself acts upon a digital environment
through a distinct set of affordances, including: the execution of protocols, database
queries and retrieval, data processing, input/output functions, and network
transmission. This digital substratum impacts the different ways in which a user can
engage a device, yet the data-device coupling that gives rise to algorithmic and
computational action does not afford human users the capacity to act directly upon a
digital environment. As the device acts upon and within a digital environment,
however, it materializes and actualizes opportunities for user (inter)action that do
indeed allow users to coperceive themselves within an inhabitable space of
information. The material environment and what it affords does not change, but my
conceptual mapping of what the environment affords changes by way of this
information overlay, as does my sense of how I might actualize a potential set of
actions. If we understand affordances, as Hutchby (2001) defines the term, as
“functional and relational aspects which frame, while not determining, the possibilities
of agentic action in relation to an object” (444), we might consider the device itself as
the agent that engages the digital, which in turn materializes on its screen a new set of
functional and relational framings for human agents. The human-digital interface,
then, would map a doubly-mediated coupling of affordances, a relationship that draws
forth the possibilities of the digital into a range of possible human uses and actions.
I am thus proposing an environment-specific, actor-oriented account of “low-level
affordances,” an approach in line with Osiurak, Rossetti and Badets (2017) who, in their
study of tool use, attempt to restrict as much as possible the definition of the term to
Gibson’s “animal-relative properties of the environment” (406). In contrasting an
allocentric (or tool-centric) account of tool-object coupling with an egocentric (or
hand-centric) account of hand-tool coupling, they conclude that the term affordance
applies only to the potential for action mapped by way of a hand-tool relation. In short:
“an affordance exists because of the existence of a potential physical interaction
between an animal and the environment. They correspond to action possibilities
resulting from animal-relative, biomechanical properties” (Osiurak, Rossetti and
Badets, 2017: 409). From this perspective, affordances cannot be contextually
determined through object-centered relationships. To use their example: a shoe and a
hammer offer similar affordances to a human agent capable of grasping and striking a
nail; affordances thus describe a relation between agent and object, even though
hammers and nails exist within a contextual and designed relationship that shoes and
nails do not share (Osiurak, Rossetti and Badets, 2017: 411). But unlike a hammer, the
smartphone is indeed a “tool” with its own agency and potential for action within a
digital environment. In this context, then, we may well need to consider a device-
centric application of the term digital affordance, one defined not in terms of
biomechanical relations, but rather as an algorithmic, protocological relationship to a
digital environment that maps an intention-independent framework for potential
action, namely: data input, output, storage, retrieval, and manipulation/computation.
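The distinction drawn here between the user's biomechanical affordances and the device's protocological ones can be loosely schematized. The following Python sketch is purely illustrative; every class and method name is hypothetical, and the point is structural: neither agent can directly perform the other's actions, and the two meet only at the interface where embodied gestures are translated into data.

```python
# Illustrative sketch only: two distinct affordance layers.
# All class and method names are hypothetical.

class HumanAgent:
    """Affordances are biomechanical: relations between body and material environment."""
    def touch(self, screen_point):
        return ("touch", screen_point)   # act upon the material object in hand

    def move(self, new_location):
        return ("move", new_location)    # act upon embodied space

class DigitalAgent:
    """Affordances are protocological: relations between device and digital environment."""
    def __init__(self):
        self.storage = {}

    def store(self, key, value):         # data input / storage
        self.storage[key] = value

    def retrieve(self, key):             # data retrieval
        return self.storage.get(key)

    def compute(self, data):             # data manipulation / computation
        return sorted(data)

# The human never calls store/retrieve/compute directly; the interface
# translates an embodied gesture into digital action by the device.
human = HumanAgent()
device = DigitalAgent()
gesture = human.touch((120, 480))
device.store("last_gesture", gesture)    # the device acts on its digital environment
```

The design mirrors the argument: the human agent's methods return embodied acts, while only the device operates upon the data store.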
In this regard, Latour’s (2005) actor network theory (ANT) provides a useful starting
framework for understanding the complexity of agent-specific affordances,4 actions,
and interactions within these ecological systems, as well as how human and non-human
agents map affordances of place. As Parchoma (2014) notes, “within an ANT frame,
technological affordances can be examined for their enabling, restricting, and
regulatory roles, emerging from the networked effects of temporal relations among
physical-social, material-cultural, and human-technical phenomena” (367). This
framework would suggest not only a complex relationship between human and non-
human agents, but also an ecological understanding of how digital devices serve as an
extension of the human, and how the human likewise functions as a material extension
for the digital device. This framework also allows for what Best (2009) describes as
“relational affordances,” in which device and human alike “inscribe” each other as
agents within a set of interactive dispositions, within a complex, interactive system
(402). Drawing upon Latour’s concept of devices as “technical actors” (Johnson,
1988), Best describes how extension/embodiment is experienced by the user as a
change in potential action, one that “enables [the user] to act on the world – do
something to it – rather than just live in it” (405). We may likewise describe location-
aware mobile devices as digital agents oriented toward acting upon a “world” of data
that is “coupled” with embodied space by way of “place,” much in the same way that
human agency gains access to an augmented sense of place by way of the information
overlay materialized by these devices. This process is akin to what Latour (writing as
Jim Johnson) refers to as a translation of “scripts” between actors and their delegates,
human and nonhuman alike (Johnson [Latour], 1988: 308). At this point of double
coupling and double articulation, material and digital agents alike embed their actions
within this scene of material-informatic translation.
Mobile devices offer this sort of “double coupling” between user and environment
most notably, perhaps, through their location-aware properties – a double articulation
that materializes and makes visible place-specific information as a frame for human
agency. For location-aware mobile devices and their human users, “place” provides a
particularly rich nexus for this exchange. The body-centric and data-centric
affordances that describe my doubly articulated relation with digital and material
environments make clear how affordances operate within a network of agents, and the
transformative impact of this complex media ecology on the spaces of everyday life.
Our sense of place – and the affordances of place – change because we have more
access to information about that place. Physical space, and what it affords, remains
unchanged; at the same time, I now find myself embodied within an inhabitable
information space. I open Google Maps and a blue dot appears on the screen, pulsing
at a rate of about 20 beats per minute – somewhat faster than a resting respiration rate,
and about half as fast as a resting heart rate. The pulsing signifies a “live” signal,
providing user feedback on the status of the system5 as well as their own status as a
data point located within the data set that is communicating within a GPS network: a
real-time presence, both on the screen and in the world. A shaded area, which pulses
at the same rate, signifies directionality – my heading. Moving the device creates a
conceptual mapping between my embodied directionality and my orientation on the
map. My body provides two different sets of interactions with the device. By touching
the screen, I can alter my relation to the image, but my touch does not alter the location
of the blue dot indicating my position. Only by altering my body’s location and
orientation in lived space can I change the location of the blue dot on the screen. In
effect, I am bi-located: co-perceiving myself simultaneously in the information space
on the screen and in the embodied space of the material world.
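The two interaction channels described in this paragraph can be modeled schematically. In this hypothetical Python sketch (not an actual mapping API), touch input alters only my relation to the image, while a location update, triggered by the body's movement through lived space, is the only operation that relocates the blue dot:

```python
# Minimal model of the two interaction channels; hypothetical names,
# a sketch rather than an actual mapping API.

class MapView:
    def __init__(self):
        self.center = (0.0, 0.0)    # what the screen shows
        self.position = (0.0, 0.0)  # the blue dot: the body's location

    def pan(self, dx, dy):
        """Touch affordance: alters my relation to the image only."""
        cx, cy = self.center
        self.center = (cx + dx, cy + dy)

    def on_location_update(self, lat, lon):
        """Only embodied movement in lived space relocates the blue dot."""
        self.position = (lat, lon)

view = MapView()
view.pan(0.5, 0.0)                    # swiping the screen: the dot stays put
view.on_location_update(36.2, -81.7)  # walking: the dot moves
```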
Place, then, offers a mapping of potential action onto environment for two distinct
agents – one human and one digital. Place-as-information affords algorithmic
processing for the location-aware device-as-agent, but in doing so, the device creates
a material environment for human agency within a lived space of information. When
using a map application such as Google Maps, the map positions me in a set of
relations that are both informatic and materially embodied. I see myself in a spatial
relation to the world around me, and I understand the world around me by way of this
information overlay – the names of streets, the location of rivers, public monuments
and private businesses, etc. Likewise, my wayfinding is dictated by my body’s
interaction with the world and the corresponding translation of these actions as data
input for my device, which is then acted upon by the device to produce a
representation of position on a dynamically changing map upon the screen of my
smartphone. My body couples with both device and physical space, creating a complex
mapping that materializes affordances of place that did not exist for me in the physical
world, but which are now articulated through the actions of a digital agent. The device
alters our wayfinding – and therefore our sense of place – through this actualization
of information, producing not an “augmented reality,” but rather a set of augmented
affordances by way of this double coupling of agents and agencies.
Affordances as relational couplings, then, mark points of articulation in a material-
digital assemblage. The potential for action thus points in two directions: the
algorithmic, acting upon data that is derived from material, embodied action; and the
embodied, acting upon a material environment that is, in turn, informed by algorithmic
processing. Following Rutz (2016), we might think of this point of double coupling as
a site of “mutual incursion” – a Deleuzean assemblage marked by “exchange and
assimilation processes between human and machine” (74). “Algorithmic agency,” then,
would operate in two (or more) directions – as the artist (as in Rutz’s example) engages
in “trajectories” of data processes derived from algorithmic action at the same time
that the algorithm engages in material processes derived from the embodied action of the
artist and/or audience (Rutz, 2016: 76-80). This emphasis on the interrelationality of
artist/experimenter and algorithm in machinic assemblages calls attention to how
“both sides engage in boundary operations that are best described as reconfiguration,
operations where many elements and relations, representations and concepts remain
intact but a few critically change” (Rutz, 2016: 82).
In this sense, each agent (material or digital) operates at a point of incursion in a
material-digital assemblage. Each acts upon an environment within its own relational
structure of potential actions, but each also marks “boundary operations” in the form
of a doubly articulated agency. This same analytical framework would hold for both
“low-level” as well as “high-level” understandings of affordances, which must still
come to terms with this boundary event between two or more agents engaged in
relational couplings across material and digital environments. Van Dijck (2013) in part
addresses this issue through a discussion of “user agency,” which maps a complex
relation between algorithms, protocols, interfaces, and human interaction. This
complexity plays out, she notes, in the blurred boundary between “human
connectedness” and “automated connectivity” in how user agency engages “the
social” in social media (11-12). In a similar move, Bucher (2012) discusses Facebook’s
“algorithmic friendship” as “a relation between multiple actors, not only human
individuals, but also nonhuman software actors” (480). Combining Deleuze and
Guattari’s (1987) concept of assemblage with Latour’s (2005) actor network theory,
Bucher argues that “friendship” on Facebook is expressed as an assemblage of both
human and non-human actors, articulated in moments such as when the platform
assists the end user in importing contact lists into Facebook (482). Likewise, since the
algorithms that rank News Feed content determine which friend relations are, in effect,
rewarded with attention and hence reinforced, these digital acts function in a way that
determines future friendship interactions (484). Bucher concludes: “Friendships online
thus need to be understood as a socio-technical hybrid, a gathering of heterogeneous
elements that include both humans and non-humans … Thinking of friendship as an
assemblage – a relational process of composition – offer[s] a way to critically scrutinize
how software participates in creating, initiating, maintaining, shaping, and ordering the
nature of connections between users and their networks” (Bucher, 2012: 489).
We can extend Bucher’s discussion of “programmed sociality” to an account of
programmed spatiality, in which the production of space involves both human and
nonhuman actors. Place-as-data/data-as-place marks a double articulation, a boundary
operation in two directions: the digital action of digital agents is dependent upon
materially embodied human agents, who in turn act upon an embodied environment
in ways that are equally dependent upon the digital action of digital agents. As Gibson
himself notes in a discussion of “places and hiding places,” what an environment
affords to an actor is not without the influence of “place learning” and “social
perception” (136). In this instance, however, place learning and social perception occur
through a “relational process of composition” across two environments (digital and
embodied) and a multitude of actors. Embodied actors engage a programmed and
material environment articulated not only by the algorithmic agency of a data-driven,
human-device coupling, but also by way of incursions in the other direction, as
multiple human agents translate their lived experience of place into data that will, in
turn, make information space inhabitable.
While Bucher and Helmond (2017) question the degree to which Gibson’s concept of
invariant affordances would apply to the “increasingly dynamic and malleable nature
of [social media] platforms,” an environment-specific, actor-centric account of
affordances and agent-enabled action would, in fact, acknowledge how agent-action
coupling plays itself out in invariant ways within material and digital environments for
different sets of interdependent agents, distinct from variable features of interface
design (248). By way of contrast: Karahanna et al. (2018) distinguish features from
affordances by claiming that features “enable” an application’s affordances.6 Thus,
they claim: “social media offer the affordance to connect with others, enabled by, for
example, features such as ‘friending’ on Facebook and ‘following’ on Twitter”
(Karahanna et al., 2018: 739). In their attempt to generate a comprehensive taxonomy
of social media affordances, Karahanna et al. identify three egocentric affordances –
self-presentation, content sharing, and interactivity – along with four allocentric
affordances – presence signaling, relationship formation, group management, and
browsing others’ content (744-745). In an environment-specific, actor-centric account
of affordances, however, “self-presentation” and “content sharing” of a human agent
would collapse into a single affordance: the potential to respond to a prompt. Signifiers
for these prompts vary by feature – they are indicated by way of icons and
photographic images, but also by text marked in bold, underlined and/or alternate
color. Features signify, and thereby structure by design the user’s conceptual mapping
of affordance to action. Anyone who has double-tapped on a Facebook photo in an
attempt to “like” the image has experienced the degree to which features of response
vary across applications, but this error is at the same time indicative of the invariant
affordance that both Facebook and Instagram present: the potential to respond to
output on the screen. As a potential for action, “providing input” offers a set of
affordances that is distinct from “response,” one that again maps in feature-specific
ways (Instagram’s “+” icon at the bottom of the screen vs. Facebook’s prompt
“What’s on your mind?” at the top of the screen). Note that while embodied actions
of the user may appear similar in both instances (tapping on a screen), the environment
differs considerably: in one instance, I am responding to the device’s output of content,
while in the other, I am responding to a request for input from the application itself.
In addition to “responding to a prompt” and “providing input,” we might add to this
tentative list of affordances “tracking,” or what Schrock (2015) identifies as
locatability,7 to the extent that the device provides me with a representation of my own
location as both input and output, prompting me to various actions and engagements.
While not meant to provide an exhaustive list, these examples of an actor-centric
approach to affordances suggest a way of mapping invariant potential action within
material and digital environments for human and digital agents alike, as well as
accounting for complex interactions between agents and agencies.
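The collapse of feature taxonomies into a small set of invariant affordances can be rendered as a simple data structure. This Python sketch is illustrative only (the feature names and signifier descriptions are shorthand, not platform specifications): many variable features map onto few invariant affordances.

```python
# Sketch of the distinction drawn above: invariant affordances vs.
# feature-specific signifiers. Names and descriptions are illustrative shorthand.

INVARIANT_AFFORDANCES = {"respond_to_prompt", "provide_input", "tracking"}

# Features vary by platform, yet each signifies one invariant affordance:
FEATURES = {
    "facebook_status_box":  {"affordance": "provide_input",
                             "signifier": "'What's on your mind?' text prompt"},
    "instagram_plus_icon":  {"affordance": "provide_input",
                             "signifier": "'+' icon at bottom of screen"},
    "facebook_like_button": {"affordance": "respond_to_prompt",
                             "signifier": "thumbs-up icon"},
    "instagram_double_tap": {"affordance": "respond_to_prompt",
                             "signifier": "heart overlay on double-tap"},
}

# Many features, few invariant affordances:
used = {f["affordance"] for f in FEATURES.values()}
```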
If features map affordances for human agents acting on and within an inhabitable
information space, they likewise provide a mapping for digital agents acting upon the
human user as a material extension of their potential for digital action. This insight
aligns with Bucher and Helmond’s (2017) insistence on “the multi-directionality of
agency and connectivity” and “a socio-technical sensibility towards the distributed
agency of humans and nonhumans” (249). Vaast et al. (2017), for example, argue that
“connective affordances” emerge through social media use as a result of “mutually
dependent yet distinct patterns of feature use among emerging roles … What is
afforded to one role depends upon how other roles use the technology” (1199). I
would suggest that these same feature-mapped connective affordances are likewise
providing a mapping for digital agents as they make use of human agents as networked
human actors, extending their digital agency into an embodied space, much as mobile
devices extend human action into a digital realm. When human agents create data
through embodied action, those data sets then provide a basis for algorithmic action
by a digital agent. The materialization of this action as output on a smartphone screen
provides an articulation from one layer to another, giving rise to a boundary operation
between immaterial information and the embodied environment of the human agent
about to act upon this information.
Figure 1: Google Maps screen capture. Map data: Google, copyright 2019. Photo: Mark Nunes
This account of environment-specific action and boundary operations aligns well with
Don Ihde’s post-phenomenological account of agency (Ihde, 2002; 2009; 2011). Ihde
(2011) suggests an “inter-relational ontology” to understand how “human-technology
relations” can be understood as a “mutual co-constitutional process” (18). Ihde (2002)
notes that “in this interconnection of embodied being and environing world, what
happens in the interface is what is important” (87). This “area of interaction or
performance” marks a “symbiosis of humans plus their artefacts in actional situations”
(Ihde, 2002: 92-93). This “hybrid agency” occurs in multiple human-technology
relations, but is particularly notable when humans find their embodied actions coupled
with computational environments (Kang, 2011: 111). As Kang notes, “embodiment
serves as an interpretive framework through which computable information and its
impact on human perception are understood as a continuous, co-constitutive relation
rather than as separate, independent processes” (2011: 112). Within such a framework,
affordances would operate as relational, bidirectional, ecological expressions of action,
in which environment becomes a complex assemblage of potential interactions that
are co-constitutive of agents and action.
In distinguishing his post-phenomenological account of human-technology relations
from both assemblage theory and actor network theory, Ihde (2002) asserts: “There is,
indeed, a limited set of senses by which the nonhumans are actants, at least in the ways
in which in interactions with them, humans and situations are transformed and translated” (100).
Critical to Ihde’s (2002) account, however, is an understanding that this relation is
neither “innocent” nor “neutral” – and more often than not asymmetrical in
translation and subsequent transformation (100). As Klinger and Svensson (2018)
note, even if we are to understand algorithms as “actions,” those performances, of
course, will still reflect “the norms and processes of media production, distribution,
and usage, as well as how programmers and users perceive these norms and processes
that go into the design/programming process” (4658-4659). They note, “to argue that
algorithms have agency on their own, agency that is independent of human activity …
occludes the power inscribed in the algorithm as structure” (4667). “Algorithmic
agency” thus bears the marks of a mutual incursion prior to any end-user’s
engagement, insofar as algorithms, programs, and protocols carry the agency of the
programmers who script these calculative parameters; this programmed agency is
nonetheless distinct from a digital actor’s algorithmic, data-oriented action, which
“is less human and more shaped by the big/thick/trace data that they filter, sort,
weigh, rank and reassemble into some sort of outcome” (4659). By focusing on
“ideals” and “commercial imperatives” as
well as “technological affordances,” Klinger and Svensson call attention to how social
and economic forces, programmed into these platforms, shape how these embedded
acts translate elements within a digital environment: an incursion that converts
“humans actively and intentionally spending time on these communication platforms”
into “traces that are subsequently (and algorithmically) mined in order to surveil users
with commercial intent, to target advertisements and so forth” (4662-4663).
Clearly, on a corporate-designed and commercially supported mobile application such
as Yelp, for example, dominant consumption-driven ideals and commercial
imperatives embedded within algorithmic agency exert a powerful influence on how
human actors engage with the affordances of place. At the same time, the application
also foregrounds how human agency is critical to digital agency, to the extent that users
are responsible for creating the database of reviews that the application allows other
users to access. In this regard, the relationship between data input and data
manipulation as a programmed form of “place learning” is quite complex. When I
open the Yelp app, signifiers guide me toward points of interaction, highlighting
various features that enable its location-aware search functions. A search bar at the top
of the screen affords input; beneath that, the app affords the potential to respond to a
prompt by selecting a search category. My engagement with this environment is driven
by design, by the prompts of data actors aimed at leading me into a set of
protocologically and ideologically delimited relations with both the device in my hand
and the lived space in which I find myself embodied and present. If, for example, I
respond to the prompt to “filter” a search by category, I can “select” amongst several
signifiers, but it is the application that acts upon the database; hence, it is the device,
operating on the scripts derived from the application, that is the agent in this digital action.
At the same time, the affordances of place provide the digital agent with a means of
calling out to users to provide data on their current location, perhaps even at the
moment they are sitting at a restaurant waiting for their bill. In effect, the digital device
is constantly acting upon me through both active prompting and passive tracking; as a
digital agent, it is taking advantage of this double coupling to translate my embodied
experience within a physical environment to generate data that can be acted upon
within a digital environment in a variety of ways, a number of which are captured to
serve commercial imperatives well beyond the reach of end users.
Figure 2: Yelp screen capture. Yelp, copyright 2019. Photo: Mark Nunes
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
230
As Wendy Hui Kyong Chun (2016b) notes, “capture systems” operate at an interface
between data and action: “In a capture system the base unit is an action, or a change
of state, rather than an entire person” (60). Here, we see how interfaces draw upon the
human actor as environment, and, in this boundary moment, elicit data, but at the
same time elicit embodied, habitual activity: “capture systems are all about habitual
actions. They seek to create new, more optimal habits; they record habitual actions in
order to change them” (Chun, 2016b: 61). Chun (2016b) develops the degree to which
habit structures human action: a “productive nonconscious” distinct from any
conceptualization of a rational-subject-as-sovereign-actor (7). Critical to this argument
is a notion that habit occurs relationally between an individual and an environment
that is both social and non-social: habit as habitus (7). For Chun (2016b), habitual action
is equivalent to inhabiting a set of practices that position users “within” socio-
technological environments. In this regard, her discussion helps to complicate how we
understand affordances as relational interfaces between agent and environment to the
extent that “habit is ideology in action” (Chun, 2016b: 9). Yelp, for example, captures
my action as it prompts me to engage. Regardless of what I am doing with Yelp, I am
functioning not so much as a data subject, but rather as a set of relations within a data
environment for a digital agent. As Chun (2016a; 2016b) notes, digital agents act in
part to transform the actions of individuals into nodes and edges within a set of data
correlations: a translation from a singular “me” into a correlational “YOU.” For better
and for worse, “singularity is fundamentally plural” (Chun, 2016a: 378). Chun (2016a)
argues that “what matters are relations not between things that happen repeatedly or
successively to one individual, but rather correlations between actions by different
‘neighbors’ over time and space” (374). If habit is ideology in action, then my
engagement within a programmed sociality by responding to a prompt amounts to an
Althusserian “hailing” into a set of relations as both embodied actor and constellation
of data within material and digital environments (Chun, 2016b: 120-122).
My smartphone maps territories in which I am constantly reminded that others have
already been here, be that through restaurant reviews on Yelp, or travel time estimates
on Google Maps. Our sense of place is always haunted by data, an overlay that is both
here and not here – data that declares others have been here as well. To play on words,
mobility offers a kind of digital echo-location. I am located within a territory by the
echoes of others who have already come this way. The echoes of others prompt me
to add my own voice, or the absence of any voice likewise calls on me to input data
corresponding to my location. In effect, then, Yelp does more than map commercial
imperatives and the ideals embedded in revealing hidden local gems for tourists and
travelers: it reconceptualizes my relation to place and locale, and does so by
transforming the place of information within the spaces of everyday life through my
dual role as human agent and human extension of a digital agent. In design, the human
actor operates as source for captured data that is then articulated within data sets and
represented back to users. Human actors engage in the interface with the
understanding that their relations are expressed as data acts. The user is not, then, a
“data subject” to themselves; rather, their production of node identities and edge
relations through intentional and captured acts allows the user to orient toward a
becoming-data relationship, much as the social graph allows digital actors to orient
toward a becoming-human of data.
Hidden deep within the features of Yelp is “Monocle,” which pushes this
embeddedness furthest by using augmented reality (AR).8 With Monocle
activated, my camera now shows me not only what I am seeing through my lens, but
also waypoints for nearby restaurants and businesses, geo-located and overlaid on my
screen. The AR features of Yelp, while relatively buried, do bring to the fore this
moment of double articulation, and the degree to which the device operates as a screen
of another sort, one placed “between” the digital and the material. “Looking through”
the screen of my phone, with my phone’s camera as a “monocle” onto a digital world
that is not directly accessible to me, makes this act of double articulation all the more
visible. At the same time, the experience of an information overlay is present for users
even without the AR experience, to the extent that augmented affordances of place
create for the human user an inhabitable information space in which information
materializes for human action through the hybrid agency of a digital actor. Rather than
slipping into a facile critique of how screens and devices take us “out of” the here-and-
now, I would suggest instead we consider how information becomes both habitual and
inhabitable through our engagement with location-aware devices – information that is
at the same time engaging the user in human and social interactions within their
material environment.9
Figure 3: PeakFinder screen capture. PeakFinder Ltd., copyright 2019. Photo: Mark Nunes
Figure 4: PeakFinder screen capture, with photo overlay. PeakFinder Ltd., copyright 2019. Photo: Mark Nunes
While Yelp may bury its Monocle feature deep within its menu structure, other apps
strongly foreground the ability of the device to allow users to “look through” a now-
materialized lens of information presented on our screens. Farman (2012; 2014) details
a number of examples of how AR on location-aware devices has been deployed to
create narrative overlays for walking tours and cultural heritage sites in cities, yet this
is equally true for apps that provide information overlays on natural landscapes, such
as stargazing and trailfinding applications. PeakFinder, for example, is a location-aware
app that positions users within a topographic map showing the names of mountain
peaks, the path of the sun from sunrise to sunset, and the user’s current longitude,
latitude, and compass heading. Unlike a mapping app, PeakFinder assumes that you
are looking “through” your screen and pointing the mobile device in the direction of
a peak, hence it only positions you on one side of the screen. PeakFinder bills itself as
an AR application, although again I would suggest that what it provides, more
precisely, is augmented affordances by way of the potential actions of a digital agent
to materialize information. I now have access to a database of geographic and
cartographic information (the location and elevation of peaks), but I can now also see
the human written on the landscape (the names of peaks). I experience the affordances
of place differently in that I now have information (literally) written over a landscape,
a topography in its most literal sense as a writing of place. The information overlay
alters not only my orientation toward the landscape; it also provides me with a potential
set of interactions that I would not otherwise have at my disposal. For example, the
app shows its user the sun’s path mapped out across the sky, as well as the times and
locations of sunrise and sunset. While PeakFinder defaults to locating me in the here-
and-now, a settings feature allows me to alter the date, revealing to me the changing
course of the sun – and making visible (for example) the exact two days when the sun
will rise directly over a particular peak. As such, how I perceive the world offers an
opportunity to co-perceive myself within a new sense of place, with an altered range
of current and future action potentials.
Over a decade ago, I addressed the importance of understanding the “cyberspaces of
everyday life” as virtual topographies:10 performative speech acts that “write” space
through material, conceptual, and experiential processes. In a similar move, Ihde
(2009) describes what he calls a “material hermeneutics” – what is otherwise non-
perceivable is “translated by … instruments into bodily perceivable images,” a
“technological transformation of a phenomenon into a readable image” (56). Much as
Ihde (2009) suggests, I am arguing that the boundary operation of a human-device
coupling offers a “constructed and an intervening process that is deliberate and
designed [that] brings into presence previously unknown phenomena … by translating
what is detected into images that can be seen and read by embodied observers” (61).
Applications such as PeakFinder reveal the degree to which information can quite
literally overlay our embodied experience of space and place, reordering our
topographies to such an extent that we do not merely access information; we now find
ourselves embedded in it. It strikes me that the point of interaction between embodied
and digital agents is indeed reciprocal, though not necessarily symmetrical, neutral, or
innocent (to use Ihde’s terms). It is not just that PeakFinder provides a data overlay;
rather, one’s physical location equally provides a material overlay for the data mappings
of a digital agent. My double-orientation toward the device and the place where I find
myself serves as a doubled point of articulation in the production of space. Users act
upon a transformed material environment through a range of augmented affordances
– the product of digital action by digital agents in a digital environment – drawing out
data that we ourselves provide actively (through responses to prompts) or passively
(through capture of motion or habitual action) as material extension of digital devices
through our bodily orientations and dispositions.
Affordance as a concept provides us with a vocabulary for discussing the relational
coupling between user and device that is critical to ecological understandings of the
role and place of media in everyday life. Likewise, it allows us to acknowledge the gap
between two environments, one digital and the other embodied. As a material being, I
have no direct coupling with the digital, other than by way of the device. Yet, if we are
to accept Ihde’s (2009) post-phenomenological “interrelational ontology,” we must
also acknowledge that “the human experiencer is to be found ontologically related to
an environment or a world, but the interrelation is such that both are transformed
within this relationality” (23). This interrelational ontology implies that “there is a co-
constitution of humans and their technologies. Technologies transform our experience
of our world and our perceptions and interpretations of our world, and we in turn
become transformed in this process” (44). From the perspective of embodied
experience, the result is that “our sense of ‘body’ is embodied outward, directionally
and referentially, and the technology becomes part of our ordinary experience” of the
environment in which we act and interact (42). So, too, is our sense of place mediated
through this complex, yet ordinary experience of location-aware mobile devices.
As we attempt to understand this complex ecology of interacting agents and
environments, we are led to consider: what, then, serves as the interface, and for
whom? Is it the screen on my smartphone, or is it my corporeal presence, smartphone
in hand and eyes darting back and forth between the device and the world in which I
find myself? This embodied disposition marks a point of material, conceptual, and
experiential orientation toward an inhabitable space of information. Cyberspace is
indeed everywhere; yet it would perhaps be more accurate to note that our mapping
of place now assumes access to information, a digital overlay that is outside of our
environmental coupling, but discoverable through our interaction with mobile, digital
devices.
References
Anton, C. (2016) ‘On the roots of media ecology: A micro-history and philosophical
clarification’, Philosophies 1(2): 126-132.
Best, K. (2009) ‘When mobiles go media: Relational affordances and present-to-hand
digital devices’, Canadian Journal of Communication 34: 397-414.
Bucher, T. (2012) ‘The friendship assemblage. Investigating programmed sociality on
Facebook’, Television & New Media 14(6): 479-493.
Bucher, T. and Helmond, A. (2018) ‘The affordances of social media platforms’, in: J.
Burgess, A. Marwick & T. Poell (Eds.), The SAGE Handbook of Social Media. New
York: SAGE Publications. 233-253.
Chen, B. X. (2009) ‘Yelp sneaks augmented reality into iPhone app’, Wired 27
August. Available at: https://www.wired.com/2009/08/yelp-ar/ [Accessed 27
September 2018].
Chun, W. (2016a) ‘Big data as drama’, ELH 83(2): 363-382.
--- (2016b) Updating to Remain the Same. Cambridge, MA: MIT Press.
Deleuze, G. and Guattari, F. (1987) A Thousand Plateaus: Capitalism and Schizophrenia.
Minneapolis: University of Minnesota Press.
Didur, J. and Fan, L. T. (2018) ‘Between landscape and the screen: Locative media,
transitive reading, and environmental storytelling’, Media Theory 2(1): 79-107.
Available at:
http://journalcontent.mediatheoryjournal.org/index.php/mt/article/view/37
[Accessed 13 June 2019].
Farman, J. (2012) Mobile Interface Theory. New York: Routledge.
--- (Ed.) (2014) The Mobile Story: Narrative Practices with Locative Technologies. New York:
Routledge.
Gibson, J.J. (1979) The Ecological Approach to Visual Perception. Hillsdale, NJ: Lawrence
Erlbaum.
Gibson, W. (2011) ‘William Gibson, the art of fiction no. 211’ (Interview by D.
Wallace-Wells), The Paris Review 197. Available at:
https://www.theparisreview.org/interviews/6089/william-gibson-the-art-of-
fiction-no-211-william-gibson [Accessed 27 September 2018].
Google, LLC (2018) Google Maps. Available at:
https://apps.apple.com/app/id585027354 [Accessed 13 June 2019].
Hutchby, I. (2001) ‘Technologies, texts and affordances’, Sociology 35(2): 441-456.
Ihde, D. (2002) Bodies in Technology. Minneapolis: University of Minnesota Press.
--- (2009) Postphenomenology and Technoscience: The Peking University Lectures. Albany, NY:
SUNY Press.
--- (2011) ‘Smart? Amsterdam urinals and autonomic computing’, in: Mireille
Hildebrandt and Antoinette Rouvroy (Eds.), Law, Agency, and Autonomic Computing.
New York: Routledge. 12-26.
Jones, S. E. (2014) ‘Eversion’, in: The Emergence of the Digital Humanities. New York,
NY: Routledge, pp. 18-38.
Johnson, J. [Latour B.] (1988) ‘Mixing humans and nonhumans together: The
sociology of a door-closer’, Social Problems 35(3): 298-310.
Kang, H. (2011) ‘Autonomic computing, genomic data, and human agency: The case
for embodiment’, in: Mireille Hildebrandt and Antoinette Rouvroy (Eds.), Law,
Agency, and Autonomic Computing. New York: Routledge. 104-118.
Karahanna, E. et al. (2018) ‘The needs-affordances-features perspective for the use of
social media’, MIS Quarterly 42(3): 737-756.
Klinger, U. and Svensson, J. (2018) ‘The end of media logics? On algorithms and
agency,’ New Media & Society 20(12): 4653-4670.
Latour, B. (2005) Reassembling the Social: An Introduction to Actor-Network Theory.
Oxford: Oxford University Press.
McLuhan, M. (1994) Understanding Media. Cambridge, MA: MIT Press.
Miller, J. H. (1995) Topographies. Stanford: Stanford University Press.
Norman, D. (2013) The Design of Everyday Things. Revised and expanded edition. New
York: Basic Books.
Nunes, M. (2006) Cyberspaces of Everyday Life. Minneapolis: University of Minnesota
Press.
Osiurak, F., Rossetti, Y. and Badets, A. (2017) ‘What is an affordance? 40 years later’,
Neuroscience and Biobehavioral Reviews 77: 403-417.
Parchoma, G. (2014) ‘The contested ontology of affordances: Implications for
researching technological affordances for collaborative knowledge production’,
Computers in Human Behavior. 37: 360-368.
PeakFinder Ltd. (2018) PeakFinder AR. Available at:
https://apps.apple.com/us/app/peakfinder-ar/id357421934 [Accessed 13 June
2019].
Rutz, H. (2016) ‘Agency and algorithms,’ CITAR Journal 8(1): 73-83.
Schrock, A. R. (2015) ‘Communicative affordances of mobile media: Portability,
availability, locatability, and multimediality’, International Journal of Communication 9:
1229-1246.
Vaast, E. et al. (2017) ‘Social media affordances for connective action: An
examination of microblogging use during the Gulf of Mexico oil spill,’ MIS
Quarterly 41: 1179-1206.
Van Dijck, J. (2016) The Culture of Connectivity. Oxford: Oxford University Press.
Wise, J. M. (2014) ‘Hole in the hand: Assemblages of attention and mobile screens’,
in: A. Herman, J. Hadlaw and T. Swiss (Eds.), Theories of the Mobile Internet. New York,
NY: Routledge, pp. 212-231.
Yelp, Inc. (2018) Yelp. Available at: https://apps.apple.com/us/app/yelp-food-
services-around-me/id284910350 [Accessed 13 June 2019].
Notes
1 For a related reading of this cultural shift in computing as eversion, in the context of a discussion of the digital humanities, see Jones (2014). See also Farman (2012: 3-12).
2 As Norman (2013) himself notes, his use of the term “signifier” in this context is quite distinct from how the term appears in semiotics (14).
3 This would be an example of what Norman (2013) calls natural mapping, although effective mapping can also arise from arbitrary pairing between action and interaction, as long as the model is both discoverable and memorable. See Norman (2013: 22-23).
4 As Bucher and Helmond (2018) note, Latour himself acknowledges Gibson’s influence in developing his understanding of actor networks (16-17; Latour, 2005: 72).
5 For discussion of the role of feedback in sustaining conceptual models and mappings, see Norman (2013: 23-25).
6 Karahanna et al. (2018) are by no means the only researchers who have attempted to catalog new media affordances, but they provide a useful example, both in how recent their research is, and in their attempt to align their taxonomy with the work of other scholars. See Karahanna et al. (2018: 744-747) for a comparative alignment of their taxonomy with previous attempts to catalog social media affordances.
7 Schrock (2015) identifies “locatability” as a communicative affordance (alongside portability, availability and multimediality), with a direct impact on communicative practices that includes “coordination,” “surveillance,” and “locational identity” (1235).
8 Monocle has always been a “hidden” feature, originally accessible only through a body-enabled “Easter egg” unlocked by shaking one’s smartphone three times (Chen, 2009). Yelp introduced Monocle in 2009, around the same time that Jones (2014) cites a rise in everyday experiences of “mixed reality” as well as representations of these experiences in works of literature and film; as such, we may add this feature to the list of “a variety of changes in technology and culture [that] converged and culminated in a new consensual imagination of the role of the network in relation to the physical and social world” that he associates with the eversion of cyberspace (25).
9 I am by no means alone in making this observation. For a detailed discussion, see Farman (2012: 35-55). See also, for a more recent contribution to this conversation, J. Didur and L. T. Fan (2018).
10 For more discussion, see Nunes (2006): 11-19. See also Miller (1995: 3-5) for a more general reading of topography as performative speech act.
Mark Nunes is Professor of Interdisciplinary Studies and Chair for the Department of Interdisciplinary Studies at Appalachian State University. He is author of Cyberspaces of Everyday Life (Minnesota, 2006) and editor of a collection of essays entitled Error: Glitch, Noise, and Jam in New Media Cultures (Continuum, 2011).
Email: [email protected]
Special Issue: Rethinking Affordance
Forensic Aesthetics for Militarized Drone Strikes:
Affordances for Whom, and for What Ends?
ÖZGÜN EYLÜL İŞCEN
Duke University, U.S.A.
Media Theory
Vol. 3 | No. 1 | 239-268
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
Drawing upon the critical scholarship on drone warfare, this article argues that drones’ mistargeting of civilians is neither an exception nor an error but is instead intrinsic to the rationale behind militarized drone strikes. A historical overview of the cultural imaginaries and biopolitical formations corresponding to drone warfare reconfigures drone technology as an apparatus of racialized state violence. Therefore, an analysis of the affordances corresponding to drone technologies cannot be thought in isolation from the historicity of the material and discursive systems that underlie those strikes. Forensic Architecture’s investigations of covert drone strikes address the material, media, and legal systems through which these strikes operate and thus intervene in the time-space relations that characterize the entangled politics of verticality and visuality. As a result, they invert the forensic gaze through an architectural mode of analysis and a political commitment to “the right to look,” in Nicholas Mirzoeff’s terms. Ultimately, their investigations are direct interventions into the operationalization of drone technology as a technical, discursive, and political apparatus.
Keywords
Drone Warfare, Affordance, Vertical Mediation, The Right to Look, Forensic Architecture
“… But the right to look came first, and we should not forget it”
(Mirzoeff, 2011: 2)
Drones, also known as unmanned aerial vehicles (UAVs), have become fetishized
technical objects. They are popularly known for their technological acuity, despite the
fact that they regularly fail (Chandler, 2017; Parks, 2017). Weaponized drones in
particular regularly crash or hit civilians.1, 2 Drones’ aerial perspective and seeming
removal of human pilots from active conflict zones speak seductively to Modern
fantasies of omniscience and omnipotence (Kaplan, 2006: 401; Rhee, 2018: 141). In
actuality, drone technology is far from mastery, as it is contingent upon an expansive
assemblage of bodies, machines, algorithms, and signals as well as noise, ambiguity,
and delay. Militarized drone strikes operate through the further triangulation of: the
visualization methods of drone surveillance; the procedures of target-construction
(which rely on both human and artificial intelligence); and the materials (e.g. the air
that signals go through) and bodies (e.g. human laborers) involved in strike events
(Kearns, 2017: 14).
A variety of juridical arrangements, cultural imaginaries, and biopolitical formations
also accompany the operationalization of militarized drone strikes (Parks & Kaplan,
2017: 8). Individuals are produced as threats, and thus as legitimate targets, while the
spaces where this form of state-sanctioned violence occurs are rendered unruly, and
therefore threatening (Kearns, 2017: 15). And yet, drone surveillance does not lead to
greater precision, but rather to a prevalent misapprehension of both the technology’s
limits and the civilian casualties, whose actions and social customs are misread as
terrorist activity (Rhee, 2018: 142). Drones do not help to cut through the fog of war
and the uncertainties that accompany war practices, but are instead enabled through
them. The oversight and secrecy of these operations, paired with the elasticity of
definitional terms, generate a form of intangibility through which militarized states
mobilize rationalization for drone violence (Kearns, 2017).
Militarized drone operations enact a form of necropolitical violence that produces
regions and populations where death is deemed acceptable. This is enabled through
the articulation of racialized distinctions, drawn between populations deemed worthy
of life and populations whose very livelihood is framed as a threat to the essential
health and safety of the former (Allison, 2015: 121). Working within this context,
drone operators do not only misidentify their targets due to the difficulties of
coordinating disparate flows of information in real-time, but also as a result of the
racialized modes of knowing that govern and that are actualized through militarized
drone strikes (Rhee, 2018: 135). This is made explicit, in part, through the aesthetics
of drone vision. The scale and delay of satellite images turn all bodies into indistinct
human morphologies that cannot be distinguished from one another. The
İŞCEN | Forensic Aesthetics for Militarized Drone Strikes
241
representation of bodies as depersonalized pixels as well as the corresponding erasure
of difference – ambiguity, complexity, and context – facilitates processes of
dehumanization (Wall & Monahan, 2011; Parks, 2017; Rhee, 2018). These erasures
generate a racialized homogeneity that collapses all individuals into an indistinguishable
threat. The results are further concretized through the drone’s technological apparatus
itself, as local and non-Western characteristics are rendered illegible through the
decidedly Western and Eurocentric socio-technological codes available to drone
operators (for example, in relation to social customs and clothing) (Rhee, 2018: 141).
In Jennifer Rhee’s terms, drones are not designed to see humans, but rather to surveil
the already racially dehumanized (161). This racializing logic persists as certain people’s
territories, bodies, movements, and information are selected for monitoring, tracking,
and targeting regularly enough to become “spectral suspects”3 (Parks, 2017: 145).
Drawing upon the critical scholarship on drone warfare, this article argues that drones’
mistargeting of civilians is neither exceptional nor erroneous but is instead integral to
the operation and rationalization of militarized drone strikes. Instances of mistargeting
should therefore not be overlooked merely as failures of design in need of fixing;
rather, they are aligned with and actively materialize historical and structural issues
associated with the colonial legacies and racialized logics that underlie the development
and current applications of drone technology. The operationalization of a drone strike
is predicated on telecommunication networks and ground stations (which require
access to land) as well as ground surface, air, spectrum, orbit, labor and energy (Parks,
2017: 137). Thus, militarized drone strikes do not only operate through technologies
of vision, navigation, and pattern recognition, but also rely upon a set of political,
territorial, and juridical reconfigurations, which make the rationale of drone technology
far more diffuse than the straight line between an aircraft and a target (Weizman, 2014:
369).
Complicating the vertical field further, Lisa Parks reconfigures drones as technologies
of “vertical mediation,” capable of registering the dynamism of materials, objects, sites,
surfaces or bodies on Earth. Parks’ conceptualization of “vertical mediation” refers
not only to “the capacity of drone sensors to detect phenomena on the Earth’s
surface,” but also to “the potential to materially alter or affect the phenomena of the
air, spectrum, and/or ground” (135). The drone’s mediating work occurs extensively
and dynamically through the vertical field, moving from geological layers and built
environments to the domains of spectrum, the air and the outer limits of orbit. By
emphasizing verticality, Parks underscores the materializing capacities and effects of
drone operations that reorder, reform, and remediate life on Earth in the most material
ways. This is how drones establish, materialize, and communicate “vertical
hegemony,” the ongoing struggle for dominance and control over the vertical field
(Parks, 2018: 2)4.
Although he did not share a similar emphasis on the politics of verticality,5 James J.
Gibson’s ecological understanding of affordance might prove a useful entry point for
addressing the vertical reconfigurations of militarized drone strikes. According to
Gibson, the world is not a physical girding or a container of bodies in space, but is
better understood through the complexity of environmental relations and the notion
of the medium (Parikka, 2010: 169)6. For Gibson, environmental interfaces, such as
the earth, act as groundings for an organism’s action ([1979] 2015: 119-120). We act
and perceive at the level of mediums, surfaces, and substances, which is to say, in terms
of affordances. An object is not that which is “of itself,” but is conceived instead as
that which it might become in correspondence with other elements. Gibson underlines
how these affordances are relative to the physical properties of both the environment
and the organism in question, thereby emphasizing the relationality of affordances
(121). This relationality, however, is not only comprised of physically instantiated
objects. Indeed, Eric Snodgrass proposes the term “compositional affordances,” to
underscore how:
[a]ffordances (e.g. of skin, silicon, the electromagnetic spectrum) form and
are informed by the inclusions and exclusions of further intersecting
discourses (of politics, computer science, economics), the expressions and
processual actualization of which can be seen in the situated, executing
practices of any given moment (2017: 25).
Thus, Snodgrass’ reformulation of affordance in compositional terms positions it as a
form of mediation that cuts across material and discursive systems and the intersecting
registers of power that become manifest in such moves. From this perspective, drone
technology can be reconfigured as discursive and political apparatus as much as a
material one.
What emerges from this context is a series of questions concerning how material and
discursive systems shape the possibilities and actualizations of certain affordances over
others; which is to ask how the perceived affordances of a given technology emerge
within and help to intensify, maintain, or negotiate existing regimes of power. As the
asymmetrical relations that militarized drone strikes operate through might suggest,
this is ultimately a question of “affordance for whom?” In this regard, this paper argues
that the notion of affordance cannot be thought in isolation from the historicity of a
given technical object and its operationalization, which is never merely technical but
always already highly political. Correspondingly, it attempts to develop a position from
which to assess and actualize what computational media might afford in terms of
confronting state-sanctioned forms of drone-enabled violence. The potential for and
terms of critical intervention will be explored through an analysis of multiple case
studies undertaken by the multidisciplinary and collaborative research group Forensic
Architecture. Forensic Architecture’s investigations of covert drone strikes7 address
the material, media, and legal systems through which those strikes operate, and thus
intervene into the time-space relations that characterize the politics of verticality and
its entanglement with that of visuality.
Drones today: The entanglement of preemptive logic and
spaces of exception
The U.S. has been conducting overseas drone strikes since October 2001. In the
aftermath of 9/11, during the presidency of George W. Bush, the American
administration launched a secret program that put hundreds of unmanned surveillance
and attack aircraft into the skies over Iraq and Afghanistan (Satia, 2014). Since then,
the highly secretive Central Intelligence Agency and Joint Special Operations
Command have carried out hundreds of strikes in countries outside U.S. active war
zones, including Yemen, Pakistan, and Somalia, while Israel, an American ally, has
been conducting drone attacks over Gaza since 2004. Advocates argue that the drone
program reduced the need for messy ground operations like those associated with the
2003 U.S. invasion of Iraq. However, those militarized drone operations killed or
injured hundreds, if not thousands, of suspected “terrorists” and civilians, many of
whom have never been counted or identified.8
After Barack Obama came into office9, drone use increased dramatically with the
expansion of signature strikes based on a “pattern of life” analysis (Chamayou, 2013:
46). Signature strikes target groups of adult men who are believed to be militants
affiliated with terrorist activity but whose identities are not confirmed. These strikes,
made without knowing the precise identity of the individuals targeted, rely solely on
the target’s tracked behavior and on how the corresponding pattern aligns with the
“signature” of a predefined category of behavior that the U.S. military deems
indicative of terrorist activity. The preemptive logic underlying these strikes assumes that people
can be targeted not for crimes that they have actually committed, but rather
for actions that may be committed in the future. As Grégoire Chamayou emphasizes,
this marks a shift from the category of “combatant” to “suspected militant” (145). The
decision to target is based on the identification of a behavior or a pattern of life that
merely suggests a potential affiliation with terrorism. Thus, the “predictive” algorithms
used for determining targets underscore the preemptive logic of drone warfare as
symptomatic of the more general phenomenon of preemption – which often operates
as a racializing technology within the context of the Global War on Terror (Miller,
2017: 113).
American and Israeli administrations rely on the indefinite elasticity of the terms that
define a legitimate target. This currently brings most civilians living in so-called
“troubled zones” under a constant state of surveillance and threat of drone strike
(Chamayou, 2013: 145). It is through a long history of colonial law in the Middle East
and South Asia that such “frontier” and tribal zones are produced as places where the
sovereignty of their people is intentionally overlooked, delineating a “politically
productive zone of exception” (Burns, 2014: 400).10 As Madiha Tahir argues, regional
governance and U.S. drone warfare undertaken in tribal zones are extensions of British
colonial administration and policing, and are intrinsically tied to governance on the
ground, including its spatial organization (2017: 221). Far from being in a state of
“lawlessness,” tribal zones are instead subject to what Sabrina Gilani calls “an
overabundance of law” (2015: 371). This “respatialization” has produced what Keith
Feldman refers to as “racialization from above,” recasting “Orientalist imagined
geography” through new scales of relation and division (Parks & Kaplan, 2017: 4)11.
Therefore, the time-space relations that characterize drone warfare underscore the
politics of verticality and its historical underpinnings.
Historical underpinnings of drones as an “apparatus of racialized distinction”12
The weaponized drone aircraft is not a mechanism of violence that came into being in
a sudden moment of techno-military innovation. Instead, “drone bombings emerge
and thus can be also critiqued as the latest episode of a more protracted process of
state violence and domination” (Afxentiou, 2018: 302). While a thorough review of
these histories exceeds the scope of this article, it is worth highlighting a few key
historical episodes that informed the discursive and political emergence of aerial
technologies, shaping their realization as racialized technical apparatuses.
Tracing the colonial histories of aerial technologies, Priya Satia (2014) proposes a
continuity between the British rule in Iraq in the 1920s and the American invasion of
Iraq (with the UK as its ally) in the 2000s. The British Mandate used aerial control and
bombardment in early 20th century Iraq, where surveillance and punishment from
above were intended as permanent, everyday methods of colonial administration
(Satia, 2014: 2). The region was defined as somewhere “out of senses,” which created
an epistemological and political problem out of an unknown that needed to be kept
under control. Satia argues that a cultural understanding of the region, shaped by
unruly and illegible geographical conditions and a coinciding set of orientalist ideas,
guided the invention and application of British aircraft at the time of the British
Mandate. Racist and imperialist understandings of cultural difference shaped the
practical organization of surveillance in the Middle East, giving rise to and in turn
legitimizing its violent excesses (Satia, 2014: 11).
According to Satia, Royal Air Force officers justified the brutality of the interwar air
control scheme in Iraq by relying on racist assertions. For example, F. H. Humphreys,
the head of the British administration in Iraq, cautioned against distinguishing between
non-combatant and combatant civilians as “the term ‘civilian population’ has a very
different meaning in Iraq from what it has in Europe ... the whole of its male
population are potential fighters as the tribes are heavily armed” (Humphreys, cited in
Satia, 2014: 10). The idea that colonized populations were always already liable to being
bombed was therefore a result of what was understood to be their inherently
“disobedient” nature, meaning that air control did not simply denote a reactive military
action, but also a preventative measure intended to keep colonial subjects under
control; maintaining a state of horror became an effective means of preserving colonial
order in the long term (Afxentiou, 2018: 312-313).
The U.S.’ current deployments of militarized drone operations indicate an effective
reproduction of these historical and material conditions of colonial violence. Even
more broadly, contemporary tactics of “Global Counterinsurgency” call upon
computational methods to racialize and categorize certain regions as threatening, weak
or failing states requiring permanent control (Mirzoeff, 2011: 307; Vasco, 2013: 90).13
As Timothy Vasco argues, this formalization of space and the bodies within it (which
he refers to as “reconnaissance-strike complexes”) performs ‘the labor of
simultaneously separating friend from enemy, here from there, us from them, while at
the same time exposing the latter to the self-evidently necessary violence of a drone
strike in which the presence of a secured, singular, and universal power like the United
States is fully realized’ (90).
Referencing Nishant Upadhyay’s notion of “colonial continuum,”14 Rhee explains that
“drone strikes are evoked as events of exceptional violence that occur overseas, rather
than part of a continuum of state-sanctioned racial violence that occurs in the West
and is, as Upadhyay notes, both normalized and foundational to the production of the
West” (148). Indeed, Rhee draws a connection between overseas drone strikes and the
histories and present realities of state-sanctioned violence within the U.S., thereby
positioning the historical and continued influence of colonialism across nation-states
and regions.15 Ultimately, Rhee argues that militarized drone technology works to
affirm the continued dominance of the Western, post-Enlightenment subject (of
reason, autonomy) as an ontological and epistemological center, while rendering other
populations disposable, exploitable, or exposable to racialized violence (2018: 136)16.
According to Rhee, racial dehumanization – various inscriptions and erasures of the
human – is embedded in both the present drone technology/policies as well as in the
earlier histories of cybernetics (2018: 137). As Peter Galison details, cybernetics, as a
war science, was an entry point to the machine-human systems that were already
shaped around racialized discourses (1994). The founding cybernetician Norbert
Wiener’s work during the Second World War was dedicated to anti-aircraft defense
systems which aimed to track and predict the flight patterns of enemy pilots. As
Galison demonstrates, however, enemies were not all alike (1994: 231). On the one hand,
there was the Japanese soldier who was barely human in the eyes of the Allied Forces.
On the other hand, there was a more enduring enemy, a “cold-blooded and machine-
like” opponent composed of the hybridized German pilot and his aircraft (231).
Galison calls this enemy the ‘cybernetic Other’, arguing that it led the Allied Forces to
develop a new science of communication and control in line with the fantasies of
omniscience and automation.
As a legacy of the Cold War period17, cybernetics became a framework through which
the idea of the human was increasingly conceptualized. Wiener and his colleagues’
efforts to predict the future moves of the enemy airplane became an effort to compute
human action, and, ultimately, an aspiration to develop communication between a
range of entities – humans, animals, and machines (Halpern, 2005: 287). Thus, early
computational machines proposed that human behavior could be mathematically
modeled and predicted. Rather than describing the world as it is, their interest was to
predict what it would become, and to do it in terms of homogeneity instead of
difference: “This is a worldview comprising functionally similar entities – black boxes
– described only by their algorithmic actions in constant conversation with each other,
producing a range of probabilistic scenarios” (287). According to Orit Halpern, the
early cybernetics, as well as the information theory it inspired, relied on a “not-yet-
realized aspiration to transform a world of ontology, description, and materiality to
one of communication, prediction, and virtuality”18 (285).
Drawing upon methods of techno-feminist critique,19 Lauren Wilcox argues that
cybernetic conceptualizations of “the human” that seek to promote an “other than”
or “more than” human reify a particular normative version of humanity, which in
turn enables distinctions between more or less worthy forms of life (2017: 15).
According to Katherine Chandler (2018), much of the analysis of drones foregrounds
the technical systems that undergird these “unmanned” and autonomous aircraft,
while dismissing the decisive role that humans play in their operation. Drones are
either positioned as “superhuman,” referencing drones’ capacity for performing
various tasks “better” than humans, or they work to “de-humanize” drone warfare, by
ostensibly replacing the humanity of the operator or targeted person with a set of
technical operations (Chandler, 2018). In either case, the analysis of drone aircraft as
an assemblage of human-media-machine is reduced to a fascination with the
technology’s capacity for replicating and improving upon human actions, which is also
imagined as technologically inevitable.
Following Donna Haraway’s pioneering work in A Cyborg Manifesto, Chandler
reinterprets drones as cyborgs in order to reformulate binary worldviews embedded
within the dominant rhetoric of “unmanning”:
Indeed, today’s drones might be cyborgs, a point that underscores the
text’s cautionary reminder that the synthesis between human and machine
it celebrates is first and foremost a product of the Cold War military-
industrial complex. Yet cyborgs and drones remain bastards, never
acknowledged for their mixtures (2016: 3).
By complicating the cyborg nature of drones, Chandler demonstrates how drones are
not dualistic, but rely instead on a dissociative logic that disconnects the parts – human
and machine – that their operations actually link together (2016: 4). To illustrate this
point, Chandler examines the jet-powered drone aircraft known as the ‘Firebee’, which
was developed during the Cold War period to be deployed by the U.S. Army as a
training target for aerial combat. Borrowing from the cybernetic discourse of the
period, drone aircraft were presented as automata despite the fact that humans
remained essential to their operation. Chandler’s analysis shows how the Firebee’s
control system produced confusion between who or what responds to external
conditions: “Written in the passive voice, the “black box,” not a human operator,
transmits command signals to the drone’s “electronic brain,” while suggesting the
system’s apparent autonomy” (9).
For Chandler, the drone is a cyborg, and yet, the connection between operator and
aircraft is obscured, as it is understood simply as inputs and outputs filtered through a
black box. Despite the syntheses that constitute the basis of drone operations, popular
accounts are therefore able to dissociate human and machine, war and home, and
friend and enemy. Indeed, the networked operations of so-called unmanned aircraft
undo all these binary categories. In turn, as Chandler argues, “the term cyborg
reminds us that the problem is not the drone aircraft per se, but the ways drone systems
tie into ongoing practices of patriarchal capitalism, the legacy of colonialism, and
techno-determinism” (2016: 19). Accordingly, any effective challenge to weaponized
drone technology as an apparatus of racial distinction must explore the dissociative
logics that animate and justify the racialized violence of militarized drone operations.
Vertical mediation and the politics of visuality
Computationally-informed technologies of visualization, like drone imagery, operate
through the material surfaces of the Earth and the physicality of the electromagnetic
spectrum as well as the embodied grounds of human perception. These same
technologies in turn render the earth and bodies intelligible as they are mapped,
calculated, and managed. Gibson’s ecological understanding of mediation emphasizes
the extensivity and dynamism of the vertical operations of computational media. And
yet, it needs to be reformulated in order to appropriately grasp the multifaceted
struggles for dominance and control over the vertical field; which is to say, the
politics of verticality. I would therefore like to rethink Gibson’s ecological
understanding of mediation through Lisa Parks’ definition of drone technology as a
mediating machine that appropriates the vertical as the medium of its movements,
transmissions, and inscriptions. Inspired by the work of Sarah Kember and Joanna
Zylinska20, Parks defines vertical mediation “as a process that far exceeds the screen
and involves the capacity to register the dynamism of occurrences within, on, or in
relation to myriad materials, objects, sites, surfaces, or bodies on Earth” (135):
As a drone flies through the sky, it alters the chemical composition of the
air. As it hovers over the Earth, it can change movements on the ground.
As it projects announcements through loudspeakers, it can affect thought
and behavior. And as it shoots Hell fire missiles, it can turn homes into
holes and the living into the dead. Much more than a sensor, the drone is
a technology of vertical mediation: the traces, transmissions, and targets of
its operations are registered in the air, through the spectrum, and on the
ground (Parks, 2017: 135-136).
Parks mobilizes the term verticality to highlight the infrastructural and perceptual
registers through which militarized drone operations remediate life on Earth in an
intensely material way. This is also why she looks at the forensic cases of drone crashes,
where the drone’s relation to the material world becomes intelligible, and thus
contestable. Parks’ emphasis on verticality resonates with the works of those media
theorists who have employed Gibson in their work. For instance, Matthew Fuller
asserts that “ecology” is the most expressive language with which to indicate the
massive and dynamic interrelation of processes and objects, flows and matter in the
field of media theory (2015: 2). As Fuller emphasizes, technology is both a bearer of
forces and drives as much as it is made up of them; it is thus constituted by the mutual
intermeshing of a variety of technical, political, economic, aesthetic and chemical
forces, which pass between all such bodies and are composed through and among
them (56).
Similarly, Snodgrass reformulates Karen Barad’s “material-discursive” approach,21
which underlines the mutually constituted and generative forces of “matter and
meaning”, to coin the term “compositional affordances” (2017: 13). According to
Snodgrass, compositional affordances directly inform the process and practice of
making any given computational media executable:
These affordances can include those of skin, silicon, the electromagnetic
spectrum, and further on into issues such as discursive norms within areas
such as computer science, economics and politics, all of which can
potentially participate in informing, to various degrees, the question of
what is executable in any particular instance and how an executable
process might specifically be composed (2017: 13-14).
More importantly, Snodgrass brings the question of power into the picture by asking
how particular affordances lead to particular enactments of power. This is ultimately a
matter of asking for whom such networked and computationally-afforded practices
work, and which bodies, relations and forms of expression are included and excluded
through such practices (2017: 14). In response, Snodgrass emphasizes the politics of
visuality that shape the underlying material-discursive networks through which
computational media operate. For instance, he analyzes the techniques of control and
accompanying migration politics that European countries have enacted over the
Mediterranean Sea in order to manage and control both this body of water as well as
the vessels and bodies that travel across its space (235).22 A variety of intersecting legal,
economic, technological and enforcement-oriented practices, paired alongside and
realized through the affordances of matter (of water, boats, the electromagnetic
spectrum), shape the ongoing migration situation taking place within the
Mediterranean Sea.23 Snodgrass underlines how the affordances of particular forms of
imagery, such as those that enable sea navigation (i.e. satellite mapping and GPS
tracking of the territory) and those that circulate as a part of the racist media spectacle,
enable the articulation of discriminatory discourses and troubling migration politics
(253). This is how the entangled relationship between the cruel abstractions of
surveillance technologies and racialized practices of media industry helps militarized
states to enact and naturalize their violence.
As Rhee puts it, race is embedded in the history of surveillance technologies; which is
to say, surveillance, as a technology of racial sorting and subjugation, shapes the
dehumanizing tendencies of drone technology (2018: 164)24. According to Judith
Butler, the visual field is never neutral to the question of racial violence since seeing is
not a matter of direct perception but “the racial production of the visible, the workings
of racial constraints on what it means to ‘see’” (Butler, 1993: 16). For example, drone
vision turns all bodies into indistinct human morphologies that cannot be
differentiated from one another. This pattern, however, does not render everybody
equal because data are made visible in ways that can be made productive within existing
regimes of power (Parks, 2017: 145).
Indeed, strategies of racial differentiation are restructured along the vertical axis of
power since drone surveillance monitors and targets certain territories and people with
a greater frequency and intensity. The abstractions of surveillance technologies and
vision are violent, not only because of the destructive consequences of those
abstractions, but also the racialized knowing that shapes the operationalization of
those abstractions in the first place. Militarized drone strikes enact an “exclusionary
politics of omniscient vision,” through which ambiguous visual information is
operationalized within “functional categories” that “correspond to the needs and
biases of the operators, not the targets, of surveillance” (Wall & Monahan, 2011: 240).
Tyler Wall and Torin Monahan have coined the term “drone stare” to mark a
corresponding type of surveillance that abstracts people from contexts, thereby
reducing variation and noise (243). “Governmental technologies” and “political
rationalities” shape the process of target identification by turning the information on
potential target’s behaviors, and by extension the human targets themselves, into
analyzable patterns (Shaw, 2013: 548-549); this is a reduction that ultimately transforms
them into what Giorgio Agamben has termed “bare life”25. In the age of “big data,”
uncertainty is presented as an information problem, which can be overcome with
comprehensive data collection and statistical analysis that can identify patterns and
relations between persons, places, and events (Wang, 2018: 238).26
The abstractions and erasures that underlie drone surveillance and vision rely on what
Donna Haraway calls the “god-trick” of Western scientific epistemologies; they
reproduce the illusion of being able to see everywhere from a disembodied position of
“nowhere” (Wilcox, 2017: 13). Such dominant epistemologies underline long and
complicated histories of militarism, colonialism, capitalism, and patriarchy. As Rhee
argues, drone warfare points to a broader racial violence at work that affirms the
continued dominance of the figure of the Western, post-Enlightenment Subject, while
rendering other populations governable and disposable. Indeed, the Other occupies a
space in which there is “nothing to see” (Mirzoeff, 2011: 1). According to Mirzoeff,
the nonhuman/non-European became a space in which there was “nothing to see,”
not only through the invisibility or dehumanization of the colonized, but also through
the idea of man’s superiority – promoted by the ideal of the conquest of nature (Immanuel
Kant) and of the sovereign (Thomas Hobbes) (2011: 218).
As Mirzoeff highlights, visuality is a technique for the reproduction of the imaginaries
through which the state-capital nexus justifies and maintains itself.27 Interestingly, the
opposite of the “right to look” is not censorship, but visuality:
This practice must be imaginary, rather than perceptual, because what is
being visualized is too substantial for any one person to see and is created
from information, images, and ideas. This ability to assemble a
visualization manifests the authority of the visualizer. In turn, the
authorizing of authority requires permanent renewal in order to win
consent as the “normal,” or every day, because it is always already
contested. The autonomy claimed by the right to look is thus opposed by
the authority of visuality. But the right to look came first, and we should
not forget it (2).
In the case of militarized drone strikes, the oversight and secrecy of the operations
generate instances of absence and intangibility through which militarized states
attempt to legitimize drone violence (Kearns, 2017). The ability to hide and deny a
drone strike is not an insignificant side effect of this technology, but is instead a central
part of its official campaigns. As Parks argues, it is precisely the issue of not being able
to verify or confirm the identities of suspects that fuels counterterrorism as a dominant
paradigm and drone warfare as its method of response (2017: 146). According to Roger
Stahl, drone or satellite imagery manifests a way of seeing not only as a tool of strategic
surveillance but also as a prism through which state violence publicly manifests. This
way of seeing thus orients (the Western) publics’ relation to the state military complex
through an array of signs, interfaces, and screens (2018: 67).28
On the left of Fig.1 (below) is an enlargement of a satellite image of the presumed
location (DigitalGlobe, March 31, 2012). On the right is the hole in the roof
through which the drone missile entered the same building (MSNBC broadcast, June
29, 2012). The team was unable to identify the hole in the satellite image because it was
smaller than a single pixel. Case: Miranshah, FATA, Pakistan, March 30, 2012.
https://forensic-architecture.org/investigation/drone-strike-in-miranshah
Figure 1. A still image taken from the video report prepared by Forensic Architecture in collaboration with SITU Research.
It has become increasingly difficult to detect these practices of violence traversing
across multiple scales and durations, which points to what Eyal Weizman calls
“violence at the threshold of detectability” (2017). The relationship between image
resolution and missile size allows official institutions like the CIA “to neither confirm
nor deny the existence of or nonexistence of such targeted assassination”29. As Fig.1
illustrates, the material and architectural signature of a drone strike (a hole in the roof
that the missile went through) disappears under the threshold of detectability as the
intricate particularities of physical damage are erased when rendered through the
standard resolution that undergirds satellite imaging technologies and therefore also
the publicly available images that they produce. This mode of erasure calls attention to
instances of state secrecy as well as to the states’ efforts to exact violence and control
over the means through which its own violence is publicly documented and rendered
accessible. The pixelated resolution of these technologies and images is not only a
technical result of optics and data-storage capacity; it is also determined legally with
reference to security-oriented rationales: it is not only important details of strategic
sites that are camouflaged at the 50cm/pixel resolution, but the consequences of violence and
violations as well (Weizman, 2017: 29). In other words, the denials of drone strikes are
not only rhetorical gestures, but also amount to an active production of territorial,
juridical, and visual characteristics that make this deniability possible. As Kearns puts
it, “residue signifies processes of state violence that are ongoing in the present but that
remain absent from the public sphere” (16).
This is why, as Mirzoeff emphasizes, the “right to look” is not merely about seeing.
Rather, the right to look claims autonomy, not in the form of individualism or
voyeurism, but as a claim to political subjectivity and collectivity. As Derrida captured
through his conceptualization of the “invention of the other,” a recognition of the
other is required in order to have a position from which to both claim a right and
determine what is right (Mirzoeff, 2011: 1). This claiming enacts a mode of subjectivity
that has the autonomy to arrange the relations of what is seeable and sayable. For
Mirzoeff then, the right to look is not merely about seeing, but instead realizes a mode
of subjectivity that is better able to confront the police who say to us, “move on, there’s
nothing to see here”30 (1). In this regard, Forensic Architecture mobilizes acts of
witnessing, documenting, and evidence-making as counter-visual practices that are
capable of inverting the asymmetrical relationship between individuals and militarized
states. It is at this juncture that artistic collaborations might be able to generate critical
insights and meaningful actions for enacting the right to look.
Forensic Architecture: The right to look and counter-visuality
Figure 2. A still image taken from the case documentation by Forensic Architecture.
Fig.2 is from a session in which the eyewitness helped with a digital reconstruction of
the scene of the strike in a 3D model. Düsseldorf, May 21, 2013. Case: Mir Ali, North
Waziristan, October 4, 2010. Photo: Forensic Architecture. https://forensic-
architecture.org/investigation/drone-strike-in-mir-ali
Directed by Eyal Weizman and based at Goldsmiths, University of London,
Forensic Architecture is a collective of architects, software developers, filmmakers,
investigative journalists, artists, scientists and lawyers. In the case of covert drone
operations, their aim has been to describe, document, and prove the effects of these
strikes on the ground. In each case, they cross-reference different types of data
available to them, including satellite imagery, media reports, witness statements, and
on-the-ground images when and if they can obtain them. In turn, they have provided
their analysis to different groups who have used it to help seek accountability for drone
strikes or who are involved in pursuing legal processes against states using or aiding
drone warfare.
In the case of covert drone operations, the violence against people and their
surroundings is often redoubled by the violence against the evidence (Weizman, 2014:
11-12). The material ruins are usually the only visible traces of a covert drone strike,
and yet, as the earlier discussion exemplified, they are often at the threshold of
detectability. People are invisible in publicly available satellite images, which are
degraded, for reasons of privacy and security, to a resolution at which the human body
from the aerial view disappears within the square of a single pixel (Weizman, 2017: 25-
30). As a result, the space and occurrence of strike events need to be reconstructed
based on different kinds of evidence, ranging from satellite images to eyewitness
reports (Fig.2). A forensic-architectural problem arises here,
forcing an examination of “the relation between an architectural detail, the media in
which it could be captured, a general policy of killing, and its acts of denial” (27).
Similar to Gibson’s ecological approach, Forensic Architecture reanimates material
residue in an effort to expand the focus of analysis from the object to the field, which
is characterized as “a thick fabric of lateral relations, associations and chains of actions
between material things, large environments, individuals, and collective actions”
(Weizman, 2014: 27).
In order to process this expanded field of information, Forensic Architecture takes
advantage of new methods of evidence collection and develops relevant means of
verification. They achieve this by harnessing the affordances that computational and
networked media offer to such investigations. For instance, they use 3D modeling as
an optical device through which to evoke eyewitnesses’ memories of the strike
event and reconstruct the scene despite the lack or messiness of the evidence.
Significant in this case is their appropriation and repurposing of the technologies of
measurement that are primarily designed and used within the military-industrial
complex. It is these reoriented technologies that enable their direct, critical, and
creative interventions into broader techniques and applications of evidence. They
present their formulated evidence at public fora, such as international courts and art
exhibitions, while expanding the perceptual and conceptual frames of these
institutions.31 Thus, Forensic Architecture inverts the forensic gaze by intervening in
the means and practices of evidence collection, collation, and exhibition, which, when
activated within political, legal, and media systems, work to expose coinciding forms
of racialized technologies of state violence. By ultimately claiming "the right to look,"
as Nicholas Mirzoeff has articulated it, they contest the politics of visuality and erasure
through which militarized states attempt to legitimize drone warfare. Therefore, their
investigations act not only as disclosures of covert drone operations, but also serve as
a direct intervention into the very operationalization of drone technology as a
technical, discursive, and political apparatus.
Forensic Architecture’s investigations underscore how ecological analysis helps to
demonstrate the territorial, urban, and architectural dimensions of drone warfare;
which is to say, its vertical mediations. As Weizman highlights, we can no longer rely
on what is captured in single images, and should instead call upon what he refers to as
"image complexes" (2015) (Fig. 3). A time-space relation between hundreds of still
images and videos generates multiple perspectives on the same incident, including views
from the ground, the air, and outer space. The act of seeing through this form of "image
complex" is a multifaceted construction of a strike event to which access is otherwise limited. Thus,
architecture becomes useful not only as an object of analysis but also as an optical
device – as an additive and materially grounded way of seeing. For instance, their
investigations develop frame-by-frame analysis or panoramic views of multiple visual
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
258
materials where the angle of shadow/sun or a subtle surface disturbance detected on
an image helps them to locate the strike, model the building, and reconstruct the
trajectory of the missile. These images mark the intersection of ‘image complexes’ and
the ‘images’ imprinted upon and through architectural materials, resulting in the
emergence of what Weizman refers to as “architectural-image complexes.”
Figure 3. A still image taken from the video report prepared by Forensic Architecture in collaboration with
SITU Research.
Animating the shadows cast on different days and at different times helped the team
to model the scene through the shadows visible in the satellite and video images,
thereby corroborating the volumes as well as determining the approximate time – 3pm
– that the video was shot. Case: Miranshah, North Waziristan, March 30, 2012. Photo:
Forensic Architecture. https://forensic-architecture.org/investigation/drone-strike-
in-miranshah
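The geometric relation underlying this shadow analysis is elementary: a vertical edge of known or modelled height casts a shadow whose length depends on the sun's elevation angle, which in turn indexes a time of day for a given site and date. A minimal sketch of that relation (the heights and angles below are hypothetical illustrations, not figures taken from the investigation):

```python
import math

def shadow_length(height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow cast on flat ground by a vertical edge.

    A vertical edge of height h under sunlight arriving at elevation
    angle e casts a shadow of length h / tan(e).
    """
    return height_m / math.tan(math.radians(sun_elevation_deg))

def elevation_from_shadow(height_m: float, shadow_m: float) -> float:
    """Invert the relation: recover the sun's elevation angle from a
    measured shadow; a time of day can then be read off a solar-position
    table for the given site and date."""
    return math.degrees(math.atan(height_m / shadow_m))

# A hypothetical 6 m wall casting a 6 m shadow implies a 45-degree sun.
print(round(elevation_from_shadow(6.0, 6.0), 1))
```

Animating this relation over candidate dates and times, as the investigation does, amounts to searching for the solar position whose computed shadows best match those visible in the satellite and video imagery.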
The practice of detecting and making sense of the material registers of strike events,
especially when mediated through architectural-image complexes, can benefit
considerably from artistic insight. As Lawrence Abu Hamdan, one of the artists involved with
the group puts it, environmental thinking plays a key role in the architectural mode of
inverting the forensic gaze:
There are ways that truth claims, which are expressed or made differently
to how the law or science delineate the truth, can be folded into the
production of an artwork. And that’s a very important distinction to make,
especially because law and science draw outlines around their objects. For
example, law says: ‘This here is a millimeter-thin wood veneer that covers
this cupboard as an object that is separated from the world in which it’s
surrounded.’ Whereas I think that an artistic process of telling the truth is
the opposite: it’s about blurring the line between the veneer, the door, the
space and its reflections, taking into account its sound and all the other
phenomena around it. This way of working is the extension of the way
artists approach their work as a spatial and environmental practice; so that
a video artist knows the electricity cable going to the screen is an important
part of the work, or a painter knows the light conditions of the room are
an element of the work. We are trained in this environmental way of
thinking (Abu Hamdan, 2018).32
Abu Hamdan’s emphasis on environmental thinking recalls Gibson’s ecological
understanding of perception or what he calls “ecological optics”. In contrast with
analytical and physical optics, which reduce objects and surfaces within the
environment to points and atoms, ecological optics consider the reciprocal dynamics
of environmental relations as well as the movement of the observer ([1979] 2015: 59;
80-81). Similarly, Forensic Architecture’s investigations retrieve the thickness of
surfaces and bodies involved in the strike event, thereby rendering it tangible, and thus
contestable.33 In contrast to the abstractions and erasures through which drone
technology operates as a racialized apparatus, Forensic Architecture’s counter-visual
practice restores the context within which the operationalization of militarized drone
strikes is embedded. According to Weizman, they are “building narratives, not only
dismantling state ones, by cross-referencing different kinds of aesthetic products such
as images, films, haptic materiality, memory, language and testimony."34, 35 The
forensic-architectural model is not composed of a series of visual perceptions in a
given physical space, but is instead formed by a set of relations that combine
information, imagination, and insight into a rendition of physical and psychic place. In
contrast to the authoritarian and objective discourse of science, “counter-forensics”36
is politically committed and motivated by a sense of solidarity (Weizman, 2014: 13).
Through their revisitation of the critical affordances of ecological thinking, Forensic
Architecture heightens the investigative capacity of architectural methods and artistic
insight. These are further elevated by their production and presentation of
evidence in the form of public address and political claims. Unlike recent trends
within the field of human rights and international law, Forensic Architecture’s
investigations do not identify a solid object as the provider of a stable and fixed
alternative to the human uncertainties and anxieties that are part of the practice of
testimony and evidence. According to Weizman, forensic aesthetics:
[…] is not simply a return to a pre-Kantian aesthetics in which the sensing
object was prioritized over the sensing subject – rather, it involves a
combination of the two. Material aesthetics is merely the first layer of a
multidimensional concept that Thomas Keenan and I called forensic
aesthetics. Forensic aesthetics is not only the heightened sensitivity of matter
or of the field, but relies on these material findings being brought into a
forum. Forensic aesthetics comes to designate the techniques and
technologies by which things are interpreted, presented, and mediated in
the forum, that is, the modes and processes by which matter becomes a
political agent (2014: 15).
Forensic Architecture extends the material residue across the thick fabric of relations
between material things, discursive practices, and collective actions. In other words,
their investigations acknowledge the historicity of discursive and technological systems
(legal, media, etc.) within which the material residue is generated, documented, and
represented. In this vein, Forensic Architecture’s investigations activate the political
affordances of the material trace. These affordances not only retroactively restore
the critical potential of matter; the material traces that they hinge upon also become
viable grounds through which to transmit information about strike events to the
public, thus generating the possibility of collective action.
What is ultimately at stake here is Forensic Architecture’s activation of the right to
look. As noted before, the right to look is a claim to a political collectivity by
reorganizing the fields of what is seeable and sayable, thereby opposing the authority
which seeks to legitimize its domination with the practice of visuality. As Mirzoeff
emphasizes, visuality supplements the violence of authority by presenting authority as
self-evident: that "division of the sensible whereby domination imposes the sensible
evidence of its legitimacy" (Rancière, [1998] 2007: 17, as cited in Mirzoeff, 3).37 In
contrast, counter-visuality opposes the “unreality” created by the authority and
proposes a real alternative.
Finally, Forensic Architecture repurposes computationally-informed technologies for
connecting singular events to larger patterns of contemporary warfare. For instance,
their investigations reveal connections between spatial patterns of drone strikes and
the increased number of civilian casualties that are concretized within the militarized
states' reorganization of urban spaces and policing strategies.38, 39 Indeed, their
investigations do not only render the events and sites of covert drone operations
visible but also trace the continuum along seemingly dissociated spatial and temporal
relations underlying the contemporary technologies of visuality, surveillance, and
violence that are operative within current neoliberal governance at a global scale. For
example, militarized drone strikes are based on predictive algorithms that are not
unlike those used in the technical analysis of the financial stock market or
environmental degradation, all of which interpret and display future outcomes by
analyzing past patterns.40
Certainly, these incidents are not all the same; however, when mapped together, they
demonstrate that any effective analysis of militarized drone technology as a complex
technical, discursive, and political apparatus must explore the interconnections –
spatial, vertical, and historical – that it exposes or produces. In this regard, the “right
to look” in Mirzoeff’s terms takes the form of this very mapping that resituates
affordances of militarized drone technology within larger flows of aesthetics, violence,
and capital. Ultimately, the affordances of any given technology cannot be thought of in
isolation from its compositional affordances, namely the affordances by which it is
enabled and which it helps to make possible; which is to say, its historicity.
Conclusion
Drawing upon critical scholarship on drone warfare, this article has argued that the
mistargeting of civilians by drones is neither an exception nor an error, but is central to the
operation of, and rationale behind, militarized drone strikes. The historical overview of the
cultural imaginaries and biopolitical formations of drone technology reconfigures it as an apparatus
of racialized state violence. Thus, militarized drone strikes operate through a set of
political, territorial, and juridical reconfigurations, which make the rationale of drone
technology far more diffuse than the straight line between an aircraft and a target
(Weizman, 2014: 369). Here, I find Lisa Parks’ notion of “vertical mediation” and Eric
Snodgrass’ understanding of “compositional affordances” useful to reformulate the
concept of affordance as a form of mediation that cuts across material and discursive
systems that animate militarized drone operations today. This raises the question of
how material and discursive systems shape the possibilities and actualizations of certain
affordances over others, which is to ask, how affordances of a given technology
emerge within and help to intensify or negotiate the existing regimes of power.
In the case of covert drone operations, the violence against people and surroundings
is redoubled by the violence against the evidence (Weizman, 2014: 11-12). The material
ruins are usually the only visible traces of a covert drone strike but they are often at
the threshold of detectability. Thus, Forensic Architecture inverts the forensic gaze by
intervening in the means and practices of evidence within political, legal, and media
systems that animate this specific racialized technology of state violence. By
claiming “the right to look” in Nicholas Mirzoeff’s terms, they contest the politics of
visuality, through which militarized states attempt to legitimize drone warfare.
Therefore, their investigations act not only as disclosures of covert drone operations,
but also as direct interventions into the very operationalization of drone technology as
a technical, discursive, and political apparatus.
References
Afxentiou, A. (2018) ‘A History of Drones: Moral(e) Bombing and State Terrorism,’
Critical Studies on Terrorism, 11(2): 301-320.
Allinson, J. (2015) ‘The Necropolitics of Drones,’ International Political Sociology, 9(2):
113-127.
Burns, J. (2014) ‘Persistent Exception’ in: Forensic Architecture, ed., Forensis: The
Architecture of Public Truth. Berlin: Sternberg Press, pp. 400-408.
Butler J. (1993) ‘Endangered/endangering: Schematic racism and white paranoia,’ in:
R. Gooding-Williams, ed., Reading Rodney King, Reading Urban Uprising. New York:
Routledge, pp.15–22.
Chamayou, G. (2013) Drone Theory, trans. Janet Lloyd. London: Penguin Books.
Chandler, K. (2016) ‘A Drone Manifesto,’ Catalyst: Feminism, Theory, Technoscience, 2(1):
1-23.
Chandler, K. (2018) ‘Drone Errans,’ Presented at Institute for Cultural Inquiry, June
12, 2018, Berlin, Germany.
Fuller, M. (2005) Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge:
MIT Press.
Galison, P. (1994) ‘The Ontology of the Enemy: Norbert Wiener and the Cybernetic
Vision,’ Critical Inquiry, 21(1): 228-266.
Gilani, S. (2015) ‘“Spacing” Minority Relations: Investigating the Tribal Areas of
Pakistan Using a Spatio-Historical Method of Analysis,’ Social and Legal Studies, (1):
359-380.
Gibson, J.J. ([1979] 2015) The Ecological Approach to Visual Perception. New York: Taylor
and Francis Group.
Gregory, D. (2017) ‘Dirty Dancing: Drones & Death in the Borderlands’, in: L. Parks,
and C. Kaplan, eds., Life in the Age of Drone Warfare. Durham: Duke University Press,
pp. 23-58.
Halpern, O. (2005) ‘Dreams for Our Perceptual Present: Temporality, Storage, and
Interactivity in Cybernetics,’ Configurations, 13(2): 283-319.
Kaplan C. (2006) ‘Mobility and War: The Cosmic View of US ‘Air Power,’’ Environment
and Planning A, 38(2): 395–407.
Kearns, O. (2017) ‘Secrecy and Absence in the Residue of Covert Drone Strikes’,
Political Geography 57: 13-23.
Miller, A. (2017) ‘(Im)material Terror,’ in: L. Parks, and C. Kaplan, eds., Life in the Age
of Drone Warfare. Durham: Duke University Press, pp. 112-133.
Mirzoeff, N. (2011) The Right to Look: A Counter-history of Visuality. Durham: Duke
University Press.
Parikka J. (2010) Insect Media: An Archaeology of Animals and Technology. Minneapolis:
University of Minnesota Press.
Parks, L. & Kaplan, C. (Eds.) (2017) Life in the Age of Drone Warfare. Durham: Duke
University Press.
Parks, L. (2017) ‘Vertical Mediation,’ in: L. Parks, and C. Kaplan, ed., Life in the Age of
Drone Warfare. Durham: Duke University Press, pp. 134-158.
Parks, L. (2018) ‘Introduction,’ in: Rethinking Media Coverage: Vertical Mediation and the
War on Terror. New York: Routledge, pp. 1-24.
Rhee, J. (2018) ‘Dying: Drone Labor, War, and the Dehumanized,’ in: The Robotic
Imaginary: The Human and the Price of Dehumanized Labor. Minneapolis/London:
University of Minnesota Press, pp. 133-174.
Satia, P. (2014) ‘Drones: A History from the British Middle East,’ Humanity: An
International Journal of Human Rights, Humanitarianism, and Development, 5(1): 1-31.
Shaw, I. G. R. (2013) ‘Predator Empire: The Geopolitics of U.S. Drone Warfare’,
Geopolitics, 18: 536-559.
Snodgrass, E. (2017) Executions: Power and Expression in Networked and Computational
Media. [Dissertation Series: New Media, Public Spheres and Forms of Expression]
Malmö: Malmö University Press.
Stahl, R. (2018) Through the Crosshairs: War, Visual Culture, and the Weaponized Gaze.
New Brunswick: Rutgers University Press.
Tahir, M. (2017). ‘The Containment Zone,’ in: L. Parks, and C. Kaplan, eds., Life in the
Age of Drone Warfare. Durham: Duke University Press, pp. 220-240.
Vasco, T. (2013) ‘Solemn Geographies of Human Limits: Drones and the Neocolonial
Administration of Life and Death,’ A Journal of Radical Theory, Culture and Action,
6(1): 83-107.
Wall, T., & Monahan, T. (2011) ‘Surveillance and Violence from afar: The Politics of
Drones and Liminal Security-Scapes,’ Theoretical Criminology, 15(3): 239-254.
Wang, J. (2018) ‘Racialized Accumulation by Dispossession in the Age of Finance
Capital: Notes on the Debt Economy,’ in J. Wang, Carceral Capitalism. Cambridge:
MIT Press, pp. 99-150.
Weizman, E. (2014) ‘Introduction’, in: Forensic Architecture, ed., Forensis: The
Architecture of Public Truth. Berlin: Sternberg Press, pp. 9-32.
Weizman, E. (2015) ‘Before and After Images’, Loose Associations (1). Accessed on
September 3, 2018 at:
https://thephotographersgalleryblog.org.uk/2016/05/22/the-image-complex/
Weizman, E. (2016) ‘Interview with Eyal Weizman’, International Review of the Red Cross,
98(1): 21-35.
Weizman, E. (2017) Forensic Architecture: Violence on the Threshold of Detectability. New
York: Zone Books.
Notes
1 According to the statistics of the Bureau of Investigative Journalism, U.S. drone operations in Yemen, Pakistan, Somalia, and Afghanistan have caused between 769 and 1,725 civilian deaths since the bureau began recording data. For further information on drone numbers, see the related site of the Bureau of Investigative Journalism: https://www.thebureauinvestigates.com/projects/drone-war
2 There is a huge disparity in civilian death tolls between the U.S. official reports and other sources (e.g. reports prepared by other states, NGOs, journalists, and independent investigators), which is largely caused by the U.S. method of counting who is an enemy combatant. According to U.S. drone policy, a “military-aged male” is defined as any man who is an adolescent or older. Any military-aged male who is killed in a drone strike is classified as an enemy combatant unless posthumous evidence to the contrary is provided. However, the U.S. has no procedure in place to determine whether someone who was killed by its drone strikes was a civilian or an enemy combatant. The remains of the dead are often unidentifiable due to the intensity of the damage caused by missiles. For further information, see: Living under Drones: Death, Injury, and Trauma to Civilians from US Drone Practices in Pakistan (International Human Rights and Conflict Resolution Clinic, Stanford Law School and Global Justice Clinic, NYU School of Law, 2012). In March 2019, U.S. President Donald Trump revoked the Obama-era requirement that U.S. intelligence officials publicly report the number of civilians killed by U.S. drone strikes outside its active war zones.
3 With this term, Parks refers to the process of visualization of data (e.g. temperature) “that take on the biophysical contours of a human body while its surface appearance remains invisible and its identity unknown” (145). Parks examines aerial infrared drone imagery, which is able to isolate suspects according to the energy emitted by their bodies. In this regard, Parks argues that visual surveillance practices are extended beyond epidermalization while operating within a radiographic episteme and at spectral levels.
4 In her examination of U.S. hegemony after 9/11, Parks coins the term “vertical hegemony” which “involves efforts to maneuver through, activate technologies within, occupy, or control the vast stretch of space between the earth’s surface and the outer limits of orbit as well as the kinds of activities that can occur there” (2018: 3). The struggle for vertical hegemony is based on the predominant assumption that controlling the vertical field that satellite, aircraft, and broadcasting operate through is equivalent to controlling life on Earth.
5 Gibson’s theory of ecological perception was rooted in his wartime research with aircraft and pilots while appointed at the U.S. army. During the Second World War, Gibson became interested in pictures and films as a psychologist concerned with training young soldiers to fly airplanes (Gibson, [1979] 2015: 261-262).
6 Parikka connects Gibson’s “ecology of visual perception” to the works of media/cultural theorists as well as philosophers, whose works contribute to what is characterized as “milieu-medium theory” (2010: 169-171).
7 Forensic Architecture conducted detailed case study analyses of the U.S.’ and Israel’s drone strikes that took place in Pakistan, Yemen, and Gaza: Datta Khel, North Waziristan, March 16-17, 2011; Mir Ali, North Waziristan, October 4, 2010; Miranshah, North Waziristan, March 30, 2012; Beit Lahiya, Gaza, January 9, 2009; Jaar and al Wade’a, Abyan Province, Yemen, 2011. For further information on Forensic Architecture’s investigations of covert drone strikes as well as their broader investigations of
air strikes, please refer to the relevant pages of their website: https://forensic-architecture.org/category/airstrikes
8 The information is gathered from the websites of Forensic Architecture (https://forensic-architecture.org) and the Bureau of Investigative Journalism (https://www.thebureauinvestigates.com/projects/drone-war).
9 The current U.S. President Donald Trump inherited the framework of drone operations outside the declared battlefields from his predecessor, Barack Obama. Nonetheless, strikes doubled in Somalia and tripled in Yemen in 2017, the first year of Trump’s presidency:
https://www.thebureauinvestigates.com/stories/2017-12-19/counterrorism-strikes-double-trump-first-year
10 According to Derek Gregory, the U.S. has capitalized on and contributed to a series of overt legal maneuvers through which the FATA has been constituted as what Giorgio Agamben has called more generally a “space of exception”: a space in which “a particular group of people is knowingly and deliberately exposed to death through the political-juridical removal of legal protections and affordances that would otherwise be available to them” (2017: 28-29).
11 Feldman, K. (2011) ‘Empire’s Verticality: The Af/Pak Frontier, Visual Culture and Racialization from Above’, Comparative American Studies, 9(4): 325-341.
12 Allinson, J. (2015) ‘The Necropolitics of Drones,’ International Political Sociology, 9(2): 120.
13 Referencing Foucault, Mirzoeff argues that the goal of counterinsurgency is not to generate stability. Instead, it normalizes “the disequilibrium of forces manifested in war,” not as politics, but as “cultural,” “the web of meaning in a given place and time” (307).
14 Upadhyay, N. (2013) ‘Pernicious Continuities: Un/Settling Violence, Race and Colonialism,’ Sikh Formations 9(2): 263-268.
15 Here, Rhee refers to Wall and Monahan’s emphasis on the commonality of the strategies and disproportionate targeting between the U.S.’ domestic war on crime (e.g. New York Police Department’s stop-and-frisk program) and the global war on terror.
16 Here, Rhee draws upon the works of Denise Da Silva and Sylvia Wynter as well as Nishant Upadhyay (135-136).
17 Cybernetics’ premises of control and predictability cannot be thought in isolation from the constant threat of nuclear warfare during the Cold War. According to Joseph Masco, the U.S. Global War on Terror mobilized a wide range of affective, conceptual, and institutional resources established during the Cold War (e.g. existential danger and state secrecy). See: Masco, J. (2014). The Theater of Operations: National Security Affect from the Cold war to the War on Terror. Durham: Duke University Press.
18 Here, Halpern uses the term “virtuality” in terms of possibility rather than a simulation.
19 For this particular point, Wilcox draws upon Katherine N. Hayles’ book How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (University of Chicago Press, 1999).
20 Kember, S. & Zylinska, J. (2012) Life after New Media: Mediation as a Vital Process. Cambridge, MA: MIT Press.
21 Barad, K. (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press, p. 3.
22 Snodgrass draws upon the Forensic Oceanography project, undertaken by Charles Heller and Lorenzo Pezzani, who are part of the Forensic Architecture team. For further information: https://www.forensic-architecture.org/case/left-die-boat/
23 Rancière, J. (2011) “Ten Theses on Politics,” Translated by Rachel Bowlby, Davide Panagia, and Jacques Rancière, Theory and Event 5(3): 217.
24 Here, Rhee refers to Simone Browne’s book Dark Matters: On the Surveillance of Blackness (Durham, NC: Duke University Press, 2015), which demonstrates how the history of surveillance is entangled with the history of transatlantic slavery and the continued targeting of blackness.
25 See Agamben, G. ([1995] 1998) Homo Sacer: Sovereign Power and Bare Life (trans. Daniel Heller-Roazen). Stanford: Stanford University Press.
26 As Wang argues, data is interpreted and visualized not as a reflection of empirical reality; rather, data extraction and visualization actively construct the reality and predict the future, which has material consequences in the present.
27 According to Mirzoeff, “visuality’s first domains were the slave plantations, monitored by the surveillance of the overseer, the surrogate of the sovereign. This sovereign surveillance was reinforced by violent punishment and sustained a modern division of labor. From the 18th century onward, visualizing was the hallmark of the modern general, since the battlefield became too extensive and
complex for any one person physically to see” (2011: 2). Visualizing became a task for maintaining the authority of the visualizer above and beyond the visualizer’s material power. Also see: Mirzoeff, N. (2014) ‘Visualizing the Anthropocene,’ Public Culture, 26(2): 213-232.
28 By the Gulf War of the early 1990s, the view through high-tech weapons, such as smart or seeing bombs, became publicly available and came to dominate the Western perception of wars at a distance. As Roger Stahl emphasizes, the “weaponized gaze” restructured the civic sphere as an extension of the military, governing the relationship between civil and military spheres in the West (2018). Eventually, this gaze has evolved into “a powerful means through which the military-industrial complex apprehends civic consciousness” (3).
29 Taken from the video prepared by Forensic Architecture: https://www.forensic-architecture.org/case/drone-strikes/
30 See Rancière, J. (1998) Aux bords de la politique. Paris: Gallimard.
31 Even though it is beyond the scope of this paper, it is important to highlight the changing dynamics of the presentation and reception of Forensic Architecture’s reports. As they move from one region/institution to another, the audience, actions, and meanings they engage with also keep changing and lead to different political implications.
32 The quote is taken from an interview with Lawrence Abu Hamdan, conducted by Mohammad Salemy, published on April 6, 2018 and accessed on March 6th, 2019. Available at: https://ocula.com/magazine/conversations/lawrence-abu-hamdan/
33 Images captured by drones and transmitted to their operators are exemplary of what Harun Farocki calls “operational images,” referring to images that do not represent an object but instead constitute part of an operation. As Farocki suggested, these images are devoid of social intent and are not meant for reflection. Similarly, Paul Virilio traces a co-constitution of militaristic and cinematic ways of seeing in the 20th century with the rise of aviation technologies, subsumed under his notion of the “logistics of perception”. In this regard, Forensic Architecture performs a counter-visual practice in Mirzoeff’s terms that contests and converts the operational aesthetics of militarized drone vision.
34 https://frieze.com/article/id-rather-lose-prizes-and-win-cases-interview-eyal-weizman-turner-prize-nominated-forensic
35 Even though this article’s focus is on the politics of visuality, both Weizman’s and Abu Hamdan’s emphasis on the multimodality of the aesthetic mode of analysis speaks to how violence and investigations of that violence can operate through means other than the visual, such as sound. For further examination of the role of sound in drone warfare, see: Schuppli, S. (2014) ‘Uneasy Listening,’ in: Forensic Architecture, ed., Forensis: The Architecture of Public Truth. Berlin: Sternberg Press, pp. 381-392.
36 Keenan, T. (2014) ‘Photography and Counter-forensics,’ Grey Room, no. 55.
37 Rancière, J. ([1998] 2007) The Future of the Image (trans. G. Elliott). New York: Verso.
38 In a collaboration with the Bureau of Investigative Journalism, Forensic Architecture developed a platform which provides a spatial analysis of the drone strikes in the frontier regions of Pakistan between 2004 and 2014. This mapping shows that as buildings become the most common targets for drone operations, the casualties have predominantly occurred inside them, thereby indicating a relation between target type, location and casualty count: http://wherethedronesstrike.com/
39 Eyal Weizman discusses in depth how a city can operate as an apparatus with which warfare is designed and conducted in the case of Israel’s Architecture of Occupation of Palestine. This is also where he tackles the politics of verticality: Weizman, E. (2007) Hollow Land: Israel’s Architecture of Occupation. New York, NY: Verso.
40 The associations through which algorithms work escape the laws of cause and effect, as they rely on correlational patterns, and thus operate in a fluid state of exception. Predictive algorithms encompass the financial sector, the military-security nexus, and the entertainment industry. See Abreu, Manuel. “Incalculable Loss”, The New Inquiry, August 2014. https://thenewinquiry.com/incalculable-loss/
Özgün Eylül İşcen is a PhD candidate in the Program of Computational Media, Arts and Cultures at Duke University, United States. Her dissertation examines the historical and current applications of computational media within the context of the Middle East and underlines wider flows of technology, culture, and capital. She received her BA in Sociology from Koç University,
Turkey, and MA in Interactive Arts and Technology from Simon Fraser University, Canada. Email: [email protected]
Special Issue: Rethinking Affordance
Take Back the Algorithms!
A Media Theory of
Commonistic Affordance
SHINTARO MIYAZAKI
Academy of Art and Design FHNW, Switzerland
Media Theory
Vol. 3 | No. 1 | 269-286
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
This essay critiques the ‘black-boxing’ of many computational processes, which are argued to result in a kind of ‘unaffordability’ of algorithms. By engaging with current theoretical debates on ‘commoning’ – signifying a non-profit-oriented, solidarity-based approach to sharing, maintaining, and disseminating knowledge and experience – the essay offers a formulation of commonistic affordance in algorithmic contexts. Through the discussion of widely used computational tools such as the Viola-Jones object detection framework, radical steps towards a ‘making affordable’ of algorithms are outlined, and the widespread corporate propertisation of computation processes is contrasted with a speculative vision of algorithmic commoning.
Keywords
Commoning, Affordance, Viola–Jones object detection algorithm, Practice-oriented critical media studies
Introduction
Millions of humans are living, communicating, and working in recursively nested body-
mind-media-ecosystems, comprised of information, data, and sensor networks,
algorithmic systems, communication protocols, media gadgetry, physical
infrastructures such as cities, and landscapes co-inhabited by species such as bacteria,
plants, and animals. The ubiquitous potentials for interaction, use, and influence
unfolding between these entities, their environments, and the structures that they are
both actively influencing and being passively influenced by, are often framed by what
has, since the late 1970s, been called “affordance” (Evans et al., 2017) – a concept that
soon became popular, particularly in fields such as user experience and interaction
design. A simple example might serve to illustrate the implications of this transposition of the affordance concept to digital contexts: while a door handle is tangible, and its affordances are realized through sensorial experience, many of the critical processes that characterise our interactions with and experiences of algorithmic infrastructures – processes that we are surrounded by and upon which we are becoming increasingly dependent – are designed to be imperceptible. Not only are these technical
processes increasingly embedded within socio-economic contexts, such as those
driven by the neoliberal obsession with the maximization of profit, but they are also
increasingly designed to be unchangeable. The German media scholar Friedrich Kittler
has called this situation “protected mode” (1997);1 here Kittler is referring to the
architecture of modern central processing units (CPUs), where access to CPU memory
storage is restricted to internal system applications, so that certain functionalities
remain hidden from the user.2 “Protected mode” as a concept is applicable to all sorts
of situations occurring while digital technologies unfold, and where access and agency are restricted for the sake of security and performance optimization. Such protections
represent serious obstacles to any efforts at self-deterministically changing the body-
mind-media-ecosystems that any individual is living in.
This article therefore begins by arguing for the necessity of granting access to the inner
workings of our body-mind-media-ecosystems and their many affordances. This is an
urgent matter, especially for configurations in which such systems foster power
imbalances, discrimination, and exploitation. The slogan “Take Back the Algorithms!”
thus stands for an attempt to transform some of the malicious affordances of our
algorithmically driven environments into more equitable ones. This, I argue, can only
succeed when done collectively, as a form of commoning – a concept that is used here
to signify a non-profit-oriented, solidarity-based approach to sharing, maintaining, and
disseminating knowledge and experiences of the algorithms that govern our body-
mind-media-ecosystems. I therefore formulate a practice-oriented media theory of
commonistic affordance below, which advocates for a broad approach to ‘making
affordable’. A commonistic affordance, in this sense, is one that enables commoning
rather than suppressing it. To design, plan, and realize any commonistic affordances
requires efforts to render intentionally concealed, blurred, obfuscated, and protected
processes of measurement, counting, control, and surveillance (such as, for example,
algorithmically driven facial recognition) visible, understandable, accessible – and thus
more affordable. ‘Making affordable’ is therefore not merely an epistemological
endeavour, but an activity that opposes and counteracts attempts of commercial or
ideological enclosure. ‘Making affordable’ is thus not merely an isolated, singular
action, but rather involves persistent struggles against power imbalance. Commonistic
affordance is a key concept for this undertaking, and attempts to show alternatives to
the typically profit-oriented, exploitative, discriminatory ways in which, for example,
commercial software might pre-determine its offerings of interactive affordances.
Commonistic affordances emphasize accessibility and openness. They offer poetic and
utopian potentials for the body-mind-media-ecosystems that we inhabit, and with
which we increasingly struggle. In algorithmic contexts, commonistic affordances
escalate this potential for utopianism, since any running algorithm might (and should)
afford glimpses into the workings, processes, and operativity of a more desirable, a
more commonistic, future. This sort of recursive in-world modelling (i.e., the
modelling of algorithms by algorithms), that behaves in a non-profit-oriented, non-
exploitative manner, and which is instead community-oriented, also indicates the need
for a reconsideration of the environmentality of algorithms.3
Communities pursuing the self-organized sharing, organizing, and processing of
resources – such as energy, information, or material goods – are often called
commonist (as they are dealing with commons), while what they are doing together is
accordingly called commoning (Dyer-Witheford, 2007; Shantz, 2013; Bollier &
Helfrich, 2015). Commoning in the context of media technologies implies a closeness
with the Free-and Open-Source-Software (FOSS) movement, as it is based on the idea
that software and data are so-called ‘digital commons.’ While most digital resources
are usually owned – or at least controlled – by closed, exploitative, profit-oriented
corporate or quasi-corporate entities, digital commons are generated, organized,
processed, and shared by an open community of individuals and/or collectives.4 To
secure digital commons and open source projects from commercialization, appropriate
non-permissive licensing that prevents their commercial exploitation is crucial. Digital
commoning is not only informed by Anarcho-Marxist concepts and a general sense of
criticality towards the promises of innovation (in the form of new solutions and new
designs), but also needs to be highly self-critical with regards to its own contexts,
agencies, biases. It is furthermore necessary to generate moments, scenarios or
concrete utopia that are both anticipatory and practice-oriented (Bloch, 1986: 146;
Levitas, 1990: 18). Such concrete utopia – one might also call them heterotopia –
would allow for the regaining of at least some autonomy from the data extractivism of
exploitative, profit-oriented industries and forms of governance. Commoning is thus
also about pursuing an ideology that differs from that of the selfish search for ever-
growing profit. The implications of these attitudes for algorithmic contexts and the
discussion of the affordance concept will be detailed further below.
According to a special report produced by The Economist, the top winners after the
financial crisis of 2008 are, by and large, companies working with information
technology, including Apple, Alphabet, Microsoft, Amazon, and Facebook
(Economist, September 17th 2016: 3). Set against such a backdrop, commoning
involves taking back or regaining control over information technology, particularly
when it comes to matters of freedom of expression, racial discrimination, and various
kinds of unjustifiable inequality.5 The slogan “Take Back the Algorithms!” is therefore
inspired by Take Back the Economy, a post-capitalist creed co-written by J.K. Gibson-Graham (the collective pen name of feminist economic geographers Katherine Gibson and Julie Graham), Jenny Cameron, and Stephen Healy (2013), which was itself
inspired by “Take Back the Night” – the name of an international non-profit
organization which, since the late 1970s, has sought to end all forms of sexual,
relationship, and domestic violence, with a particular focus on enabling women to
redeem control over their experience of public spaces at night. The present article
builds on the spirit of these slogans, not through a gesture towards victimization, but
instead through one of empowerment and liberation.6 As I will argue, to take back
algorithms implies programming without always immediately thinking about useful,
innovative, efficient or profitable applications. Even more importantly, it means to
make algorithms more ‘affordable’ (in the sense outlined above), so that everybody can
access and use them. Ideally, this implies a playful-yet-careful and self-reflective
practice that repositions itself continuously, in an effort to detect the hidden
affordances of algorithmic eco-systems.
Making Algorithms Affordable
A successful taking back of algorithms from exploitative, profit-oriented organizations
and companies requires practices and actions which, metaphorically speaking, would
‘make them affordable.’ Algorithms are indeed mathematical, symbolic, and abstract
structures, but they should not be mistaken for algebraic formulae. The difference is
that instructions carried out by algorithms are non-reversible, whereas algebraic
formulae are always reversible. Mathematics as such has no real-world effect, while
algorithms are vector-dependent; they need time to unfold and thus they embody time
and have real-world impacts (Miyazaki, 2016: 129). Algorithms, therefore, are not only
already-situated in socio-economic contexts, they also strongly determine what we can
say, communicate, know, feel, see, and hear (Mitchell & Hansen, 2010: vii). Algorithms thus quite literally put things forth, forward, or further. Affordance, in this sense,
is the potential and capacity to move forward, to change things. Algorithms, when
stored and not-yet-unfolded, have affordances, since they are made of instructions to
structure and move hard-, soft- and wetware. Operated by semi-conductor-based chip-
architectures, they consist of orders that assign or shift values from one storage
location (address) to another. Making algorithms affordable under such considerations
implies foremost their liberation from their protectedness and “mute[d] efficacy,” as
Kittler formulated in the early 1990s (1997: 161). Here, ‘making affordable’ thus
derives a new meaning, namely that of making something graspable, tangible, usable, movable
and shareable. In this way, the output of algorithms also, quite literally, becomes
something that can be paid for.7
There are further potential entry points for a definition of ‘making affordable.’ Making
affordable also considers the role of mediation in the sense of filtering. In
computational contexts, making affordable additionally invokes circuit-bending as a
way of manipulating circuits and changing their taken-for-granted functions without
formal training or approval (Hertz & Parikka, 2012: 426). Code-bending as an
extension of circuit-bending invades concealed layers of algorithmic governance, often
symbolically and literally breaking apart a software system and playing with it without
formal expertise, manuals, or a predefined goal (Hertz & Parikka, 2012: 426). Making
affordable therefore opposes acts of simplification, reduction, enclosure, and
commercialization that are conventionally esteemed in human-computer interaction
and other design fields. Popular slogans like “Don’t Make Me Think,” coined by the user experience (UX) designer Steve Krug (2000), gesture towards the fact that the ultimate
aim in such fields is the elimination of complex openness by making things easier to
understand. This is something that frequently occurs by way of black-boxing processes
that might disturb or confuse users. Making algorithms affordable then, aims to
develop a better understanding by following a different route, namely that of making
processes easier in order to then complicate them again, thereby unlocking potential
alternatives. Accordingly, ‘making’ here also corresponds to a kind of un-making (Gaboury, 2018).
In this sense, the affordances of algorithmic systems are not exhausted by their
intended and programmed functions. Instead, they can, potentially, afford much more,
such as unexpected glitches, new uses, and different types of users. The mastery of
tools, equipment, and media technology often includes the mastery of their
malfunctions8; making affordable, in this context, means to liberate such systems from
the constraints of fully predetermined ‘mastery,’ and instead enables users to become
independent agents in their interactions with the systems in question. Making
algorithms affordable, finally, is an activity that involves the ongoing struggle against
tendencies to enclose them, to make them privately owned, to increase their value, and
then to sell them. Activist and cultural studies scholar Max Haiven describes this sort
of theft as “Enclosure 3.0,” in which the technological capacities of computation and
algorithmic control emerge as a neoliberal form of enclosure that reaches expansively
across the globe and intensively into daily life and the “imagination” (2016: 280). To
make algorithms affordable is thus to un-make their capitalistic value, while at the same
time making them usable and applicable for as many users as possible, such that they
become ‘common.’
Machine Vision as an Example
Media artist and coder Adam Harvey’s series CV Dazzle (2010 – ongoing) serves as a
good example to further concretize and draw critical attention to both the troubling
algorithmic affordances of the contemporary field of computer vision, and to utopian
responses to the problematic implications of this technology. CV Dazzle concerns
processes of automated face-detection executed by algorithmically operating camera-
computer systems. The project serves to render otherwise invisible aspects of
surveillance technologies graspable, while also exploring alternative designs that are
intended to counteract the surveillant gaze and to allow individuals to become self-
deterministically invisible. The project webpage9 describes several make-up techniques
that can make a face undetectable for algorithms, operating in correspondence with a
so-called cascade classifier that discriminates the data according to pre-coded
conditions and rules. These rules are, thankfully, included in the FOSS based Open
Computer Vision (OpenCV) library, and can therefore be used widely in many different
contexts including for artistic and activist purposes. Among many initiatives and
software environments that benefit from access to this library, a good example is
Processing, a popular cross-platform integrated software development environment
(IDE) designed to increase the accessibility of coding in art and design.
The so-called Viola–Jones object detection framework (Viola & Jones, 2001) allows the automatic detection of faces and other visual forms embedded in images. This algorithmic
framework has been incorporated into many of the commercial webcams and
photographic cameras that were produced around 2010. Significantly, this algorithm is
not proprietary, and is available open source, with good documentation. What follows
here is a lengthy description of the algorithm’s crucial steps and processes.
Understanding and following the operations of an algorithm is an important and
necessary step for taking it back and making it affordable.
When detecting faces, the Viola–Jones object detection algorithm first uses a list of Haar10 features such as those illustrated in Figure 1. These visual features are then used as criteria for analyzing approximately five thousand photographs of faces, an analysis that creates the so-called “cascaded decision tree” provided with the OpenCV library. The
decision tree, also called a ‘classifier,’ is the result of a machine learning process that
combines adaptive boosting (Adaboost) with a so-called integral image algorithm or
summed-area table algorithm, a combination that accelerates and optimizes the
process. The creation of this ‘classifier’ constitutes a type of supervised machine
learning, since the training is done on pre-categorized data. Checking a list of Haar features against a single image yields a value expressing how many of the features match. First, the algorithm verifies all negative examples (non-faces), which results in
low numbers. Then, it checks all positive examples (faces), which results in high
numbers. A high number thus indicates a high likelihood that an image shows a face.
The algorithm now repeats this checking with as many features at different sizes and
positions as possible,11 leading to a set of threshold numbers that ultimately help to
decide whether an image is a face or a non-face. The features are then organized so
that there is a tree of decisions. This decision tree ensures that the best feature, which
detects whether an image is a face or not, is tested first, then the second-best feature
is tested, then the third, and so on.
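The steps just described can be sketched in code. The following minimal Python/NumPy reconstruction (the function names and the toy image are mine, not part of OpenCV) shows the integral-image trick and the evaluation of a single two-rectangle Haar feature; a trained cascade would threshold thousands of such values in the decision-tree order outlined above.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: entry [y, x] holds the sum of img[:y+1, :x+1].

    With this table, the pixel sum over any rectangle can be computed
    from four lookups, independent of the rectangle's size -- the trick
    that makes Viola-Jones fast enough for webcams.
    """
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in the h x w rectangle at (top, left), via 4 lookups."""
    padded = np.pad(ii, ((1, 0), (1, 0)))  # guard row/column of zeros
    return (padded[top + h, left + w] - padded[top, left + w]
            - padded[top + h, left] + padded[top, left])

def haar_two_rect(ii, top, left, h, w):
    """A two-rectangle Haar feature: (sum of left half) - (sum of right half).

    A large magnitude indicates a strong light/dark contrast -- the raw
    signal that the cascade's weak classifiers threshold on.
    """
    half = w // 2
    return (rect_sum(ii, top, left, h, half)
            - rect_sum(ii, top, left + half, h, half))

# Toy 4x4 "image": bright left half, dark right half.
img = np.array([[9, 9, 1, 1]] * 4)
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # 64: a strong light/dark contrast
```

The dependence on light/dark contrast that this sketch makes visible is exactly what the discussion of skin color below turns on.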
Fig. 1¹²
The Haar features, at least in the most common version of the Viola–Jones face
detection algorithm, are based on simple black-white contrasts (see Fig. 1). They are
useful for analyzing faces, but their operations are impacted by skin color. The
algorithm is therefore a case of programmed racism.13 It could not detect images showing faces that contain few or no light-toned elements. This was presumably not only a result of training the classifier – the decision tree described above – with a biased set of images containing few or even no dark-skinned faces, but might also have been an amplification effect of the feature selection as such. A lack of white regions in
a face leads to failures in the detection. In some cases, this creates an algorithmic bias,
as the algorithm is more inclined to detect light-skinned faces, while not being
receptive to dark-skinned faces. Computer vision in this case is not neutral or
transparent, but has, as mentioned already above, a racialized filter. One might
provocatively write: Viola–Jones face detection as computer vision is also a racist
vision. As of 2019, the algorithm is still part of OpenCV.
In responding to this issue, Harvey’s CV Dazzle shows playful ways to explore the
functionality and limits of the Viola-Jones algorithm. It makes it apparent that
computer vision, and algorithmic systems more generally, can yield serious instances
of discrimination, racial or otherwise, when not carefully designed. The project
explicitly refers to OpenCV, so that those who experienced the project in an exhibition,
on a webpage, or in a talk can easily learn more about the underlying technology. CV
Dazzle thus makes the Viola-Jones algorithm affordable not only epistemologically, but
also ethico-aesthetically by highlighting its flaws and malfunctions. Foregrounding
ethico-aesthetic affordances, in this context, extends a concept by Felix Guattari that
involves “speak[ing] of the responsibility of the creative instance with regard to the
thing created [...]” (1995: 107; Brunner et al., 2012: 42). Works and projects like CV
Dazzle, in combination with learning-based tear-downs of relevant algorithmic systems
and activist attitudes, will be crucial for taking back algorithms on a step-by-step basis.
The issue of algorithmic bias14 is not only addressed by artists. In December 2016, a
group of computer scientists and software engineers around Ansgar Koene of the University of Nottingham filed a so-called Project Authorization Request (PAR) for
a new IEEE (Institute of Electrical and Electronics Engineers) standard, and formed
the “IEEE P7003 Working Group – Standard for Algorithmic Bias Considerations.”
As formulated there, the standard is planned to provide programmers of algorithms
designing autonomous or intelligent systems with certified methods that afford clearly
articulated accountability and clarity regarding how algorithms are targeting, assessing,
and influencing their users and stakeholders.15 While this sort of effort in the realm of
engineering standards and policies is of course legitimate, we must nevertheless ask
how a more community-oriented approach could unfold.
Commonistic Affordance
To be clear: A commonistic affordance operates in the name of commonism. This
happens rarely, since we can assume that purposefully designed affordances will
operate, most of the time, to make profit. As French Marxist philosopher Henri
Lefebvre has noted, the rhythm of capital is one of production and destruction (2004:
55). While capitalists in the early 20th century ultimately controlled the rhythm of
factory machines, “vectoralists” (Wark, 2004) are now controlling the algorithms of
our body-mind-media-ecosystems. Notably, the term bias is etymologically derived
from the French term biais, meaning slope, i.e., a path that goes up or down. It thus
implies a gradient, a vector. Vectoralists are those who have the means of realizing the
value of these vectors, gradients and biases. They control “the vectors along which
information is abstracted, just as capitalists control the material means with which
goods are produced, and pastoralists the land with which food is produced” (Wark,
2004: para. 29). As the example of CV Dazzle beautifully shows, to design commonistic affordances that allow us to pursue the idea of taking back algorithms implies reclaiming accessibility, and detecting, amplifying, and playing with the poetic, socio-technical, and utopian potentials of the body-mind-media-ecosystems that we live with.
Making an algorithm such as the Viola–Jones object detection ‘affordable’ implies
furthermore what the philosopher Timothy Morton would call a “context explosion”
(Morton, 2018: 91); it does not merely involve directing our attention towards the
algorithm’s biases, alternatives, and playful usages (as a reflective artwork might), but
also towards its inner parts, which, again, embody more affordances. These parts are
built upon instructions that are, at the lowest level, built-in as micro-instructions on
the CPU or GPU-level. Querying the affordances of an algorithm thus leads to the
finding that these affordances are recursively intertwined as in a fractal shape. Making
algorithms affordable ideally implies working with algorithms on a daily basis:
algorithms should not be expensive things we dream of and desire but cannot afford.
An important pre-condition for this coming true is that an algorithm, such as the
Viola–Jones object detection, is foremost not proprietary, but is instead open source
and well documented.
Here, my example shows some flaws: two years after its first description in 2001, the
Viola–Jones object detection algorithm was open sourced and included with the
OpenCV framework (Kruppa et al., 2003). And yet, its license is still, from a
commonistic point of view, malfunctional. Although OpenCV is open source, its
license is not based on the GNU General Public License, but on a so-called permissive free software license, which does not prohibit an algorithm’s commercial application.
Even when the code is well documented and fully open sourced, to maintain its
commonistic affordance, an insistence on keeping it non-commercial is therefore
highly important. Additionally, it is not enough simply to re-use the modules, libraries, and demo examples of a set of algorithms; what is required is a genuine desire to know, recognize, and play with their inner workings, together with an increased sensitivity to their timing.
This requires approaches that go beyond a mere rational, abstract, and mostly textual
understanding. A more sensorial connection with the object of study is needed here.
Cultivating recursive practices and applying media technologies to understand other
media technologies might be a first step to increasing our conscious connectivity with
and environmentality of our body-mind-media-ecosystems, their algorithms, and
affordances. Can we hear computer vision? What would it feel like? What belongs to
the environment of a Viola–Jones object detection algorithm? Is the human reading
or watching Viola–Jones object detection at work also part of its environment?
Environmentality is a concept borrowed again from philosopher Timothy Morton,
who in the context of climate change defines it as a “becoming aware of something
that is just functioning, yet now we have global warming and pollution. We are aware
of it, because some kind of malfunction is taking place” (Morton, 2012: 97).
Remembering that the concept of affordance was originally developed in the context of an “ecological approach to visual perception” (Gibson, 1986), and only later famously turned into a design concept (Norman, 1988), points towards a taking-back of its environmental aspects, which have been forgotten in the interim. As described
earlier, works like CV Dazzle, for example, can give us clues as to the malfunctioning
of rather new sorts of environments (compared to those of buildings, landscapes,
atmosphere, climate, etc.), namely those of algorithms, which are, increasingly,
intermingling with every other type of environment.
CV Dazzle increases our environmentality, our awareness, of something that
commonly remains unnoticed. Making an algorithm affordable in this sense means not
to regard it as a closed black box, but instead to try to learn about its inner workings
by connecting it with an “experimenter,” thus creating feedbacked couplings with it,
as the early cybernetician Ross Ashby had already envisioned in the 1950s (1956: 87).
Exploring the affordances of a method, an algorithm, or a digital technique also
involves exploring the full spectrum between what you are and what an algorithm is,
and what you and what this algorithm seem to be: what is Viola–Jones object detection
and what does it seem to be? Where are the limits of Viola–Jones object detection as
an entity? Do the images – the data – processed and learned influence the behavior
and effectivity of Viola–Jones object detection? Yes, certainly. Is its racism a feature
or a bug? Was it intentional, or more a result of a general tendency linked with the
cultural and epistemic backgrounds of Paul Viola and Michael Jones? More context
explosion: What kind of entity is performing the algorithm? The CPU? The monitor?
Our consciousness? Our affecto-somatic body? The operating system? The
semiconductor minerals inside the CPU? To take back algorithms is not merely a way
of asking questions and making things more complicated; it is also an offer for further
affordances and malfunctions to emerge. Whether these affordances are planned or
not is insubstantial. More important is whether they enable more solidarity and
commoning, rather than more competition, and whether they might lead to new
insights regarding how we can live together in a self-determined fashion and share
things, resources, knowledge, and affects. Entangled with this concretely utopian
approach is also the aspiration of organizing movements such as commonism in ways
that are inseparable from experimentation, design, and an acknowledgement of its
reciprocity to body-mind-media-ecosystems (Lovink & Rossiter, 2018: 171).
Thinking in ways that are concretely oriented towards utopian goals while also being
media-theoretically informed about commonistic affordances also implies that we need
to think about more solidarity with algorithms, which might be considered as
something akin to companions or co-species. ‘Solidarity’ is etymologically related to
the Latin term solidus, and refers to a kind of non-hollow whole, a solid, a body.
Solidarity in the 18th century was redefined as solidarité, signifying a joining together of
people with shared interests and mutual responsibility. Solidarity as it is meant here – and to be consistent – does not, or should not, pursue the enclosing of things into a body, but rather a situation in which a body becomes porous, full of holes and connections. Sharing interests together, being mutually responsible for one
another, and thus making things affordable for each other implies an understanding
that we are all linked together, also in case of malfunction.
Increasing solidarity with machinic eco-systems – even if it is meant merely metaphorically – implies generally more inter-growing between human, machinic, organic, and other sorts of ecosystems. We need more environmentality, one that not only includes our organic co-habitants, but extends to all kinds of non-human and non-organic technological entities chirping, screeching, wiggling, shaking, jiggling or rocking – in more technical terms, signalling – in the informational energy fields that we are surrounded by. This means not merely an exploration of their structures,
software, hardware, and in-between layers, as mentioned earlier, but also an opening
of ourselves. We need to become more aware of our porosity (of our holes and connections) and at the same time become more porous – more open in affective,
psycho-technological and perceptive meanings. This is not meant in the sense of a
Silicon Valley-inspired “radical openness” that has become integral to contemporary
capitalism, but in the sense of an even more radical opening of new channels to our
cognition and perceptions of algorithmic systems; this is an openness that includes
algorithms’ malfunctions and that is always oriented towards learning new things about
commoning, as part of a multiple, poly-structural body-mind-media-ecosystem.
Ultimately, this also implies a sort of increased and technologically augmented,
technically mediated, computerized engagement with all types of energy fluctuations
(bioelectric, electromagnetic, thermal, kinetic, gravitational, nuclear etc.), which should
be linked to docking stations on our bodies and into our thinking. Simply put: it involves a playful exploration of alternative, sometimes poetically dysfunctional, sensor-actor couplings, installations, or configurations. Most importantly, in doing all of this, we should never forget to counteract movements that might once again enclose what has been opened.
Exploring algorithmically automated decision-making processes on all scales of our
media culture, media scholar Florian Sprenger ingeniously remarks that, “[a]lthough
we might still be able to identify individual decisions, we will always be too late to the
scene, because their sheer number and speed exceeds our capacities” (2015: 113). Still,
since the increased connectivity between machinic systems, from which humans are
excluded, is unavoidable, it is critical that we ensure that it is never “too late” to
reconnect. Algorithms are usually perceptually beyond reach; making them ‘affordable’
is therefore crucial. To understand the commonistic affordances of an algorithm, one needs to play and co-operate with it – and never leave again. Increasing solidarity with
machinic affordances through commoning also implies responsibility, active careful
engagement, and continued self-criticality. If you make something affordable, you are
responsible for it. This includes an attentiveness to the neo-liberalist tendency to
further enclose things in order to make profit. Competition and growth are tolerated,
but only as long as the rhizome or tumor is benign, and as long as it serves the idea of
mutual, even symbiotic, solidarity, living, and sharing together in a manner in which
all members of a community can live – even if such a goal is reached only after a long
series of conflicts and discussions. Potentials for such agonistics are of course always
intended (Mouffe, 2013).
Ultimately, discussion regarding commonistic affordance is never final. This article is
a non-solution. Commonistic affordance can never be fully articulated as it unfolds
along recursive trajectories. Affordances afford affordances in a never-ending différance
of concrete utopia. Commonistic affordances (of algorithms) are a hopeful signal
towards a future short-circuited with our now.
References
Ashby, W.R. (1956) An Introduction to Cybernetics. New York: John Wiley & Sons.
Bloch, E. (1986) The Principle of Hope. Cambridge, MA: MIT Press.
Bollier, D. & Helfrich, S. eds. (2015) Patterns of Commoning. Amherst, MA: Levellers
Press.
Brunner, C., Nigro, R. & Raunig, G. (2012) ‘Towards a New Aesthetic Paradigm.
Ethico-Aesthetics and the Aesthetics of Existence in Foucault and Guattari’,
RADAR. Musac’s Journal of Art and Thought, 1: 38–47.
Burckhardt, L. (2017) Design Is Invisible. Basel: Birkhäuser.
Chun, W.H.K. (2009) ‘Introduction: Race and/as Technology; or, How to Do
Things to Race’, Camera Obscura: Feminism, Culture, and Media Studies. 24(1): 7–35.
Dyer-Witheford, N. (2007) ‘Commonism’, Turbulence (June).
[http://turbulence.org.uk/turbulence-1/commonism]
The Economist (2016) ‘Companies Special Report: The Rise of the Superstars,’ The
Economist, 1–14. Available at:
http://www.economist.com/sites/default/files/20160917_companies.pdf
Evans, S.K. et al. (2017) ‘Explicating Affordances: A Conceptual Framework for
Understanding Affordances in Communication Research’, Journal of Computer-
Mediated Communication, 22(1): 35–52.
Gaboury, J. (2018) ‘Critical Unmaking. Toward a Queer Computation,’ in J. Sayers,
ed. The Routledge Companion to Media Studies and Digital Humanities. New York:
Routledge, 483–491.
Gibson-Graham, J.K., Cameron, J. & Healy, S. (2013) Take Back the Economy: An
Ethical Guide for Transforming Our Communities. University of Minnesota Press.
Gibson, J.J. (1986) The Ecological Approach to Visual Perception. New York: Routledge.
Guattari, F. (1995) Chaosmosis. An Ethico-Aesthetic Paradigm (trans. Paul Bains & Julian
Pefanis). Bloomington: Indiana University Press.
Haiven, M. (2016) ‘The commons against neoliberalism, the commons of
neoliberalism, the commons beyond neoliberalism,’ in S. Springer, K. Birch, & J.
MacLeavy, eds. The Handbook of Neoliberalism. New York: Routledge, 271–283.
Hertz, G. & Parikka, J. (2012) ‘Zombie Media: Circuit Bending Media Archaeology
into an Art Method,’ Leonardo, Vol. 45(5): 424–430.
Hörl, E. (2018) ‘The Environmentalitarian Situation,’ Cultural Politics. Vol. 14(2): 153–
173.
Kittler, F.A. (1997) ‘Protected Mode,’ in J. Johnston, ed. Literature, Media, Information
Systems: Essays. Amsterdam: G & B Arts International, 157–168.
Lefebvre, H. (2004) Rhythmanalysis: Space, Time and Everyday Life [Éléments de
rythmanalyse, Paris: Éditions Syllepse, 1992]. London/New York: Continuum.
Levitas, R. (1990) ‘Educated Hope: Ernst Bloch on Abstract and Concrete Utopia,’
Utopian Studies. Vol. 1(2): 13–26.
Lovink, G. & Rossiter, N. (2018) Organization After Social Media. Colchester: Minor
Compositions.
Mitchell, W.J.T. & Hansen, M.B.N. (2010) ‘Introduction’, in W. J. T. Mitchell & M.
B. N. Hansen, eds. Critical Terms for Media Studies. Chicago: University of Chicago
Press, vii–xxii.
Miyazaki, S. (2016) ‘Algorhythmic ecosystems. Neoliberal couplings and their
pathogenesis 1960–present,’ in R. Seyfert & J. Roberge, eds. Algorithmic Cultures.
Essays on Meaning, Performance and New Technologies. New York: Routledge, 128–139.
Morton, T. (2012) ‘Mal-functioning,’ The Yearbook of Comparative Literature. Vol. 58:
95–114.
Morton, T. (2018) Being Ecological. Pelican.
Mouffe, C. (2013) Agonistics: Thinking the World Politically. London: Verso.
Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New
York: NYU Press.
Norman, D. (1988) The Psychology of Everyday Things. New York: Basic Books.
O’Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and
Threatens Democracy. New York: Penguin.
McGlotten, S. (2016) ‘Black Data,’ in E. Patrick Johnson, ed. No Tea, No Shade. New
Writings in Black Queer Studies. Durham: Duke University Press, 262–286.
Shantz, J. (2013) Commonist Tendencies: Mutual Aid Beyond Communism. New York:
Punctum Books.
Sprenger, F. (2015) The Politics of Micro-Decisions. Edward Snowden, Net Neutrality, and the
Architectures of the Internet. Lüneburg: meson press.
Viola, P. & Jones, M. (2001) ‘Rapid object detection using a boosted cascade of
simple features,’ in Proc. IEEE Computer Society Conference on Computer Vision and
Pattern Recognition. IEEE, pp. 511–518.
Wark, M. (2004) A Hacker Manifesto. Cambridge: Harvard University Press.
Acknowledgments
This article was written in the context of the project “Thinking Toys (or Games) for Commoning” (project no. 175913, 2018–2021), funded by the SNSF – Swiss National Science Foundation. I am especially thankful to Yann Patrick Martins, our in-team programmer, for his valuable suggestions.
Notes
1 Its German original was published in 1991.
2 This is an idea that we connect more generally to the “invisibility of design,” as formulated by Lucius Burckhardt already in the late 1970s (2017).
3 Environmentality is a useful concept here in that it can describe the wider implications of commonistic affordances, as discussed in more detail below.
4 A commendable FOSS community is the p5.js community. See https://p5js.org/community/
5 I will unpack some of these aspects further below.
6 Historically, it should be remembered that geometry, arithmetic, music, and astronomy, together with rhetoric, logic, and dialectic, were the seven fields of the liberal arts taught at universities in Western Europe for at least five hundred years, and that aspects of power and control linked to mathematics did not gain momentum until the dawn of statistics as an applied science, strongly linked to the rise of statehood and theories of governance in the 18th century. Notably, the term statistics is etymologically rooted in the New Latin statisticum, meaning “of the state.”
7 While framed primarily in epistemological rather than economic terms, ‘making affordable’ in this case also reminds one that epistemology and economy are always intertwined.
8 See, for example, Morton, 2012 for a similar idea. I will take up this concept again below.
9 https://cvdazzle.com
10 Named after the mathematician Alfréd Haar.
11 In the case of the classifier included in the OpenCV library, there were 6,000 features. See Viola & Jones, 2001: I–515.
12 This image has been released into the public domain by its author, Prmorgan at English Wikipedia. This applies worldwide: https://en.wikipedia.org/wiki/File:Prm_VJ_fig1_featureTypesWithAlpha.png
13 See, for example, Chun, 2009; McGlotten, 2016; and Noble, 2018 for the relations between race, technology, data and algorithms.
14 See, for example, the Algorithmic Justice League by Joy Buolamwini, or ORCAA, a consulting company founded by the above-mentioned mathematician Cathy O'Neil that helps companies and organizations audit their algorithmic risks.
15 See http://sites.ieee.org/sagroups-7003/files/2017/03/P7003_PAR_Detail.pdf
Shintaro Miyazaki is a Senior Researcher at the Institute of Experimental Design and Media Cultures at the Academy of Art & Design in Basel FHNW, Switzerland. He obtained a PhD in media theory at Humboldt-Universität zu Berlin (2012). His work oscillates between scholarship and practice-based research projects, with a focus on media technology. His current interests include cybernetics, design theory, fictional world-building, machine learning, self-organization, commoning and non-solution-oriented co-design. Email: [email protected]
Special Issue: Rethinking Affordance
The Art of Tokenization:
Blockchain Affordances and the
Invention of Future Milieus
LAURA LOTTI
Independent Researcher
Media Theory
Vol. 3 | No. 1 | 287-320
© The Author(s) 2019
CC-BY-NC-ND
http://mediatheoryjournal.org/
Abstract
Ten years after the introduction of the Bitcoin protocol, an increasing number of art-tech startups and more or less independent initiatives have begun to explore second-generation blockchains such as Ethereum and the emergent practice of tokenization (i.e., the issuance of new cryptoassets primarily to self-fund decentralized projects) as a means to intervene in the structures and processes underlying the rampant financialization of art. Yet amidst the volatility of the cryptocurrency market, tokenization has been critiqued as a way to reinscribe and proliferate current financial logics in this new space. Acknowledging such critiques, in this essay I foreground the novelty of cryptotokens and blockchains by exploring different examples of how tokenization has been deployed in the art market-milieu. In spite of recent attempts to extend the scarcity-based paradigm to blockchains, I argue that cryptotokens do introduce differences in kind in the ways in which value generation and distribution are expressed and accounted for in digital environments. In this context, artistic approaches to tokenization can illuminate new aspects of the affordances of these technologies, toward the disintermediation of art production and its networked value from the current institutional-financial milieu. This can open up new ways to reimagine and reprogram financial and social relations, and gesture toward new opportunities and challenges for a practice of digital design focused on the ideation and realization of cryptoeconomic systems.
Keywords
contemporary art, technicity, networked cultures, financialization, tokenization, blockchain
1. Affording affordances: art between financialization and
tokenization
Ten years after the invention of the Bitcoin protocol, blockchain is eating the art
world – with an increasing number of startups, galleries, institutions and more or less
independent initiatives exploring second-generation blockchains such as Ethereum,
and the emergent practice of tokenization, as a means to intervene in the structures
and processes underlying the rampant financialization of art. Tokenization refers to
the issuance of smart contract tokens, conventionally (but not necessarily) through
the ritualized event of an Initial Coin Offering (ICO), which allows access to the
existing or prospective value generated by a specific asset – such as gold, computing
power, storage, even artworks, and, more generally, an alluring value proposition for
a decentralized ecosystem. Promising the disintermediation of funding streams – and
more broadly, processes of value generation and transfer – from mainstream
financial-institutional channels and the creation of network effects around
independent projects, tokenization was initially heralded as a new tool for
organizational and economic autonomy. However, amidst the volatility and
sensationalism of the cryptocurrency markets and yet-unresolved technical and
regulatory challenges, the practice has been subject to criticism due to its inability to
deliver on the promise of an alternative financial-organizational system that would
minimize the role of centralizing third parties (states, enterprises, banks, institutions,
boardrooms, platforms, exchanges) while fostering an open and decentralized web.
Instead, tokenization is often seen as a means to reinscribe and
proliferate current financial logics in the digital realm – by providing more granular
means for the monetization of digital interactions while leveraging the speculative
nature of markets.1
Acknowledging the extent to which the digital has become a prime site of economic
(in addition to social and cultural) production, my proposition in this paper is that we
should consider blockchains and tokens as novel technologies in the initial stages of
a process of individuation, rather than prematurely aborting the inquiry into their
affordances as already overdetermined by tendencies of financial accumulation with
which we are intimately familiar. I borrow the concept of individuation from Gilbert
Simondon’s genetic philosophy of technology (2013; 2017). This postulates that
technical objects evolve – “concretize” – analogically to living beings by discovering
a “recurrent causality” (a feedback mechanism) with an associated milieu (the
surrounding environment), with which the object exists in a relation of mutual
conditioning. The associated milieu enables the technical object to acquire an
“internal resonance” and convergence according to its own finality (Simondon, 2017:
26). As the technical object evolves, it attains higher degrees of concretization that
allow it to become multi-functional by extending and dynamically integrating its
associated milieu “into itself through the play of its functions” (Simondon, 2017: 50),
therefore gaining not only an internal consistency but also an external resonance.2
While Simondon was mainly referring to the physical environment surrounding a
technical object — such as the integration of the action of the river, as motor and
cooling agent, in the functioning of the Guimbal turbine (2017: 57) — in the
unfolding of this paper I will expand on the differences introduced by computational
systems in the concretization of a digital milieu through the work of Yuk Hui to
provide a reframing of the concept of affordance updated to today’s digital age.
Thus, the theoretical assumption of this paper is that we should understand the
token-based networks emerging around blockchain technology as systems in the
early stages of a process of individuation with an associated milieu yet to be
discovered fully. By emphasizing the concept and role of the milieu as opposed to
limited conceptualizations of the market,3 I want to stress that these new digital tools
should not solely be investigated according to economic notions inherited from the
industrial economy (such as scarcity and return on investment) but, from a more
ecological perspective, as means to potentially introduce new modes of organizing
processes of collective individuation different from those allowed by computational
capital as it currently exists.4 Thus, following this philosophical trajectory, the
challenge becomes one of learning to understand and reason with these tools in
order to be able to leverage their novelty, by informing and structuring new
dimensions of economic, social, and cultural exchange. In making this proposition, I
aim to open up new ways to think constructively about the creation not only of new
markets, but first and foremost new milieus, the ‘value(s)’ of which is/are indexed by
a circulating token.
A thorough discussion of the philosophical underpinnings of my claim will not be
the prime focus of the present paper. Instead, I will illustrate this by discussing
examples of how tokenization has been deployed in the art world. Thanks to the
capacity for experimentation afforded from within its own domain of operations and
its proximity to the logic of networked value production and valuation practices, the
art field (as a limit case in a much broader spectrum of creative approaches to
blockchains) provides a fecund milieu within which to investigate and express the
imperceptible and yet concrete capabilities of blockchains and tokens. As a matter of
fact, since at least as early as the New Tendencies (1961-1973) and Cybernetic
Serendipity (1968) exhibitions, artists have played a crucial role in pushing the
boundaries of technological research by exploring, making felt and, at times,
‘misapplying’ the affordances of new technologies (Scarlett, 2018). To be clear, I am
not interested in investigating such projects as artworks. Rather I will be analyzing
them as singular cases of Simondonian technical systems, foregrounding the ways in
which they differentially operationalize the affordances of blockchain, as a novel
technical form, through their applications and the relations they instantiate with a
nascent milieu. While it is true that in many cases the tokenization of art merely
reinscribes the scarcity-based approach inherited from the industrial economy to
information, I aim to show that artistic approaches to tokenization are able to
foreground the potentialities of these technologies to unlock new imaginaries for
systems of value creation. In so doing, they gesture to the social, financial and
aesthetic affordances that these tools may offer, not only to artists but more broadly
to networked producers seeking autonomy from current institutional-financial forms.
In the following section, I introduce the terms of the debate around the
financialization of art in relation to networks and blockchains. Subsequently, I
reframe the concept of affordance through the lens of Simondon’s philosophy,
extended through Yuk Hui’s theorization of digital objects, and foreground the role
of the associated milieu in relation to the concretization and individuation of a
technical system. Further, I will discuss examples of artistic engagements with
tokenization, tracing parallels and differences with current financial and
organizational forms. I then discuss in more depth the new possibilities opened up
by the structural and transactional affordances of tokenized systems –
“cryptosystems” – toward the structuration of new market-milieus. Ultimately, I
gesture towards the opportunities and challenges for emerging cryptocultures
engaged with the realization of such systems.
2. The tokenization of art, part 1: the financialization of
networks, and the (failed?) promise of the blockchain
The relation between art and finance has been widely debated, with a particular focus
on the key role of financial markets in shaping the cultural, political and social milieu
within which art operates (Wiley, 2018), and as the sources of cultural funding have
been put under unprecedented scrutiny by artists, cultural practitioners and
institutional players who are receivers of such funding (Corbett, 2018; Fraser, 2018).
In this context, the financialization of art is manifested, on the one hand, by the
expansion and professionalization of art investments (Velthuis and Coslor, 2012),
and, on the other hand, by its operational analogy with the logic of derivatives
markets, in view of the abstracted, networked processes that characterize art’s ‘value’
and valuation in its post-medium condition (Ivanova, 2016).
Arguably, the financialization of art can be seen as part of a larger socio-cultural
phenomenon, which consists in the spreading of patterns of financialization to the
socially networked sphere. This can be conceived in a two-fold manner: on the one
hand, it corresponds to the consolidation and increasing legitimacy (in the political
economy of the Web) of a business model and power order characterized by its
reliance on information trading as a key source of value generation, rather than by
material production, coupled with the establishment of dynamic forms of rent to
define Internet monopolies (see Marazzi, 2011; Pasquinelli, 2009; 2015). On the
other hand, it is manifested as a more insidious tendency by which the operational
mode of derivative finance has pervaded digital networked environments. This logic
can be described in terms of the abstraction of the forms and processes of value
creation from any material referents and the recombination and commensuration of
all forms of capital (affective, cognitive, cultural, social) to price, allowing for “the
continuity of circulation in and across immensurable difference” (Cooper, 2010: 179;
see also Bryan and Rafferty, 2006; 2010).5 In digital platforms, it has come to define a
new mode of governance in which social relations are organized, valued and
monetized through automated predictive models that bear little relation to the
underlying material reality of the users, enabling the ad-driven business model of
online platforms (on how this is deployed, for example, by Facebook’s social graph;
see Arvidsson, 2016).6
It is at this juncture that the novelty of the blockchain inserts itself more forcefully –
not simply by proposing a form of digital money that is not stored in any banks’
servers but also by providing, for the first time, a model to enable the development
of open networks. As is well known, as the last financial crisis was unraveling, the
Bitcoin protocol offered a tentative, yet concrete, alternative to the current
computational-financial paradigm by realizing the very first decentralized monetary
system that is native to digital environments. It did so, as I will discuss in more detail
below, by providing an elegant solution to the double spending problem – that is, the
problem of achieving provable scarcity in digital environments so as to realize a
monetary system. This is based on a shared data layer (the blockchain) that is
replicated and stored across all nodes in an open and distributed network, and a
cryptographic native token used to access the value produced by such a network and
its data (cf. DuPont, 2019 for an overview of the technology). In so doing, Bitcoin
retrospectively exposed the structural conditions that imperceptibly enable the
financialization of everything as in-built in the current internet stack, in which
networked (social, cultural and economic) value is generated through the freely
available communicative capabilities of the protocol layer (such as TCP/IP, HTTP,
SMTP7) and captured and re-aggregated as tradable information at the application
layer through the “programmability” of platforms (Helmond, 2015).8
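The ledger-based solution to the double-spending problem described above can be made concrete with a toy sketch. The following Python fragment is purely illustrative (all class and method names are invented for this example; actual Bitcoin nodes validate cryptographically signed transactions against a proof-of-work blockchain replicated across the network). It shows only why a shared record of unspent outputs suffices to reject a second spend of the same coin:

```python
# A schematic sketch (not Bitcoin's actual data structures) of how a shared,
# append-only ledger makes double spending detectable: every node replays the
# same transaction log, so a second spend of the same output is rejected.

class ToyLedger:
    """Minimal replicated ledger tracking unspent outputs (a simplified UTXO set)."""

    def __init__(self):
        self.unspent = {}   # output_id -> (owner, amount)
        self.log = []       # append-only transaction history

    def mint(self, output_id, owner, amount):
        # Stand-in for coinbase/mining rewards, which create new outputs.
        self.unspent[output_id] = (owner, amount)

    def spend(self, output_id, sender, recipient, new_output_id):
        # A transaction consumes an existing output and creates a new one.
        if output_id not in self.unspent:
            return False  # already spent (or never existed): double spend rejected
        owner, amount = self.unspent[output_id]
        if owner != sender:
            return False  # only the current owner may spend an output
        del self.unspent[output_id]
        self.unspent[new_output_id] = (recipient, amount)
        self.log.append((output_id, sender, recipient, new_output_id))
        return True


ledger = ToyLedger()
ledger.mint("out0", "alice", 5)
first = ledger.spend("out0", "alice", "bob", "out1")     # succeeds
second = ledger.spend("out0", "alice", "carol", "out2")  # same output again: fails
print(first, second)  # prints: True False
```

Because every node replays the same log against the same rules, all honest nodes converge on the same unspent set; the ‘provable scarcity’ of the token is thus a property of the shared data layer, not of any central server.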
In 2015, Ethereum extended Bitcoin’s vision with the generalization of a
cryptographically secure “transaction-based state machine” (Wood, 2018)9 that could
run arbitrarily complex computation and enable the creation not only of a
decentralized currency but of decentralized applications, ushering in a new wave of
experimentation with new socio-economic forms. Yet the very possibility of extending
the notion of digital scarcity to “anything that can currently be represented by a
computer” (Wood, 2018: 2) coupled with trans-border value transfers (often at a
fraction of the cost) and pseudonymous transactions has effectively reinforced
incumbent property and financialization forms into this new space, providing more
granular means for the transactionalisation of networked interactions, while
leveraging the speculative nature of markets.
In the art world, this has become evident with regards to the issue of collectability in
the age of networked markets and digital reproduction. On the one hand, the
tokenization of physical art objects reproduces the rent model characteristic of
financial capitalism and current Web platforms, promising a more streamlined
tracking of provenance, ownership and authenticity of such assets. A case in point is
Maecenas, a self-defined “decentralized art gallery.” Maecenas tokenizes artworks
into tradable, fractional ownership certificates that are auctioned on the open market,
and which can be acquired through Maecenas’ ART token. The artworks themselves,
meanwhile, are safely kept in freeports and never exhibited, making contemporary art
literally disappear, as J.J. Charlesworth quips (2017). 10 On the other hand, the
tokenization of digital assets imports the logic of scarcity inherited from the
industrial paradigm to the informational domain, in direct contradiction to the
fluidity, copyability and mutability of the digital medium, and against the ethos of
open source production. While successful examples of the commoditization of digital
art through blockchains already exist (companies such as Ascribe or Verisart have
been active in this space for years),11 this tendency has been recently accelerated by
the spreading phenomenon of cryptocollectibles – that is, tradeable, unique digital
images, such as Rare Pepe, Crypto Punks and the infamous CryptoKitties.
CryptoKitties are nothing more than ERC-721 tokens (an Ethereum standard
proposal for ‘provably rare’ digital assets) that visualize the uniqueness encoded in the
contract itself. By storing metadata (such as an HTTPS12 link or IPFS13 hash) pointing
to each token’s attributes on-chain, digital rarity is brought to online space for the first time.
While ERC-20 tokens, such as Maecenas’ ART, function well as settlement
mechanisms due to their interchangeability and divisibility, non-fungible tokens
(NFTs) are indivisible, non-interchangeable, and yet tradeable, ushering in new
possibilities for ‘rare digital art.’14
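The distinction drawn here between fungible ERC-20 tokens and non-fungible ERC-721 tokens can be sketched schematically. Real token contracts are written in Solidity and implement the full interfaces of the respective standards; the minimal Python classes below are illustrative stand-ins that capture only the accounting difference: interchangeable, divisible balances versus uniquely owned token IDs with metadata pointers.

```python
# Schematic contrast between fungible and non-fungible token accounting.
# ERC-20-style tokens track interchangeable amounts per address; ERC-721-style
# tokens track a distinct owner (and a metadata pointer) per token ID.

class FungibleToken:
    """ERC-20-like: balances are amounts; any unit is interchangeable."""

    def __init__(self):
        self.balances = {}

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True


class NonFungibleToken:
    """ERC-721-like: each token ID is unique, indivisible, individually owned."""

    def __init__(self):
        self.owner_of = {}    # token_id -> owner address
        self.token_uri = {}   # token_id -> metadata pointer (e.g. HTTPS or IPFS)

    def mint(self, token_id, owner, uri):
        self.owner_of[token_id] = owner
        self.token_uri[token_id] = uri

    def transfer(self, sender, recipient, token_id):
        if self.owner_of.get(token_id) != sender:
            return False  # a whole, specific token moves, or nothing does
        self.owner_of[token_id] = recipient
        return True


art = FungibleToken()             # like Maecenas' ART: divisible shares
art.balances["alice"] = 100
art.transfer("alice", "bob", 40)  # 40 interchangeable units change hands

kitty = NonFungibleToken()        # like a CryptoKitty: one of a kind
kitty.mint(1, "alice", "ipfs://example-hash")  # hypothetical metadata pointer
kitty.transfer("alice", "bob", 1)
print(art.balances["bob"], kitty.owner_of[1])  # prints: 40 bob
```

The fungible token is thus a settlement instrument, while the non-fungible token is a registry of singular ownership claims: the same transfer primitive, applied to two different kinds of scarcity.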
Consonant with the logic of derivative finance described above, both approaches are
based on the abstraction of the ownership claim from any referent (either material,
such as gold or fractions of unique artworks, or digital, such as the unique design of
a pixelated cat) and a more streamlined circulation thanks to the tendency toward
standardization of basic transfer functionalities in contracts. As Rachel O’Dwyer
rightly observes in the context of the infamous CryptoKitties: “Like money then, the
ownership claim lays claim to nothing more than the act of ownership itself. What’s
valuable is the information circulating around the good” (2018). Indeed, the token
slips back and forth between a representation of an asset and liquid currency, “whose
performance relies on the hype and information that circulates around the good”
(ibid.). In this sense, these approaches, it is argued, merely treat art as currency (see
Arcand, 2018) – a universal numeraire for the circulation of cultural capital, which is
abstract, transactional and which, in virtue of its detachment from the material reality
in which it is embedded, could also serve to increase one’s status, gain political
influence, commit tax fraud, or engage in money laundering.
While it is evident how abstraction and circulation have become the defining traits of
the logic of techno-financial capitalism in networked environments, perhaps an
interesting question to pose is not so much what blockchain can do for (the) art
(market), but what art can do for the blockchain, by leveraging such forms of
abstraction and circulation, in order to then open up the thinking to how in turn they
may affect the organization and evolution of the systems they portend. In order to
do so, let me first expand on the concept of the associated milieu in Simondon’s
philosophy, coupled with an explanation of his theory of technicity, in order to
rearticulate the concept of affordance within the broader scope of a genetic theory of
technology.
3. Technicity and milieu: In-forming affordance
As mentioned in the introduction, the novelty of Simondon’s philosophy lies in his
formal approach to the problem of individuation – that is, of how things come into
being – on the basis of a non-reductive theory of information, or universal
cybernetics. For Simondon, physical, psycho-collective and also technical entities
individuate through a relation of mutual conditioning with an associated milieu from
a “preindividual” field (2013: 31–32). For Simondon, individuation is the single
process underlying the ontogenesis of physical, biological and also technical beings,
and it is the sole process that allows for the conservation of being through becoming,
thus allowing for evolution (2013: 25). In so doing, Simondon reverses the
perspective by which the individual, as a constituted being, has always been studied,
replacing the notion of an ontology of being with an ontogenesis of becoming. In the
context of the concretization of technical objects, the preindividual milieu is
constituted by culture, understood as that which provides a regulative function on
the individuation of the heterogeneous collective constituted by humans, the
environment and machines.
A determinant factor in the concretization of technical objects is technicity. In
Simondon’s genetic theory of technology, technicity corresponds to a “tendency” of
concretization of a certain technical paradigm into objects (2017: 51). It is a
“determination of forms” (2017: 150). As Simondon explains, technicity manifests
itself in the practical use of tools. However, it precedes and exceeds the object as a
mode of relationality between the system constituted by human and world.15 It is
technicity that underlies the manifestation of technics and the concretization of a
technical paradigm into objects, providing the latter with a normative and
evolutionary power to affect the ensemble constituted by the relations between
humans and the world (Simondon, 2017: 74). Importantly, by positing technicity as
an originary mode of relation with the world, Simondon also reminds us that
technicity pre-exists economic determinations. It is technicity alone which defines
the conditions of possibility for the technological – and also social and economic –
affordances in the broader trajectory of the evolution of a technical lineage.16
While the technicity of an element reaches its full expression in the artisanal
paradigm of production, the technicity of the technical individual (the machine)
characterizes the industrial model of production. With the introduction of the
cybernetic “cognitive schema,” technicity has a tendency to reside in systems.
Cybernetics replaced the notion of a teleological mechanist progress with that of
feedback, providing a self-regulatory function toward “an active adaptation to a
spontaneous finality” (Simondon, 2009: 18; see also: Hui, 2017). Simondon
presciently noted that the openness of the “reticular structure” that characterizes
informational systems (beginning with telecommunication networks such as phone
cables and antennas) makes them open and participable.17 For this reason, it has the
potential to integrate the modulative function of the “technical reality” into culture
(Simondon, 2017: 21).18 Yuk Hui (2016) extends Simondon’s speculative theory of
“post-industrial objects” by articulating the existence of digital objects – examples of
which are data and metadata. In contrast to Simondon, Hui argues that digital objects
not only individualize, developing and integrating an associated milieu, but also
individuate through their capacity to dynamically restructure their relations with other
objects, systems, and users in their associated milieus (Hui, 2016: 57). The process of
concretization of digital objects effectively sublates the difference between the object
and its material support into a digital milieu, constituted by and through the relations
actualized in the multiple networks, protocols, standards, data and algorithms (Hui,
2016: 24). As Hui observes: “digital objects take up the functions of maintaining
emotions, atmospheres, collectivities, memories, and so on” (Hui, 2016: 57). In so
doing, they also integrate and converge other dimensions of being into their
functioning, such as economic and social systems (Hui, 2016: 57) and, in turn,
“inaugurat[e] a new set of operations under the names of social computing and crowd
sourcing” (Hui, 2016: 58). The development of the Internet and of the new practices
that it enabled, pervading ever more aspects of the world, exemplifies this well.
From its genesis within academic circles, to the military-industrial complex and
parallel histories of hackers and cypherpunks, the commercialization of the Internet
in the nineties, and the more recent rise of the participatory web with social media
platforms, the evolution of networked communication technology has been
characterized by a progressive openness of the technology and participation by users.
However, this has so far mostly enabled more pervasive forms of control and
economic extraction.19
Simondon’s and Hui’s genetic philosophies of technology offer us key conceptual
tools to look at openness and programmability not merely as architectural features of
protocols and platforms but as characteristics of the concretizing technicity of digital
systems – as open, modular and participatory, not only individualizing but
individuating in and through the larger socio-cultural milieu by integrating, in turn, other
domains into their functioning. From this standpoint, it would be misleading to ask
to what extent platforms and blockchains engender and are subjected to the logic of
derivative finance and speculative distributed markets. Rather, Simondon and Hui
seem to suggest, the programmability and openness of informational systems – as the
prevailing technical tendency of the post-industrial paradigm of technological
development – have informed the evolution of contemporary financial logics in the
direction of abstraction, fragmentation and circulation, in parallel with the
transformation of economics into a “cyborg science” (Mirowski, 2002) since its
encounter with cybernetics.
In this context, we can think about the capabilities of blockchain technology as a new
stage in the concretization of the technicity of informational systems – what
Helmond (2015) characterizes as the “programmability” of platforms. My
proposition is that it is from the standpoint of the technicity of open digital objects
that we should understand the openness and programmability that characterize
networked systems (from protocols and APIs to open source development and
projects, including permissionless, peer-to-peer forms of value), and temporarily
suspend any judgment on the economic forms they engender, investigating instead
the relations that they enable with an associated milieu. This is as much an
ontological proposition as it is a method of synthetic enquiry: that we should
consider such new techno-economic structures and market formations (such as
platforms, blockchains, and tokenized networks) from the perspective of the
ontogenesis of the digital. In this context, this would entail studying the ways in
which blockchains and smart contracts actualize and synthesize relations (technical,
but also economic, social, cultural) within the broader scope of the lineage of the
openness and programmability of digital systems, and how these systems individuate
these very domains in turn, so as to open up lines of constructive enquiry into the
processes of feedback with their associated milieu.
Therefore, we should understand technicity as that which defines the conditions of
possibility for affordance in the broader trajectory of the evolution (individuation) of
a certain technical paradigm into an object. The question, following this trajectory,
becomes one of how to integrate such technicity into culture so as to overcome the
alienation between human and machine established in the industrial mode of
production.20 From this standpoint, the concept of technicity as expressive of an
individuating technical form allows us to reframe our question in terms of the
relation between such novel digital objects and the associated milieu that they
engender, in comparison to the distributed architecture of contemporary financial
markets. In the context of a novel technical invention such as that of the Bitcoin
protocol, the milieu is yet to be fully discovered. It is “by way of the schemes of the
creative imagination”, Simondon remarks, that we can accomplish the “reverse
conditioning of time” required for the establishment of the conditions of possibility
for the creation of a future associated milieu (Simondon, 2017: 60).21 It is from this
angle that artistic approaches to tokenization can foreground certain affordances that
are exclusive to cryptotokens as a new kind of digital object and programmable value
form through the milieus that they envision, by refunctioning standards and best
practices existing in their ecosystem and operationalizing them towards the
structuration of new forms of value generation and distribution.
4. The tokenization of art, part 2: cryptoeconomics and/as artistic practice
Alongside the above-mentioned examples of applied tokenization of physical or
digital art objects, a new breed of art-tech startups and initiatives is emerging,
exploring the affordances of blockchain tokens toward the realization of
decentralized autonomous organizations (DAOs); i.e., organizations in which
interaction among agents is mediated not by legal superstructures, but by rules
encoded in protocols, and in which the management of internal capital is mostly
automated (for a canonical categorization see Buterin, 2014). Ambitious projects
such as terra0, a scalable framework for augmented ecosystems, and 0xΩ, a
blockchain-based religion, are examples of how artistic engagement with smart
contracts and tokenized systems can shine new light on the organizational
affordances of these new digital objects. They do so by generating new imaginaries
that may be capable of engaging, in heretofore unprecedented ways, with some of
the most pressing issues of our times – such as environmental management and
coordination of belief systems.
terra0’s Flowertokens are an experimental test-case toward the realization of a
decentralized infrastructure for the self-management of natural resources (forests,
woodlands) through a combination of smart contracts, sensors, open-data oracles,
and AI bots.22 Like CryptoKitties, Flowertokens comply with the specifications for
non-fungible tokens but take the concept of cryptocollectibles offline, extending the
notion of decentralized verification not only from digital to physical assets but, more
boldly, to live assets; in this case, potted dahlias. While CryptoKitties derive their
rarity from their provably unique genetic makeup – which affects the ‘cattributes’ of
each kitty (byzantinekitty, 2018; CryptoKitties, n.d.) – the uniqueness of each
Flowertoken is provided by the metadata (growth rate and height) of each plant,
which is captured by image-processing software and transmitted to an oracle,
which then submits this information to the Ethereum blockchain. The project, which
was active from July to November 2018, consisted of an installation and a website,
from which users and visitors were able to buy and sell tokens, in addition to
monitoring the history and status of the plants. Anyone could participate in the
experiment and interact with the decentralized application through a MetaMask
browser extension and Ethereum wallet. Flowertokens were launched on 23 July
2018 at a price of 0.09 ETH each (approximately 40 USD at the time), in a limited
run of 100 tokens corresponding to the 100 dahlias available. In
spite (or because) of the experimental nature of the project, all available tokens were
purchased at least once, with some being offered for resale by the current owners at
prices between 0.3 and 12000 ETH. This, in a sense, registers the appetite for, and
the market viability of, such an innovative approach to ecosystem services
tokenization.23 The project ceased all trading and moved to archive mode in
November 2018 due to lack of funding and resources (terra0, 2018). While it did not
manage to bootstrap itself beyond the gallery space where it was exhibited, its
visionary combination of remote sensing agents, machine learning, and blockchains
ushered in radically new possibilities to reinternalize the values of ecosystem services,
portending (and inspiring) the emergence of a “Nature 2.0” (McConaghy, 2018)
based on a human-machine symbiosis that would be at once economic, ecological
and, importantly, also cultural and social. From this standpoint, even as a sandbox,
Flowertokens can be seen as a step in creating the conditions for the realizability of
Nature 2.0 by concretizing new imaginaries, designs and logics for interoperable
cybernetic ecologies.
While terra0 experimented with non-fungible token standards for the tokenization and
automation/autonomization of natural resource management, 0xΩ deploys NFTs in
conjunction with token-curated registries (TCRs) for the creation of a blockchain-
based religion or, in the words of the creators, a “consensus-driven hyperstitional
engine for the creation of sacred objects” (2018). 0xΩ takes religion as a vehicle for
art patronage (each idea for a sacred object being a unique proposition for an
artwork, represented by a non-fungible token), and leverages distributed consensus
as a way to collectively curate registries of artefacts and associated beliefs. In this
context, TCR is one design pattern for so-called ‘cryptoeconomic primitives’ – that
is, generic building blocks for tokenized games that enable the coordination and
allocation of capital to achieve a shared goal via protocol-based incentives systems
(Horne, 2018). Specifically, a TCR is a kind of curation market – a cryptoeconomic
primitive that enables the decentralized curation of the content of a list or registry.
As developer and TCR pioneer Mike Goldin (2017b) explains: “Token-curated
registries are decentrally-curated lists with intrinsic economic incentives for token
holders to curate the list’s contents judiciously.”24 In 0xΩ’s case, TCRs allow token
holders to collectively curate shared beliefs and sacred artefacts (that is, artworks).
The initial proposal for a sacred object is auctioned off in the form of a non-fungible
token indexing a digital representation of the yet-to-be-realized sacred object. The
proceeds of the auction are used by a DAO to realise the idea and artefact that the
proposal is describing, leveraging the memetic capability of information to spread
such ideas through various channels and hiring artists tasked with the goal of
building the object. The unique token representing the yet-to-be-created object is
fractionalized in shares (which are, aptly, called prayers) that proselytes speculate
upon by trading them. The more the prayers circulate, the higher the transaction
fees will be (which are returned to the DAO). As a consequence, the economic
engine of 0xΩ becomes more and more robust. Here, speculation and the beliefs
that emerge around the proposals for ‘sacred’ artefacts drive the system to grow the
religion, leveraging distributed consensus and revisable governance as a way to
cultivate and express a collective consciousness.
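Goldin’s design can be sketched as a minimal model: applicants stake tokens to propose an entry, challengers stake to dispute it, and token holders vote on the outcome. In the sketch below, every name and number is an illustrative assumption, and the vote tally is passed in directly rather than collected over an on-chain commit/reveal period; it is not the interface of any deployed contract.

```python
MIN_DEPOSIT = 10  # minimum stake required of an applicant (assumed value)

class TokenCuratedRegistry:
    def __init__(self):
        self.listings = {}      # accepted entries: name -> staked deposit
        self.applications = {}  # pending entries: name -> (applicant, deposit)

    def apply_for_listing(self, applicant, name, deposit):
        """An applicant stakes tokens to propose an entry for the list."""
        if deposit < MIN_DEPOSIT:
            raise ValueError("deposit below minimum")
        self.applications[name] = (applicant, deposit)

    def finalize_unchallenged(self, name):
        """If nobody challenges within the (omitted) challenge period,
        the entry enters the registry with its stake locked."""
        applicant, deposit = self.applications.pop(name)
        self.listings[name] = deposit

    def challenge(self, name, challenger_deposit, votes_for, votes_against):
        """A challenger matches the stake and token holders vote."""
        applicant, deposit = self.applications.pop(name)
        if votes_for >= votes_against:
            # Challenge fails: entry is listed and the applicant is
            # rewarded with a share of the challenger's forfeited stake.
            self.listings[name] = deposit + challenger_deposit // 2
            return "listed"
        # Challenge succeeds: entry rejected; the applicant's stake is
        # forfeited to the challenger and the winning voters.
        return "rejected"
```

The incentive to curate “judiciously” lies entirely in the stakes: proposing an entry the community rejects, or challenging one it accepts, costs tokens.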
What differentiates these projects from the previously mentioned approaches to the
tokenization of physical and digital art is their coupling of the affordances of
tokenization in terms of programmable and disintermediated issuance of units of
value with the nascent discipline of cryptoeconomics. Ethereum’s founder, Vitalik
Buterin, defines cryptoeconomics as a subset of economics that “uses cryptography
to prove properties about messages that happened in the past [and] economic
incentives defined inside the system to encourage desired properties to hold into the
future” (2017). Cryptoeconomics is an apt example of the ongoing process of
individuation of the blockchain ecosystem, generating new fields of knowledge and
practices that are exclusively made possible by this new technological substrate,
which entwines code and economics in unprecedented ways. Bitcoin wove
cryptoeconomic mechanisms into the core of its protocol, by hardcoding its
monetary policy into the software and tying the emission of new coins to the activity
of validators, or miners. By rewarding miners for validating blocks (and therefore
transactions) through a portion of newly minted coins, the Bitcoin protocol
essentially integrates the function of value production in the “executability” (Hui,
2017: 29) of the protocol.25
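The hardcoded monetary policy mentioned above is concretely simple: Bitcoin’s block subsidy starts at 50 BTC and halves every 210,000 blocks, converging on a total supply of roughly 21 million. A sketch (the actual client computes this in integer satoshis with a bit shift, so this version is an approximation for illustration):

```python
HALVING_INTERVAL = 210_000  # blocks between halvings (Bitcoin's actual value)
INITIAL_SUBSIDY = 50.0      # BTC minted per block at launch

def block_subsidy(height):
    """Newly minted coins awarded to the miner who validates the block
    at `height`: value production integrated into the protocol itself."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_SUBSIDY / (2 ** halvings)

print(block_subsidy(0))        # 50.0
print(block_subsidy(210_000))  # 25.0
print(block_subsidy(630_000))  # 6.25
```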
Smart contract tokens extend this novelty to the application layer, by making the
executability of value production effectively programmable to a broader extent.26 As
Hui notes, a digital object (such as a smart contract) is first and foremost a logical
entity, “hence, it expresses a logical infrastructure as constituent of the digital milieu”
(2016: 57). As mentioned above, from the point of view of Ethereum, a token is
simply a contract that defines a mapping of addresses to integers that represent users’
balances (describing the initial state of the contract) and a set of functions to read
and update the state. As such, “sending a token” simply corresponds to calling a
method on a smart contract that has been deployed onto the Ethereum blockchain.
For instance, ERC-20 and ERC-721 Ethereum tokens are contracts that implement
standardized functions (such as querying the total supply, querying an account
balance, transfers, delegated transfers and, in the case of non-fungible tokens,
tracing the external account that owns a specific token ID) to facilitate exchange.27
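This state-plus-functions view of a token can be sketched in a few lines. The method names below mirror the standardized ERC-20 functions (totalSupply, balanceOf, transfer, approve/transferFrom); the Python class itself is, of course, only an illustrative model, not EVM code:

```python
class Token:
    """Illustrative model of an ERC-20-style token contract: the state is
    a mapping of addresses to integer balances, plus functions that read
    and update that state."""

    def __init__(self, issuer, supply):
        self.balances = {issuer: supply}  # address -> balance (initial state)
        self.allowances = {}              # (owner, spender) -> approved amount
        self._total = supply

    def total_supply(self):
        return self._total

    def balance_of(self, addr):
        return self.balances.get(addr, 0)

    def transfer(self, sender, to, amount):
        """'Sending a token' is just calling a method that updates the
        contract's state; nothing moves except entries in the mapping."""
        if self.balance_of(sender) < amount:
            return False  # a balance can never go below zero
        self.balances[sender] -= amount
        self.balances[to] = self.balance_of(to) + amount
        return True

    def approve(self, owner, spender, amount):
        """Authorize a delegated transfer up to `amount`."""
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender, owner, to, amount):
        """Delegated transfer: a spender moves tokens on the owner's behalf."""
        if self.allowances.get((owner, spender), 0) < amount:
            return False
        self.allowances[(owner, spender)] -= amount
        return self.transfer(owner, to, amount)
```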
Yet by looking at the systems that these digital objects engender as a collection of
states and functions, it becomes possible to map out the recursive relations between
state changes and describe their relations mathematically, in the direction of the
creation of the above-mentioned cryptoeconomic primitives (such as that used by
0xΩ). This opens up a whole new field of design focused on the realization of
cryptosystems – that is, systems in which the token “must work as a necessary
element of a self-sustaining system which is a public utility” (Goldin, 2017a). While
cryptosystems rely on the decentralized holding and circulation of their native tokens
as an intrinsic aspect of their success and long-term sustainability, a tokenized
economy (case in point: Maecenas) is not necessarily a cryptosystem. A cryptosystem,
of whatever kind, is not owned by anyone (or, better, is reciprocally owned by all its
stakeholders), is largely self-sufficient, and is usable by any agent (human and
non-human) in an open context. A self-organizing forest and an emergent religion based
on a collectively curated set of beliefs are therefore apt examples of cryptosystems.
Cryptosystems are uniquely enabled by the affordances of the blockchain data
structure, which for the first time combines the immutability of a shared past,
cryptographically recorded on a distributed ledger, with the programmability of a
shared future through a system of internal economic incentives, by encoding ‘skin-in-the-game’
at the protocol level for each and every self-interested actor (whether
human or machine) toward a common goal.28 From this standpoint, the affordances
of tokenization in terms of digital scarcity and pseudonymous unique transactions
must be understood as a means to move us toward the possibility of creating
cryptosystems through the design of cryptoeconomic, i.e., tokenized, games.29 These
are protocols for economic, social and cultural interaction, aimed at tightly aligning
incentives between ‘investors,’ ‘producers,’ and ‘consumers,’ and thus ultimately
blurring the boundaries between them as mutual stakeholders in the long-term
success and sustainability of a common project. Yuk Hui and Harry Halpin’s
observation in the context of social networks design resonates with the potentials for
the new interactive-transactive forms afforded by this newly emerging form: “A
project is also a projection, that is, the anticipation of a common future of the
collective individuation of groups. … By projecting a common will to a project, it is
the project itself that produces a co-individuation of groups and individuals” (2013:
115). Cryptosystems make explicit the sets of economic relationships and
hypothetical incentives that contribute to the scattered holding of a common will for
the concretization of a projection (such as ‘Nature 2.0’ or distributed revisable gods)
into a viable project. These new projections (or imaginaries) for common futures are
uniquely made possible by bridging – through an artful blend of design, computation
and economics – the affordances of these technologies with specific use cases,
whether they are forestry management or ideological convictions. This opens up new
perspectives that gesture towards new methodologies aimed at the articulation and
experience of (not-only-)human values. Of course, blockchains and cryptosystems
don’t make any of these systemic issues easy to solve. But they do make them
possible to think about, experiment with, and reason about in entirely new ways.
Thus, platforms achieve network effects – the emblematic case of the
production-through-circulation (of data and information) that characterizes digital
economies, in which value increases as more people share, participate and use the
platform – by siloing access to data. In the design of a cryptosystem, by contrast,
the abstraction and circulation of economic flows are more concretely integrated
into the very processes that produce the network’s value and public data storage,
converging on the goal of a controlled appreciation of the token’s value within the
ecosystem by modulating its circulation. This is by no means a “good” or a “bad”
thing. It is a different logic of producing and distributing networked value – value
already accreted through digital interactions-transactions around a specific project-
projection – through the automation of the mechanisms by which participation in
the system is indexed, recorded, and rewarded (this could be voting on a proposal, or
providing computing or storage power to the network, or participating in the
curation of a list). And as such, it demands a new understanding of its schema of
functions, to begin to develop, through a “work of the imagination” (Simondon,
2017: 60), new associated milieus, and bootstrap such new cryptoeconomic
networks.
In this sense, if tokenization is merely an accelerated form of transactionalization,
projects such as the ones discussed here illustrate some of the ways in which
tokenization, coupled with cryptoeconomic mechanisms, may provide new
conceptual and practical tools that allow us to face, in novel ways, some of the most
daunting issues of our times. They do so by leveraging the forms of abstraction and
circulation concretized by blockchain toward the realization of new milieus that
differentially integrate market interactions into their designs. This allows us to ask: as
distributed capital is encountering a new technological substrate, providing new
modes of value generation and distribution in digital environments, what else might
finance and the “cultures of financialization” (see Haiven, 2014) become? Below I
attempt to answer this question by illustrating some of the ways in which the above
described projects reproblematize and rearticulate some of the main issues currently
afflicting the field of art production – including the valuation, funding and collecting
of art in light of the increasing financialization of the field – gesturing towards some
of the ways in which the structural and transactional affordances of smart contract
tokens have the potential to recode and transcode fundamental mechanisms of how
finance works.
5. Cryptotokens and finance: art as derivative
As the regulatory debates about the status of these new financial assets continue,
expert opinion regarding the valuation of cryptoassets is divided between
considering them either as financial securities or as stores of value. This confirms the
ambiguous nature of smart contract tokens and indicates the difficulty of framing
them according to any pre-existing category. While this essay is not the ideal context
for a comprehensive debate regarding the nature of these assets, it suffices to say
that, on the one extreme of the spectrum, tokens can be seen as pure, self-fulfilling
speculation – new kinds of derivatives contracts with no underlying asset (or, more
precisely, as contracts in which the underlying asset is constituted as a claim
regarding the uncertain value creation by the platform of which the token is a part).30
At the other extreme, views on cryptoassets as stores of value that emphasize
decentralization and security in a cryptosystem point to the synergistic relation
between the function of store of value and the utility of each token (i.e., that to
which the token gives access, or that for which it can be exchanged) (Kilroe, 2017;
Wang, 2018). Thus, while cryptotokens’ underlying value at the time of issuance and
until realization remains unbounded – structurally, it cannot be known in advance –
by definition each token is paradoxically fully backed by its functionality, or, in other
words, by what it potentially affords.
One factor contributing to this seemingly unsolvable tension is the profound
structural difference between, on the one hand, the blockchain protocol as a
technological system of value creation, recording and transmission, and, on the other
hand, the current financial-computational apparatus. As a matter of fact, the
blockchain does not acknowledge the concept and mechanism of debt and fractional
reserve; it is an append-only ledger of blocks of valid transactions (transactions in
which the balance cannot ever go below zero), which are cryptographically validated,
time-stamped, and permanently and publicly stored in a decentralized network of
nodes. While it is structurally impossible to have unfunded exposure on a blockchain
(which is, as Bloomberg commentator Matt Levine reminds us [2018], one of the
goals of all finance), the aforementioned cases emphasize how tokens can enable the
expression of non-finite, polymorphous values such as art, ecosystem resources, or
memetic value. All of this can be expressed and registered in the appreciation of the
token through its circulation and the growth of its ecosystem. In the specific
instances of Flowertokens and 0xΩ, each token is backed by the speculative value of
an artistic proposition that becomes realized through the market process by being
acknowledged and valued by a network of peers according to a synthetic temporality
that short-circuits the loop between production, exchange and sheer speculation, and
which collapses their differences on the computational plane of the blockchain.
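The structural constraint described above (an append-only ledger of time-stamped blocks, valid only if no balance goes below zero) can be sketched as follows. Cryptographic validation and decentralized consensus are omitted; the point is only why unfunded exposure is structurally impossible:

```python
import time

class Ledger:
    """Minimal sketch of an append-only ledger in which a block is valid
    only if no balance would turn negative."""

    def __init__(self):
        self.blocks = []    # append-only: (timestamp, transactions)
        self.balances = {}

    def coinbase(self, miner, reward):
        """New coins enter only as block rewards (cf. Bitcoin mining)."""
        self.balances[miner] = self.balances.get(miner, 0) + reward

    def append_block(self, txs):
        """Apply a block of (sender, receiver, amount) transactions,
        rejecting the whole block if any balance would go below zero."""
        new = dict(self.balances)
        for sender, receiver, amount in txs:
            new[sender] = new.get(sender, 0) - amount
            new[receiver] = new.get(receiver, 0) + amount
            if new[sender] < 0:
                raise ValueError("invalid block: balance would go below zero")
        self.balances = new
        self.blocks.append((time.time(), list(txs)))  # time-stamped, never mutated
```

An invalid block leaves the ledger’s state untouched: there is no mechanism, analogous to debt or fractional reserve, by which a spend can exceed a recorded balance.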
In this sense, the native tokens of Flowertokens and 0xΩ prefigure new kinds of
financial instruments capable of accounting, in a non-reductive way, for the
economic status of non-standard assets that are constantly generative of value even
while being traded as discrete ‘commodities’. This is true not only in the moment in
which they are transacted on the market as finite products, but from the very
moment in which they are produced – something that is characteristic of art,31 but
also of the environment, of education, and of any kind of speculative, propositional,
and necessarily networked project. Furthermore, these tokens can unlock
possibilities for new funding models and revenue streams for the arts: through the
crowdfunding of information and capital, Flowertokens take the art world as a test
bed and launch pad from which to generate new human-computational hybrids that
really exist. 0xΩ, in turn, as discussed above, mobilizes religion as a vehicle for art
patronage, with distributed consensus enabling the collective curation and funding of
sacred artefacts and associated beliefs.
In so doing, both projects also redefine the question of digital rarity and collectability
through the design choices characterizing the economic games they constitute and
through their engagements with smart contracts. This turns forms of collection
based on private property into forms of staking based on common access; turns
curation into a collective, gamified form of speculation; and turns viewership into
participation in the generation of artworlds or ecosystems. By blurring the
boundaries between art project and business model, this gestures towards how art
can operate differently – that is, how it can individuate in novel ways – through
encounters with this new technological infrastructure. By extension, the projects
discussed here can be seen as new kinds of financial instruments offered by the
artists to the art community at large, turning collectors and audience into investors
and players. These new financial instruments represent stakes in the future success of
each project (as a collective endeavour made possible by heterogeneous entities), and
constitute a way to hedge against the potential disruption of the current art
ecosystem, while actually performing it. Here, the act of staking (whether fiat money
toward the purchase of these tokens, or tokens toward a proposal) is a constructive
gesture toward the realization of the value of said project. As 0xΩ acutely shows,
speculation drives beliefs, and not the other way around.
In this sense, in response to the critique of the transactionalization of art through
blockchains, these projects point toward a rather different strategy, namely one that
already assumes art’s intensively financialized condition within its informational
milieu and its embeddedness in processes of networked value production (economic,
cultural, social, aesthetic) (Moss, 2013). This recognizes that flows of capital,
information, status and aesthetic expression interrelate in tightly coupled and yet
dissonant ways. By expanding (and also partially perverting) the realm of application
of non-fungible tokens and cryptoeconomic primitives, the projects discussed deploy
new modes of explicitly conceiving and operationalizing themselves as derivatives,
setting a powerful example in the exploration of new approaches and methodologies
engaged with the realization of the autonomy of the field from its institutional-
financial milieu.
6. Conclusion: Towards the invention of new markets-milieus
In this discussion I hope I have demonstrated that cryptoeconomic systems and their
native tokens (as a new asset class endowed with entirely new affordances) can
introduce a difference in kind (i.e. formally and structurally) regarding the ways in
which value generation and distribution are expressed and accounted for in digital
environments. Artistic approaches to the design of cryptosystems as a new, little
studied economic, social, and cultural phenomenon can shine new light on the
affordances of these computational objects and data structures by gesturing to the
articulation of associated milieus beyond the pre-established economic canons
inherited from the industrial economy.
The nature of the above-mentioned experiments remains propositional, since the
underlying technology is still in early stages of development. Yet, through the
concreteness of their designs and visions, they point to a wide spectrum of new
futures that could spring from what they offer as emergent plotlines for new social
science fiction.32 In this sense, they may have more to do with R&D in
cryptoeconomic design than with art galleries and art collection as such. But that is
part of my argument here. These projects are proofs of concept that demonstrate
new ways of articulating processes of value generation and distribution according to
new organizational patterns that put the sacred object, the forest, the art asset and
participatory practice in the foreground, leveraging speculation – as “anticipation of a
common future” (Hui and Halpin, 2013: 115) – and distributed consensus as a means
to operationalize the resources needed for the realization of said common project in
manners that have been unthinkable before the introduction of blockchains and
smart contracts. In so doing, they portend new milieus that could be made
possible by the techno-economic affordances of blockchains and tokens. terra0
opens up new approaches to scalable, hybridized economic-organizational
coordination for ecosystem management: for indexing, tracking, and sustaining the
plurality of ecosystemic value. Similarly, 0xΩ is about new forms of participation,
social belonging, cultural-religious content production, the forging of new
communities through cryptoeconomically mediated interactions and tools and, more
broadly, about inventing social-financial practice anew from the ruins of today’s
financialized social networks.
But the evental and/or not (yet) sustainable character of these aforementioned
projects also shows that blockchain is born in a pre-individual milieu that was already
financialized to begin with, so it inevitably inherits a historicity that is based on
financialization as a means to capital expansion. In this sense, the blockchain is not
automatically emancipatory; neither is it inherently connected to right-wing ideology
or heightened forms of neoliberal control. Blockchains, at least in their original
designs, provide a different technological substrate to capital that is open source,
immutable (in the past), and also programmable (in the future). Simply put, this is
why new kinds of assets can, potentially, be created and transacted according to
different rules – though the old rules can, to an extent, continue to apply.
Bitcoin and Ethereum have proved that behavior can be coordinated in a
decentralized fashion through digital objects, i.e., through computers (and humans,
in so far as the latter are partaking in the digital milieu; see Hui, 2016) contributing to
a consensus protocol. The challenge at the application layer becomes how to enable
participation in the “schema of actions” (Simondon, 2017: 236) of this new
technology beyond pre-established usages. While usage is first and foremost a
cultural matter (such as under the paradigm of work or marketing) and extrinsic to
technical becoming, its schema of actions is a function of its technicity. In this sense,
the emphasis on technical becoming and the genetic and participatory aspects of
technicity as it concretizes and exceeds objects opens up new ways to conceive of
interactions with such objects. These can then become available for ends other
than the industrial imperatives of accumulation, and for overcoming the alienation
between human and machine. By attending to the technicity portended by the
blockchain, as a technical tendency to concretization, these projects set a path
forward for new practices of digital design that may respond to the challenges and
possibilities of new decentralized ecosystems of financial, social and cultural value.
They do so by gesturing to the creation of new user experiences capable of
advancing the evolution of said technical systems.
From the standpoint of a theory of technical individuation, the projects discussed
above also suggest that the financialization of art – and financialization in general,
the art market-milieu being a limit case in a broader landscape – is not a financial (i.e.
socio-cultural) problem alone. It is inextricably woven together with the specific
affordances of the digital objects and computational systems that enable all too
familiar practices of abstraction, quantification, recombination and extraction of
information as value in digital environments. That is to say, financialization can be
leveraged in generative and propositional ways through new technological
affordances; affordances that cannot be suggested through interface design and
wireframes, but only through the engagement with new interactive protocols, based
on tokens as conduits to the experience of a decentralized ecosystem.
In so doing, the examples under discussion in this essay also gesture towards some
exit vectors for a new politics that is commensurate with the opportunities and
challenges of the present techno-historical configuration – defined by the
convergence of financial capital, computation, and networked communication
systems – to be constructed in a collective, transversal manner. This begins by
attending to the integration and modulation of the functions of economic production
and circulation in the “executability” of these digital objects and systems (blockchain
data structures, protocol and application tokens, cryptoeconomic primitives,
distributed computing, etc.), and using (and abusing) these new techno-economic
affordances to confront existing systemic problems. It is for this reason that
experimentation with these new tools is crucial if we want to be able to leverage their
novelty or, at least, remain open to the “margin of indeterminacy […] that allows the
machine to be sensible to external information” (Simondon, 2017: 17), towards
further co-individuation within larger techno-socio-economic milieus.
While the success of such endeavors hinges heavily upon the capacity of the
ecosystem to overcome its own hype – including technical challenges of scalability
and interoperability, and the duplicitous hostility of legacy apparatuses – here the art
of experimentation lies in the expansiveness of the imaginary designs of the social-
economic-aesthetic games afforded by the underlying technological infrastructure.
What would it mean to conceive of design patterns that incentivize coordination and
allocation of capital to support the arts – and more generally processes of networked
value production – through new funding streams and models for self-sustainable
organizations that anyone can adopt? What would the art realized in such a context
Media Theory
Vol. 3 | No. 1 | 2019 http://mediatheoryjournal.org/
310
be capable of? While the answer lies in one (or several) of the many futures that are
simmering and bubbling in cryptospace, at least now we have some tools to begin
playing with to find answers to these questions.
Acknowledgements
Many thanks to the editors of this special issue for their generous support throughout the revision process, and particularly to Martin Zeilinger for his thorough feedback in the editing phase.
References
Antos J. and McCreanor R. (2018) ‘An Efficient-Markets Valuation Framework for
Cryptoassets using Black-Scholes Option Theory’, Medium. Available at:
https://medium.com/therationalcrypto/an-efficient-markets-valuation-
framework-for-cryptoassets-using-black-scholes-option-theory-a6a8a480e18a
(accessed 10 March 2018).
Arcand R. (2018) ‘Unlimited Editions’, Real Life Magazine. Available at:
http://reallifemag.com/unlimited-editions/ (accessed 11 March 2018).
Arvidsson A. (2016) ‘Facebook and Finance: On the Social Logic of the Derivative’,
Theory, Culture & Society 33(6): 3–23. DOI: 10.1177/0263276416658104.
Barthélémy J.-H. (2012) ‘Fifty Key Terms in the Works of Gilbert Simondon’, in: De
Boever A, Murray A, Roffe J, et al. (eds) Gilbert Simondon: Being and Technology.
Edinburgh: Edinburgh University Press, pp. 203–231.
BlockchainHub (n.d.) ‘Blockchain Oracles’, Available at:
https://blockchainhub.net/blockchain-oracles/ (accessed 13 March 2019).
Browne R. (2018) ‘Bitcoin: Cryptocurrency market cap down 80 percent since
January peak’, CNBC. Available at:
https://www.cnbc.com/2018/11/23/cryptocurrencies-have-shed-almost-700-
billion-since-january-peak.html (accessed 18 January 2019).
Bryan D. and Rafferty M. (2006) Capitalism with Derivatives: A Political Economy of
Financial Derivatives, Capital and Class. Houndmills: Palgrave Macmillan.
Bryan D. and Rafferty M. (2010) ‘A Time and a Place for Everything: Foundations of
Commodity Money’, in: Amato M, Doria L, and Fantacci L (eds) Money and
Calculation. Economic and Sociological Perspectives. Houndmills: Palgrave Macmillan, pp.
101–121.
byzantinekitty (2018) ‘All About Cattributes’, CryptoKitties 411. Available at:
https://cryptokitties411.com/2018/04/07/all-about-cattributes/ (accessed 13
March 2019).
Buterin V. (2014) ‘DAOs, DACs, DAs and More: An Incomplete Terminology
Guide’, Ethereum Blog. Available at: https://blog.ethereum.org/2014/05/06/daos-
dacs-das-and-more-an-incomplete-terminology-guide/ (accessed 2 January 2016).
Buterin V. (2017) ‘Introduction to Cryptoeconomics’, YouTube. Available at:
https://www.youtube.com/watch?v=pKqdjaH1dRo (accessed 25 July 2018).
Buterin V. and Cowen T. (2018) ‘Vitalik Buterin on Cryptoeconomics and Markets in
Everything (Ep. 45)’, Medium. Available at: https://medium.com/conversations-
with-tyler/vitalik-buterin-tyler-cowen-cryptocurrency-blockchain-tech-
3a2b20c12c97 (accessed 23 September 2018).
Charlesworth J.J. (2017) ‘Will the Blockchain make art disappear?’, Art Review.
Available at:
https://artreview.com/opinion/ar_october_2017_opinion_will_the_blockchain_
will_make_art_disappear/ (accessed 15 September 2018).
Coindesk (2018) ‘State of Blockchain 2018’, Coindesk. 6 February. Available at:
https://www.coindesk.com/research/state-blockchain-2018/ (accessed 12
September 2018).
Cooper M. (2010) ‘Turbulent Worlds: Financial Markets and Environmental Crisis’,
Theory, Culture & Society 27(2–3): 167–190.
Corbett R. (2018) ‘Meet the Beretta Family, the Art-Savvy Gun Makers Who Back
the NRA and the Venice Biennale’, ArtNet News. Available at:
https://news.artnet.com/art-world/beretta-family-art-funding-1263140 (accessed
13 September 2018).
CryptoKitties (n.d.) ‘CryptoKitties’, Available at:
https://www.cryptokitties.co/cattributes (accessed 13 March 2019).
DuPont Q. (2019) Cryptocurrencies and Blockchains. Medford: Polity.
Enxuto J. and Love E. (2016) ‘Institute for Southern Contemporary Art’. Available
at: http://theoriginalcopy.net/isca/ (accessed 21 May 2017).
Fraser A. (2018) ‘It’s Time to Consider the Links Between Museum Boards and
Political Money’, ArtNet News. Available at: https://news.artnet.com/art-
world/how-are-museums-implicated-in-todays-political-mess-1278824 (accessed
13 September 2018).
Frier S. and Ponczek S. (2018) ‘Facebook Shares Recover from Cambridge Analytica
Scandal’, Bloomberg.com, 10 May. Available at:
https://www.bloomberg.com/news/articles/2018-05-10/facebook-shares-
recover-from-cambridge-analytica-crisis (accessed 10 September 2018).
Garriga M. (2018) ‘Maecenas successfully tokenises first multi-million dollar artwork
on the blockchain’, Maecenas Blog. Available at:
https://blog.maecenas.co/blockchain-art-auction-andy-warhol (accessed 15
September 2018).
Goldin M. (2017a) ‘Mike’s Cryptosystems Manifesto’. Available at:
https://docs.google.com/document/d/1TcceAsBlAoFLWSQWYyhjmTsZCp0X
qRhNdGMb6JbASxc/edit?usp=embed_facebook (accessed 3 August 2018).
Goldin M. (2017b) ‘Token-Curated Registries 1.0’, Medium. Available at:
https://medium.com/@ilovebagels/token-curated-registries-1-0-61a232f8dac7
(accessed 12 September 2018).
Haiven M. (2014) Cultures of Financialization: Fictitious Capital in Popular Culture and
Everyday Life. London: Palgrave Macmillan.
Helmond A. (2015) ‘The Platformization of the Web: Making Web Data Platform
Ready’, Social Media + Society 1(2). DOI: 10.1177/2056305115603080.
Hill R. (2018) ‘$0.75 – about how much Cambridge Analytica paid per voter in bid to
micro-target their minds, internal docs reveal’, The Register. Available at:
https://www.theregister.co.uk/2018/03/30/cambridge_analytica_contract_detail
s/ (accessed 10 September 2018).
Horne J. (2018) ‘The Emergence of Cryptoeconomic Primitives’, Medium. Available
at: https://medium.com/@jacobscott/the-emergence-of-cryptoeconomic-
primitives-14ef3300cc10 (accessed 9 March 2018).
Hui Y. (2016) On the Existence of Digital Objects. Minneapolis: University of Minnesota
Press.
Hui Y. (2017) ‘Preface: The time of execution’, Data Browser Vol. 6. Autonomedia,
pp. 23–31.
Hui Y. and Halpin H. (2013) ‘Collective Individuation: The Future of the Social
Web’, in Geert Lovink and Miriam Rasch (eds.) Unlike Us Reader: Social Media
Monopolies and Their Alternatives. Amsterdam: Institute of Network Cultures, pp.
103–116.
Ivanova V. (2016) ‘Contemporary art and financialization: Two approaches’, Finance
and Society 2(2), pp. 127–37. DOI: 10.2218/finsoc.v2i2.1726.
Kharif O. (2018) ‘Crypto Market Crash Leaving Bankrupt Startups in its Wake’,
Bloomberg.com, 6 December. Available at:
https://www.bloomberg.com/news/articles/2018-12-06/crypto-market-crash-is-
causing-startups-to-shutter-operations (accessed 18 January 2019).
Kilroe J. (2017) ‘Velocity of Tokens’, in: Newtown Partners. Available at:
https://medium.com/newtown-partners/velocity-of-tokens-26b313303b77
(accessed 23 September 2018).
Lee B. and Martin R. (eds.) (2016) Derivatives and the Wealth of Societies. Chicago:
University of Chicago Press.
Levine M. (2018) ‘Crypto Finance Meets Regular Finance’, Bloomberg.com, 24 January.
Available at: https://www.bloomberg.com/view/articles/2018-01-24/crypto-
finance-meets-regular-finance (accessed 25 September 2018).
Liston M. and Singer A. (2018) ‘Seven on Seven 2018’, Rhizome. Available at:
http://rhizome.org/editorial/2018/may/23/seven-on-seven-2018-the-10th-
edition/ (accessed 12 September 2018).
Lotti L. (2015) ‘“Making sense of power”: Repurposing Gilbert Simondon’s
philosophy of individuation for a mechanist approach to capitalism (by way of
François Laruelle)’, Platform: Journal of Media and Communication 6: 22–33.
Lotti L. (2018) ‘Fundamentals of Algorithmic Markets: Liquidity, Contingency, and
the Incomputability of Exchange’, Philosophy & Technology 31(1), pp. 43-58. DOI:
10.1007/s13347-016-0249-8.
Malik S. and Phillips A. (2012) ‘Tainted Love: Art’s Ethos and Capitalization’, in:
Lind M. and Velthuis O. (eds.) Art and its Commercial Markets: A Report on Current
Changes and with Scenarios for the Future. Berlin: Sternberg Press, pp. 209–240.
Marazzi C. (2011) The Violence of Financial Capitalism. Los Angeles, CA: Semiotext(e).
Martin R. (2015) Knowledge LTD: Toward a Social Logic of the Derivative. Philadelphia:
Temple University Press.
McConaghy T. (2018) ‘Nature 2.0’, Medium. Available at:
https://medium.com/@trentmc0/nature-2-0-27bdf8238071 (accessed 12
September 2018).
Mirowski P. (2002) Machine Dreams: Economics Becomes a Cyborg Science. Cambridge:
Cambridge University Press.
Monégro J. (2016) ‘Fat Protocols’, Union Square Ventures. Available at:
http://www.usv.com/blog/fat-protocols (accessed 18 March 2017).
Morris D.Z. (2018) ‘Nearly Half of 2017’s Cryptocurrency ‘ICO’ Projects Have
Already Died’, Fortune. Available at:
http://fortune.com/2018/02/25/cryptocurrency-ico-collapse/ (accessed 18
January 2019).
Moss C. (2013) ‘Expanded Internet Art and the Informational Milieu’, Rhizome.
Available at: http://rhizome.org/editorial/2013/dec/19/expanded-internet-art-
and-informational-milieu/ (accessed 27 December 2013).
O’Dwyer R. (2017) ‘Does Digital Culture Want to be Free? How Blockchains are
Transforming the Economy of Cultural Goods’, in: Catlow R, Garrett M, Jones
N, et al. (eds) Artists Re:Thinking the Blockchain. London: Torque.
O’Dwyer R. (2018) ‘A Celestial Cyberdimension: Art Tokens and the Artwork as
Derivative’, Circaartmagazine. Available at: https://circaartmagazine.net/a-celestial-
cyberdimension-art-tokens-and-the-artwork-as-derivative/ (accessed 11
December 2018).
O’Reilly T. (2005) ‘What Is Web 2.0’. Available at:
https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html?page=1
(accessed 20 January 2019).
O’Reilly T. and Battelle J. (2009) ‘Web Squared: Web 2.0 Five Years On’, Web 2.0
Summit 2009. Available at:
https://assets.conferences.oreilly.com/1/event/28/web2009_websquared-
whitepaper.pdf (accessed 21 January 2019).
Pasquinelli M. (2009) ‘Google’s PageRank Algorithm: A Diagram of the Cognitive
Capitalism and the Rentier of the Common Intellect’, in: Becker K. and Stalder F.
(eds.) Deep Search. The Politics of Search Beyond Google. New Jersey: Transaction
Publishers, pp.152-162.
Pasquinelli M. (2015) ‘The Sabotage of Rent’, Cesura//Acceso. Available at:
http://cesura-acceso.org/issues/the-sabotage-of-rent-matteo-pasquinelli/
(accessed 2 January 2019).
Patterson M. (2018) ‘Crypto’s 80% Plunge Is Now Worse Than the Dot-Com Crash’,
Bloomberg.com, 12 September. Available at:
https://www.bloomberg.com/news/articles/2018-09-12/crypto-s-crash-just-
surpassed-dot-com-levels-as-losses-reach-80 (accessed 13 September 2018).
Rare Digital Art Festival (2018) ‘Rare Digital Art Festival’. Available at:
https://raredigitalartfestival.splashthat.com (accessed 14 January 2018).
Scarlett A. (2018) ‘Realizing Affordance in the Techno-Industrial Artist Residency’,
Schloss-Post. Available at: https://schloss-post.com/realizing-affordance-techno-
industrial-artist-residency/ (accessed 22 September 2018).
Schneider T. (2018) ‘The Gray Market: How One Warhol Auction Embodies the
Blind Spots of Many Blockchain Art Startups (and Other Insights)’, ArtNet News.
Available at: https://news.artnet.com/opinion/gray-market-maecenas-
blockchain-auction-1308482 (accessed 26 June 2018).
Simondon G. (2009) ‘Technical Mentality’, Parrhesia 7: 17–27.
Simondon G. (2013) L’Individuation à la Lumière des Notions de Forme et d’Information.
Grenoble: Millon.
Simondon G. (2017) On the Mode of Existence of Technical Objects. 1st edition.
Minneapolis: Univocal Publishing.
Taleb N.N. (2018) Skin in the Game: Hidden Asymmetries in Daily Life. New York:
Random House.
terra0 (2018) ‘‘When bloom’: the end of Flowertokens, project archiving, and the end
of trading’, Medium. Available at: https://medium.com/@terra0/when-bloom-
the-end-of-flowertokens-project-archiving-and-the-end-of-trading-47d5bf1d379a
(accessed 13 March 2019).
Turner F. (2006) From Counterculture to Cyberculture. Stewart Brand, the Whole Earth
Network, and the Rise of Digital Utopianism. Chicago: The University of Chicago
Press.
Velthuis O. and Coslor E. (2012) ‘The Financialization of Art’, in: Knorr Cetina K.
and Preda A. (eds.) The Oxford Handbook of the Sociology of Finance. Oxford: Oxford
University Press, pp.471-487.
Wang Q. (2018) ‘The False Dichotomy of Utility and Store of Value’. In: Qiao Wang.
Available at: https://medium.com/@QwQiao/the-false-dichotomy-of-utility-
and-store-of-value-27fe12bf3bdb (accessed 23 September 2018).
Wiley C. (2018) ‘The Toxic Legacy of Zombie Formalism, Part 1: How an Unhinged
Economy Spawned a New World of ‘Debt Aesthetics’’, ArtNet News. Available at:
https://news.artnet.com/opinion/history-zombie-formalism-1318352 (accessed
31 July 2018).
Wood G. (2018) ‘Ethereum: A Secure Decentralised Generalised Transaction Ledger
- Byzantium Version: 39’. Available at:
https://ethereum.github.io/yellowpaper/paper.pdf (accessed 18 January 2019).
Zeilinger M. (2016) ‘Digital Art as ‘Monetised Graphics’: Enforcing Intellectual
Property on the Blockchain’, Philosophy & Technology, pp. 1–27. DOI:
10.1007/s13347-016-0243-1.
Notes
1 The volatility of the ecosystem is evidenced by recent statistics: in 2017, ICOs raised the equivalent of $5.6 billion (of which $3.2 billion in Q4 alone), with more than 400 projects successfully funded (Coindesk, 2018). However, by early 2018 nearly 50% of those projects had already gone bankrupt (Morris, 2018). This is reflected in the dramatic correction of the market: while in January 2018 the total cryptomarket capitalization eclipsed $830 billion (Browne, 2018), by the end of the year it had plunged more than 80% – a collapse comparable to the dot-com crash of the early 2000s (Kharif, 2018; Patterson, 2018).
2 Simondon’s genetic theory also proposes that, as technical objects concretize, they gain an increasing level of autonomy – from element to individual to system (ensemble) – culminating with the cybernetic paradigm of automation. Yet one needs to be careful not to conflate this evolution with mere historical development: “the technical object is not directly a historical object: it is subject to the course of time only as a vehicle of technicity, according to a transductive role that it plays with respect to a prior age” (2017: 76).
3 A discussion of theories of the market is beyond the scope of this paper. For a partial review see Lotti (2018).
4 For a philosophical treatment of fiat money in the context of the individuation of the capitalist system, see Lotti (2015).
5 Randy Martin, who first articulated the “social logic of the derivative”, describes it according to three features: first, it entails a condition of “fragmentation, dispersion, or isolation by allowing us to recognize ways in which the concrete particularities […] might be interconnected without first or ultimately needing to appear as a single whole or unity of practice or perspective”; secondly, it evidences “how production is inside circulation,” testifying to the generative role of volatility; third, it emphasizes “the agency of arbitrage, of small interventions that make significant difference, of a generative risk in the face of generalized failure but on behalf of desired ends” (Martin, 2015: 52; see also Lee and Martin, 2016).
6 This has been evidenced, for instance, by recent global events such as the Facebook-Cambridge Analytica scandal of March 2018, which revealed that the political consulting firm had bought the personal data of 87 million users of the social media giant without their direct consent, in order to influence voters’ opinions in the last US Presidential election through psychographic targeting. In spite of the increased public disdain toward Facebook’s ads
policy, which precisely allows for such fine-grained and wide-spread aggregation and trading of personal information, Facebook’s market value and user base have remained largely unaffected in the aftermath of the news, as Bloomberg reports, with Q1 2018 revenues beating analysts’ estimates and the number of new users continuing to rise (Frier and Ponczek, 2018). It is worth noting that users’ profiles were sold for between 75 cents and $5 apiece (Hill, 2018).
7 TCP/IP (Transmission Control Protocol/Internet Protocol), HTTP (Hyper Text Transfer Protocol), SMTP (Simple Mail Transfer Protocol).
8 On the features of Web 2.0 see also: O’Reilly, 2005; O’Reilly and Battelle, 2009. On the architectural difference between Web 2.0 platforms and token-based networks see (Monégro, 2016).
9 While Bitcoin constitutes a simpler case of a transaction-based state machine, in which the state is represented by its global collection of Unspent Transaction Outputs (UTXOs), in Ethereum’s world computer the global state consists of a mapping between addresses (unique identifiers) and account states, whereby the state “can include such information as account balances, reputations, trust arrangements, data pertaining to information of the physical world” (Wood, 2018: 2). The state is constantly updated through the transactions occurring in the network. In essence, a transaction – such as the transfer of an arbitrary amount of Ethereum tokens – is what generates a valid state transition.
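The account-based, transaction-driven model described in this note can be sketched in a few lines of illustrative code (a deliberate toy, not Ethereum’s actual implementation; the function and account names are invented for the example):

```python
# Toy transaction-based state machine: the global state is a mapping from
# addresses to balances, and each valid transaction yields a new state.

def apply_transaction(state, sender, recipient, amount):
    """Return the next state if the transaction is valid, else raise."""
    if state.get(sender, 0) < amount:
        raise ValueError("invalid transaction: insufficient balance")
    next_state = dict(state)            # treat states as immutable snapshots
    next_state[sender] -= amount
    next_state[recipient] = next_state.get(recipient, 0) + amount
    return next_state

genesis = {"alice": 100, "bob": 0}
state_1 = apply_transaction(genesis, "alice", "bob", 30)
print(state_1)  # {'alice': 70, 'bob': 30}
```

An actual Ethereum account state additionally holds a nonce, contract code, and storage; only the balance component is modeled here.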
10 Through this mechanism, Maecenas successfully executed the first smart-contract-run art auction at the beginning of September 2018, with the sale of fractional ownerships of Andy Warhol’s 14 Small Electric Chairs to 100 qualified participants, raising US$1.7m for 31.5% of the artwork at a valuation of US$5.6m (Garriga, 2018). Yet as Tim Schneider pointedly observed: “’platform’ is a synonym for ‘middleman,’ and middlemen are inherently contradictory to any sincere effort to decentralize anything—at least, if they’re charging a fee for their presence at the crossroads” (Schneider, 2018).
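The figures reported for the auction are internally consistent, as a quick arithmetic check confirms:

```python
# Consistency check of the Maecenas auction figures cited above.
valuation = 5.6e6       # US$5.6m valuation of the Warhol
fraction_sold = 0.315   # 31.5% of the artwork sold as fractional shares
raised = valuation * fraction_sold
print(round(raised))    # 1764000, i.e. roughly the US$1.7m reported
```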
11 On the pitfalls of the application of the logic of scarcity to digital art through blockchain see: O’Dwyer, 2017; Zeilinger, 2016.
12 Hypertext Transfer Protocol Secure.
13 Inter-Planetary File System.
14 The emblematic Rare Digital Art Festival, which took place in NYC in March 2018, encapsulated this new tendency: “Rare digital art is a movement to take internet assets that have previously been infinitely copyable (songs, memes, etc.) and turn them into provably rare, tradable blockchain assets” (Rare Digital Art Festival, 2018).
15 As Simondon puts it, “It is insufficient, for understanding technics, to start from constituted technical objects; objects appear at a certain moment, but technicity precedes them and goes beyond them; technical objects result from an objectivation of technicity; they are produced by it, but technicity does not exhaust itself in the objects and is not entirely contained within them” (2017: 176).
16 “If technical objects do evolve toward a small number of specific types then this is by virtue of an internal necessity and not as a consequence of economic influences or practical requirements; it is not the production-line that produces standardization, but rather intrinsic standardization that allows for the production-line to exist. […] The industrialization of production is rendered possible by the formation of stable types” (Simondon, 2017: 29).
17 “If one seeks the sign of the perfection of the technical mentality, one can unite in a single criterion the manifestation of cognitive schemas, affective modalities, and norms of action: that of the opening; technical reality lends itself remarkably well to being continued, completed, perfected, extended” (Simondon, 2009: 24).
18 Simondon distinguishes between culture and technical culture. Culture, according to Simondon, is “that by which the human regulates its relation to the world and to himself” (Simondon, 2017: 227). The need for technical culture stems from the fact that “if culture doesn’t incorporate technology, this will imply obscure zones and [technology] would not be able to provide its regulatory normativity on the coupling of the human and the world” (ibid). As Jean-Hugues Barthélémy observes: “As one can see here, that which Simondon calls ‘technical normativity’ … is always, as such, a normativity of culture through technics – in other words, it is a normativity of culture thanks to ‘technical culture’” (Barthélémy, 2012: 210 emphasis in original).
19 See Turner (2006) on the relation between San Francisco Sixties counterculture and the emerging technological hub of Silicon Valley. Turner shows how the idea of the virtual community has given rise to the networked economy in view of the openness and participation of the early web.
20 According to Simondon, there cannot be such a thing as a subsumption of human beings and technology to capital. In Simondon’s universal cybernetics there is room only for humanity, nature,
and technics. For him, the problem of the alienation of the human from technology is not only a socio-economic matter, due to the privatization of the labor process, but more profoundly, a physical-psychological one that began precisely with the mechanist era of technological development, which has hindered “a more profound and essential relation, that of the continuity between the human individual and the technical individual” (Simondon, 2017: 133).
21 It is worth reproducing the quote in full: “This is why we notice such discontinuity in the history of technical objects, with absolute origins. Only a thought that is capable of foresight and creative imagination can accomplish such a reverse conditioning in time: the elements that will materially constitute the technical object and which are separate from each other, without associated milieu prior to the constitution of the technical object, must be organized in relation to each other according to the circular causality that will exist once the object will have been constituted; thus what is at stake here is a conditioning of the present by the future, by that which is not yet. Such a futurial function is only rarely a work of chance; it requires putting into play a capacity to organize the elements according to certain requirements which act as an ensemble, as a directive value, and play the role of symbols representing the future ensemble that does not yet exist. The unity of the future associated milieu, within which the causal relations will be deployed that will enable the functioning of the new technical object, is represented, it is played or acted out as much as a role can be played in the absence of the true character, by way of the schemes of the creative imagination” (Simondon, 2017: 60).
22 In the context of blockchains and smart contracts, an oracle is a software agent that finds and verifies real-world events and submits this information to a blockchain to be used by smart contracts. Because a blockchain can only verify statements of truth that pertain to its internal environment (for example, whether a transaction is valid), decentralized services that depend on occurrences external to the blockchain itself (such as the health of a forest, internet-of-things devices, or prediction markets) by necessity rely on oracles (for an accessible explanation, see BlockchainHub, n.d.).
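The pattern can be made concrete with a schematic sketch (all names here are hypothetical, for illustration only): an off-chain agent observes the world and submits a report that on-chain contracts can then read.

```python
# Schematic oracle pattern: the chain only verifies internal statements,
# so an off-chain agent submits external observations for contracts to use.

def observe_forest_health():
    # Stand-in for an off-chain measurement (sensor reading, web API, ...).
    return {"event": "forest_health", "value": 0.87}

class ToyChain:
    def __init__(self):
        self.oracle_reports = []        # storage written only by the oracle

    def submit_report(self, report, signer):
        if signer != "trusted_oracle":  # the trust assumption oracles add
            raise PermissionError("unknown oracle")
        self.oracle_reports.append(report)

chain = ToyChain()
chain.submit_report(observe_forest_health(), signer="trusted_oracle")
# A smart contract could now branch on chain.oracle_reports[-1]["value"].
```

The sketch also makes the cost visible: the `trusted_oracle` check reintroduces a trusted party into an otherwise trustless system.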
23 See: https://flowertokens.terra0.org/.
24 The first example of a TCR is adChain, which applies the pattern to the creation of reputable lists of publishers, aiming to solve some of the problems of the online advertising business. The pattern is also used by FOAM to curate Geographic Points of Interest for their spatial protocol for secure Proof of Location services. TCRs, and cryptoeconomic primitives more broadly, have gained increasing attention since the first proposals and implementations in the open source community, precisely at a point at which the easy enthusiasm for the booming cryptomarket has begun to fade. Interestingly, it should be noted that, by virtue of their purely formal and necessarily open and relational character (which sets them apart from specific blockchain-based protocols), it is hard if not impossible to fairly monetize such patterns (Horne, 2018).
25 The coupling of a consensus algorithm (to determine how unknown peers can come to an agreement in a decentralized way) and a ‘proof’ of ‘participation’ in the network (e.g., proof-of-work, proof-of-stake) provides a mechanism to programmatically modulate the monetary inflation rate to incentivize participation toward specific goals – guaranteeing the security of the network, redistributing value to reward specific behaviors, and also providing ways to fund the early stage of development of the protocol. For instance, Bitcoin and Ethereum attempt to achieve such goals through mining; new blockchains such as Cosmos and Polkadot aim to do so through various forms of staking. The Basic Attention Token instead provides an alternative attention economy by rewarding users with tokens for their attention in their browsing. Decred, Tezos, and Zcash have mechanisms in place to self-fund the development of their projects through inflation funding (unlocking new coins, a portion of which is directly channeled to their development teams and/or treasuries). The examples are endless and vary according to the taxonomy of projects and tokens. It is worth noting that debates around governance in this context are often concerned with the degree of revisability of the monetary policy of each protocol (an example of this is the “hard fork” between Bitcoin and Bitcoin Cash in mid-2017). What is important to emphasize is that, in so doing, these mechanisms allow untrusted and pseudonymous parties to collectively create a trusted network – not only of value exchange but, perhaps more importantly, of value creation, proposing a normative and genetic mode of relationality that is radically different from the financial logic of Web 2.0 platforms. From this standpoint, it would not be too far-fetched to claim that, as a medium of networked value production and funding stream, the Bitcoin blockchain inaugurated a mechanism by which the token indexes the production of the funding stream itself.
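Bitcoin’s block-subsidy schedule is the simplest concrete instance of such programmatic modulation of inflation, and can be stated in a few lines (a simplification: the reference client computes the subsidy in integer satoshis):

```python
# Bitcoin's monetary policy in miniature: the block subsidy halves
# every 210,000 blocks (roughly every four years).

def block_subsidy(height, initial=50.0, halving_interval=210_000):
    return initial / (2 ** (height // halving_interval))

print(block_subsidy(0))        # 50.0  (genesis era)
print(block_subsidy(210_000))  # 25.0  (first halving, 2012)
print(block_subsidy(630_000))  # 6.25  (2020 halving era)
```

The inflation rate is thus not set by any institution after the fact but is executed as part of the protocol itself.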
26 Helmond describes these different degrees of programmability in terms of “Level 1 access APIs” and “Level 2 plug-in APIs”: the former enable access to a platform’s data and functionalities for external developers, while the latter allow developers to build their applications within a platform’s framework, as in the case of Facebook Canvas (Helmond, 2015: 5). Smart contracts can be said to approximate what Helmond describes as Level 3 programmability, by providing a decentralized runtime environment for applications. The possibility of exposing internal states and functions for other developers to use, extend, and fork effectively blurs the boundaries between infrastructure and application. This obviously does not prevent an application from hosting data on proprietary servers (such as the unique designs of the infamous CryptoKitties) but provides a shared data layer for the validation and recording of the information strictly pertinent to the value proposition of the dApp.
27 ERC stands for Ethereum Request for Comment; ERCs are standards documenting how a contract can interoperate with other contracts. The two most developed standards are ERC-20 for fungible tokens and ERC-721 for non-fungible tokens, discussed above.
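The difference the two standards encode can be sketched as two minimal ledgers (a hedged simplification: real contracts are written in Solidity and expose a richer interface than shown here):

```python
# ERC-20-style: interchangeable balances per address.
class FungibleToken:
    def __init__(self):
        self.balances = {}

    def transfer(self, sender, to, amount):
        assert self.balances.get(sender, 0) >= amount
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

# ERC-721-style: ownership of unique, non-interchangeable token ids.
class NonFungibleToken:
    def __init__(self):
        self.owner_of = {}              # token_id -> owner address

    def transfer(self, sender, to, token_id):
        assert self.owner_of.get(token_id) == sender
        self.owner_of[token_id] = to

ft = FungibleToken()
ft.balances["alice"] = 10
ft.transfer("alice", "bob", 4)          # any 4 units are as good as any other

nft = NonFungibleToken()
nft.owner_of["kitty-42"] = "alice"      # a unique asset, e.g. a CryptoKitty
nft.transfer("alice", "bob", "kitty-42")
```

The fungible ledger only ever tracks quantities; the non-fungible ledger tracks which particular token belongs to whom, which is what makes provable digital scarcity and uniqueness expressible at all.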
28 In risk management, having skin in the game refers to the extent to which one is invested (with money and resources) in the success of a venture (‘game’). The phrase has been made popular by quant and scholar Nassim Nicholas Taleb, who colorfully exclaims: “It is not just that skin in the game is necessary for fairness, commercial efficiency, and risk management: skin in the game is necessary to understand the world. First, it is bull***t identification and filtering, that is, the difference between theory and practice, cosmetic and true expertise, and academia (in the bad sense of the word) and the real world. To emit a Yogiberrism, in academia there is no difference between academia and the real world; in the real world, there is” (2018, ebook version).
29 The notion of games is intimately related to economics in the genealogy of cybernetics – for instance, John von Neumann and Oskar Morgenstern’s Theory of Games and Economic Behavior (1944) equates ‘numerical utility’ in games of strategy to the quantity of money (Mirowski, 2002: 127).
30 Within the framework of the notorious Black-Scholes model for the pricing of options, cryptoasset analysts Antos and McCreanor (2018) argue against the view of cryptoassets as merely an “innovative form of equity” and instead propose that “the purchase of a cryptoasset is essentially a claim on uncertain value creation, as opposed to a claim on an underlying asset whose value by definition has an upper bound.”
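For reference, the standard Black-Scholes call price that frames their argument can be written down directly; the numbers below are purely illustrative and not taken from their analysis:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Price of a European call: spot S, strike K, maturity T (years),
    risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# The option value grows steeply with volatility: the formal intuition
# behind treating a cryptoasset as a claim on uncertain value creation.
print(black_scholes_call(100, 100, 1.0, 0.0, 0.2))   # modest volatility
print(black_scholes_call(100, 100, 1.0, 0.0, 2.0))   # extreme volatility
```

Under the same spot and strike, the crypto-typical volatility inputs dwarf the equity-typical ones, which is what motivates the option-like, rather than equity-like, framing.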
31 “Art is produced as a commodity, it doesn’t become one when it is sold” (Enxuto and Love, 2016).
32 Vitalik Buterin on cryptoeconomics: “I’d be more interested in seeing social science fiction, […] that explores also all of these complex ideas about how people can interact and how political systems can work, how economic systems can work, how they can fail. Particularly, how they can fail in ways that create interesting stories without anyone being literally Hitler” (Buterin and Cowen, 2018).
Operating at the threshold of academia and the start-up environment, Laura Lotti is a researcher in token economies and blockchain cultures. With a background in economics, media studies and philosophy, Laura completed her PhD at UNSW Sydney, with a dissertation investigating the techno-economic affordances of the Bitcoin protocol. She is currently engaged in different capacities in a few crypto-initiatives in the field of cultural production. Email: [email protected]