This article was downloaded by: [Ingenta Content Distribution TandF titles]
On: 22 November 2008
Access details: [subscription number 791939330]
Publisher: Taylor & Francis
Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
International Journal of Geographical Information Science
Publication details, including instructions for authors and subscription information:
http://www.informaworld.com/smpp/title~content=t713599799
Designing sound in cybercartography: from structured cinematic narratives to unpredictable sound/image interactions
S. Caquard a; G. Brauen a; B. Wright b; P. Jasen b
a Geomatics and Cartographic Research Centre (GCRC), Department of Geography and Environmental Studies, Carleton University, Ottawa, Canada
b Institute for Comparative Studies in Literature, Art and Culture (ICSLAC), Carleton University, Ottawa, Canada
First Published: 2008
To cite this Article: Caquard, S., Brauen, G., Wright, B. and Jasen, P. (2008) 'Designing sound in cybercartography: from structured cinematic narratives to unpredictable sound/image interactions', International Journal of Geographical Information Science, 22:11, 1219–1245
To link to this Article: DOI: 10.1080/13658810801909649
URL: http://dx.doi.org/10.1080/13658810801909649
Research Article
Designing sound in cybercartography: from structured cinematicnarratives to unpredictable sound/image interactions
S. CAQUARD*†, G. BRAUEN†, B. WRIGHT‡ and P. JASEN‡
†Geomatics and Cartographic Research Centre (GCRC), Department of Geography and Environmental Studies, Carleton University, Ottawa, Canada
‡Institute for Comparative Studies in Literature, Art and Culture (ICSLAC), Carleton University, Ottawa, Canada
(Received 10 January 2008; in final form 10 January 2008)
In this paper we draw on the analysis of sound in film theory in order to explore
the potential that sound offers cybercartography. We first argue that the
theoretical body developed in film studies is highly relevant to the study of sound/
image relationships in mapmaking. We then build on this argument to develop
experimental animated and interactive sound maps for the Cybercartographic
Atlas of Antarctica that further explore the potential of sound for integrating
emotional, cultural and political dimensions in cartography. These maps have
been designed to recreate cinematic soundscapes, to provide contrapuntal
perspectives on the cartographic image and to generate an aural identity of the
atlas. As part of this experimental mapping, an innovative sound infrastructure is
being developed to allow complex sound designs to be transmitted over the
Internet as part of atlas content. Through this infrastructure the user can select
sounds as well as contribute their own. The overall cartographic message is becoming
less predictable, thus opening new perspectives on the way we design, interact
with, and modify sounded maps over the Internet.
Keywords: Cybercartography; Sound map; Internet sound infrastructure;
Geospatial narratives; Film theory
1. Introduction
In the contemporary world, sight continues to offer the primary means through which we map our multisensory environment. As pointed out by Norman (2004,
p. 56) ‘the conventional map does its visible work without a murmur. And nobody
complains, although if in reality the world fell completely silent, we would stop and
shake out our ears in disbelief.’ While the conventional map remains silent and
visual, the importance of sound for better understanding, exploring and representing
geospatial processes is slowly making its way into mapmaking. While Murray
Schafer’s foundational maps of the soundscape (1977) remained—perhaps
ironically—primarily silent, map and sound are now being linked together in a range of ways for a variety of purposes. Synthetic voices are used with GPS maps to
give directions to car drivers or to visually impaired people. Didactic voices illustrate
maps on videos and DVDs just like animated maps illustrate narratives in
documentaries. Artists combine sound with maps to convey emotions and
perceptions that are geospatial in character. In small communities, artists and
residents annotate maps of their neighborhood with audio and video files in order to
influence the way their community is represented, perceived and understood. With
recent Internet applications such as Google Maps and the resulting multiple
‘mash-ups’, these local audio snapshots are now populating annotated maps of the world
widely available through the Internet. Citizens, artists, activists, journalists,
planners, or private companies are now able to mix sound with maps in many
different ways for many different purposes.
Paradoxically, sound remains largely under-theorized in cartography, as noted
by Glenn Brauen and Fraser Taylor (2007). Although sound has been present in
multimedia cartography for a couple of decades, the cartographic literature on how
to apply sound is not well developed. As Paul Théberge (2005, p. 398) points out:
the recent literature on sonification continues to address many of the same issues that
preoccupied its predecessors: these include discussions of the psychological and
cognitive aspects of sound, the mapping of data onto sound variables […], using sound
to provide feedback to enhance user interaction, processing sound to provide spatial
cues, and developing sonic resources for the visually impaired, among others.
Without undermining the value and the importance of these kinds of approaches, it
is also important to conceptualize sound as an opportunity to bring different spatial
dimensions into cartographic representations, including those that address emotion,
culture and memory. This potential can radically transform the design, use,
perception and function of sounded maps and contribute to the emergence of new
forms of cartographic expression.
In this paper we explore a range of these transformations through a multi-
disciplinary approach of the sound/image interaction in contemporary cartography.
The experience of sound in film is extremely well documented and provides us with
the theoretical framework for our research. In the first section we present some of
the sound/image relationships theorized in film studies and video games, focusing
more specifically on three major aural codes: voice, sound effects and music. In the
second section we present and discuss examples of the use of each of these aural
codes in the context of the design of the Cybercartographic Atlas of Antarctica.
Finally, in the third section we discuss more specifically issues and opportunities
provided by the delivery of sound maps over the Internet. This overall structure
highlights the idea that we are theorizing a range of uses for sound in cartography
ranging from a cinematic synchronized association of sound and image in
mapmaking toward a much more flexible and less controlled use of sound in
cybercartography. Just like the integration of sound in movies in the 1930s radically
transformed cinema, the integration of sounds in maps offers the possibility of
radically transforming the discipline of cartography. This paper aims to explore
some of the issues and potentialities related to these changes.
2. Sound theory in cinema studies and video games
There are a number of means through which one can analyze and theorize the use of
sound in maps. One that is particularly valuable draws on sound theory in cinema
studies. Indeed, the primary aim of this section is to explore the ways in which sound
functions in audiovisual media including cinema and television. While this brief
discussion is not meant to provide an exhaustive understanding of the soundtrack, it
is important to highlight the various categories of sound that shape the narrative
function of cinema and, by corollary, interactive frameworks. Despite the relative
paucity of academic literature on the subject, this section will provide a basic archive
of theoretical models of sound in cinema, which can be applied to the expanding
field of sound maps.
The dynamics of contemporary audiovisual media are most easily understood if
one mutes the soundtrack to a contemporary film. When sound is evacuated, not
only does narrative comprehension become more difficult, but also the third
dimension of the audiovisual apparatus effectively collapses. Imagine watching Star
Wars (George Lucas, 1977) without hearing the portentous and cavernous voice of
Darth Vader, or experiencing the shower scene from Psycho (Alfred Hitchcock,
1960) without the violent strains of Bernard Herrmann’s musical score. When the
soundtrack is muted, not only are visual images deprived of their aural identity, but
aural images are erased entirely. These aural images are made up of sounds that lack
a concrete visual identity: they exist solely on the soundtrack in the form of music,
voice-over, or pure sonic atmosphere. Michel Chion has argued that the aural image
‘carries with it visions that are more beautiful than images could ever be’ (Chion
1994, p. 137). In Days of Heaven (Terrence Malick, 1978), the sounds of wind, rain,
insects, and farm machinery are established on the soundtrack without any visual
representation. As well, the dense soundscapes of television shows such as Lost
(ABC) and Law and Order (NBC) add a sense of spatial depth at which visual
imagery can only hint.
While the academic study of cinema has traditionally favored the image, it is
important to consider the soundtrack as an equal partner to the visual apparatus. In
a certain sense, the visual bias in cinema studies is ironic, given that sound has been
a part of the cinematic experience since the inception of the medium in the late
nineteenth century. Even though synchronized sound only became an industry
standard in the late 1920s, Thomas Edison produced a short in 1894 that joined the
sound of a live violin with the image of the man playing the same instrument. In
addition, in the so-called ‘silent’ era, which spanned nearly 30 years, theaters rarely
played films silently. Instead, live musicians would provide background music, and,
in some cases, live narration and sound effects would join the otherwise silent
images. However, when one considers the cinematic medium, it is often with respect
to the image: audiences don’t ‘listen’ to movies, they ‘watch’ them. Moreover, the
degree to which film theory has relied on this visual bias is well noted in the annals
of the academy, yet only recently have theorists attempted to explore the
psychological, emotional, and narrative salience of the soundtrack. Gianluca Sergi
(1999) writes of the persistent visual bias:
One of the reasons why film is still regarded as a visual medium is not because of some
intrinsically universal quality but because of the image crafts’ ability to allow audiences
to appropriate and make their own the vernacular of those crafts. Thus, scholars and
critics feel right at home when using terms that professionals use: pan, close-up, frame,
dolly, p.o.v, etc. This is the antithesis of sound where, music aside, scholars, critics and
everyday audiences are at a loss as to how to talk about the most basic aspects of the
soundtrack.
To complicate matters, the contemporary film soundtrack is often a densely layered
palette of sounds, ranging from the familiar to the alien, from the sensual to the
abrasive. In many ways, sound assaults the listener.
With this in mind, it is important to forge an understanding of the technical and
aesthetic framework that comprises soundtrack construction. How are sounds
organized? What role do sounds play in a narrative context? In order to properly
address these questions, we must deconstruct the soundtrack at a theoretical and
practical level. We must investigate the ways in which voices, music, and sound
effects shape the contours of the soundtrack.
The soundtrack is very similar to R. Murray Schafer’s conception of the
soundscape. Schafer (1994, p. 7) has argued that ‘The soundscape is any acoustic
field of study. We may speak of a musical composition as a soundscape, or a radio
program as a soundscape or an acoustic environment as a soundscape. We can
isolate an acoustic environment as a field of study just as we can study the
characteristics of a given landscape.’ The isolation of the soundtrack reveals a
number of sonic features that the analyst must investigate. There are dominant
sounds, fleeting sounds, and sounds that blend together. Sound can create depth and
space the way that the image can only suggest. The expansive landscape in Gerry
(Gus Van Sant, 2002) is characterized by the relentless hum of the wind, which
deepens the characters’ despair and isolation. Sound can provide tactile sensations
through an enhanced bass response, thereby ‘touching’ the listener from a distance.
The footsteps of the giant, imposing Orcs in The Lord of the Rings: The Two Towers
(Peter Jackson, 2002) are not only heard, but felt by the listener.
More fundamentally, the soundscape of audiovisual media is like any other text in
that it communicates a narrative. Indeed, Hollywood cinema has relied on the clear
communication of narrative information since the 1910s. In this respect, sound has
been subject to a hierarchy that depends on the clear transference of story
information. Voice retains the top position, while music and sound effects follow.
Since dialogue is primarily responsible for communicating story elements, it
dominates the soundscape in many situations. However, music and sound effects
serve an equal purpose in communicating specific narrative ideas. The examples
from Gerry and The Lord of the Rings demonstrate a key function of sound effects,
while the music in Psycho attests to the dramatic and emotional resonance provided
by musical underscore. Therefore, while the soundtrack may at times be in service to
a narrative, it actively participates in shaping the contours of the story through aural
textures.
The tone of the voice can impact the narrative more than the words it speaks. This
is most evident in the voice-over, a category of sound in cinema that is characterized
by a vocal entity that is off-screen. The most famous of voice-overs is the voice-of-
God in the documentary tradition. The voice-of-God is usually an outside observer
to the narrative action, perhaps the filmmaker. The voice-of-God can be stern,
dramatic, and imposing, and is conventionally male, thus reflecting gender biases in
society at large. It can also be gently authoritative, subtly subversive, and sometimes
eerily powerless. These qualities are a result of the timbral features of the voice, its
tone, and inflection. However, a voice-of-God narrator is not necessary in order to
tell a story. On-screen and off-screen voices can communicate information with
equal efficacy. Ray Liotta’s impassioned voice-over narration in Goodfellas (Martin
Scorsese, 1990) imparts not only vital story information but also communicates the
intensity, drama, and pathos that further the audience’s understanding of the film’s
thematic structure.
The role of music in the cinematic soundscape is often a complex stratagem of
dramatic and functional modes. The difference between diegetic and non-diegetic
music provides a useful entry into this field. While diegetic music refers to music that
is presented within the actual space of the narrative, non-diegetic music is
traditionally understood as music that exists outside the fiction, and is often used
to provide commentary and dramatic resonance. In Jaws (Steven Spielberg, 1975),
the two-note double bass figure that signifies the presence of the shark is coded as
non-diegetic since the characters cannot actually hear this music. The Jaws motif
represents a tradition in film music that is often referred to as the ‘leitmotiv’. Scott
Paulin (2000) has written about the connection between Richard Wagner’s use of the
leitmotif principle in his operas and its implementation in Hollywood cinema. Paulin
(2000, p. 68) argues that ‘The notion that Wagner’s music is representational ‘‘in the
minutest detail’’ is a common misreading of leitmotif technique as unambiguously
denotative, whereas it can be more revealing to see the Wagnerian Motiv [‘leitmotif’
is a word not used by Wagner himself] as carrying a semiotic excess that resists strict
denotation.’ Paulin suggests that some commentators such as Kracauer and Harry
Potamkin criticized ‘overconcordance’, preferring anti-parallelist strategies of
accompaniment—a criticism that would extend to the sound period and the work
of Max Steiner, Bernard Herrmann (from the 1930s to the 1960s), and more recently
with the work of John Williams ‘being denigrated as ‘‘mickey-mousing’’ for its
cartoonish mimicry of the image’ (Paulin 2000, p. 69). However, Rick Altman and
Paulin suggest that this ‘redundancy’ in parallelism is not entirely accurate. Paulin
suggests that ‘Sound and image validate—not duplicate—each other, and together
disguise the material heterogeneity of the ‘‘whole’’. Sound/image parallelism
involves the mutual invisibility of visual ‘‘work’’ and inaudibility of soundtrack
‘‘work’’ in the service of creating a realist illusion’ (2000, p. 73). Thus, film music
extends the concept of totality that Wagner developed for opera.
Sound effects represent the last element of the sonic hierarchy that will be
examined here, but remain the most theoretically under-developed aspect of the
soundtrack. In a broad sense, sound effects constitute all the sounds in the diegetic
environment, from raindrops and thunder to city noise and distant conversations.
They hide in the corners of the theater at very low volume or overtake the sound
space in a thunderous cacophony. Chion (1994, p. 155) suggests that in
contemporary films the traditional voice, music, sound effect hierarchy is being
reworked:
With the new place that noises occupy, speech is no longer central to films. Speech
tends to be reinscribed in a global sensory continuum that envelops it, and that occupies
both kinds of space, auditory and visual. This represents a turnaround from sixty years
ago: the acoustical poverty of the soundtrack during the earliest stage of sound film led
to the privileging of precoded sound elements, that is, language and music—at the
expense of the sounds that were pure indices of reality and materiality, that is, noises.
Music can mask itself as sound effects, just as vocal performances can constitute
effects: in the last reel of The Silence of the Lambs (Jonathan Demme, 1991), music and
atmospheric rumbling coalesce to create an eerie sound space in the basement of the
film’s serial killer. Chion (1994, p. 145) has further noted that ‘noises, those humble
footsoldiers, have remained the outcasts of theory, having been assigned a purely
utilitarian and figurative value and consequently neglected.’ Yet, the noises of the
environment provide the aural identity to the cinematic soundscape. As effects, these
sounds are not merely decorative but can serve a dramatic function, as noted with
the example from The Lord of the Rings. Ambient effects, those ephemeral sounds
that pervade the soundspace, constitute a key feature of the soundtrack. Ambiences
fold themselves around the voice to enhance the reverberant space of the diegesis, or
they can signify the empty and desolate psychological landscape as in Days of
Heaven (Terrence Malick, 1978).
The dynamic nature of contemporary video game soundtracks offers an intriguing
parallel to modern cinema sound. Game audio engineers and editors have integrated
the aesthetic models of construction and immersion established by film sound
editors in the 1970s, 1980s, and 1990s. The interactive environments of many games
feature immersive sound designs that attempt to situate the game player within the
game itself. Indeed, the experiential qualities that characterize games such as Medal
of Honor (1999, Electronic Arts) are bound to an industrial mode of representation
and production, which is rooted in Hollywood sound practice (for a comprehensive
history of Hollywood sound practice, see Altman 1992).
Despite a growing corpus of work on game audio, much of this emerging research
concentrates on game music, specifically the non-diegetic orchestral underscore that
accompanies games such as Medal of Honor (see Kassabian 2003, Troubie 2004).
Music constitutes only one component of the aural template, with dialogue and
sound effects rounding out the ‘Holy Trinity’ of game sound. In much the same way
that film sound is hierarchized based on narrative clarity, game audio follows a
similar design. Building sounds one layer at a time, game audio editors respect the
unwritten codes of Hollywood sound track construction, established during the
conversion period of the 1930s (Lastra 2000). The narrative importance of dialogue
to communicate story information ensures that voice dominates the sound track.
Localizable sound effects such as the footsteps of an approaching enemy play a
dominant role as well, leaving music and ambient sounds to ‘fill out’ the sound
space.
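The narrative hierarchy described above can be sketched computationally. The following fragment is our illustration only, not the API of any actual game engine: the layer names, priorities and the 0.4 ducking factor are invented assumptions. It assigns full gain to the highest-priority active layer and ‘ducks’ the others, mimicking the Hollywood convention that dialogue must remain intelligible:

```python
# Hypothetical priority ranking (our assumption, following the
# voice > effects > music > ambience hierarchy discussed above).
PRIORITY = {"voice": 3, "effects": 2, "music": 1, "ambience": 0}

def layer_gains(active_layers, duck=0.4):
    """Give full gain to the highest-priority active layer; duck the rest."""
    if not active_layers:
        return {}
    top = max(active_layers, key=lambda name: PRIORITY[name])
    return {name: (1.0 if name == top else duck) for name in active_layers}

# When a voice line plays, music and ambience are attenuated beneath it.
print(layer_gains(["voice", "music", "ambience"]))
# {'voice': 1.0, 'music': 0.4, 'ambience': 0.4}
```

In production mixers the ducking would be a smooth gain envelope rather than an instantaneous switch, but the priority logic is the same.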
One of the key features of contemporary video games is aural immersion. Rob
Bridgett has written several articles on game audio immersion. He has suggested
that silence and ‘low-key sound effects’ in addition to ‘maximal’ volume can
augment a player’s perception of a game and elicit various emotional responses
(Bridgett 2007). His desire for dynamic range in game audio leads him to contrast
the subtle nuances of a film soundtrack to the ‘faster, louder’ ethos of game audio
designers.
To be sure, the implementation and design of an immersive ‘surround’ game
environment does not simply replicate film sound. The interactivity of game play
necessitates a different approach by game audio editors, who must anticipate and
design a variety of potential sound scenarios. However, the simulation of a three-
dimensional sound field remains tethered to the cinema’s industrial and aesthetic
mode of production.
This brief discussion of cinematic and gaming sound has attempted to explore
the various ways in which sound affects the form and function of narrative. Taken
as a soundscape, the soundtrack is more than a mixture of voices, music, and effects:
it is the aural landscape of the diegesis. It not only shapes the space of the image, but
informs the dramatic function of the narrative. The sound of cinema can be solid or
liquid, loud or soft, large or small, textured or flat, tactile or invisible. If it is muted,
however, it evaporates into the ether and the audience is left with a flat, two-
dimensional image.
Based on our reading of cinema and game sound design, we propose that sound
has the potential to communicate narrative information, to enhance emotional
engagement, and to create environmental ambiences. With interactivity, the user
can manipulate the space of the sound map with the click of a mouse and navigate
through different soundscapes. Voices, music, and sound effects can serve a linear
narrative or can be organized and controlled by the user and/or author to suit
educational or entertainment needs.
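The user-driven navigation described above can be reduced to a very simple model: a lookup from clicked map regions to soundscape layers. The sketch below is purely illustrative; the region names and sound-file names are invented and do not come from the atlas:

```python
# Hypothetical region-to-soundscape mapping (invented example data).
SOUNDSCAPES = {
    "coast": ["waves.ogg", "gulls.ogg"],
    "station": ["generator.ogg", "radio_voice.ogg"],
    "ice_shelf": ["wind.ogg"],
}

def on_click(region, active_layers):
    """Replace the currently playing layers with those of the clicked region."""
    active_layers.clear()
    active_layers.extend(SOUNDSCAPES.get(region, []))
    return active_layers

layers = []
on_click("coast", layers)
print(layers)  # ['waves.ogg', 'gulls.ogg']
```

A real interactive map would cross-fade between the old and new layers rather than swap them abruptly, but the region-lookup structure is the core of the interaction.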
3. Designing sound for the Cybercartographic Atlas of Antarctica
3.1 Cinema and cartography
Christian Jacob (1992, p. 97) envisions cartographic atlases as a way to reconcile the
general and the specific through a cumulative and analytical logic providing a distinct
view of the world that is both ‘intellectual and encyclopedic’. As highlighted by
Theresa Castro (2005), this approach is made possible by the structure of atlases based
on the idea of cutting. In traditional atlases, geographical entities such as continents,
regions or countries are cut out and dissociated from the spatio-temporal continuum
in which they are in reality located. Castro argues that this cutting imposes a framing
and an assembly generating an impression of progression from the reader’s point of
view either at the level of the represented space or of the study/reading of the atlas as a
whole. This impression of progression is derived from the logic that prevailed during
the organization of the maps in the atlas. This logic, which is neither random nor
objective, generates specific rhythms as well as feelings of slowing down or accelerating
(Castro 2005, p. 3). These notions of framing, assembling, structure, rhythm and
progression lead Jacob (1992) to consider atlases as being cinematographic.
The emergence of electronic atlases and virtual globes is transforming the nature
of this cinematographic analogy. These new forms of atlases provide highly
interactive ways of panning and zooming, discarding any needs for spatial cutting.
They also integrate two novel dimensions that are inherently cinematographic:
animation and sound.
The development of animated cartography through the twentieth century has
been influenced by cinematographic techniques and concepts (see Campbell and
Egbert 1990). Animated maps appeared in cinema long before cartographers
became interested in them (Harrower 2004). It was only in the 1960s that
cartographers started to consider seriously the potential of animated maps as a
means for conveying spatial information more efficiently. Cinema provided a source
of inspiration for this purpose. For example, the set of cartographic dynamic
variables developed by David DiBiase et al. (1992) and completed by Alan
MacEachren (1994) were inspired by Christian Metz’s (1968) cinematographic
typology developed to characterize the temporal visual manipulations in movies
(MacEachren 1995, p. 237). While cinematographic concepts and techniques have
shaped animated cartography, the same does not hold true for sonified cartography.
We believe, none the less, that cinema could have had an influence on sonified
cartography. In fact, maps that have appeared in movies have often been associated
with different kinds of sounds. In Cartographic Cinema, Tom Conley (2006) provides
different examples of sound/map interactions in films revealing, in turn, unexpected
stories and meanings. The famous Carte du Pays de Tendre created by Madeleine de
Scudéry (1654), for example, becomes a fascinating road map for an emotional
journey in the film Les Amants (Louis Malle, 1958). Associated with Brahms’ music
at the beginning of the movie, the memory of the map is triggered every time the
music is played during the movie, linking music and maps with memories and places.
Conley also studies other narrative forms generated by the resonance of maps with
sound. In Roma città aperta (Roberto Rossellini, 1945), a map speaks through the
god-like voice of the Nazi officer to become an authoritative instrument of control
and power. In Les 400 coups (François Truffaut, 1959), maps on the classroom walls
become symbols of the silent authority of the nation state, which resonates with the
notion of cartographic silence introduced by Brian Harley (1988) in cartography. As
illustrated by these different examples, film directors have combined music, voice,
silence as well as sound effects with maps in order to generate emotions and convey
complex ideas related to time and space. To date, such examples have not directly
inspired the still scattered field of sonified cartography.
3.2 Sound and cartography
As recently noted by Brauen and Taylor (2007), despite the opportunities provided
by the Internet, the addition of sound to maps remains unusual. Indeed, even in the
context of Web cartography ‘sound is not really one of the options one thinks of
when discussing elements that could be part of a map’ (Kraak and Brown 2001,
p. 187). In his work on ‘Sound and Geographic Visualization’, John Krygier (1994)
already noted with regret the lack of sound in data display and geographic
visualization. To fill this gap, Krygier explored the abstract dimensions of sound
(e.g. pitch, loudness, timbre) and developed a set of abstract sound variables
dedicated to geographic visualization. This approach was widely inspired by work
done in psychology, cognitive sciences, and human-computer interaction (see
Brauen 2006 for an updated review of the use of sound in these fields). This work
contributed to our understanding of the use of sound for complex data exploration
and for improving user interaction by providing user feedback.
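Krygier’s idea of mapping data onto abstract sound variables can be illustrated with a short sketch. The linear scalings, frequency range and sample rate below are our assumptions for illustration, not Krygier’s specification:

```python
import math

def data_to_pitch(value, lo_hz=220.0, hi_hz=880.0):
    """Map a normalized value in [0, 1] to a pitch between lo_hz and hi_hz."""
    return lo_hz + (hi_hz - lo_hz) * value

def data_to_loudness(value, floor=0.1):
    """Map a normalized value in [0, 1] to an amplitude in [floor, 1.0]."""
    return floor + (1.0 - floor) * value

def tone_samples(value, duration_s=0.25, rate=8000):
    """Render the sonified value as a list of PCM samples: higher data
    values sound both higher-pitched and louder."""
    freq = data_to_pitch(value)
    amp = data_to_loudness(value)
    n = int(duration_s * rate)
    return [amp * math.sin(2 * math.pi * freq * t / rate) for t in range(n)]

print(data_to_pitch(0.0), data_to_pitch(1.0))  # 220.0 880.0
```

Pitch and loudness are only two of Krygier’s variables; timbre, duration and location would be mapped analogously.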
This kind of approach, which uses sound to map data or to provide spatial cues
and to enhance interaction, has dominated the field of sound cartography, leaving
cultural perspectives on the use of sound largely uncharted. In his work on
‘Sound Maps: Music and Sound in Cybercartography’, Paul Théberge (2005, p. 405)
acknowledges the importance of research on the psychological and cognitive aspects
of sound, but he also argues strongly in favor of a complementary approach based
on a thoroughgoing understanding of the cultural use of sounds in cybercartography.
In this paper we propose to explore this cultural dimension further. Before
presenting and discussing examples developed for the Cybercartographic Atlas of
Antarctica, we first review the general functions and uses of voices, sound effects
and music in cartography.
Voices are used in cartography mostly for didactic and descriptive purposes.
Through commentaries, narrative voices can add layers of information to a visual
message. They can serve to highlight and discuss specific phenomena. They provide
a useful alternative means for conveying spatial information when the visual
message is difficult to grasp. Different strategies of associating speech and image
have been studied to improve the communication of the information in cartography
(see Monmonier and Gluck 1994). Common applications of speech in maps include
GPS navigational maps embedded in car and mobile maps to assist pedestrians (see
Gartner 2004) and visually impaired users in navigation tasks (see Vasconcellos and
Tsuji 2005).
Voices can be less functional and didactic; they can suggest instead of asserting,
evoke instead of communicating. They can embody emotion and perception related
to space and convey information in a subtle and intimate way. Voices can become a
1226 S. Caquard et al.
Downloaded By: [Ingenta Content Distribution TandF titles] At: 16:40 22 November 2008
means of murmuring geospatial stories. These stories can even differ from those
told by the visual elements. In his electoral sound map of Ottawa, Brauen (2006)
overlays the voices of the different leaders of political parties based on the
percentage of votes each party obtained in each electoral district. The ‘cacophonic
overlapping of political discourses’ (Taylor and Caquard 2006, p. 3) conveyed by
this map calls for less didactic and more evocative ways of using voices in
cartography.
In societies with strong aural cultures, voices take on another dimension. As illustrated by
Aboriginal songlines of Australia (see Chatwin 1987) or by aural ancestral hunting
routes used by Inuit communities to navigate through the pack ice (Aporta 2004),
geographical knowledge remains mainly aural in many indigenous communities. In
these contexts, voices become the main means for conveying fundamental notions
related to space. Some of these voices have been mapped, for instance in relation to
place names (see Mouafo and Muller 2002) in order to ensure the preservation and
the accurate transmission of this aural knowledge. The geospatial knowledge
embedded in songs has also been used for land claim purposes. In British Columbia
(Canada), certain First Nations’ groups have used ancestral ceremonial songs to
demonstrate their historical use of land and to claim ownership over territories in
opposition to official visual maps (see Sparke 1998). The limited success of these
initiatives should not obscure the unique value of these musical voices and the
connection between culture, history and geography that they perpetuate.
Aboriginal songlines of Australia as well as Canada’s First Nation songs provide a
different form of mapping in both formal and political terms: they destabilize
dominant visual and institutional representations of territories conveyed by official
visual maps. Mapping these musical voices can become a political form of
expression and resistance, just like any other form of cartographic practice.
Sound effects in cartography are usually seen as a way to heighten the realistic
representation of the world. Realistic use of sounds includes the representation of
soundscapes. The Toronto Island Sound Map, for example, links sound effects with
places and pictures, and provides a sense of the diverse soundscapes of the island
(http://www.yorku.ca/dws/tism/). The quantitative dimension of the soundscape can
also be mapped. For instance, it can serve to characterize the noise level in urban
environments and to evaluate the impact of planned infrastructure such as airports
or highways (see Servigne et al. 1999; Muller and Scharlach 2001). Sound effects can
also be used to attract attention in order to reveal invisible information. Birgit
Woods (2005) uses the sound of running water in an interactive 3D terrain of
Antarctica to stimulate the curiosity of the user and reveal the presence of an
invisible running spring hidden under the ice shelf. Sound effects can also be
evocative and embody a sense of space. They can contribute to improving our
understanding of places by enriching our multisensorial reading of space. As
Davidson and Milligan (2004, p. 524) argue, ‘Perhaps through an exploration of
diverse senses of space, we could become better placed to appreciate the emotionally
dynamic spatiality of contemporary social life.’ Sound effects, voices and music can
contribute to this multisensorial approach to space.
The use of music to improve the communication of information in multimedia
applications has been studied (see, for example, Flowers and Hauer 1995, Hansen
et al. 1999), but very little research has addressed the use of music in cartography.
This is a surprising gap because music is closely related to place and culture and, as a
result, can provide key elements to better understanding and representing places. We
propose that music can fulfill roles within sound maps similar to those it plays in
interactive games and cinema. In addition to conveying cultural content, well-chosen
music can help the user establish an emotional connection and an enhanced sense of
engagement with the subject matter of the map. Music has to
be handled carefully, nonetheless, to avoid ‘musical stereotyping or cliché […], or a
form of easy exoticism’ (Theberge 2005, p. 405). When using music in cartographic
applications, one needs to take into account the complexity of the relationships
between music and place. For instance, in a globalized world, this complexity takes
on new characteristics with the circulation of popular music via technology and new
media and the global displacement of ethnic populations (Theberge 2005). The use
of music in cybercartographic applications should integrate these cultural,
technological and geographical changes.
The use of sound in general, including voices and sound effects, could in fact serve
to convey critical, cultural and political perspectives on space. This is illustrated by
‘Folk Songs for the Five Points’, which combines multiple sound elements (see
www.folksongsproject.com). In this map, voices, music and sound effects are
recorded and located on a map in order to convey a sense of diversity in New York’s
Lower East Side. Through spatial perception and political statement, and through
interactive layering and looping, this sonified map provides an aural metaphor for
the mixed and multiple layers of identities, languages, cultures, and music
characterizing this part of the city. This kind of combination between maps, sounds
and technology remains unusual; yet it illustrates the potential sound holds for
stimulating different perspectives on places based on emotional, cultural, political
and aesthetic dimensions.
Yet sound design, including voices, sound effects and music, cannot, or rather
should not, be viewed as simply a convenient way of conveying more descriptive and
quantitative information about space and territories. Rather, the use of sound forces
us to rethink the very concept of the map as primarily a visual image of space that
serves as a simple conveyer of information. The integration of sound might involve a
deeper understanding of the cultural, geographical and political dimensions of
maps. Our goal in this research is not to argue that adding sound to maps is
necessarily better than existing visual cartographic techniques, rather it is to further
explore the potential of sound for integrating these dimensions in cartography and
contributing to a multisensorial approach to space.
3.3 Sound effects and narratives of Antarctica exploration
The Cybercartographic Atlas of Antarctica provides a relevant environment to
evaluate, explore and experiment with a range of roles for sound in mapmaking.
This Atlas aims to expose users, primarily high school students, to relevant and
engaging information about Antarctica (Pulsifer et al. 2005) following the principles
of cybercartography (Taylor 1997, 2003). This atlas is structured around content
modules containing cartographic, narrative, and multimedia elements for the
purpose of examining a particular question, topic, area, or phenomenon related to the
Antarctic region, such as Antarctic exploration or territorial claims (Pulsifer et al.
2007). Voices, sound effects and music have been integrated in different modules of
this atlas with different functions.
Antarctica was the last ‘uncharted’ continent: it took a great deal of exploration
and four centuries to whittle down ‘the fabled Great South Land’ associated with
Antarctica (McGonigal and Woodworth 2001, p. 384) and then to chart
exhaustively the entire continent. The exploration of Antarctica progressed through
phases associated with different modes of transportation
(Simpson-Housley 1992): the sailor’s perspective (from the sixteenth century to the
end of the nineteenth century); the land explorers (early twentieth century); the
pilot’s view from the air (in 1929 Byrd was the first to fly over Antarctica); and
the satellite view (in 1997 the RADARSAT Antarctic Mapping Mission 1 provided
the first high-resolution images of the entire Southern continent).
In the module of the Cybercartographic Atlas of Antarctica dedicated to
Antarctic exploration, an animated map has been designed to show successively the
different routes of exploration in the process of ‘unfolding’ Antarctica (see Pulsifer
et al. 2007). In this animated map, four sequences of sound effects provide an
understandable cartographic synthesis of the four exploration phases. These sound
effects set the tone for the phase of exploration the user is experiencing: the sounds
of waves and the creaking timbers of a ship signify maritime exploration, the
sounds of wind and footsteps crunching through snow signify terrestrial
exploration, the sound of a plane engine signifies aerial exploration, and the sound
of an electronic communication signal signifies satellite exploration. These sounds
also provide a tactile sensation through an enhanced perception of, for example, the
relentless hum of the wind, which deepens the sense of remoteness and isolation
associated with terrestrial exploration.
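The mapping from exploration era to sound effect described above can be sketched as a simple lookup over the four phases. The phase boundaries follow the dates given earlier in this section; the file names and exact cut-off years are illustrative assumptions, not the atlas implementation.

```python
# Sketch: mapping an exploration year to its phase and an associated
# sound effect, as in the animated exploration map. File names and
# precise boundary years are hypothetical.

PHASES = [
    # (start year, phase, sound effect)
    (1500, "maritime",    "waves_and_timbers.ogg"),
    (1900, "terrestrial", "wind_and_footsteps.ogg"),
    (1929, "aerial",      "plane_engine.ogg"),
    (1997, "satellite",   "radio_signal.ogg"),
]

def phase_for_year(year):
    """Return (phase, sound file) for the latest phase starting at or before year."""
    current = None
    for start, name, sound in PHASES:
        if year >= start:
            current = (name, sound)
    return current
```

As the animation timeline advances through the years, the returned sound effect would be looped until the next phase begins.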
In the exploration module of the Atlas of Antarctica, early user comments
concerning the transportation sound effects convinced us that the sound effects did
not, for all users, stand on their own. This issue has been addressed in film theory.
Chion (1994, pp. 63–64) used the concept of synchresis to describe the ‘spontaneous
and irresistible weld produced between a particular auditory phenomenon and visual
phenomenon when they occur at the same time.’ He argued that, although not all
combinations of visual and auditory phenomena will combine through synchresis, a
surprising number of combinations, including many that would seem to be
contradictory, do create in the mind of the viewer/listener the conviction that the
sounds heard do indeed arise as a result of the visible phenomenon. A corollary of the
principle of synchresis is that many sounds can be made meaningful through
combination with a variety of visual phenomena and, further, that those sounds on
their own may be ambiguous if left unconstrained, or not obviously
constrained, by visual phenomena. As a result, we integrated animated pictograms (a
boat, footsteps, a plane and a satellite) into the exploration map timeline to
constrain the interpretation of these sounds (see figure 1) and to help the user associate
the sound effects with the transportation modes. Thus, the overall graphic
representation had to be modified in order to convey the intended message. This illustrates the
idea that sound in map design cannot simply be seen as an element to add to the visual
map but rather as an element to take into account in the design of the overall sound
map. Sound should be understood in its interaction with graphic elements as well as
with other sounds and media in general. Introducing these graphic elements is also a
way to move toward a more narrative function of the map.
The design of this sound map highlights the temporal dimension of spatial
exploration by emphasizing the different lengths of the exploration phases;
maritime exploration clearly appears to be the longest.
Beyond emphasizing the duration, it is hoped that the sound helps to capture the
attention of the user by creating narrative expectations. Just like in a movie,
something is likely to happen as the map unfolds. Here, sound effects create a
tension that keeps the attention of the map user. The sounds become associated with
the unfolding story on the screen, characterized by the movement of the animated
icon. In this linear representation, the paired sound and animated icon might
provide an aural identity for the evolving cinematic soundscape of Antarctica
exploration. Just like in the film Les Amants (Louis Malle, 1958), where the memory
of the map is triggered every time the music is played during the movie (Conley
2006), the memory of this map might be triggered elsewhere in the atlas by playing
the same sound effect.
3.4 A contrapuntal use of voices to map territorial claims
As discussed in section 2, different voices can have different functions and the tone
of the voice can dramatically impact the narrative:
Figure 1. Antarctica exploration module: sound effects and animated pictograms serve to represent the different phases of Antarctica exploration (from the ‘Island of Utopia’ to the high-resolution satellite mapping). A prototype of this map is available at: https://gcrc.carleton.ca/confluence/x/oQE.
The human voice can be used to personalize and embody knowledge, the identification
of social actors can clarify the origin (and bias) of information, and access to multiple
voices can illuminate not only different points of view, but also the social, cultural and
political stakes that are implicit whenever notions of geography, territory, identity,
economy and the nation state are invoked. (Theberge 2005, p. 396)
In the Atlas of Antarctica, some of these functions have been used in the module
dedicated to territorial claims. This module presents and explains the historical and
geographical context of territorial claims in Antarctica. As described in the atlas, the
UK initiated territorial claims in 1908 based on historical discoveries, followed by
New Zealand and Australia, then by France, Norway, Germany, Chile and
Argentina (the US has not made a formal claim, nor does it recognize the claims of
other nations; however, it has reserved the right to do so). All these claims are
based on the sector principle: territorial boundaries extending to the South Pole,
similar to ‘slicing a pie’ (see figure 2). While most of the territory claimed is not
contested, the territorial claims of the United Kingdom, Argentina and Chile in the
Antarctic Peninsula region overlap. It is in this contested area that sovereignty-
related conflicts have been most prominent in the history of the region.
In this module, the function of vocal narration is to highlight the tension between
these three countries. To do so, newspaper articles from the UK, Chile and
Argentina that address this issue more or less directly have been recorded in their
original languages, providing impressions of broadcast voices. Broadcast voices can
have many different functions (see Norman 2004). In our case, a mature male voice
characterizing the voice-of-God (see section 2) has been used to record the English
text and to link it to the historical basis of the British claim. Younger challenger
voices have been used to record the Chilean and the
Argentinean texts. These two voices share a language, creating a common aural
texture that reflects some cultural similarities. The distinction
between these two voices is signaled by gender: the Chilean speaker is male while
the Argentinean is female. These voices embody the nationalistic enterprise
that is at stake in territorial claims.
These broadcast voices have been linked to the interactive map of the territorial
claims (figure 2), building on a similar idea developed in the electoral sound map of
Ottawa (see Brauen 2006). When the mouse is over a part of Antarctica claimed by
only one country, the voice associated with that country is triggered. The audio message
is, however, clear and seems simply descriptive. When the mouse is over contested
parts of Antarctica, the user hears different voices simultaneously. The audio
message is no longer clear and descriptive but becomes incomprehensible, symbolic
and intuitive. The voices do not talk to each other, but rather are layered one on the
other. These voices become noise, emphasizing the blurriness between the
aural codes as well as the richness of combining and layering sonic elements.
The resulting cacophonic environment becomes an implicit marker of the
complexity of the tension associated with territorial claims. This tension is enhanced
through the increased volume of the combined voices in the contested areas. These
voices generate an unexpected destabilizing auditory environment that contrasts
sharply with the clarity of the graphic delimitation of territorial claims. This use of
vocal narration highlights the cartographic tension existing between the complexity
of the world and the cartographic simplifications required to analyze, represent and
communicate these phenomena. Just as film music can serve as counterpoint to the
image action (see section 2), voices here run counter to the image map, providing an
anti-parallelist strategy of accompaniment to the visual message. In this case, vocal
narration conveys the idea that territorial claims are highly complex and tied to
notions of history, identity, culture, geography and authority. This resonates with
the notion of contrapuntal cartography developed by Sparke (1998). Sound maps
could then serve ‘at once both to communicate in and to disrupt the cartographic
conventions’ (Sparke 1998, p. 473). Sound can be used to challenge the social,
cultural and political point of view provided by conventional maps and to emphasize
the idea that cybercartography allows for both the figurative and literal expression
of distinct voices (Taylor 2005).
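The mouse-over behaviour described in this section can be sketched as a lookup from a set of claimants to the voice recordings to layer, with the overall gain rising in contested areas. The file names and the gain rule are illustrative assumptions rather than the atlas code.

```python
# Sketch of the territorial-claims interaction: hovering over a region
# plays the voice of each claimant country; overlapping (contested)
# claims are layered and played louder. Files and gain rule are hypothetical.

CLAIM_VOICES = {
    "UK":        "uk_voice.ogg",        # mature male, voice-of-God
    "Chile":     "chile_voice.ogg",     # younger male challenger
    "Argentina": "argentina_voice.ogg", # younger female challenger
}

def voices_for_claims(claimants, base_gain=0.6):
    """Return the voice files to layer and a gain that grows with overlap."""
    files = [CLAIM_VOICES[c] for c in claimants if c in CLAIM_VOICES]
    # contested areas (several claimants) play louder, capped at full gain
    gain = min(1.0, base_gain + 0.2 * (len(files) - 1)) if files else 0.0
    return files, gain
```

A single claimant yields one clear, descriptive voice at the base gain; the contested Antarctic Peninsula would return all three voices at full gain, producing the cacophony discussed above.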
3.5 Designing the audio identity of the atlas
The purpose of the audio identity is to provide the atlas of Antarctica with an aural
theme that contributes to the overall tone of the project and assists in constructing a
coherent and immersive environment for the user. It consists of a set of recorded
elements which recur throughout the project and provide a common sonic and
conceptual basis for all other sound icons, motifs and user feedback effects. In this
way, it relates to the concept of leitmotiv (see section 2) and functions as the aural
companion of a project’s so-called ‘look and feel’. The visual interface produces a
coherent user environment by adhering to a pre-established structure, a limited and
consistent colour palette, and recurring sets of dynamic graphical elements.
Similarly, an audio identity assists in orienting the user and contextualizing (that
is, not simply reproducing) on-screen information and events by establishing a
pervasive aural environment comprised of thematically linked sonic elements which
become increasingly familiar to the user through their repetition and variation.
Immersion, however, need not entail ever-present sound. An audio identity’s
palette can make significant use of silence as well, whether as a means of
punctuating on-screen (non-)events or creating a sense of dynamic interplay between
visual and aural components. An economy of well-deployed sound elements can be
used to powerful effect, whereas an inconsistent sonic framework can jar users out
of their engagement with a multimedia project.
Our aim in building the audio identity of the Atlas was to convey the theme of
Antarctica not only as a natural environment, but also as a site of social and
technological convergence. The resulting sound-image has therefore been conceived
as a loose depiction of three intersecting components: an exterior frontier
(represented by the high-pitched sound of cold winds whistling across a plain);
the human-made interior space of a research station (represented by the muffled
rumble of winds buffeting a small, enclosed acoustic space); and intermittent bursts
of short wave radio activity, meant to suggest the convergence of what Arjun
Appadurai (1996) has labelled techno- and mediascapes. Elsewhere in the project,
recreations of British, Argentine, and Chilean radio broadcasts regarding the
territorial claim tensions are used to depict clashes at the level of what Appadurai
calls the ethno- and ideoscapes. A final component of the audio identity is a recurring
‘sound icon’—a brief, non-musical, non-referential motif which serves as a signature
for the project as a whole and which may act as a model for a variety of user
interface feedback sounds (this audio identity is available at: https://gcrc.carleton.ca/
confluence/x/GAQ).
An earlier version of the audio identity contained two elements which were
conceptually interesting, but ultimately had to be discarded. One of these sounds, a
looped recording of radio signals arcing through the polar atmosphere, seemed well
suited to convey a technological presence in Antarctica. Oddly, however, several
individuals who were asked to review our work in progress, and who were unaware
of the recording’s original context, commented that it conjured images of tropical
birds in a jungle. A decision was thus made to choose sounds above all for their
potential to be learned and recognized rather than for any inherent meaning that
might be supposed to inhabit a given recording. The second element to be removed
from the audio identity was a short, looping musical motif produced with a software
synthesizer. A four-note sequence of bass notes was layered with an atmospheric
texture designed to emulate a windswept Antarctic landscape. In this respect, this
version of the audio identity was quite successful. However, several reviewers
ultimately determined that music of any sort might be too prescriptive of a
particular mood amongst users and too culturally specific (in this case, reflective of a
traditional Western tonal system) for the subject matter of the Atlas. This is not to
say that music cannot or will not have a place within the project but that, in each
instance, its cultural and thematic implications will need to be considered.
The design of the Atlas’ audio identity has also had to take into account a number
of technological variables. Bandwidth constraints and the varying capabilities of end
users’ computer systems are always central concerns in the development of web-
based multimedia. This is particularly the case where sound is concerned. While
contemporary graphic designers can be certain that the majority of users’ systems
will have reasonably advanced graphic capabilities within a known range, the sound
designer must consider an immense range of capabilities as well as the possibility of
no sound at all. This can be attributed in large part to the centrality of the graphic
interface to modern computer usage compared to the uneven attention given to
sound. The proliferation of high-speed Internet access, together with low-cost
graphics hardware (monitors and integrated graphics or video cards), has ensured
that many users can view photorealistic images at reasonably high resolutions on
home or institutional systems, although such access remains unevenly distributed
internationally and unevenly accessible to certain groups within communities often
(casually) considered highly connected (Petrazzini and Kibati 1999, Warf 2001).
Sound design, on the other hand, faces a number of challenges related to users’
hardware and data transmission. Many computers use small speakers enclosed in
plastic boxes, or speakers built into the monitor or the computer itself. Like a
transistor radio, each of these options is capable of reproducing only a very small
portion of the frequency range, and is therefore an impediment to reproducing
immersive sound. At the same time, a smaller yet significant number of systems are
designed precisely for the immersive experiences of gaming and watching videos,
and their sound reproduction capabilities are much closer to those of a multi-
channel (‘surround sound’) home theater system. This variation forces the sound
designer either to target the project towards one type of system or to attempt to
accommodate the full range. Secondly, audio files tend to be much larger than
graphic files and, even when compressed, use significantly more bandwidth.
Moreover, because the compression process removes sound data from the file
(beginning with the highest and lowest frequencies, and moving towards the middle)
in order to make it smaller, highly compressed audio can also be an impediment to
an immersive experience.
Design of the audio identity has taken these concerns into account while also
avoiding a lowest common denominator approach to audio in the Atlas. This is
reflected in the construction of the wind sounds which form the basis of the audio
identity. The high- and low-frequency samples used to create the wisping and
rumbling of the wind can be thought of as two distinct layers of sound, constructed
around possible end user systems. Wisps and gusts have been created using an
isolated range of upper frequencies (1 kHz and upward), ensuring that the sounds
will be adequately reproducible on virtually any sound system, regardless of its
limitations. These sounds alone contribute only partially to the desired state of user
immersion, but ensuring their audibility will help to ensure sonic coherence within
the Atlas, even on low-end systems. The second sonic layer is limited to a portion of
the low-frequency range, beginning at 125 Hz and continuing downward beyond the
threshold of human hearing. Sound in this range is felt as a vibration as much as it is
heard, and it is precisely this immersive effect that home theater-style computer
audio systems are designed to reproduce. Finally, midrange frequencies have been
largely silenced in order to emphasize activity within the two layers, and to avoid
forms of harmonic distortion that commonly occur when small plastic speakers
reproduce certain combinations of mid-frequency sound. The end result is an audio
identity which, rather than being crippled in deference to less capable systems,
maintains a level of universal functionality while also containing an additional sonic
layer that will enhance immersiveness on more powerful systems. Moreover, if it is
later determined that bandwidth constraints require the sound data to be made more
compact, this isolation of sonic elements in the mixing of the audio identity will
make it easier to remove the low-frequency component without the need for further
re-mixing.
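As a rough illustration of this two-band construction, the following sketch splits a signal into a low rumble below roughly 125 Hz and high wisps above roughly 1 kHz, discarding the midrange. The cut-off frequencies come from the text; the simple first-order filters are an assumption, and production work would use much steeper filters.

```python
# Sketch of the two-layer wind construction: a low-pass keeps the
# sub-bass rumble, a high-pass keeps the wisping gusts, and the
# midrange falls away. Filter design is illustrative, not the
# production processing chain.
import math

def one_pole_lowpass(samples, fc, fs):
    """Simple RC low-pass: attenuates content above fc (Hz)."""
    rc, dt = 1.0 / (2 * math.pi * fc), 1.0 / fs
    a, y, out = dt / (rc + dt), 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def one_pole_highpass(samples, fc, fs):
    """Simple RC high-pass: attenuates content below fc (Hz)."""
    rc, dt = 1.0 / (2 * math.pi * fc), 1.0 / fs
    a, y, prev, out = rc / (rc + dt), 0.0, 0.0, []
    for x in samples:
        y = a * (y + x - prev)
        prev = x
        out.append(y)
    return out

def antarctic_wind_bands(samples, fs=44100):
    """Split a wind recording into the two layers used by the audio identity."""
    low_rumble = one_pole_lowpass(samples, 125.0, fs)    # felt as vibration
    high_wisps = one_pole_highpass(samples, 1000.0, fs)  # audible on any system
    return low_rumble, high_wisps
```

Keeping the two bands as separate buffers also reflects the re-mixing point above: the low-frequency layer can be dropped for bandwidth-constrained delivery without touching the high band.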
This section has presented some considerations in the design of sonic elements for
use in a cybercartographic atlas, drawing attention to the interplay of visual and
audible elements, to the sometimes culturally-biased design and interpretation of the
sound used, as well as to the blurriness of the limits and functions of the major aural
codes. Through these examples we have highlighted some of the consequences of the
recreation of a cinematic soundscape: we discussed the potential of the contrapuntal
functions of sound in mapmaking; the use of sound to create a parallel and
validating narrative element complementing the visual map; and we introduced
some of the conceptual and technical issues that need to be considered when
attempting to enhance the coherence of the atlas and the user’s sense of immersion
through the design of specific audio elements such as the atlas audio identity. These
conceptual and technical issues are highly interrelated in cybercartography as in
other forms of multimedia design, and especially in any form of multimedia designed
for use on the Internet. The following section recontextualizes some of these
conceptual and technical issues and highlights the sometimes oppositional nature of
some of the goals of sound design in the Internet environment.
4. Framework for Internet sound maps
In this section we draw on established sound-design genres such as film sound design
and video game sound design (e.g. see Gal et al. 2002, Theberge 2005) to explore the
potentials and constraints of combining sound with maps in the specific context of
the Internet. The presentation and discussion of desired sound design features and
the constraints of presenting sound on the World Wide Web leads us to propose a
taxonomy based on sound predictability, interactivity and control.
Designing sound for cybercartographic atlases requires imagination, sound design
skills, and technical innovation, both in the transmission of sound in the context of
a cybercartographic atlas and in the authoring environments for the creation of atlas
content to include sound. A sound infrastructure is being developed to support atlas
module authors in experimenting with a variety of styles of sound definitions (Hayes
2006); atlas modules incorporating music, voices, sound effects, and combinations
thereof are being developed or planned; and user evaluations of atlas modules and
prototypes using the sound infrastructure are soon to be conducted. To enable the
sound design experiments described in this document, this infrastructure, designed
to provide sound for the cybercartographic atlases operating as World Wide Web
applications, has been implemented to provide the following functional capabilities
(some or all of which may be used by a particular sound map):
• Sound layering: several sounds and several types of sound (e.g. spoken
narration, music, and sound effects) are simultaneously playable. As described
above, it may be desirable to layer elements in the creation of an atlas theme or,
as in the case of the territorial claims map, to highlight differing perspectives as
part of a map. Additionally, it may be desirable to layer at a higher level, for
example by overlaying thematic information related to a particular map with
the atlas theme.
• Sound looping: sounds of all types may also be played in a loop, repeating a
limited number of times, or for as long as the module is being viewed by the
user, or until some action by the user causes the sound to stop. As discussed by
Theberge (2005), looping has often been used as a strategy to overcome
technical limitations in the ability of systems and early computer games to play
long or complex sounds but has also become a popular technique in dance and
electronic music for producing complex aggregate sound designs based on the
underlying loops as relatively simple components of the final production. The
reality of sound design for the World Wide Web is that simultaneous
transmission and playback of sound still requires a significant amount of
network capacity; reducing the network requirements of a sound design in this
context continues to be necessary and looping can contribute to this goal.
• Interactivity: for some of the sound designs discussed in this paper, the user
must be able to interact with the sound design in the sense that the user should
be able to modify some or all of the sounds or the playback parameters of some
or all of the sounds (e.g. gain setting) through interactions with the atlas.
Designing sound for user interaction may, among other techniques, be done
through the use of layering such that the actions of the user modify the
playback status or parameters of each layer differently. User activity may
trigger ‘scene changes’ within the atlas that may, in turn, require sound
transitions as the new atlas content is presented. The use of interactivity (with
reactive visual and/or auditory elements) must be evaluated for each map in
which it is used because it can increase the overall complexity of the map and
does not necessarily improve the effectiveness of a given map. For example, the envisioned uses of the audio identity of the Cybercartographic Atlas of Antarctica certainly include non-interactive applications.
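The layering, looping, and gain-control capabilities above can be sketched in a few lines. This is a minimal illustration only, with invented names; Python stands in for the atlas's actual Web implementation:

```python
class SoundLayer:
    """One sound in a layered design, e.g. narration, music, or effects."""

    def __init__(self, name, samples, gain=1.0, loop_count=1):
        self.name = name
        self.samples = samples        # decoded audio samples, one float each
        self.gain = gain              # user-adjustable playback parameter
        self.loop_count = loop_count  # repeats; the atlas could loop indefinitely

    def rendered(self):
        """Samples with looping and the gain setting applied."""
        return [s * self.gain for s in self.samples * self.loop_count]


def mix(layers):
    """Layering: sum all layers sample-by-sample (shorter layers end early)."""
    out = [0.0] * max(len(layer.rendered()) for layer in layers)
    for layer in layers:
        for i, s in enumerate(layer.rendered()):
            out[i] += s
    return out


narration = SoundLayer("narration", [0.5, 0.5])
theme = SoundLayer("theme", [0.25], gain=0.5, loop_count=2)
print(mix([narration, theme]))  # [0.625, 0.625]
```

Interactivity then amounts to letting user actions change `gain`, `loop_count`, or which layers are passed to the mixer.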
These functional capabilities are similar to those required by film, computer game
and multimedia sound design, although only computer games and multimedia share
the need for interactivity with the atlas sound design—film sound is designed as part
of the production and the audience does not have the ability to modify the sound
during a performance. While we have been guided by sound design from other
genres in what we want the atlas design to sound like, the need for interactivity in
Designing sound in cybercartography 1235
Downloaded By: [Ingenta Content Distribution TandF titles] At: 16:40 22 November 2008
many ways dictates how we need to organize our sound design so that it can be
made responsive to the activities of the user. An additional constraint on the
responsiveness of the atlas to user interaction is the time required to send complex,
high-fidelity sounds in digital format to the user’s computer. A failure to take these
transmission delays into account will result in a map use experience which, to the
map user waiting for a map to load, will seem decidedly non-interactive.
Two different mechanisms for transmitting and playing sounds, each with its own
implications for the delays involved, are available for use in a computer network:
sound streams and sound clips. The term stream is used to describe a sound that is
played by the receiving computer as it is being transmitted from the atlas server; the
receiver does not wait until the sound has been fully received to start playback. The
term clip is used to describe a sound that is sent to the receiving computer and is
stored, in its entirety, in the memory of the receiver prior to being played. Each of
these types of sounds has advantages and disadvantages. Clips are limited in
duration because of the requirement for the receiving computer to store the entire
clip in memory. The delay to begin playback when a clip is first transmitted may be
noticeable, depending on the duration of the clip, because the entire clip must be
received before playback begins. However, once loaded, clip playback is very
reliable because there is no dependence on network transmission to maintain sound
quality and a clip may be played in a loop on the receiver with no need to retransmit
any data. Streams can generally begin playback on the receiving computer more
quickly, except when compared to very short clips, and can be of any duration since
the receiver at any time only stores a small portion of the complete sound. Stream
playback is, however, always dependent on network transmission so network delay
can cause audio drop-outs and looping a stream requires retransmission.
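As a rough illustration of this trade-off (the figures below are hypothetical, not measurements of the atlas infrastructure), the startup delay of each mechanism can be estimated from how much data must arrive before playback begins:

```python
def startup_delay(size_bytes, bandwidth_bps, mechanism, buffer_bytes=32_000):
    """Estimated seconds before playback can begin.

    A clip must be received in full; a stream starts once a small
    receive buffer has filled.
    """
    if mechanism == "clip":
        needed = size_bytes
    elif mechanism == "stream":
        needed = min(size_bytes, buffer_bytes)
    else:
        raise ValueError(mechanism)
    return needed * 8 / bandwidth_bps


# A 1 MB sound over a 512 kbit/s connection:
print(startup_delay(1_000_000, 512_000, "clip"))    # 15.625 s, but then loops freely
print(startup_delay(1_000_000, 512_000, "stream"))  # 0.5 s, but needs the network throughout
```

The numbers make the design rule concrete: short, reusable sounds favour clips; long or open-ended sounds favour streams.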
The sounds generated by an atlas module result from the sound definition of the
module author modified by activities of the user, as allowed by the author. As a
minimum, the user must choose to view a module in order for that module’s sound
definition to be loaded. But once loaded, the definition of the module’s sounds may
be more or less predictable depending on how the author has defined it. The author
could design the sounds to react to the actions of the user, such as cursor
movements, by modifying playback parameters of some or all of the sonic elements
in the design. The author’s sound definition, in some form, is stored on the atlas
server preparatory to being downloaded as the user browses the atlas and the sound
definition could be different for each atlas module.
As an abstraction, the sounds presented to the user can be classified in terms of
predictability, determined by the sound definition type created by the author and the
level of interactivity made available to the user while browsing the atlas module.
Figure 3 shows this conceptual model. Darker shading indicates more predictable
sounds in the sense that every time the user reloads a predictable atlas module he or
she would expect to hear the same sounds. Less predictable sounds would be ones
that change across repeated perusals of the module or in response to activities of the
user. Note that the predictability of sounds associated with a module is not the same
as those sounds being repetitive. A composed piece of music may not be internally
repetitive despite the fact that it is used repeatedly as part of the module. A single
sonic element (e.g. a duck quacking) that is repeated within a module sound
definition would quickly be noticed as a repetitive element even if it is mixed with
many other random elements. Although repetitive elements, such as animal noises,
may become annoying if overused they can also serve to create an audio identity for
an atlas or module (see section 3.5).
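Assuming the left-to-right ordering of the sound definition types in table 1, the predictability relation of figure 3 can be sketched as a simple ranking. This is a sketch only; the actual figure also factors in the level of interactivity made available to the user:

```python
# Sound definition types, from most to least predictable.
PREDICTABILITY_ORDER = ["composed", "compiled", "sonified", "undefined"]

def more_predictable(a, b):
    """True if a user reloading a module of type `a` is more likely to
    hear the same sounds again than with a module of type `b`."""
    return PREDICTABILITY_ORDER.index(a) < PREDICTABILITY_ORDER.index(b)

print(more_predictable("composed", "undefined"))  # True
```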
The sound definition types shown in figure 3 are outlined in table 1. Each differs
according to the type and level of composition on the part of the author and each
provides different sound design characteristics to the atlas user and different
possibilities for designing user interaction capabilities into an atlas module. Table 1
presents four main categories of sound type definitions and for each provides:
- The interpretation of the sound type from the perspective of the author (i.e. ‘how do I create this type of sound definition?’).
- The anticipated interpretation of the sound type from the perspective of the user (i.e. ‘not knowing how this module’s sound was defined, what does it sound like or how may I interact with the sounds?’).
- Examples of sound designs created using this sound definition type.
To create a composed sound definition, the author selects all of the sound elements
and determines the timing of the playback of those sounds. Individual sound
elements do not respond to user interactions. Sound elements could include
prerecorded music, sound effects, narration or a combination thereof, but the
sounds are presented to the user as a single composition with no interactive
elements. Layering may be used to combine sounds, but from the user’s perspective
they appear as a single layer. An example of composed sound is provided in the Antarctica exploration module, in which sound effects are scripted along with the map animation to illustrate the different phases of Antarctic exploration (see section 3.3).
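A composed definition amounts to a fixed script of sound elements and start times. A minimal sketch (the element names are invented for illustration):

```python
# Author-fixed script: (start time in seconds, sound element).
COMPOSED_SCRIPT = [
    (0.0, "narration_intro"),
    (4.5, "ship_ambience"),
    (12.0, "expedition_theme"),
]

def elements_started_by(t, script):
    """Elements whose scripted start time has been reached at playback time t.
    The user cannot alter the script, only view the module that triggers it."""
    return [name for start, name in script if start <= t]

print(elements_started_by(5.0, COMPOSED_SCRIPT))  # ['narration_intro', 'ship_ambience']
```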
To create a compiled sound definition, the author selects all of the sound elements,
but playback parameters of the individual sound elements (probably created as
separate layers) may be modified by user actions. The user could interact with the
sound design by moving the cursor over different regions of a map or other graphic
or by clicking the mouse buttons while the cursor is located over specific graphic
elements. There are two variants of compiled sound definitions, synchronized and
unsynchronized, which are distinguished by the types of sounds being compiled.
Synchronized compilations are intended for the creation of interactive music sound
designs in which the playback parameters of individual sound layers may be altered
by user interaction but the timing of the layers is synchronized by the author’s
definition. The individual layers are synthesized music such as Musical Instrument
Digital Interface (MIDI) sequences that are composed to be used as a group and the
synchronization between layers is maintained by the atlas sound infrastructure (e.g.
Hansen et al. 1999). Unsynchronized compilations are intended for the creation of
interactive sound designs in which playback parameters of individual sound layers
may be modified by user actions, including timing (e.g. start or stop times) and other
settings such as gain and mute settings. The synchronization of the individual layers
is not maintained by the atlas sound infrastructure. Brauen (2006) used a design based on an unsynchronized audio compilation, as does the territorial claims module of the Cybercartographic Atlas of Antarctica described previously (see section 3.4).
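An unsynchronized compilation of the kind used in the territorial claims module might be organized as author-selected layers whose playback state reacts to the cursor position. This is a sketch with invented names, not the module's actual code:

```python
class CompiledLayer:
    """An author-selected sound layer with user-modifiable playback parameters."""

    def __init__(self, name, regions):
        self.name = name
        self.regions = set(regions)  # map regions that trigger this layer
        self.playing = False
        self.gain = 1.0

def on_cursor_enter(region, layers):
    """Start every layer associated with the region; overlapping claims
    therefore play overlapping voices."""
    for layer in layers:
        layer.playing = region in layer.regions

layers = [
    CompiledLayer("uk_voice", {"uk_claim", "contested"}),
    CompiledLayer("argentina_voice", {"argentina_claim", "contested"}),
]
on_cursor_enter("contested", layers)
print([l.name for l in layers if l.playing])  # ['uk_voice', 'argentina_voice']
```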
To create a sonified sound definition, the author associates sound definitions such
as MIDI instrument type designations (e.g. baby grand piano) with thematic data
associated with regions of a map or some other graphic in an atlas module. Playback
parameters of the MIDI instrument such as pitch, note duration, or note rate are
modified according to the value of the thematic data associated with the region over
which the cursor is positioned. This type of sonification has been widely used for
non-spatial applications (Flowers and Hauer 1995, Flowers et al. 2001, Hermann
et al. 2003) as well as for spatial ones (Fisher 1994, Coburn and Smith 2005). One of
the authors has been developing an application for browsing data derived from
Canada’s trade with world regions to experiment with both synchronized sound
compilations and sonified sound designs but a description of this application is
beyond the scope of this paper.
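A sonified definition reduces to a mapping from thematic data values to synthesis parameters. A minimal sketch of a linear value-to-pitch mapping (the note range and data values are illustrative assumptions):

```python
def value_to_midi_note(value, vmin, vmax, low=48, high=84):
    """Map a thematic data value linearly onto a MIDI note number
    (48 = C3, 84 = C6), so larger values sound higher in pitch."""
    fraction = (value - vmin) / (vmax - vmin)
    return round(low + fraction * (high - low))

# Pitch for the region under the cursor, given its data value and range:
print(value_to_midi_note(750, vmin=0, vmax=1000))  # 75
```

Other playback parameters mentioned above (note duration, note rate) could be driven by further data variables in the same way.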
For conceptual completeness, we include undefined sound definitions in which the
content of the sound design is unknown or modifiable at the time a user interacts
with the map such that the author may never have heard the combined audiovisual
presentation, as experienced by that user. To create an undefined sound definition,
the author provides access to sound sources, the contents of which are unknown at
the time the rest of the atlas module content is written. Examples of such sources
would include Internet radio streams or streaming audio from live microphones.
Although the presence of dynamic audio sources by themselves makes this type of
sound definition very unpredictable, the user could be given interactive control
through the ability to select among several streaming audio sources or could control
the mixing of several audio streams through individual gain controls. Although we
have verified the ability to connect audio sources such as these into the atlas sound
infrastructure, we have not yet experimented with atlas modules based on this type
of sound definition.
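Although we have not built modules of this type, the user-side control it implies is straightforward to sketch: the author registers sources whose content is unknown, and the user selects and mixes them (source names are invented):

```python
# Sources the author wires up without knowing their eventual content.
SOURCES = ["internet_radio", "live_microphone"]

def apply_user_mix(user_gains):
    """Keep only the streams the user has mixed in audibly, with their gains."""
    return {name: gain for name, gain in user_gains.items()
            if name in SOURCES and gain > 0.0}

print(apply_user_mix({"internet_radio": 0.8, "live_microphone": 0.0}))
# {'internet_radio': 0.8}
```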
Note that it is possible to use a selection of sounds, each of which may be of a
different type and each of which may be characterized differently in terms of
predictability. For example, composed music could be layered with ambient
background sounds. While sounds internal to the music are obviously synchronized,
there will not be synchronization between the music and the background sounds.
For all sound definition types, it would be possible for the author to provide
alternative sound definitions from which the user could choose while using the atlas.
For example, alternative compositions could be offered in which different styles of
theme music or environmental sounds could be offered to the user, possibly through
personal atlas preference settings. Although in all cases this increases the
unpredictability of the atlas sound design, it may in the case of a composed sound
definition qualitatively change the author’s ability to deliver an audio-visual
presentation as a ‘package’. The user’s interpretation of the content, as much as it
may be guessed by the author, may vary based on which sound definition alternative
is chosen.
The sound definition types in this framework define a range of predictability from
the very predictable instance of a composed sound presentation to the completely
undefined and therefore unpredictable instance of streaming live audio. This
predictability framework does not place any normative value on the level of
predictability of a particular sound design, but instead reflects the ability of the
author to control the presentation of the atlas content as a ‘package’; the more the
user can control sound parameters while using the atlas, the less likely the author is
to have heard the atlas presentation as the user will experience it.
We intend this framework as a means to assess trade-offs between predictability,
interactivity, control, and the desire of the author to compose a module with an
intended mood or message. This framework should allow us to assess how well
authors can predict the user’s interpretation and understanding of combined visual
Table 1. Atlas sound definition types: user and author perspectives.

COMPOSED
- Author perspective: synchronized audio presentation; a single-track sound sample or multi-track sound samples played as a unit.
- User perspective: an audio composition (music, ambient sounds, sound effects, or a combination) accompanying the map.
- Examples: Antarctica Exploration map (see section 3.3) and the audio identity of the Atlas of Antarctica (see section 3.5).

COMPILED (MULTI-TRACK), SYNCHRONIZED
- Author perspective: multiple MIDI tracks designed for use as a group; each track is a musical part (e.g. melody, harmony). Playback parameters of individual tracks are separately controllable.
- User perspective: interactive composed music (different parts associated with different variables computed from spatial data).
- Examples: Pro-muse (Hansen et al. 1999) provides a non-geographic example; browser application for data derived from Canada’s trade with world regions.

COMPILED (MULTI-TRACK), UNSYNCHRONIZED
- Author perspective: multi-track sound samples compiled for use as a group. Playback parameters of individual tracks are separately controllable.
- User perspective: an unsynchronized sound compilation with interactive parts.
- Examples: Antarctica Territorial Claims map (see section 3.4).

SONIFIED
- Author perspective: parameterization of data to generate sound; sound definitions (e.g. MIDI instruments and playback parameters such as velocity) associated with spatial data.
- User perspective: interactive music synthesis based on interaction with the map (different musical instruments associated with different variables computed from spatial data).
- Examples: sonified map reliability data (Fisher 1994); browser application for data derived from Canada’s trade with world regions.

UNDEFINED
- Author perspective: the author provides dynamic sound content (e.g. an Internet radio stream or streaming audio from live microphones) or the ability for the user to add her/his own sounds.
- User perspective: the user may be able to add sound definitions while viewing, interactively select between streaming sources, or control the mixing of live sources.
- Examples: map of the extent of radio station emissions.
and auditory atlas content and user evaluations that are soon to be conducted
should help us to begin assessing the interpretations users develop as a result of
working with a range of sound definition types. Research into intertextuality
(Richardson 2000) shows that people understand complex media content based on
their previous knowledge, culture, beliefs and attitudes. This suggests that
introducing cultural elements, such as sounds, into cartography and atlas design
affects the user’s interpretation of the visual content.
This infrastructure then raises some questions regarding the overall message
conveyed by the sound map. How does a more varied sound design, either because
sound elements change on repeated listening or because of the use of interactive
sound elements, affect the user’s interpretation and understanding of atlas content?
In light of the difficulties involved in transmitting complex sound designs on the
World Wide Web, there is a need to evaluate whether a less predictable sound design can compensate for the potential tedium created by low-fidelity sound or short sound loops. Do relatively low-fidelity sounds or short loops become tedious for users? If so, does increased unpredictability of the sound design alleviate the user’s sense of tedium? In assessing atlas interpretation, highly interactive possibilities, such as allowing users to bring their own sounds to the map, affect the overall cartographic message. How well do users understand the design and intended use of interactive
sound elements? How do the visual elements of the atlas design need to be modified
to assist the user in understanding the auditory elements of the design? Beyond these questions, one of the main issues in mapmaking is to better understand how the increasing movement toward the map user as map creator, which is becoming prevalent in the Internet era, is changing our understanding, perceptions and uses of territories. These issues concern sound maps as discussed in this paper as well as the overall world of mapmaking.
Figure 2. Territorial claims module: voices overlap when the mouse is over the contested areas. A prototype of this map is available at: https://gcrc.carleton.ca/confluence/x/oQE
5. Conclusion
In this paper we presented different perspectives on sound design in contemporary
mapmaking. As in many other domains, such as computer games, sound in mapmaking can benefit from the extensive theoretical and applied work done in cinema. The introduction of sound in cinema deeply affected the conception and production of films, as well as their interaction with audiences. Sound has been linked
with images in many kinds of ways, for many different purposes involving
description, emotion, aesthetics or political statements. Playing with sound in
mapmaking opens a range of possibilities. Some of these possibilities have been
explored in this paper through examples of sound design for the Cybercartographic
Atlas of Antarctica. Sound effects have been used to recreate a linear cinematic
soundscape of the different phases of Antarctica exploration. Voices have been
overlaid to explore the contrapuntal function that sound can add to the map image.
The design of the audio identity of the Cybercartographic Atlas of Antarctica
emphasized the idea that sound can be combined to create an aural identity for the
atlas.
As the examples above have demonstrated, designing cybercartographic sound
maps is a complex process of arrangement and interaction. Each medium interacts
with the others, thereby affecting the overall message conveyed. Adding sound, like any other medium, requires a transformation of the graphics. Yet the combination of sound and image forces us to rethink the very concept of the map as a primarily visual image of space.
This combination of sound and image also opens up two major research directions related to cybercartography. First, it highlights the importance of mapping and exploring emotions, perceptions and sensations related to space. Studies in emotional geographies have emphasized that our emotions affect our relation with our environment and vice versa (see Davidson and Milligan 2004). Exploring the way places convey emotions helps us understand how we interact with these places and with the people in them. These emotions can be mapped and explored with the use of sound in order to expand the meaning of the map
beyond its primarily functionalist dimension. The challenge for future research will
be to further develop our understanding of how (sound) maps can capture and
convey emotional geographies.
Second, the cybercartographic sound infrastructure described in the last section of this paper opens up the possibility for the audience to become more involved in the choice of sounds and the way they are played and combined with visuals. This
Figure 3. Soundscape predictability parameters. The different taxa are described more specifically in table 1.
infrastructure allows anybody to generate personalized sound maps. These new forms of maps could have a tremendous impact on the way we understand, perceive, use and interact with maps on the Internet. Better understanding how personalized (sound) maps affect the way we understand, perceive and interact with space remains another major and stimulating challenge in cybercartography.
Acknowledgments
This is an expanded version of a paper presented at the XXII International Cartographic Conference (Caquard et al. 2005). This research was supported, in part, by the Cybercartography and the New Economy project, which is funded by the Social Sciences and Humanities Research Council (SSHRC) of Canada under the Initiative
on the New Economy (INE) Collaborative Research Initiative Grant. Dr. D.R.
Fraser Taylor is the Principal Investigator for the project. It was also supported by
two SSHRC strategic research grants funded under the Image, Text, Sound and
Technology (ITST) program. The authors would finally like to thank three
anonymous reviewers for comments and suggestions on an earlier version of this
paper.
References

ALTMAN, R. (Ed.), 1992, Sound Theory/Sound Practice (New York: American Film Institute).
APORTA, C., 2004, Routes, trails and tracks: trail-breaking among the Inuit of Igloolik. Etudes
Inuit Studies, 28(2), pp. 9–38.
APPADURAI, A., 1996, Modernity at Large: Cultural Dimensions of Globalization
(Minneapolis: Minnesota University Press).
BRAUEN, G., 2006, Designing interactive sound maps using scalable vector graphics.
Cartographica, 41(1), pp. 59–71, Map available at: http://gcrc.carleton.ca/cne/
proof_of_concepts/elect2004/ (accessed May 2006).
BRAUEN, G. and TAYLOR, D.R.F., 2007, A cybercartographic framework for audible
mapping. Geomatica, 61(2), pp. 19–27.
BRIDGETT, R., 2007, Subtlety and silence. Available at: http://www.zero-g.co.uk/index.cfm?articleid=722 (accessed July 2007).
CAMPBELL, C.S. and EGBERT, S.L., 1990, Animated cartography: thirty years of scratching the surface. Cartographica, 27(2), pp. 24–46.
CAQUARD, S., BRAUEN, G. and WRIGHT, B., 2005, Exploring sound design in cybercarto-
graphy. Paper presented at XXII International Cartographic Conference (ICC2005),
9–16 July 2005, A Coruna, Spain.
CASTRO, T., 2005, Les archives de la planete—a cinematographic atlas. Jump Cut. Available at: http://www.ejumpcut.org/currentissue/KahnAtlas/index.html (accessed Aug 2006).
CHATWIN, B., 1987, The Songlines (London: Jonathan Cape).
CHION, M., 1994, Audio-Vision: Sound on Screen (New York: Columbia University Press).
COBURN, C.A. and SMITH, A.W., 2005, Musical landscapes using satellite data. Paper
presented at SPARK Festival of Electronic Music and Art. 3rd Annual Conference, 16–
20 February 2005, University of Minnesota, Minneapolis, MN.
CONLEY, T., 2006, Cartographic Cinema (Minneapolis, London: University of Minnesota
Press).
DAVIDSON, J. and MILLIGAN, C., 2004, Embodying emotion sensing space: introducing
emotional geographies. Social & Cultural Geography, 5(4), pp. 523–532.
DIBIASE, D., MACEACHREN, A.M., KRYGIER, J.B. and REEVES, C., 1992, Animation and the
role of map design in scientific visualization. Cartography and Geographic Information
Systems, 19(4), pp. 201–214.
FISHER, P.F., 1994, Hearing the reliability in classified remotely sensed images. Cartography
and Geographic Information Systems, 21(1), pp. 31–36.
FLOWERS, J.H. and HAUER, T.A., 1995, Musical versus visual graphs: cross-modal
equivalence in perception of time series data. Human Factors, 37(3), pp. 553–569.
FLOWERS, J.H., WHITWER, L.E., GRAFEL, D.C. and KOTAN, C.A., 2001, Sonification of daily
weather records: issues of perception, attention and memory in design choices. Paper
presented at Proceedings of the 2001 International Conference on Auditory Display,
Espoo, Finland (International Community for Auditory Display), pp. 222–226.
Available at: http://www.acoustics.hut.fi/icad2001/proceedings.
GAL, V., LE PRADO, C., MERLAND, J.B., NATKIN, S. and VEGA, L., 2002, Processes and tools
for sound design in computer games. Paper presented at Proceedings, International
Computer Music Conference 2002, Gothenburg, Sweden, 16–21 September 2002 (San
Francisco: International Computer Music Association).
GARTNER, G., 2004, Location-based mobile pedestrian navigation services: the role of
multimedia cartography. Paper presented at International Joint Workshop on
Ubiquitous, Pervasive and Internet Mapping (UPIMap 2004), 7–9 September 2004,
Tokyo, Japan. Available at http://www.ubimap.net/upimap2004 (accessed April
2005).
HANSEN, M.C., CHARP, E., LODHA, S., MEADS, D. and PANG, A., 1999, Promuse: a system for multi-media data presentation of protein structural alignments. Paper presented at Pacific Symposium on Biocomputing 4, Hawaii (Hawaii: International Society for Computational Biology), pp. 380–391. Available at http://helix-web.stanford.edu/psb99/Hansen.pdf (accessed May 2006).
HARROWER, M., 2004, A look at the history and future of animated maps. Cartographica,
39(3), pp. 33–42.
HAYES, A., 2006, Nunaliit: Cybercartographic Atlas Framework. Available at: http://
nunaliit.org (accessed October 2007).
HERMANN, T., DREES, J.M. and RITTER, H., 2003, Broadcasting auditory weather reports—a
pilot project. Paper presented at Proceedings of the 2003 International Conference on
Auditory Display, Boston, MA (Boston: International Community for Auditory
Display), pp. 208–211. Available at: http://www.icad.org/websiteV2.0/Conferences/
ICAD2003/paper/51%20Hermannl%20weather.pdf.
JACOB, C., 1992, L’empire des cartes—approche theorique de la cartographie a travers l’histoire
(Paris: Bibliotheque Albin Michel Histoire, Albin Michel).
KASSABIAN, A., 2003, The sound of a new film form. In Popular Music and Film, I. Inglis
(Ed.), pp. 91–101 (London: Wallflower).
KRAAK, M.-J., and BROWN, A. (Eds), 2001, Web Cartography (New York: Taylor & Francis).
KRYGIER, J.B., 1994, Sound and geographic visualization. In Visualization in Modern
Cartography, A.M. MacEachren and D.R.F. Taylor (Eds), pp. 149–166 (New York:
Pergamon).
LASTRA, J., 2000, Sound Technology and the American Cinema: Perception, Representation,
Modernity (New York: Columbia University Press).
MACEACHREN, A.M., 1994, Time as a cartographic variable. In Visualization in Geographic
Information Systems, D.J. Unwin and H.M. Hearnshaw (Eds), pp. 115–130 (New
York: Wiley).
MACEACHREN, A.M., 1995, How Maps Work—Representation, Visualization, and Design
(New York: Guilford).
METZ, C., 1968, Essais sur la signification au cinema, tome 1 (Paris: Klincksieck).
MCGONIGAL, D., and WOODWORTH, L. (Eds), 2001, Antarctica and the Arctic: The Complete
Encyclopedia (Willowdale: Firefly).
MONMONIER, M.S. and GLUCK, M., 1994, Focus groups for design improvements in dynamic
cartography. Cartography and Geographical Information Systems, 21(1), pp. 37–47.
MOUAFO, D. and MULLER, A., 2002, Web-based multimedia cartography applied to the
historical evolution of Iqaluit, Nunavut. Paper presented at Proceedings of the Joint
International Symposium on Geospatial Theory, 9–12 July 2002, Ottawa, Canada.
Available at: http://www.isprs.org/commission4/proceeding (accessed April 2005).
MULLER, J.-C. and SCHARLACH, H., 2001, Noise abatement planning—using animated maps
and sound to visualise traffic flows and noise pollution. Paper presented at
Proceedings of the 20th International Cartographic Conference, 6–10 August 2001,
Beijing, China (ICA Vol. I), pp. 375–385.
NORMAN, K., 2004, Sounding Art: Eight Literary Excursions through Electronic Music
(Aldershot: Ashgate).
PAULIN, S.D., 2000, Richard Wagner and the fantasy of cinematic unity: the idea of the Gesamtkunstwerk in the history and theory of film music. In Music and Cinema, J. Buhler and C. Flinn (Eds), pp. 58–84 (Hanover, NH: Wesleyan University Press).
PETRAZZINI, B. and KIBATI, M., 1999, The Internet in developing countries. Communications
of the ACM, 42(6), pp. 31–36.
PULSIFER, P.L., PARUSH, A., LINDGAARD, G. and TAYLOR, D.R.F., 2005, The development of
the Cybercartographic Atlas of Antarctica. In Cybercartography: Theory and
Practice, D.R.F. Taylor (Ed.), pp. 461–470 (Amsterdam: Elsevier Science).
PULSIFER, P., CAQUARD, S. and TAYLOR, D.R.F., 2007, Toward a new generation of
community atlases—the Cybercartographic Atlas of Antarctica. In Multimedia
Cartography, W. Cartwright, G. Gardner and M.P. Peterson (Eds), pp. 195–216
(New York: Elsevier).
RICHARDSON, K., 2000, Intertextuality and the discursive construction of knowledge: the case of economic understanding. In Intertextuality and the Media: From Genre to Everyday Life, U.H. Meinhof and J. Smith (Eds), pp. 76–97 (Manchester, UK: Manchester University Press).
SCHAFER, R.M., 1977, The Tuning of the World (New York: Alfred A. Knopf).
SCHAFER, R.M., 1994, The Soundscape: Our Sonic Environment and the Tuning of the World
(Rochester, VT: Destiny Books).
SERGI, G., 1999, The sonic playground: Hollywood cinema and its listeners. Filmsound.org.
Available at: http://www.filmsound.org/articles/sergi/index.htm (accessed March
2006).
SERVIGNE, S., LAURINI, R., KANG, M. and LI, K.J., 1999, First specifications of an
information system for urban soundscape. Paper presented at IEEE International
Conference on Multimedia Computing and Systems, 7–11 June 1999, Florence, Italy,
Vol. II, pp. 262–266 (Los Alamitos, CA: IEEE Computer Society).
SIMPSON-HOUSLEY, P., 1992, Antarctica: Exploration, Perception and Metaphor (London:
Routledge).
SPARKE, M., 1998, A map that roared and an original atlas: Canada, cartography, and the narration of nation. Annals of the Association of American Geographers, 88(3), pp. 463–495.
TAYLOR, D.R.F., 1997, Maps and mapping in the information era. In Proceedings of the 18th International Cartographic Association Conference, Stockholm, Sweden, L. Ottoson (Ed.), pp. 1–10 (Stockholm: Swedish Cartographic Society).
TAYLOR, D.R.F., 2003, The concept of cybercartography. In Maps and the Internet, M.P. Peterson (Ed.), pp. 405–420 (Amsterdam: Elsevier Science).
TAYLOR, D.R.F., 2005, Cybercartography: Theory and Practice (Amsterdam: Elsevier
Science).
TAYLOR, D.R.F. and CAQUARD, S., 2006, Cybercartography: Maps and Mapping in the
information era. Cartographica, 41(1), pp. 1–5.
THEBERGE, P., 2005, Sound maps: music and sound in cybercartography. In
Cybercartography: Theory and Practice, D.R.F. Taylor (Ed.), pp. 389–410
(Amsterdam: Elsevier).
TROUBIE, B., 2004, Medal of Honor. filmsound.org. Available at: http://www.filmsound.org/
game-audio/medal_of_honor.htm (accessed July 2007).
VASCONCELLOS, R. and TSUJI, B., 2005, Interactive mapping for people who are blind or
visually impaired. In Cybercartography: Theory and Practice, D.R.F. Taylor (Ed.)
(Amsterdam: Elsevier Science).
WARF, B., 2001, Segueways into cyberspace: multiple geographies of the digital divide.
Environment and Planning B: Planning and Design, 28, pp. 3–19.
WOODS, B., 2005, Terrain rendering techniques and virtual environment design and implementation as methods of geo-visualization towards geo-science education. M.Sc. Thesis, Department of Geography and Environmental Studies, Carleton University, Ottawa.