
Transcript of Sound Creatures


Sound Creatures and the Virtual Soundscape

By Lloyd Barrett

KM42 – Master of Music

Supervisor: Andrew Brown


Preamble (For Assessment Purposes Only): This paper is a dissertation on the motivations, influences and practicalities involved in the development of an artificial life system as a tool for immersive soundscape creation. It has been targeted as most suitable for publication in the Organised Sound journal and has been developed mindful of the guidelines and approach of that journal, which was also the primary source of reference materials related to the subject of inquiry. The audience for this paper is essentially anyone interested in soundscapes, virtual systems and experimental composition methods. The paper attempts to correlate my personal sound practice and development over the past three years with theoretical considerations in the fields of soundscapes and acoustic ecology; virtual worlds; immersive and spatial sound design; sound morphology and transformation; and experimental compositional methods. The paper examines a number of my works, past and present, identifying aesthetic and contextual concerns in relation to the aforementioned fields and my personal choices and future direction as a composer. I conclude that while aspects of my methodology and approach to the development of a functional tool for immersive soundscape creation are still developing, the creation of the work ‘Habitat’ fulfils both the brief outlined in April 2005 and my own subjective criteria. I further conclude that the portability of the system implies that future development is possible, and I suggest some relevant and achievable goals.


Contents

Sound Creatures

The Soundscape

Virtual Worlds

Surround Sound

Sonic Transformation

Compositional Forms

System Definition

Piece 1 – Incidental Amplifications

Piece 2 – Virus

Piece 3 – Fringe

Piece 4 – Web

Piece 5 – Habitat

Conclusions and Future Directions

References


Sound Creatures

As a child I remember harbouring an obsessive fascination with the creation

of worlds. From the sculpting of icy conurbations in rural England to elaborate

citadels of sand; Lego space colonies to Sim realities; my obsession with

constructing virtual landscapes weaves a continuous thread through my

creative past. Around the year 2000, this passion started informing my

compositional choices, which at the time had been focused on making as

much noise with as little effort as possible. It was at this point that I found

something of a niche producing ‘sound art’ with an emphasis on immersion

and aural-landscaping.

My current project, Sound Creatures, is a work-in-progress for utilising

artificial-life systems in the creation of soundscapes. It was designed as part

of a Master of Music that fed from a series of R.E.V. instrument-building

workshops. Blurring the distinction between the ‘Real’, ‘Electronic’ and

‘Virtual’ is a key theme of the Sound Creatures system, and this paper serves to identify some major contradictions, not just with regard to the system and its application, but inherent in the key historical areas of sonic influence.

In line with the opening anecdote it is worth pointing out that this is a personal

journey more focused on creative development than technical expertise; therefore credit needs to go to the QUT music and sound department (Greg

Jenkins and Andrew Brown in particular) for opening the gates and providing

direction.


The Soundscape

Of primary importance to ‘Sound-Creatures’ is the notion of ‘soundscape’. In

collecting materials for this paper I immediately stumbled into a minefield of

semantic warfare with particular reference to the term. Cineastes tend to

brandish the word in association with sound design supporting filmic mise-en-scène, while many incorrectly consider it an adjunct to musique concrète as defined by Pierre Schaeffer. While musique concrète and soundscape composition share a historical connection and a number of related concepts, the comparison is irreconcilable with any sensible understanding of a landscape of sound based on Schaeffer’s definition of “l’objet sonore” as an “object for human

perception and not a mathematical or electro-acoustical object for synthesis”

(Truax: 1999). Schaeffer was interested in isolating and composing with

specific sound objects which “may be defined as the smallest self-contained

element of a soundscape” (Truax: 1999).

The current theoretical framework for sound-scaping has its origin in the work

of R. Murray Schafer and his World Soundscape Project, which includes noted theorist Hildegard Westerkamp and computer musician Barry Truax. In his

work “The Soundscape: Our Sonic Environment and the Tuning of the World”

(1994), Schafer defines the ‘soundscape’ as primarily “…any portion of the

sonic environment regarded as a field of study” (1994, 274). As Schafer’s

concerns lie particularly with acoustic ecology there is the implication that

environmental sound is a principal source for any soundscape. Westerkamp

argues against this notion in favour of soundscape as “the artistic, sonic

transmission of meanings about place, time, environment and listening

perception” (2002, 52). The inference is that random phonography minus

context is not enough. Truax expands the definition further, calling

soundscapes “an environment of sound (or sonic environment) with emphasis

on the way [they are] perceived and understood by the individual, or by a

society” (1999). The context also needs to be clearly understood by an

audience in order for a soundscape to work: how then can abstract or virtual sonic environments be classed as soundscapes? Herein lies the difficulty of adequately defining the soundscape: the semantic debate indicates a dependence not only on composer intention but also on the context of the piece, its structure and movement over time and space, and the ability to adequately imply an environment to the listener.

My work, ‘The Drift Project’ (2003) focuses on distilling the feel of a place by

identifying key sound events and layering them structurally over the lower

level ambient sound from the place. The result is a somewhat psycho-

geographic compositional technique inspired by the ‘dérive’, the Situationist practice of drifting about a geographical space while avoiding the usual strictures of purpose and direction in order to experience the place in a more personal and emotional context. The emphasis with ‘The Drift Project’ is on composing

with sound in order to communicate a whole series of environmental

concerns, with reference to the space where the sounds were recorded, as

opposed to mere documentation. The process (discussed in more detail in

the Compositional Forms section of this paper) highlighted specific sonic

events that could be considered soundmarks; “sound which is unique, or

possesses qualities which make it specially regarded or noticed by the people

in that community” (Schafer: 1994, 274). The process also served to

normalise the ambient background or keynote sound by overlapping and

accentuating it.

Keynote sound is arguably the most essential element for establishing the

immersive quality of the soundscape. Truax defines the keynote as sounds

“which are heard by a particular society continuously or frequently enough to

form a background against which other sounds are perceived” (Truax: 1999).

This is in contrast to the notion of autonomous objects and the focus on

individual and collective morphology more commonly associated with

acousmatic design or indeed any kind of popular or classical music sound

design. With regards to non-environmental sounds I’m working with the

assumption that a piece is effective when it conveys a sense of continuous

dimensional space. Environmental source material invariably includes

keynote sound whereas non-environmental keynote sounds need to be

introduced.


Both ‘Sound Forest’ and ‘Summon wings to talk to the sky’ are pieces made

from man-made and pre-recorded sound sources. The style of composition is

on one level an attempt to imply a sense of fluid dimensional space; a space

that moves and changes over the course of the piece. It also features a

number of disparate sound events taken out of context and processed in

order that they might fit as soundmarks within the new context of the created

sound world. While Schafer qualifies his definition of soundscape by stating

that “the term may refer to actual environments or to abstract constructions

such as musical compositions and tape montages, particularly when

considered as an artificial environment” (1994, 274), Westerkamp hijacks

Schafer’s notion of a schizophonic sound to attack such methods:

“In such a case, the composer relates to the recording as an acquired object rather

than as a representation of an experienced place and of lived time. The composer’s

knowledge of such recordings is exclusively aural and does not extend to a

physical/psychic experience with the recorded place or time. Strictly speaking, the

recorded sounds originate in the studio loudspeakers and the actual place and

situation from where they come is transformed inside the composer’s imagination into

an entirely fictional place. The composer is working from within a schizophonic

stance, and creating a new schizophonic experience” (Westerkamp: 2002, 55).

This is somewhat disheartening given Schafer’s use of the word to dramatize

the degeneration of the urban soundscape through the use of Muzak and any

other sound divorced from its origin that can be used to mask the original

contextual sounds of a space. Composer and ecologist Francisco Lopez

criticises the fundamentalist views of the acoustic ecology movement as

conflating “health or communication aspects … with aesthetic judgement”

(Lopez, 1997). Binary assertions that arbitrarily quantify beauty and tranquillity in opposition to an equally arbitrary concept of ugliness and noise defy rational intellectual discussion. My urge to construct

virtual soundscapes is an inherently personal and artistic drive. In regards to

accusations of schizophonia, Lopez challenges the view iterated by

Westerkamp in suggesting that it “is an essential feature of the human

condition to artistically deal with any aspect(s) of this reality… There can only

be a documentary or communicative reason to keep the cause-object


relationship in the work with soundscapes, never an artistic / musical one”

(1997). While the constructions may refer to a space that does not physically

exist, they refer to it in the fashion of real spaces that do exist. Assuming this

structural context can be made apparent to the listener I’m unsure how it

could be invalidated.

While a number of artists working within the soundscape paradigm compose

with and layer environmental sounds, the Jewelled Antler Collective from the

United States are particularly interesting in the manner with which they create

and interact with the soundscape. Primarily recording in remote outdoor environments, the improvised instrumentation becomes part of, rather than the sum total of, the soundscape. Loren Chasse, one of the more prolific

members of this collective, uses “his sound locations as both the instrument

and the studio” (KQED: 2005). His focus on collecting and recording

“particularly resonant and acoustical situations that are reflected in a peculiar

way” and willingness to post-produce, layer and manipulate the material

results in “soundscapes that are evocative of another place or perhaps even

planet” (KQED: 2005).

Ultimately I agree with Hans U. Werner who asserts that Soundscape

composition is “…open form and definition without edges, ranging

from acoustical photography to semi-abstract sound construction” (Werner:

2002, 73). Of greater importance is the active process of constructing

meaning from a system that is built to exist with reference but without

formalisation. “…soundscapes should not be listened to as static systems.

They should be regarded as vivid processes with continuous references”

(Ploy: 2002, 19). Artificial sound environments that reference and imply real

sound environments cannot properly equate to these environments due to the

fixed nature of sound production and composition. From the perspective of a

“composer” of a “work”, it is these processes that I’m interested in extending

with regards to the Sound Creatures system.


Virtual Worlds

The last thirty years have seen a huge development in the representation of

Virtual Worlds across almost all fields of everyday life. In some cases this

development is mimetic – for example the move to apply real-world objects to

improve interaction in web design. Communication is facilitated by online

communities through a variety of different delivery methods incorporating user

avatars and in some cases streaming audio and video. Users interact with

these immersive settings in much the same way that they would interact with real, physical objects; being tied to specific functions, they are as much a part of our environment as they are mere artifice.

In the area of entertainment we can see the move from the creative

imagination applied to pen-and-paper role-playing games of the late 70s to Massively Multiplayer Online Role-Playing Games, where the emphasis has shifted from ASCII representations of heroes and beasties to immersive

titles incorporating first person views, surround sound and interactive

environments. Simulations of life also exist outside of immersion, in the addictive qualities of ‘Sandbox’ games, from the original Sim Earth, Sim City and Sim Life programs through to the latest instalment of The Sims faux-reality soap opera! Regardless of the aesthetic quality or cultural capital of each

program, a feature common to each is a system of related statistics, variables

and number crunching that creates the balance that energises and solidifies

the world or makes it crumble.

“As long as our machines are faster at mathematics than ourselves they will have the

ability to play the role of mediator between us and the vast computational spaces

outside our direct experience” (Dorin: 2003, 131).

From my perspective, real immersion comes from reading a good book.

Science Fiction of the more speculative kind is my particular interest and it is

from here that most of my urge to explore the virtual domain arises. My first encounter with the West Australian author Greg Egan was through his novel ‘Permutation City’ (1994), which was also my first introduction to theories of Artificial Life. The novel describes a not-too-distant future where a privileged


few have copies made of their personality and biology which exist as software

on a network called the “Autoverse”. This virtual world is constructed from a series of cellular automata, is contained physically across a number of networked computers, and features a new life form that (echoing theories of emergence) manages to develop a sentient understanding of its reality, one that ultimately conflicts with the constructed reality of the “Autoverse”. Egan

continues his examination of these theories in ‘Diaspora’ (1997); a dense and

somewhat challenging read set entirely in a virtual world of intelligent

software. It is interesting to note that, compared with the conventional frameworks within which most authors construct fictional worlds, the novelty of Egan’s wholly digital landscape takes some getting used to.

Jon McCormack states that novelty “either perceived or real, is a fundamental

driving force behind any creative impetus or gesture” (2003, 184). On a facile

level there is obvious novelty in the creation of Sim versions of friends that

can be placed in virtual circumstances. However there is still the unavoidable

fact that in reality the interactions are based on coded algorithms and the

balancing of statistics. From the perspective of a defiantly anti-commercial artist, I can consciously understand notions of novelty and simultaneously reject them as a criterion, favouring instead the creative act and the contextual evaluation of its product.

There remains the question – why would an artist, especially one working

primarily in sound, be interested in working with Artificial Life? Perhaps, as

alluded to in the introduction to this paper, it is an inherited trait. Sean Zdenek

theorises “that AI is informed by an ‘ancestral dream’ to reproduce nature by

artificial means … which hinges on the belief that human nature (especially

intelligence) can be reduced to symbol manipulation and hence replicated in a

machine” (2003, 340). Alan Dorin expresses a more agreeably pragmatic

approach in suggesting that the goal might be “software which surprise[s] not

only a viewer, but the artist who fashioned the work” (Dorin: 2003, 131).

Crafting a work that remains static certainly has its rewards and can indeed

be mutable depending on the context and circumstance. Creating a work that

essentially recreates itself depending on whatever abilities (or algorithms) are


at its disposal introduces what Sommerer and Mignonneau describe as “an

investigation into the creative process itself…similar to John Cage’s use of

chance procedures” (Sommerer and Mignonneau in Whitelaw: 2004, 190).

I’m interested not only in exploring the creative process but also the ability to

create a soundscape composition where the virtual world explored is the

composition. As compositional parameters change so does the virtual nature

and relationships between space and time represented through sound.

Riemann space is an example whereby distance is relative to population (or

density) as opposed to speeds of light or arbitrary time bases. As the

population of a virtual eco-system expands so the distance between each a-

life increases in line with the computational load required to process their

movement (Whitelaw: 2004, 44). While it might seem a little strange to base compositional process on technological limitations, considering that our perception of time is related to the speed at which the earth revolves, these macro-environmental concerns are arguably as important to the designer of virtual worlds as any self-perpetuating ecosystem.
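
One crude way to read this idea in code (a minimal sketch of my own, not drawn from Whitelaw's account; the scaling rule and constants are purely illustrative) is to let the spacing between neighbouring agents grow with the population that has to be processed:

    // Hypothetical illustration: spacing that grows with population,
    // standing in for the computational load of processing each agent.
    public class RiemannSpacing {
        // Base spacing between neighbouring a-life agents when only two exist.
        static final double BASE_DISTANCE = 1.0;

        // Distance grows with population: more agents, more computation per
        // cycle, so the virtual space is stretched accordingly.
        static double distanceBetweenNeighbours(int population) {
            return BASE_DISTANCE * Math.max(1, population - 1);
        }

        public static void main(String[] args) {
            for (int pop = 2; pop <= 32; pop *= 2) {
                System.out.printf("population %2d -> neighbour distance %.1f%n",
                        pop, distanceBetweenNeighbours(pop));
            }
        }
    }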

Whether as an extension of an existing soundscape or as the foundation for an electro-acoustic piece, Artificial Life algorithms can introduce a level of deterministic uncertainty missing from the composer / producer paradigm, and the possibility “to create artificial organisms that develop their own autonomous creative practices - to paraphrase the terminology of Langton (Langton, 1989), life-as-it-could-be creating art-as-it-could-be” (McCormack: 2003, 184).

The majority of the literature reviewed regarding virtual worlds focuses either on the use of algorithms for artistic creation or on the development of visual representations of three-dimensional space. As my focus is on sound I can

think of nothing more relevant to virtual worlds than the sonification of a virtual

space and its relation to the diffusion of sound in a real space.


Surround Sound

In recent years the home-cinema revolution has reignited the fascination with

the spatialisation of sound. Despite a brief flirtation with consumer-based

quadraphonics in the seventies, multi-speaker delivery systems have primarily been the domain of electro-acoustic composers such as Karlheinz Stockhausen, Iannis Xenakis and Bernard Parmegiani, and experimental rock acts like Pink Floyd and Frank Zappa. With the advent of DVD technology, the ability to stream multi-channel sound has led to a situation whereby low-end multichannel gear can be purchased and installed for personal computers from around AU$200. Inevitably this led to surround audio for computer games incorporating a new and exciting approach to the creation of non-linear sound design reliant on “multiple layers of sound that are nonlinear,

interactive and dynamically mixed and effected in real time [which] allow for

experiences difficult to create in a more traditional real-world setting.”

(Schutze: 2003, 171)

‘Sound Forest’ (2003) was the first piece that I composed for a four-speaker

square matrix. The audience, seated within the matrix, hear sound processed

and dispersed from a number of pre-defined sound files with the illusion of

movement within the space based upon the relative amplitudes of sound

going to each channel. While the piece maintained a series of variable

sounds across all speakers; a number of mono themes were panned to the

front centre and rear based upon the same system of amplitude relativity.

The important thing to note is the ‘illusion’ of sonic placement which is at the

heart of surround sound diffusion. This illusion was maintained as part of the

context of the piece: an attempt to replicate a forest-like environment with sampled man-made sounds. Four separate stereo streams, each running independently, were sent left to right across the front and rear pairs, and front to back along the left and right sides. As the piece utilised tiny fragments of sound, buffered and gated, the end result was a series of staccato sound events mimetic of frog or insect calls. Certainly the sound surrounded the audience, but the perceptual depth of individual objects was limited by the delivery method and by the perennial surround problem of the ‘sweet spot’: the central location, equidistant front to back and left to right, being the optimum listening point.
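
For illustration, the general principle of amplitude-based placement over a square four-speaker array can be sketched as follows (this is a generic equal-power panning law of my own choosing, not the actual routing used in ‘Sound Forest’):

    // Illustration of amplitude-based placement over a four-speaker square
    // array: a virtual position (x, y) in the unit square is turned into four
    // channel gains using an equal-power law on each axis.
    public class QuadPan {
        // Channel order: front-left, front-right, rear-left, rear-right.
        static float[] gains(float x, float y) {
            double left  = Math.cos(x * Math.PI / 2), right = Math.sin(x * Math.PI / 2);
            double front = Math.cos(y * Math.PI / 2), rear  = Math.sin(y * Math.PI / 2);
            return new float[] {
                (float) (left * front), (float) (right * front),
                (float) (left * rear),  (float) (right * rear)
            };
        }

        public static void main(String[] args) {
            float[] g = gains(0.25f, 0.75f);   // a point left of centre, towards the rear
            System.out.printf("FL %.2f  FR %.2f  RL %.2f  RR %.2f%n", g[0], g[1], g[2], g[3]);
        }
    }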

Audience feedback was positive but included the comment that just about anything sounded better in surround. My experience since that time suggests this is not necessarily the case: the effectiveness of the spatialisation is more directly related to the context and structure of the piece and the treatment of individual objects within the holistic sound-world. We have been conditioned to focus our listening forward, perhaps towards a stage where some action is occurring. The delivery of musical performances in surround still frequently focuses on the front speakers, with the rear surrounds merely carrying reverberation from the front stage. In live recordings the rear speakers add to this a louder recorded audience sound to imply immersion in the audience. Occasionally surround will be used to bring the front stage forward slightly, giving the impression of the band performing directly in front of or around the listener. With film sound the rear speakers are more consistently used to apply effects in action sequences, and the most common usage of the full speaker array is for background ambient sound design or foley effects. It is the integration of these elements in a dimensional space that is my focus.

In her paper “Spatio-Musical Composition Strategies” (2002: 313–323), Natasha Barrett outlines a number of considerations for surround composition. The balance of illusion versus allusion of a space, or of the spatial location of an object, is a primary consideration. The illusion can be granted simply by considering the reverberant properties of the virtual space, which also needs to take into account how sound moves through the space and the level of refraction / diffusion evident in the materials that compose it. The size, mass and density of individual objects, and their movement through and interaction with other objects in the space, help engender spatial awareness in the listener. The final element required is a complementary space for the objects to inhabit: the notion of a keynote reclaims significance here. Stephan Schutze describes his keynote as:


“a combination of white and brown noise that helps blend the overall audio

environment. In more traditional musical terms this component could be thought of as

a drone or pedal point. It is an ever present level of sound that helps to bind all the

other layers. It is also a reminder that the complete absence of noise almost never

occurs in nature.” (2003, 177-178)

Deployment of the surround is the final important aspect of multichannel

delivery. While the many and varied contexts related to the delivery space need specific, on-site organisation, the delivery medium is worth considering

here. The most obvious delivery methods are amplitude based wherein the

amplitude of output through each speaker defines the spatial relations of the

sonic objects. One of the problems with this method is the need to compose

for specific speaker placement with regards to the audience. A recent

resurgence in Ambisonic diffusion, due to the availability of free VST plug-ins,

has introduced the ability to compose for a space where the encoding of

spatial information is not reliant on a specific speaker array. While there is not

the scope in this paper to examine the mathematical aspects of how this is

achieved, it is important to note that the decoding to differing speaker arrays

occurs on playback rather than being integral to the design of the piece from

the beginning.
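
For readers curious about the general principle, a first-order Ambisonic encode can be sketched as below; this is the textbook B-format formulation rather than the internals of any particular VST plug-in, and channel ordering and weighting conventions vary between tools:

    // Illustration of why Ambisonic mixing is independent of the speaker array:
    // a mono source is encoded into first-order B-format (W, X, Y, Z) from its
    // direction alone, and only the decoder needs to know where the speakers are.
    public class BFormatEncode {
        // azimuth and elevation in radians; sample is one mono sample value.
        static float[] encode(float sample, double azimuth, double elevation) {
            float w = (float) (sample * Math.sqrt(0.5));                          // omnidirectional component
            float x = (float) (sample * Math.cos(azimuth) * Math.cos(elevation)); // front-back
            float y = (float) (sample * Math.sin(azimuth) * Math.cos(elevation)); // left-right
            float z = (float) (sample * Math.sin(elevation));                     // up-down
            return new float[] { w, x, y, z };
        }

        public static void main(String[] args) {
            // A source to the left (azimuth measured anticlockwise from the front).
            float[] b = encode(1.0f, Math.toRadians(90), 0);
            System.out.printf("W %.2f  X %.2f  Y %.2f  Z %.2f%n", b[0], b[1], b[2], b[3]);
        }
    }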

‘The City is Sleeping’ is a surround piece I devised using Ambisonic mixing.

The piece is underpinned by a slowly evolving keynote sound represented by

a mono-file sent through all channels. My idea for the piece was to recreate

the urban soundscape of around 4am and like ‘Sound Forest’, this piece is

constructed completely from tiny fragments of digital sound. The keynote

level is set at slightly louder than the level of room sound while incidental

sounds are introduced sporadically throughout at a non-obtrusive volume

level. These incidental sounds are predefined spatially using the Ambisonic

VST tools. Reverb is applied relative to the distance away from the virtual

listening space. Their relationships are also somewhat incidental and an

exploration of the virtual space is defined by the sounds in real-time. At one

point during the performance a car ignition started in a nearby garage and the


car drove off. The immersion was for some listeners so successful that they

found it hard to believe that this wasn’t recorded sound included in the piece.


Sonic Transformation

Sonic transformation for both aesthetic and contextual concerns has been

prevalent throughout the history of electro-acoustic music. From the multi-textual manipulations of national anthems in Karlheinz Stockhausen’s ‘Hymnen’ to the political commentary of Trevor Wishart’s ‘Red Bird’, where vocalisations imply both torture and bird calls, there lies a consistent notion of the morphology of sounds within the holistic space that is the composition.

The initial, as yet unrealised, impetus for the Sound Creatures project was to

create a system that evolves in the sonic domain through the parallel

morphing of sounds between breeding sound objects. While this is technically

possible the problem of managing the substantial data required to complete

this process is currently beyond my expertise. It is however an uncharted

territory I’m eager to explore and will form the basis of future experiments with

this system.

Recent debates over intellectual property, sampling and file-sharing have

predictably spawned a number of artists who manipulate and combine

multiple artist recordings; often for humorous or political intent. The origin of

this method can be traced to the work of John Oswald who has based quite a

bit of his career on the recontextualisation of the work of others. In an

academic discussion of Plunderphonics, Chris Cutler outlines a series of

applications that include the “out of the air” reproduction of John Cage’s

‘Imaginary Landscape’ series; the importation and referencing made by

Oswald and The Residents and the use of sources as irrelevant or

untraceable common to much concrete and acousmatic sound design. The

context is again essential; “Third Reich ‘N Roll” by The Residents being an

excellent example of importation (Cutler: 2004). Starting with a series of popular tunes from the fifties and sixties, which the group rework and incorporate into a form of soundscape analogous to the mega-mixes of broadcast radio, the album develops a critique of the fascism of market forces and charts defining musical worth, as heralded by the bandstand era in the United States.

My ‘Sound Forest’ composition sits sonically more in the realm of sources

untraceable. Though their origins are completely relevant and provide the context for the piece, a lack of awareness does not necessarily reduce the potential enjoyment of the piece as an aesthetic work. Inspired by the film

‘Silent Running’ which depicts a biosphere in space, my goal was to emulate

a forest environment from wholly man-made sounds. While the method of

processing and delivery of the sounds ultimately made the sound sources irrelevant from an aesthetic point of view, contextually the conversion of man-

made to organic is an important personal form of engagement with the sound-

world akin to a magical spell or a form of alchemy.

The analogy between sonic creation and alchemy has always held a particular fascination for me, with particular emphasis on the transmutation of sound. I recall an anecdote about William Burroughs, from a source long since

forgotten, that has inextricably etched itself into my memory. Having received

poor service in a favoured restaurant, he recorded a snapshot of the space

onto tape. He then cut-up the recording using a system defined by his friend

and colleague, Brion Gysin. Returning to the restaurant he played back the

manipulated recording in order to alter and hopefully improve the service. In light of Schafer’s assertions regarding the “tuning of the world”, there is perhaps reason to consider the recontextualisation of ‘harsh’ and ‘ugly’ artificial sound into pristine, beautiful environments.


Compositional Forms

When considering the use of extra-musical systems in the composition of

sound it is traditional to invoke the name of John Cage. In this case it is

important to set aside the enigma and dogma and present an encapsulation of

his theoretical influence with regards to composition. As with his critique of

the Acoustic Ecology movement, Francisco Lopez provides a no-nonsense

clarity of definition:

Cage’s main contribution can be foreseen as a proposition of non intervention, of

decision-free attitude, of dissoluteness of the idea of composer / composition, deeply

rooted in (or at least explicitly connected to) Zen philosophy. Randomization of sets

of possible decisions regarding the creation of music is thus understood as a form of

liberation of the music from the imperatives of human intervention. An explicit and

strong sense of beauty is found in the fact that some or all the organizational /

structural / constituent features of some music have been generated independently of

us (a very common appreciation in the realms of improvised music) (Lopez: 1996, 1).

Cage’s approach was subsequently expanded upon in John Zorn’s game pieces, of which ‘Cobra’ remains a potent example. In this piece Zorn conducts an ensemble hand-picked for their ability to improvise within his specific frameworks. The conducting of the piece occurs using cards which instruct players to perform with varying degrees of specificity. The result is that no two Cobra performances sound the same, even though they come from the same defining system and therefore adopt structural similarities; the variation depends on the performers rather than the piece. While Zorn has a

hand in the definition of the piece, he is merely guiding his chosen ensemble

towards an unpredictable result.

The notion of composition as a game with a set of rules defining a loose

structure is analogous to the creation of a virtual eco-system. The reason for

adopting these formal methods works on contextual, conceptual and

ultimately aesthetic frames of reference. Palle Dahlstedt notes that “It is certainly not true that [programming systems] saves time and effort for the lazy composer, as anyone struggling with a computer program for months could confirm, spending even more time to understand the output.” (2001, 121) Returning to Alan Dorin’s assertion that a composer might wish to be “surprised” by the development of a work, it is worth noting that a certain

distance between composer and composition is implied.

Roland Kayn is in my opinion an under-appreciated innovator in this regard.

Combining aspects of information theory, electronics and improvisation his

concept of “cybernetic music” proposes a music that regulates itself much like

a thermostat. His works start with defining a network of electronic equipment

which can be manipulated through a system of controller operations and

commands that can be executed independent of the composer. His work

exemplifies the positive, creative benefits that can result from composers

courting and embracing chance:

“That the composer is unable to predict the results … does not necessarily [mean] that the

concept of ‘authorship’ has technically been relinquished. What is abandoned are the

narrative elements, the psycho-emotional effects and other details/aspects usually

associated with the ideas of ‘authorship’ and ‘work of art’.” (Van Rossum: 2003, 5 – 9)

In attempting to define immersive soundscapes through artificial means I have

so far examined some of the concepts primary to their construction.

Historically I have addressed these on a conceptual level with the actual

compositional process being one of layering of sounds in a hard-disk editing

environment. At this point I hit a wall, blocking progress to further

development. While conceptually the pieces work, technically they are held

back by the very nature of the compositional process. Due to the aesthetic

nature of editing in a hard-disk environment it became apparent that a certain

narrative causality or definable style was evident in the manner in which the

environments were constructed. This I believe hampers the nature of an

immersive environment that should equally balance elements of order with

chaos. Therefore the adoption of formal methods that allow the environment

to define itself within a set of specific parameters seems more pertinent to the

nature of what I’m trying to achieve.


System Definition

It was clear from the outset that in order to get any form of autonomous

process happening I would need to define it myself. The initial outline was

simply to construct a set of autonomous agents that existed within a virtual

system bound by simple algorithms for movement, proliferation and the

replication of sound. The Processing system, a self-contained extension of the Java platform, provides an excellent base for the neophyte code-monkey

as it is focused on simplifying the creation of computer-based art. With the

addition of a library of extensions it allows Open Sound Control (OSC)

communication between programs and some fairly rudimentary sound

manipulation tools. The initial approach was to construct the artificial life

system in Processing and use OSC to send relevant data to a sound module,

in this case MAX/MSP, which at the lowest level functions as a dispersal /

playback system across multiple speakers. The cross-platform nature of both

programs added a level of portability to the system allowing it to be used in a

number of different settings on both Macintosh and Windows platforms.

The Sound Creatures system utilises an object-oriented approach to programming, whereby each object runs the same code within its own exclusive space in memory. The main class defines the virtual space and is responsible

for the management and interaction of the creatures as well as user

interaction for manual spawning/killing. Interactions with other objects can

spawn new objects or cause the death of an object. At a cyclic moment the

ability to make a ‘call’ is offered to each creature at which point a random

number determines whether the offer is taken or not. If taken, a message is

sent to play the sound. While the approach to data sent and the interpretation

of it in MAX/MSP differed for each version, the consistent element was that

the sound would be played in ‘actual’ space relative to the position of the

object within its virtual space. This is of course highly subjective, somewhat

problematic and dependent on the correct organisation of enough speakers

for even a relatively effective virtual-to-real transference. The use of at least four speakers in a square formation gave the minimum impression of sound events occurring around the listener, the immersive environment therefore being a combination of the actual environment, an audience, and the manner and context of sounds deployed into the space.
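
As a rough illustration of this structure, the sketch below shows how a creature class in Processing might be laid out; it is a minimal reconstruction under my own assumptions rather than the project code itself, and it assumes the oscP5 and netP5 libraries for the OSC link, with hypothetical port numbers and message addresses:

    import java.util.ArrayList;
    import oscP5.*;    // assumed OSC library
    import netP5.*;

    OscP5 osc;                       // OSC link to the MAX/MSP playback patch
    NetAddress maxPatch;             // assumed to listen on localhost:7400 (hypothetical port)
    ArrayList<Creature> creatures = new ArrayList<Creature>();

    void setup() {
      size(400, 400);
      osc = new OscP5(this, 12000);              // local listening port (arbitrary)
      maxPatch = new NetAddress("127.0.0.1", 7400);
      for (int i = 0; i < 8; i++) creatures.add(new Creature(i));
    }

    void draw() {
      background(0);
      // Each cycle, every creature is offered the chance to 'call'.
      for (Creature c : creatures) c.update();
    }

    class Creature {
      int id;
      float x = random(width), y = random(height);   // position in the virtual space

      Creature(int id) { this.id = id; }

      void update() {
        x = constrain(x + random(-2, 2), 0, width);  // simple wandering movement
        y = constrain(y + random(-2, 2), 0, height);
        if (random(1) < 0.02) call();                // a random number decides whether the offer is taken
        ellipse(x, y, 6, 6);
      }

      void call() {
        // Position in the virtual space is sent so MAX/MSP can place the sound
        // in the 'actual' speaker space (hypothetical address and argument order).
        OscMessage m = new OscMessage("/creature/call");
        m.add(id);
        m.add(x / width);     // normalised horizontal position
        m.add(y / height);    // normalised vertical position
        osc.send(m, maxPatch);
      }
    }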

There is more than just a semantic connection between the ‘Sound Creature’

objects and Schaeffer’s sound objects. Indeed for the most part the simple

interactions with the source material tally quite strongly with his reduced

listening approach. The existence of the sonic object within a communal

space of similar objects, all contributing to an overall soundscape is, to my

ears, indicative of our human-centric perception of the acoustic ecology of

insects, frogs and birds. This perception is something of a blueprint for the

definition of the system.

One of the major benefits of defining a system like this is the ability to adapt

and rework it for specific situations. From this foundation, further complexity

was added in relation to specific events and pieces; some specific examples

of which are outlined in the remainder of this paper.


Piece 1 – Incidental Amplifications

The first major demonstration of Sound Creatures outside of the university

context was as part of the Incidental Amplifications ensemble. This ensemble

was formed specifically for performing at Liquid Architecture and in order to

advertise/reference a sound-art exhibition that I was curating at the time. The

ensemble consisted of three performers utilising different software systems to

reproduce pre-recorded sound in three movements over a stereo system. In

each movement, one performer presents a relatively unmediated soundscape

forming a sonic base on which the other two insert incidental sounds. An

important component of this piece is that the soundscape should dominate in

order that the processed additions can effectively melt into the background,

an effect achieved with reasonable success during the performance despite

some disruptive computer glitches.

The sound sources varied from the deep hum of air conditioners to field

recordings from a street market in China. My sonic base is a recording of a

waterfall in the Gold Coast hinterland, processed lightly to thicken the low end

and emphasise the white-noise inherent in the sound of rushing water. This is

played as a straight file from start to finish. The incidental sounds consist of

the Sound Creatures system playing a series of ‘foley’ recordings I’d made

two years ago that concerned the manipulation of organic and man-made

objects. These recordings are compiled into one ten minute wave file.

On instantiation, each object has two variables associated with in and out

seek points that are scaled to fit somewhere within the compiled wave file.

When the creature ‘calls’ it also sends this seek data, the result being that an indeterminate section of pre-determined sound is played, rather like dropping a needle on a record. While the horizontal position of the sound creature is

associated with pan information, due to the stereo nature of the performance,

the vertical axis was used to represent amplitude. My performance interaction

in this case focused on moderating the birth and death of creatures and

maintaining the low sound levels to ensure appropriate immersion.
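
A minimal sketch of this mapping follows (my own reconstruction for illustration; the file length, value ranges and variable names are assumptions rather than the values used in the performance):

    import java.util.Random;

    // Sketch of the 'Incidental Amplifications' mapping: each creature owns a
    // random in/out region of one compiled ten-minute wave file, and its position
    // maps to pan (horizontal) and amplitude (vertical). Values are illustrative.
    public class IncidentalCall {
        static final float FILE_LENGTH_MS = 10 * 60 * 1000;   // the compiled foley file
        static final Random rng = new Random();

        // Seek points fixed on instantiation, scaled to fit inside the file.
        final float seekInMs, seekOutMs;
        float x = rng.nextFloat(), y = rng.nextFloat();        // normalised virtual position

        IncidentalCall() {
            float a = rng.nextFloat() * FILE_LENGTH_MS;
            float b = rng.nextFloat() * FILE_LENGTH_MS;
            seekInMs = Math.min(a, b);
            seekOutMs = Math.max(a, b);
        }

        void call() {
            float pan = x;             // 0 = hard left, 1 = hard right (stereo performance)
            float amp = 1.0f - y;      // vertical axis stands in for amplitude
            System.out.printf("play %.0f-%.0f ms, pan %.2f, amp %.2f%n",
                    seekInMs, seekOutMs, pan, amp);
        }

        public static void main(String[] args) {
            for (int i = 0; i < 4; i++) new IncidentalCall().call();
        }
    }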


The fact that it is difficult to discern who is doing what on the recordings of this

performance is in my opinion part of its success. Overall the performance

combined classic approaches to acoustic-ecology with an improvised

computer music aesthetic popularised in the late nineties in particular by

various members of the Viennese experimental music scene.

The soundscape presented is a virtual space, made from a real space, and

inhabited by objects from contextually different spaces. While the Sound

Creatures system had no level of sophistication at this stage it functioned well

in the subtle insertion of incidental sound. While most of the following pieces

make use of the Sound Creatures system in and of itself, I feel that at this

stage of the development it worked best as a tool used in conjunction with

other systems, enhancing the soundscape with indeterminate incidental

sound.


Piece 2 – Virus

Constructed as an exhibition of viral approaches to digital art, the “Virus”

exhibition, curated by Patrick King, provided an opportunity to demonstrate

the Sound Creatures system working in a different contextual framework.

While the artificial life aspect certainly has novel comparative associations

with viral software design, the context of output has always been focused

more on sound-scaping. My understanding of the event led me to conclude

that a faux-organic minimalist soundscape might not be appropriate. The

approach then was to find a more contextually relevant sound source, which led to my use of dynamic-link library (DLL) files: executable files allowing the sharing of code and resources among programs. These files are often the

target of malicious viruses and can be opened and played as sound files in

MAX/MSP.

To the stereo-based system used in the Incidental Amplifications piece is added a frequency value, set on instantiation, which is sent in the OSC message to MAX and interpreted as a speed at which to play the files. The files are selected

from my Windows system directory at random and reproduced loudly into the

space. An additional four sound buffers are added in order to capture, loop

and adjust the sound in real-time.
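
The additions can be illustrated with a small sketch (again a reconstruction; the directory path, rate range and the decision to pick the file on this side rather than in MAX are my own assumptions):

    import java.io.File;
    import java.util.Random;

    // Sketch of the 'Virus' additions: a DLL chosen at random from the system
    // directory becomes the sound source, and a per-creature frequency value is
    // passed on as a playback speed. Paths and ranges here are illustrative.
    public class VirusCall {
        public static void main(String[] args) {
            Random rng = new Random();
            File sysDir = new File("C:/Windows/System32");   // assumed source directory
            File[] dlls = sysDir.listFiles((dir, name) -> name.toLowerCase().endsWith(".dll"));
            if (dlls == null || dlls.length == 0) {
                System.out.println("no DLL files found");
                return;
            }
            File source = dlls[rng.nextInt(dlls.length)];
            // Frequency fixed on instantiation, read in MAX/MSP as a playback rate:
            // 1.0 = original speed, higher values read the raw data faster.
            float playbackRate = 0.25f + rng.nextFloat() * 3.75f;
            System.out.printf("source %s, playback rate %.2f%n", source.getName(), playbackRate);
        }
    }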

Data-files played as sound hold a novelty value for a very short time. My goal

for the performance was to go beyond this novelty factor, constructing

rhythmical passages with the buffers that would enable the recontextualisation

of the data files into a kind of abstract minimalist techno. In practice it served mainly to clear the dance floor, creating an ugly sound that seemed conducive only to the pleasure of the exhibition curator.

While contextually relevant, in hindsight it seems like a somewhat lazy

approach especially given the rather flat reproduction of the data in MAX.

Had the piece been more heavily weighted to either ‘soundscaping’ or

‘beatscaping’ then it may have been more successful. It does however


demonstrate an interesting paradigm with regards to the construction of

immersive sound. Given that immersion requires an audience to correlate the aural experience with immersive environments or states of immersion, I am driven to reflect on the very narrow bands of context that tie performative forms to audience perception. The audio-visual systems designed by Troy Innocent explore a virtual world of language and semiotics that bear scant relation to everyday life. “Living matter and lived space have no special primacy here; the real and the natural are drawn into an artificial cosmology alongside the abstract, the coded and the metaphysical” (Whitelaw: 2004, 86).

Considering that a virtual world is an act of creation, is it the duty of the composer to include as many mimetic cues to actual soundscapes as possible, or should the emphasis be placed more on abstraction? Should a virtual eco-system

‘sound’ like a virtual eco-system and is the composer more responsible to his

audience or his creation? This balancing act is a fundamental issue explored

to some extent in further development of Sound Creatures.


Piece 3 - Fringe

The ‘Fringe’ performance occurred as part of Electrofringe 2005 at an event

un-evocatively titled ‘Elec_Sonic BBQ’. There was no barbecue, merely a

selection of sound-artists working in different contexts from the audio-visual to

extended instrument techniques. Contextually the origin of this piece served

little function other than to demonstrate the Sound Creature process to an

audience overly familiar with the varying manifestations of sonic art. Due to concerns over the use of an unfamiliar computer setup I did not project visuals of the interactive creatures, something that in hindsight I would attempt more rigorously; however, the positive audience response was a pleasant surprise.

By this stage the Sound Creatures system was substantially enhanced to

make use of multi-channel output with horizontal and vertical creature

coordinates replicated sonically through the square matrix speaker setup.

The creatures now have a considerably more elaborate data structure created

on instantiation defining their health, fertility, mobility and data to be sent

including the channel/ID number for sound reproduction. Essentially blind, the

objects move based on a relative movement algorithm that is modified by their

exclusive mobility variable. While not at all indeterminate in their movements

or actions, the modifiers work to individualise the contributions. Each creature

has three ‘tentacles’ which communicate data converted to resonance value

in MAX/MSP. Additionally the creatures have been split into types with type-

specific variables. The ‘Drones’ type select extremely tight loops in order to

achieve a more buzzing fluid sound over longer durations. The ‘Percussive’

type use longer loops, are more sporadic in their calls and produce louder

sounds – the ultimate idea was to create a more rhythmical object, which has so far been achieved only to a relative degree. The ‘Mimets’ combine short

loops with a more frenzied attack and have a communal loop region shared

over all instances of the creature. This loop region changes based upon an

‘event change’ triggered from the main class of the Sound Creatures program.

Other important additions included the selection of output channel based not on creature ID but on open and closed channels, which allowed me to cut down on the use of MAX, and the addition of predator ‘Wyrms’ that kill the creatures on contact.
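
A rough sketch of how such type-specific loop selection could be expressed is given below; the loop lengths and the shared Mimet region are my own placeholder interpretation rather than the values used in the piece:

    import java.util.Random;

    // Sketch of the type system added for 'Fringe'. Each type biases how a loop
    // region is selected from the source file; the numbers are placeholders.
    public class CreatureTypes {
        enum Type { DRONE, PERCUSSIVE, MIMET }

        static final Random rng = new Random();
        static final float FILE_LENGTH_MS = 10 * 60 * 1000;
        // Mimets share one communal loop region, changed by 'event change' triggers.
        static float mimetLoopStart = rng.nextFloat() * FILE_LENGTH_MS;

        static float loopLengthMs(Type t) {
            switch (t) {
                case DRONE:      return 10 + rng.nextFloat() * 40;      // extremely tight loops: buzzing drones
                case PERCUSSIVE: return 200 + rng.nextFloat() * 800;    // longer, sporadic, louder
                default:         return 30 + rng.nextFloat() * 120;     // Mimets: short loops, frenzied attack
            }
        }

        static float loopStartMs(Type t) {
            // Mimets use the shared region so instances appear to call to each other.
            return (t == Type.MIMET) ? mimetLoopStart : rng.nextFloat() * FILE_LENGTH_MS;
        }

        public static void main(String[] args) {
            for (Type t : Type.values())
                System.out.printf("%s: start %.0f ms, length %.0f ms%n",
                        t, loopStartMs(t), loopLengthMs(t));
        }
    }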

The sound source used for this performance was an extension of that used

for the Incidental Amplifications performance that had subsequently been

chopped up and re-arranged by a wave file scrambling program. Given the

very arbitrary nature of the sound file and play method the performative

emphasis was the construction of a soundscape that still made sense within

the environment. Interaction occurred with the usual addition and subtraction

of creature types which at this point in the development is integral to the

horizontal and vertical structure of a piece. While the result has the potential

to surprise, the layering of sounds by adding more creatures combines with

the destruction of sounds in establishing the kind of build/fade relationships

common in electro-acoustic composition. Spatially the ‘Drones’ tend to sit

within a specific area while the ‘Percussives’ move with high velocity across

the space. The ‘Drones’ are at this time somewhat harsh sounding, a

problem for consideration in future versions, while the ‘Percussives’ tend to

sound rather like random sonic insertions. While providing some impressive

displays of the resonance filtering, the ‘Mimets’ do not work as originally

intended. Where I was intending to convey a group with the same sound

calling to each other across the space the result is unclear and not particularly

evocative.

The overall sonic result was somewhere between ecological and mechanical

sounds with structure constantly shifting between seemingly random

juxtapositions and moments of inspired confluence. It is interesting to note

that in a live setting this piece is much more effective. Perhaps this is partly

due to the very physical nature of the sound but I also consider the

combination of external noise provides a suitable keynote as a foundation for

the rather invasive nature of the creatures in this piece. The venue in

question featured constantly whirring fans and no sound insulation, allowing

street sounds to infiltrate. While in a sense the solo performance dictated the

particular setup, this piece can be seen as the opposite side of the system to

that demonstrated in the Incidental Amplifications piece. While the sound sources are still incidental in nature, the compositional form here is much more active and far less subtle.


Piece 4 – Web

The web version of Sound Creatures was in many ways inspired by a

demonstration of the Processing system by Samuel Bruce at Electrofringe

2005. The idea of doing away with MAX/MSP, which throughout this project

has been used primarily for its OSC compatibility, proved very attractive. In

addition the construction of a web-based miniature of the system also serves

as a potent advertisement. However, at the time of writing I have yet to

achieve the kind of successful results that he has achieved using the available

sound libraries.

Adapting the system for the web required major streamlining. One of the biggest changes is the return to a single-type system. Since streaming audio is still relatively unfeasible I opted to drop the seek-file method

and use the wave synthesis methods available in the ESS library for

Processing. The standard selections of sine, triangle, sawtooth and square

waves are available along with white and pink noise generators. Doing away

with OSC meant that the sounds could be generated automatically from within

each object, though I’m uncertain that the library is suited to the object-oriented approach due to problems defining, opening and closing object-specific channels. In essence a buffer is created and, as the objects call, their

sound is added to the buffer. Frequency is determined on instantiation along

with a number of additional variables concerning the call rate and duration of

each call.

Because the system is designed to run online sans interaction, a series of

environmental variables were introduced to the system. An automated

population floor is included to ensure that the number of creatures is never

less than four. A further enhancement to the system is the introduction of

food and poison types which add or subtract health from the creatures. These

types are distributed relative to the virtual seasons; ‘hot’, ‘rainy’ and ‘cool’ and

are representative of seasonal variations in more equatorial climes. During

the ‘hot’ season poison is more prevalent than food encouraging a culling of

28

creatures, while the ‘rainy’ season is more conducive to food growth and the

increase of healthy creatures. Since health is directly related to amplitude, it

follows that the ‘hot season will be quieter than the ‘rainy’ season and during

the ‘cool’ season a series of sonic changes occur encouraged by increasing

global fertility rates and creature breeding. That the sonic reality does not

reflect this as yet is a result of the aforementioned issues with the sound

library.
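
The environmental logic can be sketched roughly as follows (the probabilities and cycle counts are placeholders of my own, not the values used in the applet):

    import java.util.Random;

    // Sketch of the environmental variables added for the web version: a cycle of
    // virtual seasons biases how much food and poison is dropped into the world,
    // and a population floor keeps at least four creatures alive.
    public class Seasons {
        enum Season { HOT, RAINY, COOL }

        static final int POPULATION_FLOOR = 4;
        static final Random rng = new Random();

        // Chance per cycle of spawning food (adds health) or poison (subtracts it).
        static float foodChance(Season s)   { return s == Season.RAINY ? 0.08f : s == Season.COOL ? 0.05f : 0.02f; }
        static float poisonChance(Season s) { return s == Season.HOT   ? 0.08f : 0.02f; }

        public static void main(String[] args) {
            int population = POPULATION_FLOOR;
            for (Season s : Season.values()) {
                for (int cycle = 0; cycle < 1000; cycle++) {
                    if (rng.nextFloat() < foodChance(s)) population++;
                    if (rng.nextFloat() < poisonChance(s)) population--;
                    population = Math.max(population, POPULATION_FLOOR);   // automated floor
                }
                System.out.printf("after %s season: population %d%n", s, population);
            }
        }
    }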

While the web piece occasionally surprises it is sonically inferior at this stage

to the offline versions. There is little obvious relation between the creatures

and their sounds with the construction ultimately being a series of piercing

layered drones that build until the applet crashes or locks up. In an attempt to

deal with this issue an epoch change was added relative to the seasons.

After a virtual decade, which in real-time is approximately 10 minutes, all

creatures are killed and the system is reset. As such I have yet to find an

adequate solution for a self-maintaining sonic-system with these tools so this

area is fundamentally still a work in progress.


Piece 5 – Habitat

Being a piece created with the most recent build of Sound Creatures, it is

fitting that ‘Habitat’ is the most lucid translation of my original aesthetic and

compositional desire for this project. Composed entirely from a source file of

my electronic project ‘Narghile’ performing live, it successfully demonstrates

the transformative qualities of the system, turning a rhythmically abstract

noise/ambient performance into an immersive soundscape of crackling drones

and flickering harmonics.

The system is in essence an enhancement of the ‘Fringe’ build with the addition of the seasonal evolution, food/poison supplements and automatic

population floor first seen in the web version. Quite a bit of effort has been

expended on fine-tuning the attributes of the creature types with the ‘Drones’

being less abrasive and the ‘Mimets’ demonstrating subtle repetitive calls

across the space. A major change has been the addition of a sequence

feature for the ‘Percussive’ creatures. Originally intended for the web applet but dropped due to the aforementioned performance issues, it cycles through a sixteen-bar sequence of note on/off values set on instantiation of the object.

While not functioning entirely as intended at this stage it does add the

possibility of rhythmic continuity currently only demonstrated with the ‘Drone’

creatures.
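
A minimal sketch of the sequence idea, reading the sixteen-bar sequence as sixteen on/off steps fixed at instantiation and cycled on each call offer (my own interpretation):

    import java.util.Random;

    // Sketch of the 'Percussive' sequence feature: sixteen on/off steps are fixed
    // when the creature is instantiated and then cycled through on each call offer.
    public class PercussiveSequence {
        final boolean[] steps = new boolean[16];
        int position = 0;

        PercussiveSequence(Random rng) {
            for (int i = 0; i < steps.length; i++) steps[i] = rng.nextBoolean();
        }

        // Returns true when the current step is 'on', i.e. the creature should call.
        boolean tick() {
            boolean on = steps[position];
            position = (position + 1) % steps.length;
            return on;
        }

        public static void main(String[] args) {
            PercussiveSequence seq = new PercussiveSequence(new Random());
            for (int i = 0; i < 16; i++) System.out.print(seq.tick() ? "x" : ".");
            System.out.println();
        }
    }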

The only addition in MAX/MSP is the use of amplitude ramps in an attempt to

reduce the occurrences of pops and glitches associated with the current

method of file seeking. Since the objects call to play in/out points determined randomly on instantiation, there is no facility to ensure that these points fall on a zero crossing, and the sudden jump in amplitude results in a pop.
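
A short linear ramp of the kind described can be sketched as follows (the ramp length is a placeholder; in the piece itself the ramps are applied in MAX/MSP rather than in Java):

    // Sketch of the amplitude-ramp fix: a short linear fade applied to the start
    // and end of each seek region so playback never jumps straight to a non-zero
    // sample value.
    public class AmplitudeRamp {
        static void applyRamp(float[] samples, int rampLength) {
            int n = Math.min(rampLength, samples.length / 2);
            for (int i = 0; i < n; i++) {
                float gain = (float) i / n;                 // 0 -> 1
                samples[i] *= gain;                         // fade in
                samples[samples.length - 1 - i] *= gain;    // fade out
            }
        }

        public static void main(String[] args) {
            float[] region = new float[1000];
            java.util.Arrays.fill(region, 0.8f);   // a region that would otherwise start with a pop
            applyRamp(region, 64);
            System.out.printf("first sample %.3f, middle %.3f, last %.3f%n",
                    region[0], region[500], region[999]);
        }
    }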

While I believe there is still some way forward to the establishment of a

system that can consistently produce interesting immersive soundscapes, it

would seem that with ‘Habitat’ there is a sense of sonic immersion that


successfully blurs the distinctions between the real, the electronic and the

virtual. Listening in headphones it is possible to discern distinct spatial

arrangement in the creature calls while the constant buzzing functions as the

keynote that provides the foundation for the virtual sonic reality.

Wikipedia defines a habitat as “the place where a particular species lives and grows” (2005). In this case the piece is a documentation of a virtual habitat with species that live, grow and sound like no creature I have heard in my ‘real’ sonic environment, which in my opinion is the reason for this style of sound-scaping.


Conclusions and Future Directions

In my journey from ideation to actualisation I’ve traversed a continent of self-

doubt, contrary opinions and technical frustrations to arrive at the stage where

I can say that I have achieved my goal. It was certainly not a victory without pain, as with the ability to ‘play god’ comes the responsibility to manage a tumultuous and often unwieldy system that frequently responds in unexpected and unwanted ways.

A large part of my current self-evaluation concerns the difficulty of establishing a ‘hands-off’ approach to composition. In theory, the creation of

an immersive digital soundscape quite obviously lends itself to the use of

artificial life as it is essentially my desire to simulate the sound of recorded life.

However the sonic complexity of real phonography is something that arguably

defies emulation. The relatively predictable determinism I’m attempting to

avoid by distancing myself from the compositional structure gives way to

mildly indeterminate chaos that appears imbalanced and ugly when compared

to works like “The City is Sleeping” which utilise the same sonic material in a

more organised fashion. It is my conclusion that the simulation of a composer

requires too many modifiers to make it worthwhile, which is perhaps good

news for composers.

As a tool for the establishment of formal methods that can be combined with a

more personally crafted approach, the methods outlined in this essay are the

tip of the iceberg. Future iterations of this system could include a more robust

use of spatialisation, perhaps facilitated by Ambisonic mixing techniques.

Having experimented with the mixing and dispersal of digital objects through

Ambisonic encoders in the production of ‘The City is Sleeping’, it is my opinion

that this method provides a more immersive quality than amplitude-based

surround; at least on the somewhat domestic surround systems that I use.


The ability to morph both the data and sound via Fast Fourier Transformation

of breeding objects is something I’m keen to explore, though I’m hesitant as

the data storage requirements are excessive and the idea may end up being

just a novelty with no aesthetic worth.

The major benefit of the Sound Creatures system is the manner in which

modifications can be tailored to suit specific sites, contexts and interactions.

The next logical step is the bridge between our world and the virtual worlds

created with this kind of system. A reactive and interactive system is already

possible with libraries for MAX/MSP, Pure Data and Processing all providing a

level of real-world engagement that requires little more than time and learning

curve adjustments. As a composer unwavering in my enthusiasm for virtual

space and digital immersion it is clear that keeping myself on both sides of the

electronic divide is not just a healthy option but a productive one.


References

Barrett, Natasha (2002) Spatio-Musical Composition Strategies. Organised Sound 7(3): 313–323. Cambridge University Press.

Cutler, Chris (2004) Plunderphonia. In Cox, Christoph and Warner, Daniel (eds.) Audio Culture: Readings in Modern Music, 138–156. Continuum.

Dahlstedt, Palle (2001) A MutaSynth in parameter space: interactive composition through evolution. Organised Sound 6(2): 121–124. Cambridge University Press.

Dorin, Alan (2003) On wonder and betrayal: Creating artificial life software to meet aesthetic goals. Kybernetes 32(1/2): 184. Academic Research Library.

Egan, Greg (1994) Permutation City. Millennium.

Egan, Greg (1997) Diaspora. Millennium.

KQED (2005) Loren Chasse. From www.kqed.org/spark/artists-orgs/lorenchass.jsp

Lopez, Francisco (1996) Cagean philosophy: a devious version of the classical procedural paradigm. From http://www.franciscolopez.net/essays.html

Lopez, Francisco (1997) Schizophonia vs. l'objet sonore: soundscapes and artistic freedom. From http://www.franciscolopez.net/essays.html

McCormack, Jon (2003) Evolving sonic ecosystems. Kybernetes 32(1/2): 184. Academic Research Library.

Ploy, Gabrielle (2002) Sound and Sign. Organised Sound 7(1): 15–19. Cambridge University Press.

Schafer, R. Murray (1994) The Soundscape: Our Sonic Environment and the Tuning of the World (revised edition). Destiny Books.

Schutze, Stephan (2003) The creation of an audio environment as part of a computer game world: the design for Jurassic Park – Operation Genesis on the XBOX™ as a broad concept for surround installation creation. Organised Sound 8(2): 171–180. Cambridge University Press.

Truax, Barry (1999) Handbook for Acoustic Ecology, CD-ROM edition v1.1. Cambridge Street Publishing.

Van Rossum, Franz (2003) Notes compiled for Tektra by Roland Kayn. Barooni (BAR016).

Werner, Hans U. (2002) MetaSon #5 Skruv Stockholm: turning schizophonic sound into audiovirtual image. Organised Sound 7(1): 73–78. Cambridge University Press.

Westerkamp, Hildegard (2002) Linking soundscape composition and acoustic ecology. Organised Sound 7(1): 51–56. Cambridge University Press.

Whitelaw, Mitchell (2004) Metacreation. Massachusetts Institute of Technology.

Zdenek, Sean (2003) Artificial intelligence as a discursive practice: the case of embodied software agent systems. AI & Society 17: 340–363. Springer-Verlag.