VIVO: a Wakeful Instrument for Collective Musical Embodiment

Fabio Paolizzo

A thesis submitted to the University of Kent in fulfilment of the degree of Doctor of Philosophy in Music and Technology

Abstract 6

Hypothesis 7

Introduction 7

Methodology 9

CHAPTER 1: A GENEALOGY OF INTERACTIVE MUSIC 13

1.1 The augmentative exercise of interpretation 13

1.2 Democratisation of interpretation in arts 15

1.3 Technology and indeterminacy since process music 18

1.4 Literature review 20

1.5 Self-reflection in human-computer interaction 30

CHAPTER 2: REMOTENESS IN MUSIC TECHNOLOGY 36

2.1 Gaps in the interpretation of live digital content 37

2.2 Coherent validation of musical processes 37

2.3 From gestural surrogacy to interactional proximity 40

CHAPTER 3: A TOOL FOR CONSCIOUSNESS 44

3.1 Information technology empowering consciousness 45

3.2 Subjective motivation and musical coherence 47

3.3 Interactive music as a tool for the investigation of the self 50

CHAPTER 4: A BIOLOGICAL MODEL FOR HUMAN-COMPUTER INTERACTION 53

4.1 Human-machine “communication” 54

4.2 Consciousness in software agents 55

4.3 Comprehension of alterity in humans 57

4.4 Exchange of biological signs 62

4.5 Bio-logics: for a greater salience of computer-generated content 64

CHAPTER 5: SOFTWARE IMPLEMENTATIONS 68

5.1 VIVO 69

5.2 Energy variable 71

5.3 Audio/visual feedback loop 74

5.4 Embodiment in a wakeful music system 76

5.5 Adaptive Video Tracking 77

5.6 Open-content dynamic orchestration and Dynamic Rack 81


5.7 Stochastic Energy Score 84

5.8 Map Interface 85

5.9 Collective musical embodiment 86

CHAPTER 6: CASE STUDIES 92

6.1 VIVOtube 93

6.2 Studio1 98

6.3 Interactive Music Group 102

6.4 Crescendo 104

6.5 Holzwege 107

6.6 IMG works: final notes 110

6.7 Velodrone 111

6.8 Invisible Cities 116

6.9 Collective 119

CONCLUSIONS 127

APPENDIX A: FEATURES OF THE SOFTWARE COMPONENTS 129

A1 - Adaptive Video Tracking: features 130

A2 - Dynamic Rack: features 132

A3 - Software features: additional video documentation 135

APPENDIX B: PARAMETERS DESCRIPTION 137

B1 - Main interface 138

B2 - Settings interface 142

B3 - Stochastic Energy Score: interface 145

APPENDIX C: TECHNICAL INFORMATION OF THE CASE STUDIES 148

C1 - VIVOtube: software components 149

C2 - Studio1: formalisation of the musical behaviours 149

C3 - IMG: general rules for improvisation 153

C4 - Crescendo: performative notes 154

C5 - Holzwege: improvisation notes 158

C6 - Collective: additional video documentation 161

REFERENCES LIST 165


Abstract

A new category of music-related software is proposed, implemented and tested through creative practice. “Wakeful” musical instruments are those whose interaction listeners may understand as musical. The term “wakeful” here means “sign-bearing”, referring to the potential of computers and human beings to interact through an exchange of signs. Wakeful instruments may empower the users’ capacity for interpreting digital music in terms of agency, cognition and communication. These users may experience a phenomenon of collective musical embodiment, producing in the music a shared form of intelligence, extended and mediated by the computer. VIVO is a computer-based interactive instrument of this type, whose development is part of the research. It represents the salience of action in terms of energy and features video tracking that dynamically maps quantity of motion to specific sub-ranges of audio plug-in parameters. It includes an editor for stochastic scores and a single graphic interface for controlling any plug-in while monitoring the activity of the users and of the software agents. The case studies cross-confirm the results through an analysis of the users’ and listeners’ experiences and verbalisations, and through an identification of the musical coherence in the content generated by the program.


Hypothesis

If a computer-based musical instrument sonically reflects some of the user’s salience of action, he or she may have a greater chance of interpreting the structures generated by the computer as music, as listeners may also do. The collective experience of multiple users may produce a shared form of intelligence in the music.

Introduction

The speed of computation allows complex manipulations of data of which a human agent would not be capable. Musical improvisation requires from musicians some effort and learning time before they can interplay coherently through a specific musical grammar. A computer program, by contrast, could perform the task easily, if designed to operate appropriately. However, the application of a grammar is not sufficient to create original ideas. While we may consider algorithms that benefit from some form of user feedback to produce a more interesting interplay than they would otherwise, the judgement about what is or is not interesting remains human. Moreover, while learning may grant some autonomy to learners, users who delegate content creation to computers partially renounce that autonomy and are spared only physical fatigue, since they still have to spend time assisting the computer during the decision-making process. Such integration of computer and human capabilities is a complex issue. The literature review highlights a property common to any interactive music system: a form of interactional proximity that embeds the user and the computer1.

The aim of the present research is to define a protocol for interactive music that allows computer-based musical instruments to generate, independently of the users’ assistance, original sonic structures that human agents may recognise as music. This new class of interactive musical instruments adopts salience-based logics of interaction that are distinctive to living beings. Such logics allow the users’ interpretation of the music to be reflected in the computer’s outcome. “Wakeful” musical instruments are those which implement this protocol. The term “wakeful” here means “sign-bearing”, referring to the potential of computers and living beings, specifically humans, to interact through an exchange of signs.

1 To be discussed further from Section 1.4 onwards.


The application of the protocol may allow the investigation of a shared and collaborative form of intelligence. The research evaluates this possibility by connecting the interpretation of music with the creative capacities of multiple users, self-reflecting in the music through such instruments.

While the most widely endorsed model of human-computer interaction frames it as similar to human communication2, a communicative gap exists: computers treat any input as a different state of their own components3, whereas humans interpret information as a network of signs, as the present study will explore. This structural divergence may keep human agents from attributing intentionality to computer outcomes, when at least some of the characteristics denoting interpretation in living beings cannot be deduced from the outcome4. The case studies confirmed this supposition: the audience did not recognise the sonic structures “invented” by the computer as music.

In approaching a solution to scenarios like the above, the study proposes a theoretical framework and a piece of software as an example of how wakeful instruments may be designed. The framework aims to operate human-computer interaction for music within an audio/visual feedback loop of action and perception, based on a representation of salience as the energy that the agents consume in order to act.

The present wakeful instrument analyses the user’s activity or data received from external software. It may compute the image motion of different types of moving bodies: objects, performers, performers acting on objects, and video images. The instrument calculates energy consumption values reflecting the salience of the activity, which the users can monitor. This salience is mapped to audio processing and/or sound synthesis by means of external audio plug-ins.

2 For example, in Robert Rowe’s definition of an interactive music system (see Sections 1.4 and 4.1).
3 As bits, in digital computers. However, ‘[i]ncreasingly often, LSMs [Liquid State Machines], ESNs [Echo State Networks] and the more recently explored Backpropagation Decorrelation learning rule for RNNs [Recurrent Neural Networks] (Schiller and Steil 2005) are subsumed under the name of Reservoir Computing’ (Jaeger, 2007: n.p.), which offers an interesting perspective for future investigations of similarities between machines and living beings, for example in grammatical construction processing (Dominey, Hoen and Inui, 2006: 2088–2107) or in the investigation of Liquid State Machines (Jones et al., 2007: 187–191).
4 While there is evidence that humans may attribute intentionality to the simplest of abstract animations (Blakemore et al., 2003: 1433–1441), recognition of the characteristics denoting interpretation leads to the attribution of stronger intentionality than otherwise. At least this can be stated for normal control subjects and for patients without delusions of persecution, ‘[who] rated the relationship between the movement of the shapes as stronger in both mechanical and intentional contingent conditions than in non-contingent conditions’ (Blakemore et al., 2003: 1433). The present study investigates human intentionality, specifically in the sonic field.

Implementations of the above framework include a Dynamic Rack to orchestrate audio signals, as well as a Stochastic Energy Score and a Map Interface to control the analysis both of the users’ and of the instrument’s actions. These implementations are part of the piece of software VIVO, a computer-based musical instrument, which the present experimental work will show to match the definition of wakefulness explored in the thesis.
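By way of illustration only, the following minimal sketch outlines one possible reading of this chain: a quantity of motion derived from frame differencing is smoothed into an energy value and then scaled onto a sub-range of an audio plug-in parameter. The sketch is written in Python rather than in the environment used for VIVO itself, and all names, thresholds and smoothing behaviours are hypothetical stand-ins for the actual components described in Chapter 5 and the appendices.

import numpy as np

def quantity_of_motion(prev_frame, frame, threshold=0.1):
    # Fraction of pixels whose luminance changed by more than `threshold` (result in 0..1).
    diff = np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0
    return float((diff > threshold).mean())

class EnergyVariable:
    # Smooths the instantaneous quantity of motion into an "energy" value,
    # standing in here for the salience of the observed activity.
    def __init__(self, decay=0.9):
        self.decay = decay      # how quickly the energy dissipates when motion stops
        self.energy = 0.0

    def update(self, motion):
        self.energy = self.decay * self.energy + (1.0 - self.decay) * motion
        return self.energy

def map_to_subrange(energy, low, high):
    # Scales an energy value in 0..1 onto the sub-range [low, high] of a plug-in parameter.
    clipped = max(0.0, min(1.0, energy))
    return low + clipped * (high - low)

# Hypothetical usage: drive the cutoff of a filter plug-in from two greyscale camera frames.
# qom = quantity_of_motion(previous_frame, current_frame)
# cutoff = map_to_subrange(EnergyVariable().update(qom), low=0.2, high=0.8)

In VIVO itself, the analogous analysis, smoothing and mapping are performed by the software components documented in Chapter 5 and in Appendices A and B, rather than by code of this kind.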

Recalling previous research (Gadamer, 2004 [1960]), the present study outlines that, for a given human agent, the capacity for interpretation is aimed at explaining and applying information to personal concerns and interests. In this process, consideration of previous occurrences is implied, as are subjective experiences, acquired culture and broader biological factors. The present investigation proposes a model for human interpretation, embodied in the specific technology, as depending on the capacities for agency, cognition and communication. The software implementation extends the musicians’ capacity for sonic creation and provides feedback. This feedback can help the users and the audience to better explain the connections among the musical constructs5, to share such knowledge with others and to increase the applicability of their musical intentions. In terms of agency, cognition and communication, users may achieve a different awareness of the relationship between the self, the musical morphology and the empowering technology through which they interplay.

Redefining interactive music could have a great impact on knowledge, in terms of the development of new musical forms and instruments and of a different understanding of the human agent’s potential, which might be within reach. Evaluation of the case studies draws on Gino Stefani’s analysis of semiosis in the human/music interface (Stefani, 1998), Mihaly Csikszentmihalyi’s flow theory (Csikszentmihalyi, 2008 [1985]) and Aaron Antonovsky’s sense of coherence (Antonovsky, 1987), as described in the Methodology.

Methodology

The investigation that led to the present research started in 2004. I published diverse texts6, produced different music and developed various pieces of software.

5 The notion of “musical construct” refers here to any sound or sequence of sounds considered as a whole (see Appendix D3).
6 The present dissertation draws from some of those texts (Paolizzo, 2006; 2010a; 2010b; 2011).

Each of these works reached conclusions that sometimes confirmed, and sometimes contradicted, the previous assumptions. Along this path, I abandoned some ideas, software implementations and musical architectures in favour of others. However, to different degrees, all of this experimentation was useful in the pursuit of the research goals. A preliminary consideration preceded them all.

Any technique or technology inevitably defines the content for which it is used; a misunderstanding of the fundamental principles underlying a specific medium may lead the musician or designer to limit or misdirect their own expressive and creative capacity. This concern led me to question how interactive music technology might affect the audience’s and the users’ capacities in terms of agency, cognition and communication.

In consideration of the above, the proposed methodology is interdisciplinary. It draws from cultural musicology, software development and music composition. In the present model, the formulation of hypotheses precedes and grounds the development of software components based on the theorisations. I design and realise works of interactive music in order to evaluate the hypotheses and promote them to working hypotheses, which the case studies can verify. Using the theoretical tools of investigation offered by cultural musicology, I carry out a critical analysis both of the audience’s and musicians’ responses and of the creative outcomes. In the study, I connect this analysis with a theoretical discussion of the audience’s informally gathered observations. The method of evaluation envisages an approach to art as theory, which benefits from the holistic attitude7 typical both of practice and of experience in art, with the aim of developing and refining both theories and implementations.

In my research, I approach holism in terms of the users’ perception of morphology-as-a-whole, in order to evaluate coherence in the unit structures under analysis. These can be musical constructs, dynamics of interaction or works of interactive music. This approach to analysis includes considering the subjective process of semiosis.

7 It is useful to note that in philosophy of mind, ‘[f]unctionalism about content and meaning appears to lead to holism. In general, transitions among mental states and between mental states and behaviour depend on the contents of the mental states themselves. […] Other functionalists accept holism for “narrow content”, attempting to accommodate intuitions about the stability of content by appealing to wide content’ (Block, 1995: 329–330). Holism is not exempt from criticism. Samuel Guttenplan notes that while ‘holism has been most widely discussed in connection with both the contents of propositional attitudes and the meanings of linguistic constructions… no two speakers could ever mean the same thing by some given sentence, since it is almost certainly true that any two speakers will differ somewhere in their epistemic attitudes’ (Guttenplan, 1995b: 347).

In this sense, the methodology draws on and differs from Stefani’s research in an original way. Following his model for the analysis of

semiosis in the human/music interface, where the formulation of theories focuses on the

subjective experience of music (Stefani, 1998), the present research proposes a method to

formulate, evaluate and refine both theories and implementations through the creative

practice of interactive music.

As Luca Marconi points out, Stefani’s method does not include listening tests conducted through observation, devices or measurements of verbal and/or non-verbal behaviours or of other phenomena occurring during the listening (Marconi, 2010: 1–4). Stefani adopts the method of an “orally shared listening”8. In interactive music, interplay and listening intertwine and, therefore, in my research I direct the users’ focus to the morphology through the nature of the interactive experience itself. After the experience, I question the users about the “quality” both of the music and of their interaction. “Quality” here means the comprehensibility, manageability and meaningfulness of the musical constructs created and perceived in the experience, as Antonovsky’s sense of coherence defines these concepts. I relate such quality to the level of subjectively perceived satisfaction and control in creating and exploring the music9, as connected to Csikszentmihalyi’s notion of “flow”. Flow is a mental state ‘where challenges match skills, and in which people experience “optimal” states, are able to concentrate, to forget time, and create new goals in a totally autonomous way, the so-called autotelic state [(auto = “self” + telos = “goal”)]’ (Steels, as cited in Pachet, 2008: 10).

Users are not questioned when they spontaneously report experiencing a sense of coherence, or when listening to the outcome and to their musical behaviour in the interaction is sufficient to draw conclusions about the investigated theories and software implementations10.

8 ‘Ascolto condiviso oralmente’.
9 By contrast, Stefani’s method requires the listeners to focus on the subjective musical experience (the listener’s impression, which emerges during the listening) rather than on the description they attribute to the music. Stefani gathers information about the experiences in the form of the participants’ verbalisations, in order to highlight similarities (Stefani, 1977: 23).
10 Stefani’s method aims at raising the listeners’ awareness of their musical proficiency as an ability analogous to understanding (and producing) new sentences and discourses in a verbal language: ‘[ognuno dei partecipanti] possiede sulla musica una certa “competenza” analoga a quella che si ha per una lingua, ossia una capacità di capire (e produrre) frasi e discorsi sino allora non ancora sentiti (e prodotti)’ (Stefani, 1977: 21).

In collective experiences of interactive music based on the present model of interpretation, the users’ musical morphology may become a language for the group self-reflecting in the music. This commonality finds correspondence with the influence of shared

referents that Stefani aims at highlighting in the listening11 (Stefani, 1977: 23). While he aims at increasing the awareness of the participants12 about the experts’ discourses, which may influence these referents13 (Stefani, 1977: 26), in my research the users receive information about their interaction from the software implementations. The present case studies gradually bring this empirical methodology into focus, as a result of the work started in 2004. Recent theoretical research (Lutz, 2009: 63–67) first suggested the formal integration of these collinear concepts from Antonovsky’s and Csikszentmihalyi’s research. The present study is an example of the empirical application of this integration14, which combines with the other theoretical tools mentioned above.

In short, the present methodology allows the theorisation and application of interactive music, in a specific co-presence of music and computer practices and aesthetics, as a tool for the morphological investigation of human-computer interaction. Although the study focuses on music and computer-based interactive music systems, the concepts proposed within the framework might apply to any human-computer interaction in which a computer generates or transforms the content. In an attempt to favour coherence in the interaction between humans and software agents, I develop and test both VIVO and the framework through tailored works of interactive music. Finally, broader resonances with historical and cultural ideas are sought by interpreting fundamental notions and their aesthetics. From this path the present investigation starts.

11 ‘[…] dei rinvii o significati comuni che […] predominano sulle frange dell’interpretazione soggettiva’ (‘[…] the shared references or meanings that […] predominate over the fringes of subjective interpretation’).
12 Stefani aims at achieving that through the following: considerations about the sonic objects through listening (Stefani, 1979: 145–147, 1993a, 1993b); cultural investigation of the production process and historical background of the specific music works and of the composers’ ideas (Stefani, 1979: 147–149); different attitudes and ways of listening (Stefani, 1994: 13).
13 ‘[…] i contributi didattici che formulano o spiegano il sapere comune, gli apporti creativi che lo stimolano, gli occultamenti ideologici che se ne appropriano e lo camuffano da invenzione personale, il vaniloquio dovuto alla vanità o all’inettitudine’ (‘[…] the didactic contributions that formulate or explain common knowledge, the creative contributions that stimulate it, the ideological concealments that appropriate it and disguise it as personal invention, and the idle talk due to vanity or ineptitude’).
14 In 2009, Jonathan Lutz could not find evidence of any publication formally suggesting such an integrative model (Lutz, 2009: 63–67). Similarly, I did not find any empirical method applying the model.


References list

Adobe Systems Incorporated (2012). Adobe Flash player [Downloadable program]. Version

11.3, Windows and OSX. Adobe Systems Incorporated. Available from:

http://www.adobe.com/support/flashplayer/downloads.html [Accessed 13

September 2013].

Antonovsky, A. (1987). Unraveling the Mystery of Health: How People Manage Stress and

Stay Well. San Francisco: Jossey-Bass.

Apache Software Foundation (2012). Apache HTTP Server [Downloadable program]. Version

2.4.3, Windows. Apache Software Foundation. Available from:

http://httpd.apache.org/download.cgi#apache24 [Accessed 13 September 2013].

Apple Inc. (2012). Quicktime [Downloadable program]. Version 7.7.2, Windows and OSX.

Apple Inc. Available from: http://www.apple.com/it/quicktime/download/

[Accessed 13 September 2013].

Aronoff, M. and Fudeman, K. (2004). What is Morphology? Oxford: Blackwell Publishing.

Ascott, R. (1999). Interviewed by Cilli, C. Umanità tecnologiche e macchine umane. La

scommessa dell'arte interattiva. Mediamente [Online]. Available from:

http://www.mediamente.rai.it/quotidiano/arte/a010418_01.asp [Accessed 13

September 2013].

Assayag, G. et al. (2006). OMax Brothers: a Dynamic Topology of Agents for Improvization

Learning. In: Proceedings of the First Workshop on Audio and Music Computing for

Multimedia (AMCMM’06). Santa Barbara, CA.

Aylesworth, G. (2012). Postmodernism. In: Zalta, E. N. ed. Stanford Encyclopedia of

Philosophy. Fall 2012 edn. [Online]. Available from:

http://plato.stanford.edu/archives/fall2012/entries/postmodernism/ [Accessed 13

September 2013].

Biles, J. A. (1999). Life with GenJam: Interacting with a musical IGA. In: Proceedings of the

1999 IEEE Conference on Systems, Man and Cybernetics, pp. 652–656.


Blackburn, S. (2008). Oxford Dictionary of Philosophy. 2nd edn rev. (app edn. version 1.0154). Oxford: Oxford University Press.

Blackwell, T. and Young, M. (2005). Live algorithms. In: AISB Quarterly, Vol. 122, pp. 7–9.

Blakemore, S.-J., Sarfati, Y., Bazin, N. and Decety, J. (2003). The detection of intentional

contingencies in simple animations in patients with delusions of persecution. In:

Psychological Medicine, 2003(33), pp. 1433–1441. New York: Cambridge University

Press.

Block, N. (1995). Holism. In: Guttenplan, S. ed. A Companion to the philosophy of mind.

Oxford: Wiley-Blackwell, pp. 329–330.

Bongers, B. (1998). An Interview with Sensorband. In: Keislar, D. ed. Computer Music

Journal. Vol. 22(1), pp. 13–24.

Bongers, B. (2000). Physical Interfaces in the Electronic Arts—Interaction Theory and

Interfacing Techniques for Real-Time Performance. In: Wanderley, M. M. and

Battier, M. eds. Trends in Gestural Control of Music. Paris: IRCAM-Centre Pompidou.

Bongers, B. (2007). Electronic Musical Instruments: Experiences of a New Luthier. In:

Leonardo Music Journal. Vol. 17, pp. 9–16. Cambridge, MA: The MIT Press.

Bowman, H. et al. (2012). Emotions, Salience Sensitive Control of Human Attention and

Computational Modelling. Howard Bowman's home page. EPSRC [Online]. Available

from: http://www.cs.kent.ac.uk/people/staff/hb5/attention.html [Accessed 13

September 2013].

Bown, O. and Martin, A. (2012). Autonomy in Music-Generating Systems. In: 1st

International Workshop on Musical Metacreation. Palo Alto, California (2012).

Brown, C. and Bischoff, J. (2002). Indigenous to the Net: Early Network Music Bands in the

San Francisco Bay Area. Available from:

http://crossfade.walkerart.org/brownbischoff/IndigenoustotheNetPrint.html

[Accessed 13 September 2013].


Buchla, D. (1998). 100 series Modular Electronic Music System [Online]. Available from:

http://www.buchla.com/historical/b100/index.html [Accessed 13 September 2013].

Bunnin, N. and Yu, J. (2004). The Blackwell Dictionary of Western Philosophy. Oxford:

Blackwell Publishing Ltd.

Cage, J. (ca. 1951). 4′33″. Handwritten revision of 1960 typed score version. Edn. Peters, cat.

6777. New York: Henmar Press Inc.

Cage, J. (1960a). Cartridge Music. Edn. Peters, cat. 6703. New York: Henmar Press Inc.

Cage, J. (1960b). Imaginary Landscape No. 4. Edn. Peters, cat. 6718. New York: Henmar

Press Inc.

Cage, J. (2013). Variations V. In: The Complete John Cage Edition, Vol. 48 [DVD]. New York:

Mode Records.

Camurri, A., Mazzarino B. and Volpe, G. (2003). Analysis of Expressive Gesture: The EyesWeb

Expressive Gesture Processing Library. In: Gesture Workshop. Vol. 2915, pp. 460–

467. Springer Verlag.

Chadabe, J. (1997). Electric Sound: The Past and Promise of Electronic Music. Upper Saddle

River, NJ: Prentice Hall.

Chadabe, J. (2005). The Meaning of Interaction, a Public Talk Given at the Workshop in

Interactive Systems in Performance (Wisp). In: Proceedings of the 2005 HCSNet

Conference. Sydney: Macquarie University.

Chung, J. (1994). Hyperlisp [Downloadable Program]. MIT Media Laboratory. Available from:

http://free-compilers.sharnoff.org/TOOL/CommonLi-13.html [Accessed 13

September 2013].

Cowart, M. (2004). Embodied Cognition. In: Internet Encyclopedia of Philosophy [Online].

Available from: http://www.iep.utm.edu/embodcog/ [Accessed 13 September

2013].


Crano, W. (1995). Attitude strength and vested interest. In: Petty, R. E. and Krosnick, J. A.

eds. Attitude strength: Antecedents and Consequences. Mahwah, NJ: Erlbaum, pp.

131–158.

Csikszentmihalyi, M. (2007). Creativity Flow and the Psychology of Discovery and Invention.

EPub edn. Pymble, NSW: HarperCollins Publishers.

Csikszentmihalyi, M. (2008). Flow: The Psychology of Optimal Experience. The EPub edn.

Pymble, NSW: HarperCollins Publishers.

Cycling ’74 (2012). MAX [Downloadable program]. Version 6, Windows and OSX. Cycling ’74.

Available from: http://cycling74.com/ [Accessed 13 September 2013].

Cycling ’74 (n.d. a). jit.rgb2luma. Max 5 Help and Documentation [Online]. Available from:

http://www.cycling74.com/docs/max5/refpages/jit-ref/jit.rgb2luma.html [Accessed

13 September 2013].

Cycling ’74 (n.d. b). jit.slide. Max 5 Help and Documentation [Online]. Available from:

http://www.cycling74.com/docs/max5/refpages/jit-ref/jit.slide.html [Accessed 13

September 2013].

Daintith, J. and Wright, E. eds. (2008). A Dictionary of Computing 6th edn. (app edn. version

2.1.414). New York: Oxford University Press.

Dalmasso, G. (2005). Chi dice io: razionalità e nichilismo. Milan: Editoriale Jaca Book SpA.

Dannenberg, R. B. (1984). An On-Line Algorithm for Real-Time Accompaniment. In:

Proceedings of the 1984 International Computer Music Conference (ICMC–84). Paris:

International Computer Music Association, pp. 193–198.

De Kerckhove, D. (2009). McLuhan aujourd’hui. In: Semaine Internationale des Arts

Numériques & Alternatifs (SIANA 2009), 23–28 March. Evry, France.

Delalande, F. (1993). Le condotte musicali. Comportamenti e motivazioni del fare musica e

ascoltare musica. Bologna: Editrice Clueb.

Deleuze, G. (1994). Difference and Repetition. Trans. Patton, P. New York: Columbia University Press.


Derrida, J. (1991). Donner le temps: la fausse monnaie. Paris: Galilée.

Derrida, J. (1999). Donner la mort. Paris: Galilée.

Dominey P. F., Hoen M. and Inui T. (2006). A neurolinguistic model of grammatical

construction processing. In: Journal of Cognitive Neuroscience. Vol. 18(12), pp.

2088–2107.

Douglas, R. L. (1991). Formalizing an african-american aesthetic. In: New Art Examiner, June,

pp. 18–24.

Drummond, J. (2009). Understanding Interactive Systems. In: Landy, L. ed. Organised Sound. Vol. 14(2), pp. 124–133. New York: Cambridge University Press.

Duchamp, M. (1917). Fountain [Porcelain]. Urinal reoriented ninety degrees, placed on a stand, dated and signed under the alias “R. Mutt 1917”. Originally held at The Philadelphia Museum of Art, Philadelphia.

ECMA International (2011). Javascript [Scripting language]. Version ECMA-262 edn. 5.1

(June). Geneva, Switzerland: ECMA International.

Eigen, M. (1992). Steps towards Life: a Perspective on Evolution. Oxford: Oxford University

Press.

Emmeche, C. and Kull, K. (2011). Biosemiotics [Online]. Semiotics, Evolution, Energy, and

Development. Available from:

http://www.library.utoronto.ca/see/pages/biosemioticsdef.html [Accessed 13

September 2013].

Emmerson, S. (1995). Live Performance: How do you know it’s me you’re listening to? In:

Report No. 16: Report from an Electro-Acoustic Music Conference, pp. 9–15.

Stockholm: Royal Swedish Academy of Music.

Emmerson, S. (2000). ‘Losing touch?’: the human performer and electronics. In: Emmerson,

S. ed. Music, Electronic Media and Culture. Aldershot, UK: Ashgate Publishing

Limited. Burlington, USA: Ashgate Publishing Company.


Fondazione Eranos Cur. (1997) I Ching. Il Libro della Versatilità. Trans. Ritsema, R. Sabbadini,

S. A. In: Botto, O. ed. Classici delle Religioni. Le Religioni Orientali. Torino: Utet.

Gadamer, H.-G. (2004). Truth and Method. Trans. Weinsheimer J. and Marshall D. G. 2nd

edn. rev. London and New York: Sheed & Ward Ltd. and the Continuum Publishing

Group.

Gartland-Jones, A. (2003). MusicBlox: A real-time algorithmic composition system

incorporating a distributed interactive genetic algorithm. In: S. Cagnoni et al. eds.,

Applications of Evolutionary Computing. Vol. 2611 of Lecture Notes in Computer

Science, pp. 145–155. Berlin/Heidelberg: Springer.

Gresham-Lancaster, S. (1998). The Aesthetics and History of the Hub: The Effects of

Changing Technology on Network Computer Music. In: Leonardo Music Journal. Vol.

8, pp. 39–44. Cambridge, MA: The MIT Press.

Guttenplan, S. (1995a). Agency. In: Guttenplan, S. ed. A Companion to the philosophy of

mind. Oxford: Wiley-Blackwell, pp. 121–122.

Guttenplan, S. (1995b). Holism. In: Guttenplan, S. ed. A Companion to the philosophy of

mind. Oxford: Wiley-Blackwell, p. 347.

Hazell, C. (2009). Alterity: The Experience of the Other. Bloomington, IN: AuthorHouse, pp.

53–56.

Heidegger, M. (2002). Off the Beaten Track. Eds. and trans. Young, J. and Haynes, K. New

York: Cambridge University Press.

Hoffmeyer, J. (2008). Biosemiotics. An Examination into the Signs of Life and the Life of Signs

[Online]. Available from: http://jhoffmeyer.dk/One/books-in-english/biosemiotics-

an-examination/ [Accessed 13 September 2013].

Hofstadter, D. (1999). Gödel, Escher, Bach: an Eternal Golden Braid. New York: Basic Books,

Inc.

Hofstadter, D. (2007). I am a strange loop. New York: Basic Books, Inc.

Hopper, P. (1987). Emergent Grammar. In: Berkeley Linguistics Society. Vol. 13, pp. 139–157.


Hunt, A. and Kirk, R. (2000). Mapping Strategies for Musical Performance. In: Wanderley, M.

M. and Battier, M. eds. Trends in Gestural Control of Music. Paris: IRCAM-Centre

Pompidou.

Huxley, A. (2000). Brave New World Revisited. New York: HarperPerennial.

Illinois Distributed Museum (n.d. a). Illiac. University of Illinois [Online]. Available from:

http://distributedmuseum.blogspot.it/p/illiac.html [Accessed 13 September 2013].

Illinois Distributed Museum (n.d. b). SalMar Construction. University of Illinois [Online].

Available from: http://distributedmuseum.blogspot.it/p/sal-mar-

construction_1.html [Accessed 13 September 2013].

Impett, J. (2001). Interaction, Simulation and Invention: a Model for interactive music. In:

Bilotta, E., Miranda, E. R., Pantano, P. and Todd, P. M. eds. Artificial Life Models for

Music Applications, pp. 108–119.

INA-GRM (2004). GRM tools [Downloadable program]. Version 2, Windows and OSX. INA-

GRM. Available from: http://www.inagrm.com/categories/anciennes-versions-

older-versions/ [Accessed 13 September 2013].

Iversen, M. (2004). Readymade, Found Object, Photograph. In: Art Journal. Vol.

63(2)(Summer), pp. 44–57.

Jaeger, H. (2007). Echo state network. In: Scholarpedia. Vol. 2(9), p. 2330.

doi:10.4249/scholarpedia.2330 revision #125662 [Online]. Available from:

http://www.scholarpedia.org/article/Echo_state_network [Accessed 13 September

2013].

Jones, B., Stekel, D., Rowe, J. and Fernando, C. (2007). Is there a Liquid State Machine in the

Bacterium Escherichia Coli? In: The First IEEE Symposium on Artificial Life. IEEE-

ALife’07, April 1–5, 2007. Honolulu, Hawaii, USA, pp. 187–191.

Jordà, S. (2005). Digital Lutherie: Crafting Musical Computers for New Musics’ Performance

and Improvisation. Barcelona: Universitat Pompeu Fabra. PhD Thesis.


Jourdan, E. (2008). ej.function [Downloadable program]. Version 3.0b1, Windows and OSX.

Available from: http://www.e--j.com/ [Accessed 13 September 2013].

Kaylo, J. (2007). The History of Movement Pattern Analysis. Laban/Bartenieff and Somatic

Studies International, Vancouver [Online]. Available from:

http://www.labancan.org/articles/MPA.pdf [Accessed 13 September 2013].

Kim, J. H. and Seifert, U. (2007). Embodiment and Agency: Towards an Aesthetics of

Interactive Performativity. In: Spyridis, C., et al. eds. Proceedings of the 2007 Sound

and Music Computing Conference. Lefkada, Greece. Athens: National and

Kapodistrian University of Athens.

Koyré, A. (2000). Dal mondo del pressappoco all’universo della precisione. Trad. Zambrelli, P.

Torino: Piccola Biblioteca Einaudi NS.

Iazzetta, F. (2000). Meaning in Musical Gesture. In: Wanderley, M. M. and Battier, M. eds.

Trends in Gestural Control of Music. Paris: IRCAM-Centre Pompidou.

Layton, E. T. (1974). Technology as Knowledge. In: Technology & Culture. Vol. 15(1)(January),

pp. 31–41.

Levinas, E. (1998). Otherwise Than Being or Beyond Essence. Trans. Lingis, A. Pittsburgh, PA:

Duquesne University Press.

Lévy, B., Bloch, G. and Assayag, G. (2012): OMaxist Dialectics: Capturing, Visualizing and

Expanding Improvisations. In: Proceedings of the 2012 Conference on New Interfaces

for Musical Expression (NIME 2012). Ann Arbor: University of Michigan.

Lewis, G. E. (1992). Voyager. [CD audio]. Cat. 014. Tokyo: Disk Union-Avan.

Lewis, G. E. (2000a). Endless Shout [CD audio]. Cat. 7054. New York: Tzadik.

Lewis, G. E. (2000b). Too Many Notes: Complexity and Culture in Voyager. In: Leonardo

Music Journal. Vol. 10, pp. 33–39. Cambridge, MA: The MIT Press.

Lippe, C. (1993). A Composition for Clarinet and Real-Time Signal Processing: Using Max on

the IRCAM Signal Processing Workstation. In: Proceedings of the 1993 10th Italian

Colloquium on Computer Music, pp. 428–432. Milan.


Luther, R. (2011). Chronology 1953-1993 [Online]. Moog Archives. Available from:

http://moogarchives.com [Accessed 13 September 2013].

Machover, T. (1992a). HyperInstruments: A Composer’s Approach to the Evolution of

Intelligent Musical Instruments. In: Jacobson, L. ed. Cyberarts: Exploring Art and

Technology, pp. 67–76. San Francisco: Miller Freeman Inc.

Machover, T. (1992b). Hyperinstruments: A Progress Report, 1987–1991. MIT Media

Laboratory, Massachusetts Institute of Technology.

Machover, T. (2003). Begin Again Again… [CD audio]. In: Hyperstring Trilogy. Cat. 2003. Port

Washington, NY: Oxingale Records.

Machover, T. and Chung, J. (1989). Hyperinstruments: Musically Intelligent and Interactive

Performance and Creativity Systems. In: Proceedings of the International Computer

Music Conference (ICMC). pp. 186–190.

Malpas, J. (2009). Hans-Georg Gadamer. In: Zalta, E. N. ed. The Stanford Encyclopedia of

Philosophy. Summer 2009 edn. [Online]. Available from:

http://plato.stanford.edu/archives/sum2009/entries/gadamer/ [Accessed 13

September 2013].

Marconi, L. (2010). Gino Stefani e la teoria musicale del futuro [Online]. Available from: http://www.musicheria.net [Accessed 13 September 2013].

Mathews, M. V. and Abbott, C. (1980). The Sequential Drum. In: Keislar, D. ed. Computer Music Journal. Vol. 4(4), pp. 45–59.

Mathews, M. V. and Roads, C. (1980). Interview with Max Mathews. In: Keislar, D. ed. Computer Music Journal. Vol. 4(4), pp. 15–22.

Martin, G. A., Daly, J. and Thurston, C. (2005). Interaction within Multimodal Environments

in a Collaborative Setting. In: First International Conference on Virtual Reality, 22–27

July 2005. Las Vegas, Nevada.

Mayr, E. (1998). This Is Biology: The Science of the Living World. Harvard: Harvard University

Press.


McLuhan, M. (1965). Understanding Media: The Extensions of Man. New York: McGraw-Hill.

Miranda, E. R. and Wanderley, M. (2006). New Digital Musical Instruments: Control and

Interaction Beyond the Keyboard. Middleton, WI: A-R Editions.

Morris, D. and Fiebrink, R. (2012). Using machine learning to support pedagogy in the arts.

In: Thomas, P. ed. Personal and Ubiquitous Computing, April 2012, doi:

10.1007/s00779-012-0526-1.

Mumma, G. (2012a). Medium Size Mograph [CD audio]. In: Live-Electronic Music. Cat. No.

7074. New York: Tzadik.

Mumma, G. (2012b). Hornpipe. In: Live-Electronic Music [CD audio]. Cat. No. 7074. New

York: Tzadik.

Nagel, T. (1974). What is it like to be a bat? In: Philosophical Review Vol. 83(October), pp.

435–450.

Napoli, M. et al. (2004). Musica come cognizione: Rapporto finale sulla ricerca longitudinale

dedicata allo studio delle acquisizioni di abilità musicali e patterns cognitivi generali

in età evolutiva. Florence: Firenze University Press.

Nyman, M. (1999). Experimental Music: Cage and Beyond. 2nd edn. New York: Cambridge

University Press.

Oracle Corporation (2012) MySQL Community Server [Downloadable program]. Version

5.5.27, Windows. Oracle Corporation. Available from:

http://dev.mysql.com/downloads/ [Accessed 13 September 2013].

Pachet, F. (1999). Continuator [Computer-Based Musical Instrument]. Paris: Sony CSL Paris.

Pachet, F. (2002). The Continuator: Musical Interaction with Style. In: Proceedings of ICMC,

Göteborg, Sweden, September: ICMA, pp. 211–218.

Pachet, F. (2008). The future of content is in ourselves. In: Cheok, A. D. Inakage, M. and Lee,

N. eds. Computers in Entertainment (CIE). Vol. 6(3) (October), article No. 13.


Pachet, F. and Addessi, A.-R. (2004). When Children Reflect on Their Playing Style: The

Continuator. In: Cheok, A. D. Inakage, M. and Lee, N. eds. Computers in

Entertainment (CIE). Vol. 1(2), pp. 14–14.

Paine, G. (1999). MAP2 [Installation]. Musical Instrument Museum, Berlin, December 1999–

January 2000.

Paine, G. (2000a). Gestation [Installation]. RMIT Gallery. Melbourne, December.

Paine, G. (2000b). Reeds [Installation]. Melbourne International Festival, October–

November.

Paine, G. (2002). Interactivity, where to from here? In: Landy, L. ed. Organised Sound. Vol.

7(3), pp. 295–304.

Paine, G. (2004a) Endangered Sounds [Installation]. Biennale of Electronic Arts Perth (BEAP).

Perth, Australia.

Paine, G. (2004b). Gesture and Musical Interaction: Interactive Engagement through

Dynamic Morphology. In: Proceedings of the 2004 Conference on New Interfaces for

Musical Expression (NIME 2004). Hamamatsu, Japan. Singapore: National University

of Singapore, pp. 80–86.

Paolizzo, F. (2001a). Le vacche sacre non pascolano qui [.mp3]. Available from:

http://www.fabiopaolizzo.com/holycows/index.html [Accessed 13 September

2013].

Paolizzo, F. (2001b). Diana e la Tuda [.mp3]. Available from:

http://www.fabiopaolizzo.com/dianatuda/index.html [Accessed 13 September

2013].

Paolizzo, F. (2004). Identità sospesa [.mp3]. Available from:

http://www.fabiopaolizzo.com/suspendedidentity/index.html [Accessed 13

September 2013].

Paolizzo, F. (2006). Musica e interazione. Dipartimento di Arti Musica e Spettacolo (DAMS).

Rome: University of Rome “Tor Vergata”. Master Thesis.


Paolizzo, F. (2008). La caduta della casa Usher [.mp4]. Available from:

http://www.fabiopaolizzo.com/vivousher/index.html [Accessed 13 September

2013].

Paolizzo, F. (2008). Studio1 [.mp3]. Available from:

http://www.fabiopaolizzo.com/studio1/index.html [Accessed 13 September 2013].

Paolizzo, F. (2009a). Grade Zero [.mp3]. Available from:

http://www.fabiopaolizzo.com/gradezero/index.html [Accessed 13 September

2013].

Paolizzo, F. (2009b). Climax [.mp3]. Available from: http://www.fabiopaolizzo.com/climax/index.html

[Accessed 13 September 2013].

Paolizzo, F. (2009c). Broken Age [.mp3]. Available from:

http://www.fabiopaolizzo.com/brokenage/index.html [Accessed 13 September

2013].

Paolizzo, F. (2009d). Enchained [.mp3]. Available from:

http://www.fabiopaolizzo.com/enchained/index.html [Accessed 13 September

2013].

Paolizzo, F. (2010a). VIVO (Video Interactive VST Orchestra): An Interactive and Adaptive

Musical Instrument for Self-reflection in Music. In: The International Journal of the

Arts in Society. Vol. 4(6), pp. 149–159.

Paolizzo, F. (2010b). VIVO (Video Interactive VST Orchestra) and the Aesthetics of

Interaction. In: Wolf, M. and Hill, A. eds. Proceedings of Sound, Sight, Space and Play

2010 (SSSP2010). Leicester, UK: De Montfort University, June. Available from:

http://www.dmu.ac.uk/documents/art-design-and-humanities-

documents/research/mtirc/sssp201010fpaolizzo.pdf [Accessed 13 September

2013].

Paolizzo, F. (2011). Wakeful software and wakeful musical instruments: a theoretical

approach to the implementation. In: Proceedings of the 17th International

Symposium on Electronic Art (ISEA 2011). Istanbul, Turkey, September [Online].


Available from: http://isea2011.sabanciuniv.edu/.paolizzo/ [Accessed 13 September

2013].

Paolizzo, F. and Genova, E. (2009). Reazione di prossimità [Public event]. Rome: Le cinque

giornate di Roma. Available from:

http://www.fabiopaolizzo.com/proximityreaction/index.html [Accessed 13

September 2013].

Paolizzo, F., Ventucci, S. and Vignone, M. (2004). I-nteract Ching [Workshop and

performance]. Dipartimento di Arti Musica e Spettacolo (DAMS). Rome: University

of Rome “Tor Vergata”.

Peirce, C. S. (1991). Peirce on Signs: Writings on Semiotic By Charles Sanders Peirce. Chapel

Hill, NC: UNC Press Books.

Pelletier, J.-M. (2010). cv.jit.sum, cv.jit.mean [Downloadable program]. In: cv.jit. Version 1.7,

Windows and OSX. Available from: http://jmpelletier.com/cvjit/ [Accessed 13

September 2013].

Perkis, T. (1999). The Hub, an Article Written for Electronic Musician Magazine.

Available from: http://www.perkis.com/wpc/w_hubem.html [Accessed 13 September 2013].

Picasso, P. (1912). Still Life with Chair Caning [Oil and oilcloth on canvas, with rope frame].

Held at the Musée Picasso, Paris. 27cm x 35 cm.

Puckette, M. and Lippe, C. (1992). Score Following in Practice. In: Proceedings of the 1992

International Computer Music Conference (ICMC92). San Francisco: International

Computer Music Association, pp. 182–185.

Ramaprasad, A. (1983). On The Definition of Feedback. In: Behavioral Science. Vol. 28(1), pp.

4–13.

Ramberg, B. and Gjesdal, K. (2005). Hermeneutics. In: Zalta, E. N. ed. The Stanford

Encyclopedia of Philosophy. Summer 2009 edn. [Online]. Available from:

http://plato.stanford.edu/archives/sum2009/entries/hermeneutics/ [Accessed 13

September 2013].


Richard, D. (2000). Holzwege on Mount Fuji: a doctrine of no-aesthetics for computer and

electroacoustic music. In: Landy, L. ed. Organised Sound. Vol. 5(3), pp. 127–133.

Rokeby, D. (n.d.) David Rokeby: Works [Online]. Available from:

http://www.davidrokeby.com/installations.html [Accessed 13 September 2013].

Rokeby, D. (1990). Very Nervous System [Installation]. First exhibited in 1986 at: Arte, Technologia e Informatica. Venice Biennale, Venice, Italy.

Rosenboom, D. (1976). Homuncular Homophony. In: Rosenboom, D. ed. Biofeedback and the Arts. Results of Early Experiments. Vancouver: Aesthetic Research Centre of Canada.

Rosenboom, D. (2006). Brainwave Music [CD audio]. Cat. No. EM1054. Osaka, Japan: EM Records.

Rothwell, J. D. (2012). In Mixed Company: Communicating in Small Groups. Independence, KY: Cengage Learning.

Sanders, P. (2012). Liveness in Modern Music: Musicians, Technology, and the Perception of

Performance. New York: Routledge.

Sanfilippo, D. (n.d.). Dario Sanfilippo: Projects [Online]. Available from:

http://dariosanfilippo.tumblr.com/ [Accessed 13 September 2013].

Sanfilippo, D. (2012). LIES (distance/incidence) 1.0: a human-machine interaction

performance [Poster]. In: Proceedings of the 19th Colloquium of Musical Informatics

(XIX CIM). Trieste, pp. 21–24.

Saussure, F. de (1983). Course in General Linguistics. Trans. Roy Harris. London: Duckworth.

Schiemer, G. (1999). Improvising Machines: Spectral Dance and Token Objects. In: Leonardo

Music Journal. Vol. 9(1), pp. 107–14. Cambridge, MA: The MIT Press.

Schueller, H. M. (1957). Schelling's Theory of the Metaphysics of Music. In: Feagin, S. L. ed.

The Journal of Aesthetics and Art Criticism. Vol. 15(4), pp. 461–476.


Seidler, K. “Oswald” (2012). XAMPP [Downloadable program]. Version 1.8, Windows and

OSX. Apache Friends. Available from:

http://www.apachefriends.org/en/xampp.html [Accessed 13 September 2013].

Shernoff, D. J., Csikszentmihalyi, M., Schneider, B. and Shernoff, E. S. (2003). Student

Engagement in High School Classrooms from the Perspective of Flow Theory. In:

Kamphaus, R. W. ed. School Psychology Quarterly. Vol. 18(2), pp. 158–176.

Smalley, D. (1992). The listening imagination: listening in the electroacoustic era. In:

Paynter, J. et al. eds. Companion to Contemporary Musical Thought. Vol. 2, pp. 514–

554. London: Routledge.

Smalley, D. (1997). Spectromorphology: explaining sound-shapes. In: Landy, L. ed. Organized

sound. Vol. 2(2), pp. 107–126.

Smith, A. (2000). Oxford Dictionary of Biochemistry and Molecular Biology. Rev. edn. New

York: Oxford University Press.

Solomon, L. (2002). John Cage and 4′33″. Solomon’s Music Resources [Online]. Available

from: http://solomonsmusic.net/4min33se.htm [Accessed 13 September 2013].

Spiegel, L. (1987). A Short History of Intelligent Instruments. In: Keislar, D. ed. Computer

Music Journal Vol. 11(3), pp. 7–9.

Spiegel, L. (1992). Performing with Active Instruments—an Alternative to a Standard

Taxonomy for Electronic and Computer Instruments. In: Keislar, D. ed. Computer

Music Journal. Vol. 16(3), pp. 5–6.

SplitmediaLabs Ltd. (2010). VH Screen Capture Driver (VHSCD) [Downloadable program].

Version, 3.0.0.5, Windows. SplitmediaLabs Ltd. Available from:

http://www.splitmedialabs.com/download/ [Accessed 13 September 2013].

Steels, L. and Spranger, M. (2008). The Robot in the Mirror. In: Connection Science [Online].

Vol. 20(4), pp. 337–358. Available from:

http://www.tandfonline.com/doi/abs/10.1080/09540090802413186 [Accessed 13

September 2013].


Stefani, G. (1977). Insegnare la musica. Proposte di animazione e didattica. Florence:

Guaraldi.

Stefani, G. (1979). L’ascolto musicale. In: Stefani, G., Tafuri J. and Spaccazocchi, M. eds.

Educazione musicale di base. Brescia: La Scuola, pp. 133–157.

Stefani, G. (1993a). La parola all’ascolto, Progetto Uomo-Musica (3), pp.16–23. Now in:

Stefani, G. (2000). La parola all’ascolto. Bologna: Clueb, pp.179–190.

Stefani, G. (1993b). La parola all’ascolto. 2. Dalle risposte alla musica. In: Progetto Uomo-

Musica (4), pp. 11–19. Reprinted in: Stefani, G. (2000). La parola all’ascolto.

Bologna: Clueb, pp. 190–204.

Stefani, G. (1994). La parola all’ascolto. 3. Le condotte. In: Progetto Uomo-Musica (5), pp.

11–17. Reprinted in: Stefani, G. (2000). La parola all’ascolto. Bologna: Clueb, pp.

204–214.

Stefani, G. (1998). Musica. Dall’esperienza alla teoria. Milan: Ricordi.

Stefani, G. (2000). La parola all’ascolto. Bologna: Clueb.

Steinberg (2006). Steinberg releases VST 2.4 standard with new features. Steinberg Media

Technologies GmbH [Online]. Available from:

http://www.steinberg.net/index.php?id=334&L=1 [Accessed 13 September 2013].

Stockhausen, K. (1954) Studie II. Work No. 3/II. Kettenberg, Germany: Stockhausen-Verlag.

Stockhausen, K. (1965). Mikrophonie II. Work No. 17. Vienna: Universal Edition.

Tanaka, A. and Bongers, B. (2001). Global String: A Musical Instrument for Hybrid Space. In:

Fleischmann, M. and Strauss, W. eds. Proceedings: Cast01 // Living in Mixed

Realities. Schloss Birlinghoven, pp. 177–181.

Tarabella, L. (2004). Handel, a Free-Hands Gesture Recognition System. In: Proceedings of

the 2004 Second International Symposium Computer Music Modeling and Retrieval

(CMMR 2004). Esbjerg, Denmark: Springer Berlin/Heidelberg, pp. 139–148.


Tarabella, L. (2007). Gesture touchless live computer music. In: eNTERFACE’07. Istanbul:

Boğaziçi University. Available from:

http://www.docstoc.com/docs/96828574/Gesture-touchless-live-computer-music/

[Accessed 13 September 2013].

Tarabella, L., Boschi, G. and Bertini, G. (2001). Gesture, mapping and audience in live

computer music. Pisa: ISTI-CNR. Available from:

http://puma.isti.cnr.it/dfdownload.php?ident=/cnr.iei/2001-TR-033 [Accessed 13

September 2013].

Tarabella, L., Magrini, M. and Scapellato, G. (1997). Devices for Interactive Computer Music

and Computer Graphics Performances. In: IEEE First Workshop on Multimedia Signal

Processing. Princeton, New Jersey, pp. 65–70.

The PHP Group (2012). PHP [Downloadable program]. Version 5.4.6, Windows. The PHP

Group. Available from: http://www.php.net/ [Accessed 13 September 2013].

Tudor, D. (1998). Rainforest IV. In: Rainforest [CD audio]. New York: Mode 64.

Turing, A. (1950). Computing machinery and intelligence. In: Ryle, G. ed. MIND: A Quarterly

Review of Psychology and Philosophy. Vol. LIX, N.S. (236)(October), pp. 433–460.

Ustream Inc. (2011). Ustream.tv [Online]. Available from: http://www.ustream.tv/

[Accessed 13 September 2013].

Vaggione, H. (2001). Some Ontological Remarks about Music Composition Processes. In:

Keislar, D. ed. Computer Music Journal. Vol. 25(1), pp. 54–61.

Van Gulick, R. (2004). Consciousness. In: Zalta, E. N. ed. The Stanford Encyclopedia of

Philosophy. Summer 2011 edn. [Online]. Available from:

http://plato.stanford.edu/archives/sum2011/entries/consciousness/ [Accessed 13

September 2013].

Van Tonder, C. (2004). Music composition and performance in interactive computer/human

systems. School of Arts. Johannesburg: University of the Witwatersrand. Research

essay.


Vercoe, B. (1984). The Synthetic Performer in the Context of Live Performance. In:

Proceedings of the 1984 International Computer Music Conference (ICMC-84). Paris:

International Computer Music Association, pp. 199–200.

Voltan, A. (2006). Gli strumenti dell’interazione—Incontro fra la ‘bio-logica’ e la ‘new-

techno-logica’ [Online]. Available from:

http://org.noemalab.eu/sections/ideas/ideas_articles/voltan.html [Accessed 13

September 2013].

Weil, B. (1999). The artist and the Internet: a new form of exchange with the public. In: Art

and Society, pp. 93–97. Paris: United Nations Educational, Scientific and Cultural

Organization [Online]. Available from:

http://unesdoc.unesco.org/images/0011/001171/117110mo.pdf [Accessed 13

September 2013].

Weinberg, G. (2005). Interconnected Musical Networks: Toward a Theoretical Framework.

In: Keislar, D. ed. Computer Music Journal. Vol. 29(2), pp. 23–39.

Wijnans, S. (2010). The Moving Body as a Spatial Sound Generating Instrument: Defining the

Three Dimensional Data Interpreting Methodology (3DIM). School of Music and

Performing Arts, Bath Spa University. Bristol: University of the West of England. PhD

Thesis. Available from: http://www.mudanx.nl/PhD/2.2.5.html [Accessed 13

September 2013].

Wiley, D. (1998). OpenContent License (OPL) [Online]. Available from:

http://www.opencontent.org/opl.shtml [Accessed 13 September 2013].

Winkler, T. (1999). Composing interactive music: Techniques and Ideas Using Max.

Cambridge MA: The MIT Press.

Wishart, T. (1996). On Sonic Art. Amsterdam: Harwood Academic Publishers GmbH.

Witzany, G. (2010). Biocommunication and Natural Genome Editing. Dordrecht: Springer

Netherlands, pp. 1–26.

Witzany, G. and Baluška, F. eds. (2012). Biocommunication of Plants. Springer Verlag.


YouTube LLC (2012). YouTube [Online]. Available from: http://www.youtube.com/ [Accessed

13 September 2013].

Zend Technologies USA (2012). Zend Framework [Downloadable program]. Version 1.11.12,

Windows. Zend Technologies USA. Available from:

http://www.zend.com/community/downloads/ [Accessed 13 September 2013].

Zicarelli, D. (1987). M and Jam Factory. In: Keislar, D. ed. Computer Music Journal. Vol. 11(4),

pp. 13–29.