Phys-Osc: An Interactive Physics-based environment that controls Sound Synthesis

Jason Doyle

10048464

Faculty of Science and Engineering

Department of Computer Science & Information Systems

University of Limerick

BSc in Music Media and Performance Technology

Submitted on 17th April 2014

1. Supervisor: Dr. Kerry Hagan

Department of Computer Science & Information Systems

University of Limerick

Ireland

Supervisor’s signature:

2. Second Reader: Mr. Nicholas Ward

Department of Computer Science & Information Systems

University of Limerick

Ireland

Second Reader’s signature:


Abstract

The project's aim is to find new mediums for controlling sound synthesis, transforming the desktop paradigm of point and click. It aims to provide sonification alongside visualization of systems commonly seen in nature, such as the flocking behaviour of birds. Implementation of these systems allows for the acquisition of control data, which is sent to the host granular synthesis engine over the Open Sound Control (OSC) network protocol, creating a dynamic audiovisual experience for the user.

Declaration

I herewith declare that I have produced this paper without the prohibited assistance of third parties and without making use of aids other than those specified; notions taken over directly or indirectly from other sources have been identified as such. This paper has not previously been presented in identical or similar form to any other Irish or foreign examination board.

The thesis work was conducted under the supervision of Dr. Kerry Hagan at University of Limerick.

Limerick, 2014

Acknowledgements

I wish to acknowledge the knowledge and guidance provided by my advisor Dr. Kerry Hagan throughout the development of this project. I would also like to thank the faculty of the Computer Science department for their time and assistance throughout the course of my studies. I also wish to acknowledge my colleagues in the MMPT class of 2014 for their assistance and encouragement throughout.

For my parents Michael and Loretta, my sister Lisa, and my partner Sara for their support and encouragement throughout the course of my studies.

Contents

List of Figures

1 Introduction
1.1 Background
1.2 Motivation
1.3 Thesis Outline

2 Research
2.1 The Nature of Code & Indeterministic Algorithms
2.2 Granular Synthesis
2.3 Mapping
2.4 Existing Products
2.4.1 Konkreet Performer
2.4.2 GBoids

3 Overview of Technology
3.1 Android Development Tools
3.2 Processing
3.2.1 oscP5
3.2.2 controlP5
3.2.3 Ketai
3.2.4 APWidgets
3.3 Max MSP
3.4 CSound

4 Implementation
4.1 Early Android Prototypes
4.1.1 Prototype 1
4.1.2 Prototype 2
4.2 Final Android Implementation
4.3 Audio Engine Implementation
4.3.1 Max MSP
4.3.2 CSound
4.4 Mapping
4.4.1 Strategies

5 Conclusions and Future Directions
5.1 Summary
5.2 Conclusions
5.3 Future Work

Appendix A: Application Screenshots
Appendix B: Java Implementation of the Boids Algorithm
Appendix C: Max MSP GUI
Appendix D: Granular Implementation in CSound

References

List of Figures

2.1 Konkreet Performer
2.2 GBoids Max MSP Interface
4.1 Android OSC Settings
4.2 Boids Visualisation
1 Nexus 7 Tablet
2 Nexus 4 Phone
3 Max MSP Router

1 Introduction

The project investigates the application of the Boids algorithm to sound synthesis. The implementation of Boids used is Daniel Shiffman's Processing port of Craig Reynolds's original algorithm. In this report I describe the process of developing an Android application that sends Boid location and accelerometer data over Open Sound Control (OSC) to a host audio synthesizer constructed in Max and CSound. The audio synthesizer uses synchronous granular synthesis techniques to granulate sampled sound. Synthesis parameters are updated in real time across a wireless network, creating audiovisual synchrony between the visual and audio algorithms.

1.1 Background

Mobile processors have come a long way since the early days of the Android platform, but some problems still exist. Peter Kirn (2013) discussed Android latency in depth, stating that "Google has deserved some criticism. Years of subsequent updates saw the company remain silent or unresponsive about critical audio issues." The critical audio issues Kirn discusses relate mainly to latency. The Dalvik runtime that Android employs has threading issues when it comes to audio performance. While this has certainly improved over time, it still has a long way to go compared to the iOS model.

LibPD seemed like a good place to start, rather than dealing with the complexities of building an audio system that, even when fine-tuned, may still have issues with latency. After testing some basic functionality from Peter Brinkman's book Making Musical Apps (Brinkman 2012), it was decided that implementations of granular synthesis would require too much polyphony to operate efficiently as an Android application. A simple wavetable, additive, FM, or AM synthesis method can easily be built, because polyphony is not really a problem when using two to five voices at any one time. Problems would most certainly arise when using one hundred instances of the patch, which would be a requirement for an effective granular synthesis patch.

Upon consultation with my advisor it was decided that a more effective solution was to create a granular synthesis patch in Max and use the Open Sound Control protocol to send data from the mobile device to the granular engine. Discussions also took place as to suitable visual algorithms to drive the granular engine. It was decided that the Boids algorithm would be a good match, since it can generate the large amount of dynamic data that a granular implementation requires.

The Boids algorithm was created by Craig Reynolds in 1987; it sought to simulate systems found in nature, such as the behaviour and flocking patterns of birds and shoals of fish. Reynolds (1987) based the algorithm on three-dimensional computational geometry of the sort normally used in animation. Each Boid uses three basic forces, Separation, Cohesion, and Alignment, to define its position in a three-dimensional space. The flocking animation that occurs shifts dynamically across the screen, updating each Boid's location. The algorithm used here is an implementation of Daniel Shiffman's Boids algorithm (Foundation 2013), itself an implementation of the Reynolds (1987) model, and it provides the visual feedback component of the application. Shiffman's implementation was modified to make it more suitable for a mobile device's screen size and to supply accelerometer data alongside the data being gathered from the Boids flocking pattern.
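As a sketch of how these three forces interact, the following self-contained Java fragment (illustrative only, not Shiffman's or the project's code; the field names and force weights are assumptions) accumulates Separation, Alignment, and Cohesion contributions from the rest of the flock and integrates them into a Boid's position:

import java.util.List;

class Boid {
    float x, y, vx, vy; // position and velocity

    Boid(float x, float y, float vx, float vy) {
        this.x = x; this.y = y; this.vx = vx; this.vy = vy;
    }

    // One simulation step: sum the three steering forces, then move.
    void step(List<Boid> flock, float sepW, float aliW, float cohW) {
        float sepX = 0, sepY = 0; // Separation: push away from neighbours
        float aliX = 0, aliY = 0; // Alignment: match neighbours' velocities
        float cohX = 0, cohY = 0; // Cohesion: pull towards the flock centre
        int n = 0;
        for (Boid other : flock) {
            if (other == this) continue;
            sepX += x - other.x;  sepY += y - other.y;
            aliX += other.vx;     aliY += other.vy;
            cohX += other.x;      cohY += other.y;
            n++;
        }
        if (n > 0) {
            vx += sepW * sepX / n + aliW * aliX / n + cohW * (cohX / n - x);
            vy += sepW * sepY / n + aliW * aliY / n + cohW * (cohY / n - y);
        }
        x += vx; // integrate position from the updated velocity
        y += vy;
    }
}

Each weight corresponds to one of the Cohesion, Separation, and Alignment sliders exposed later in the application's GUI, which is what makes a one-to-many control scheme possible.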

1.2 Motivation

There were numerous motivations to build this product, the first being that this type of product does not exist on the Android platform. There are many products that use the OSC protocol, such as TouchOSC, TouchDAW, Control, QuickOSC, and Kontrolleur, but none of them offer the use of indeterministic algorithms to generate the control data sent over a wireless network. The aforementioned programs merely respond to touch activities in the interface and send that data to the host computer. PhysOsc generates its own control data, which can then be tailored or modified to the user's needs. The idea of continuous sonic and visual variation also appealed to me. Granular synthesis provides a very rich timbral backdrop to the algorithm's visual complexities, and its synthesis parameters benefit from the wide range of control data that an algorithm like Boids can provide.

Implementing a higher-level control scheme such as Boids shifts the focus from data creation to controlling geometric shapes. I wanted to create an interface that allowed the user's attention to be focused on the animation's movements and their correlation with the sonic output. Most musical interfaces concentrate on one-to-one mappings, whereby the user touches the screen and creates a direct response in the interface that usually changes one parameter at a time.

The project was born from the idea that the equal temperament scale coupled with one-to-one mappings does not provide adequate control over the many parameters a granular synthesis engine needs. Higher-level control of musical data is not a new concept, having been explored in the serialist works of Karlheinz Stockhausen and the stochastic works of Iannis Xenakis. Although different in concept, the principle stays the same: leave the creation of musical data to a pre-defined set of rules.

Another motivation to build this product was the use of audiovisual synchrony through the algorithm. The application displays the visual feedback of the algorithm on a mobile device whilst the sonification occurs through Max MSP on a desktop computer. Keeping the visualization and sonification algorithms separate holds many advantages, in that the user can use this software to control any program that supports the OSC protocol, including Reaktor, Pure Data, SuperCollider, CSound, IanniX, Processing, and vvvv. The number of such programs increases by the month, with OSC becoming a standard for communication in audio and visual programs since its inception in 2001.


1.3 Thesis Outline

The remaining chapters of this dissertation are as follows:

Chapter Two: Research
Chapter Three: Overview of Technology
Chapter Four: Implementation
Chapter Five: Conclusions and Future Directions

A series of documents have been included in the Appendix section of this dissertation. These are:

• Appendix A includes application screenshots from an Android phone and tablet

• Appendix B presents the Java code from the Android application

• Appendix C includes application screenshots from the Max MSP GUI

• Appendix D presents the granular synthesis implementation in CSound

Attached to this dissertation is a CD containing the following items:

• Folder 1: Android Application

• Folder 2: Max MSP and CSound files


2 Research

In the four years since our college course commenced, mobile technologies have become an integral part of everyday life, and these devices offer huge potential for further development. With that in mind, research was carried out on many different libraries and tools with a view to constructing an audiovisual application. During this period many tools were road-tested through tutorials. The development tools in question include Processing, the Android SDK, Max MSP, PhoneGap, and many web technologies such as the Web Audio API. These cross-platform tools allow for rapid prototyping of applications, which can then be scaled up in a professional-grade IDE such as Android's Eclipse or Apple's Xcode.

The end goal of this period was to take the knowledge learned from the course, alongside extracurricular activities, and apply it to mobile technology development. Research began on the audio capabilities of the current crop of mobile devices in the summer of 2013. LibPD, CSound, and Processing offer cross-platform visual and audio capabilities. All three are ideal prototyping platforms, as the same code works on both desktop and mobile, which makes them very valuable when working with multiple platforms and hardware devices.

2.1 The Nature of Code & Indeterministic Algorithms

Coding the systems seen in nature with algorithms has been an area of particular interest to me throughout my studies. Daniel Shiffman calls this The Nature of Code, and he has been one of the leading proponents of this type of programming over the last decade. His self-published book of the same name (Shiffman 2012) details ways to simulate the physical world with the Processing programming library, covering many aspects of natural systems recreated through computational media. Of particular interest to me is his work with physics libraries such as toxiclibs and Box2D; there are also many excellent chapters on forces, fractals, particle systems, and genetic algorithms. I planned to use these algorithms to generate control data for the granular synthesis engine.

My research began with an implementation of the Boids algorithm. The algorithm exhibits fascinating life-like patterns from nature, with its creator Reynolds (1987) stating: "A flock exhibits many contrasts. It is made up of discrete birds yet overall motion seems fluid; it is simple in concept yet is so visually complex." The complexities contained within the algorithm have generated many avenues for research, specifically the sonification of each Boid's movements within the flock. Reynolds defined three forces which determine the behaviour of each flock member: Alignment, Separation, and Cohesion. Each force provides a unique opportunity to transform each component's data into a useable stream of information for sonification purposes.

Reynolds (1987) states that "the simulation of the flock is an elaboration of a particle system," defining the flock as having many contrasts: "it is made up of discrete birds yet overall motion seems fluid." This seemed a familiar concept, close to that of granular synthesis, in which thousands of simple grains combine to create a fluid motion. Truax (1990) put it best when he made the analogy of granular synthesis to a river whose power is based on the accumulation of countless droplets of water. The common ground between the two paradigms already suggested they would map well to each other.


2.2 Granular Synthesis

British physicist Dennis Gabor (1946) first proposed the idea of sound grains in 1946. His work proved very influential on Iannis Xenakis, who proposed an expansion of that theory in his book Formalized Music. Opie (2003) hypothesized that Xenakis set out to "design ways he could arrange grains of sound to create complex and beautiful textures." Xenakis theorized granular synthesis and laid the framework for future research in this area. Xenakis (1992) states: "All sound is an integration of grains, of elementary sonic particles, of sonic quanta. Each of these elementary grains has a threefold nature: duration, frequency, and intensity. All sound, even all continuous sonic variation, is conceived as an assemblage of a large number of elementary grains adequately disposed in time." He was particularly interested in spatialisation, in three-dimensional soundscapes composed of sonic clouds, all with 3D trajectories. He integrated his vast mathematical knowledge to associate this elementary grain idea with stochastic methods of arrangement. Moving on from this lineage, Curtis Roads and Barry Truax have provided extraordinary insight into granular synthesis techniques over the last three decades and remain the leading opinions on the subject.

Research was carried out to understand the concepts of granular synthesis and to determine which method would be the most useful to implement in the prototype. The classic text on the theory is Curtis Roads's book Microsound. In the book, Roads (2001) describes a grain as "a brief microacoustic event, with a duration near the threshold of human auditory perception, typically between one thousandth of a second and one tenth of a second (from 1 to 100 ms). Each grain contains a waveform shaped by an amplitude envelope." Commonly used envelopes are Gaussian, Hanning, trapezoidal, and triangular. The Gaussian curve is the classic envelope described in Gabor's text; Roads states that it is the "smoothest from a mathematical point of view." The typical parameters of granular synthesis include grain size, grain shape, envelope shape, grain spacing over time, and grain density. When thousands of these sonic events are combined, we hear an animated texture at play.

There are many granular synthesis methods; those focused on here are synchronous streams, quasi-synchronous streams, and asynchronous clouds. Roads (2001) defines the synchronous granular synthesis method as grains that "follow each other at regular intervals." He also notes that the "density parameter controls the frequency of grain emission, so grains per second can be interpreted as a frequency value in Hertz." When emitting grains at regular intervals, lower densities lead to a rhythmic, metric feel. This repetition at a constant rate leads to what Scavone (2013) calls "an audible frame rate." He also notes that "vibrato can be created by slowly varying the time spacing between grains." An effective mapping method would need to be employed from the Boids algorithm to ensure that the data is not changing too quickly, in order to achieve this effect in my implementation.

Asynchronous granular clouds do away with the idea of linear streams. Roads (2001) states that the asynchronous method "scatters the grains over a specified duration within regions inscribed on the time-frequency plane." The asynchronous method certainly has benefits when using an algorithm like Boids to control it. The major difference in this method, as opposed to the synchronous one, is that it uses a stochastic or chaotic algorithm to control the grain streams, producing irregular streams rather than the constant stream of the synchronous method. Asynchronous granular synthesis has many benefits in terms of precise control and more variety in the sonic output. However, I plan to use the synchronous form because it is better suited to real-time granular implementations.
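To make the grain concrete, here is a short self-contained Java sketch (illustrative only; the buffer contents and grain length are placeholder assumptions) that shapes one grain of a source sample with a Hanning window, the same envelope family used later in the CSound implementation:

class GrainDemo {
    // Copy `length` samples starting at `start` and shape them with a
    // Hanning window, which rises smoothly from 0 to 1 and back to 0.
    static float[] grain(float[] source, int start, int length) {
        float[] g = new float[length];
        for (int i = 0; i < length; i++) {
            double w = 0.5 * (1.0 - Math.cos(2.0 * Math.PI * i / (length - 1)));
            g[i] = source[(start + i) % source.length] * (float) w;
        }
        return g;
    }

    public static void main(String[] args) {
        float[] source = new float[44100];     // one second of audio at 44.1 kHz
        float[] g = grain(source, 1000, 4410); // a 100 ms grain, the top of Roads's range
        System.out.println("grain length in samples: " + g.length);
    }
}

In the synchronous method, emitting such grains every hop samples gives an emission rate of sampleRate / hop grains per second, which, as Roads notes, can be heard as a frequency in Hertz once it enters the audible range.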

2.3 Mapping

The link that exists when mapping electronic devices, such as computers, to one another can be defined as active. An active link provides for two-way communication between both devices. Chadabe (2002) states "the link can be a computer that can generate information, run algorithms, or act in any way as an intermediary between a performer's controls and the variables of a sound generator." The performer's controls in this case are driven by the Boids algorithm running on an Android tablet, the exception being the accelerometer data, which is determined by the user's movement. The tablet is the intermediary that controls the variables of my sound generator.

Phys-Osc takes a generative approach to mapping. Hunt and Wanderley (2002) state, "Mapping using neural networks can allow the designer to benefit from the self-organising capabilities of the model." The Flock acts as the model, with Separation, Alignment, and Cohesion driving each Boid's acceleration to new locations within the Flock. It is envisaged that the Separation, Alignment, and Cohesion parameters will have dedicated sliders. This element of the GUI utilises a one-to-many mapping strategy, with simple parameter changes resulting in directional changes in each Boid's movement. Mapping strategies in the audio engine involve positional data from each Boid being mapped to frequency and playback position in the Boid's corresponding granular instrument. A one-to-one mapping is used to reinforce the synchronicity between the audio and visual data streams.

2.4 Existing Products

2.4.1 Konkreet Performer

Konkreet Performer is closely related to Phys-Osc in that it uses a physics engine to generate control data that is sent over an OSC network to a host computer. The performance interface is designed to use the device's multi-touch capability to shape the control data output. The program allows the user to alter shapes on screen to dynamically control the type of OSC data sent to its local host. The user merely alters the existing indeterministic algorithm to shape the sound experience accordingly.


It takes a non-traditional approach to interface design; historically, controllers have relied on MIDI knobs in a skeuomorphic design that mimics real-world constraints. Konkreet Performer instead implements a physics engine that allows for inertia control of individual control nodes, and the inertia can be sped up to allow for smooth musical transitions. It proved an inspiration to my design because it takes an unconventional approach to control data: it provides mechanisms that let the user interact with the software using different gestural movements to shape the resulting data stream in new ways.

Figure 2.1: Konkreet Performer - OSC performance interface with nodes that are controlled by inertia and the user's touch. [Source: (Visnjic 2013)]


2.4.2 GBoids

GBoids is a Max MSP patch that uses an implementation of Craig Reynolds's Boids algorithm to drive a granular synthesis engine. Each agent represents a grain scrubbing an audio sample as it moves in a two-dimensional space. The Boids algorithm in this implementation is written in JavaScript and is used to scrub areas of a pre-recorded sampled sound. The algorithm drives changes to the synthesis implementation, with its position changing parameters such as inertia, alignment, and separation. Parameter controls can define the number of agents that scrub the audio, the size and velocity of those agents, and the classic Boids parameters Separation, Alignment, and Cohesion. This implementation of the Boids algorithm also introduces Friction and Gravity parameters.

The important aspect of this application with regard to my own design is that it uses the Boids algorithm to scrub a preloaded audio sample, which enables the granulator to play the sample at different points both backwards and forwards in time. It allows an interesting rhythmic variation to occur, with separate Boids playing the same sample at varying frequencies to create a dense animated texture.


Figure 2.2: GBoids Max MSP Interface - A granular synthesizer that is controlled by a JavaScript implementation of the Boids algorithm, which scrubs the audio sample. [Source: (Angelo 2010)]

3 Overview of Technology

I chose to develop the application for the Android platform because of its open-source nature and because all the tools I needed, including libraries, were freely available. Using this platform also gives me a wider reach of potential users, with Business-Insider (2013) claiming the operating system exceeded one billion activations that year. Android is built upon the Java framework, which I studied in first year at college. We also used the open-source programming language Processing for animating visuals in second year. Processing is based on Java, so it can be used as a library within the Eclipse environment; it has pre-built methods which simplify the animation process and cut down on the large amounts of code needed for simple functions.

3.1 Android Development Tools

The Android Developer Tools provide a fully professional-grade environment for constructing Android applications. Included in the software suite are the Eclipse IDE, Google's Software Development Kit, and an emulator. Eclipse itself is a powerful code editor, allowing one to build complex Java programs with features like error detection and Lint warnings, which can prove invaluable to a novice Android programmer.


3.2 Processing

Processing is a visual programming framework developed at MIT during the 2000s. It was conceived to enable artists to easily create interesting computational media without having to learn the Java programming language from scratch. Because it is based on Java, it can be integrated into my Android development suite as a Java library.

3.2.1 oscP5

The application uses the Open Sound Control (OSC) messaging protocol to relay information over a wireless network. The reason for using this protocol is that it enables me to use the substantial computing power of a desktop computer, which is necessary because of the limited processing power inherent in mobile devices. There are other Android-related issues, such as audio buffering, to take into account as well. After some initial research I discovered that the Android platform does not have an API for the Open Sound Control format.

It became clear that I needed to use a third-party library in order to send the data in the format I needed. After some initial research with the libraries LusidOSC and JavaOSC, I chose the oscP5 library from Andreas Schlegel (2011). It has added benefits in that its code base is small and it can query an IP address without having to call the Android WiFi Manager within an activity. It was built for Processing, so it can be used as a Java library alongside the Processing core and the Ketai sensor library. Initial tests sending sensor data and touchscreen activities proved successful.
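A minimal Processing sketch shows the pattern (the IP address and ports here are placeholders; the final application's configuration appears in Appendix B):

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress host;

void setup() {
  osc = new OscP5(this, 8000);                 // this sketch listens on port 8000
  host = new NetAddress("192.168.1.19", 8001); // the Max patch listens on port 8001
}

void draw() {
  OscMessage msg = new OscMessage("/test");    // OSC address pattern
  msg.add(0.5f);                               // payload: a single float
  osc.send(msg, host);                         // send the packet each frame
}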

3.2.2 controlP5

ControlP5 is a library created for Processing to enable the use of UI elements like sliders, knobs, and check boxes. When using Processing as an activity, one major disadvantage is that the program cannot access Android's UI elements, such as the action bar or menu buttons; controlP5 can be used as a replacement. Its sole purpose is to provide UI objects that enable GUI interactions to change parameters within the program. Control elements include Flock Speed, switching sensors on and off, and controlling the Separation, Alignment, and Cohesion parameters.

3.2.3 Ketai

The Ketai sensor library was created to work in Android mode in the Processing IDE. Ketai (2012) can be used to detect sensor activity on a mobile device; the library can read data from a device's humidity, proximity, accelerometer, orientation, and light sensors. The code base is light, enabling me to read an Android device's sensors and send that data over oscP5 in a few lines of code. The library allows me to program extra features into the app which control parameters in the granular synthesis engine.
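The accelerometer pattern used in the final application (see Appendix B) reduces to a few calls; this stripped-down sketch shows them in isolation:

import ketai.sensors.KetaiSensor;

KetaiSensor sensor;
float ax, ay, az;

void setup() {
  sensor = new KetaiSensor(this); // bind Ketai to this sketch
  sensor.enableAccelerometer();   // request accelerometer events
  sensor.start();                 // begin delivering sensor callbacks
}

void draw() {
  // ax, ay, az now hold the latest reading, ready to send over OSC
}

void onAccelerometerEvent(float x, float y, float z) {
  ax = x; ay = y; az = z;         // called by Ketai on each new reading
}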

3.2.4 APWidgets

A major downside of using the Processing platform for Android development is that one does not get access to the Android UI framework, which does not help applications wanting to conform to Android's best-practice UI guidelines. Research was carried out into a suitable alternative. APWidgets provides Android's UI widgets, like text boxes and buttons, in one simple and easy-to-use library. I use a text-edit widget and a button to set a new IP address within Processing.

3.3 Max MSP

Max MSP is a graphical programming language created by Miller Puckette at IRCAM. The Max paradigm can be described as a way of combining pre-designed building blocks into configurations useful for real-time computer music (Puckette, 2002). The program has proved an invaluable learning tool for computer music and computer vision paradigms with its in-built Jitter objects. It was a natural choice for the granular synthesis implementation in that, throughout the college term, it was used to teach fundamental concepts in Amplitude Modulation, Ring Modulation, and Frequency Modulation synthesis techniques.

The program really excels in its handling of multiple types of data, be they integers, floats, or UDP messages, and can convert that data into almost any other kind. An example of this is how the program reads in the OSC data transferred from the Android application and applies it, through careful mapping strategies, to the granular synthesis engine. The Max external OpenSoundControl from CNMAT is used to read in the different message addresses and data. The CNMAT object allows me to map data to my sound engine within the Max environment, and it has advantages over the in-built UDP objects, such as allowing one to add time tags for certain functions.

It was decided that CSound would be a more viable alternative to Max MSP for the audio engine. Max MSP can also be used to implement a granular synthesis engine, but its implementation is difficult and it can become quite confusing visually because of the sheer number of objects it takes to construct the granular synthesizer, especially an implementation that needs a great deal of polyphony. To make Max MSP adaptable enough for a granular synthesis implementation, a Max external would need to be created in the C programming language. In the end, CSound was chosen for the task.

3.4 CSound

CSound was chosen as my sound engine because of its history with granular synthesis. Many opcodes have been written on the platform for the purpose of granulating sound, including partikkel, syncgrain, granule, and many others. CSound can be defined as "a sound design, music synthesis and signal processing system, providing facilities for composition and performance over a wide range of platforms" (Csounds.com, 2013). The ability to use CSound on many platforms, such as iOS, Android, Linux, Windows, and OS X, makes it a versatile choice for rapid development of prototypes.

CSound's ability to interface directly with Max MSP through the use of an external has proved very useful for a multi-platform project such as PhysOsc. Its sound engine is capable of producing better sound quality than its counterpart Max MSP.


4 Implementation

4.1 Early Android Prototypes

The first steps in this project trace back to the summer of 2012, when I was trying to get the visual programming language Processing, which is based on Java, to work within the Android Developer Tools as a library. Processing's own IDE offers an Android mode, but this does not allow for the extensive modification of Android services available in the Eclipse IDE. My goal at this time was to use Processing in combination with the Android SDK to create the animations and UI for the look and feel of the application. I found Processing's forum very beneficial and gained the knowledge there on how to run Processing as a library: this involves finding the processing-core.jar file in the Processing application and adding it to the build path of your Android application.

At the beginning of the project I wanted to implement the sound synthesis on the mobile device itself. I quickly learned that granular synthesis is very processor intensive, so a mobile device's limited CPU might cause issues when running intensive patches. Roads (2001) states: "If n is the number of parameters per grain, and d is the density of grains per second, it takes n times d parameter values to specify one second of sound. Since n is usually greater than ten and d can exceed one thousand, it is clear that a global unit of organization is necessary for practical work." With n = 10 and d = 1000, that is already 10,000 parameter values per second. Each type of granular synthesis organizes the grains differently according to different algorithms.

After consultation with my advisor, we decided that a better option for the project would be to build the granular synthesis engine in Max MSP whilst letting OSC (Open Sound Control) handle the data transmission from the Android application to the desktop granular patch. Android's own developer tools do not include an OSC library, so embedding the oscP5 library is a prerequisite.

4.1.1 Prototype 1

My first Android prototype began development in September. The first task was to build a multi-page application that could navigate to a page by selecting a button. I created the main screen, called an Activity, which in turn would trigger two further Activities: one screen for the Boids algorithm and another for OSC settings. I created two buttons on the home screen which allow the user to click into another aspect of the program. These buttons trigger a chosen Activity through the use of intents. The Android Developer website defines an intent as "a facility for performing late runtime binding between the code in different applications. Its most significant use is in the launching of activities. It is basically a passive data structure holding an abstract description of an action to be performed" (Google, 2013).

In Android 4.0 and above the menu structure has changed: the UI guidelines require an implementation of an action bar at the top of the application screen. This houses all top-level functions, such as a Done button which saves data upon exit and an option to get back to the home page; the return-to-home-page button is used as the application icon (Google, 2013). The Android Developer website states that "Icons need to be supplied in different sizes to support multiple screen densities." Google implemented it this way so that a tablet or phone can choose which icon resolution best suits the screen size. The early prototype icons were created with the open-source picture editor GIMP.


The next stage of the prototype design required me to build an OSC settings screen which would store the data input by the user and remember those settings when the user returned to the screen. Android handles its layout in the XML format, which has some similarities with CSS. I used a table to construct the menu, with clickable text boxes that allow for the input of numerical data.

Figure 4.1: Android OSC Settings - Screenshot from the OSC Settings menu, which allows the user to configure preferences.

Another important function of the settings screen was to return the local IP address. This matters because the OSC host, Max MSP in my case, needs to know which IP to send and receive data on. A popup toast message can be used to display this information, but I plan on returning this data to the local IP text box in the menu in a future revision. This ensures the user does not have to use a third-party tool to ascertain their local address on the network.

The in-built Android API WifiInfo was used to gather this information. When this data was displayed via a toast message it appeared as a little-endian integer, which needed to be converted to a readable format for ease of use. After some initial research I stumbled upon a blog post by Damien Flannery (2010) which detailed an efficient way to translate the returned WifiInfo data into its component string IP address. A requirement of the Android operating system is that permissions are declared in the application's manifest XML file; in this case it was altered to allow the app to access data from the network.
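A common way to do the conversion (a sketch in the spirit of Flannery's post rather than his exact code; the method name is mine, and the call requires the ACCESS_WIFI_STATE permission in the manifest) unpacks the four bytes of the little-endian integer:

import android.content.Context;
import android.net.wifi.WifiManager;

// Unpack WifiInfo's little-endian int into a dotted-quad string,
// e.g. 0x8D2BA8C0 becomes "192.168.43.141".
public static String localIpAddress(Context context) {
  WifiManager wifi =
      (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
  int ip = wifi.getConnectionInfo().getIpAddress();
  return String.format("%d.%d.%d.%d",
      ip & 0xff,          // lowest-order byte comes first: little-endian
      (ip >> 8) & 0xff,
      (ip >> 16) & 0xff,
      (ip >> 24) & 0xff);
}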

4.1.2 Prototype 2

The next requirement was the visualization of the Boids algorithm. I used Processing as a library within the Eclipse environment and added it to the application's build path (Doyle, 2013). When an Android project initializes an activity it extends the main Activity class; when using Processing you extend the PApplet class instead, so that the Processing library can communicate with the base Java code and understand the methods being called. The next step of the process required me to add import statements for the libraries I intended to use. In the PhysOsc program, I use oscP5 for WiFi messaging, the Processing core for visuals, controlP5 for UI elements, and the Ketai library for grabbing sensor data. I chose to use Daniel Shiffman's implementation of the Boids algorithm (Foundation 2013), which in turn is an implementation of the original Boids algorithm (Reynolds, 1987). When the algorithm was first imported into my Eclipse workspace, I needed to alter it so that it would compile correctly.

Within the Processing IDE, methods generally do not need to be public; most start with the void declaration, which means the method does not return any data. In Android, all of these methods need to be modified to be public so that different parts of the program can access them. Numbers and data types also needed to be cast so that Eclipse could understand what type of data it was handling.

Figure 4.2: Boids Visualisation - An early implementation of the Boids algorithm with the Processing library.

4.2 Final Android Implementation

In order to run the Processing library as an all-encompassing Android application, extra libraries were needed. To this end, controlP5 was used to implement UI features such as text, sliders, and checkboxes; these allow the Boids algorithm parameters to be modified by touching the appropriate area on screen. The graphics were configured to grayscale to ensure the colour scheme remained consistent with the desktop application. The Ketai library was used to grab accelerometer information; when the sensor is initialised, its data is sent to CSound and mapped to the global reverb frequency.

4.3 Audio Engine Implementation

An active link between the mobile application and the desktop application was established to change granular synthesis parameters in real time. To this end I used Max MSP as an OSC hub to route information from the Android application in real time to CSound. All granular synthesis implementations were created in CSound.

4.3.1 Max MSP

Max MSP was chosen as the intermediary between the visual engine of the Android application and the sound synthesis engine in CSound. It allowed for the construction of UI features and acts as a frontend for my CSound program. It performs the important function of parsing the OSC packets received from the Android application into k-rate CSound variables. Max also provides UI functions that act as a transport bar for the CSound score, allowing for the starting, stopping, resetting, and recording of CSound's audio output. Its UI features also provide a visual feedback mechanism for the resulting CSound audio signal.


4.3.2 CSound

CSound was chosen as the synthesis engine because of its long association with granular synthesis methods. Opcodes are the basic building blocks of CSound, providing everything from time-based effects to signal generators. Many granular opcodes exist, such as partikkel, granule, and the grain family. For the purposes of my research I chose syncgrain. Syncgrain employs a synchronous granular method and is adaptable and extendable because most of its parameters accept k-rate variables, meaning they can change in real time. It is also not processor intensive, allowing 20 such syncgrain instruments to be stacked and operated simultaneously.

Each Boid has a corresponding granular instrument created in CSound. All instruments preload the same source sample into an f-table. A particularly short sample of the Spanish phoneme U was chosen in order to create an interesting rhythmic dynamic in the resulting audio output. A Hanning windowing function was chosen and implemented in the CSound score file, and global reverb was applied to the overall audio output with individual send levels from each CSound instrument. Accelerometer data from the host tablet is mapped to control the global CSound reverb cutoff frequency, which facilitates the creation of filtering effects when the tablet is moved in 3D space.

4.4 Mapping

Information about each Boid's on-screen location is sent as data packets over a wireless network to a host computer. Two separate OSC message streams are employed: one for the y information of each Boid and one for the x information. The OSC message streams are received by Max and translated into readable information for the purposes of mapping to sonic parameters. The Max MSP external OSC-route deciphers the messages into floating-point and string information. The string information is then converted to a symbol and back into an integer in order to separate each Boid's information stream and trigger a bang that sends its location data to CSound as a k-rate variable.


4.4.1 Strategies

When a Flock is born, each Flock member accelerates away from the others in a multitude of directions. It was decided that, from a perception perspective, pitch would resonate most with movement on the y axis; although a one-to-one mapping can often yield limited results, in this case it helped establish audiovisual synchrony between the visual and sonic algorithms. When the Flock is left in a static state, the entire Flock often appears to move along the x axis, and for this reason it was decided that this data set could best be used for traversing the sample, moving forward and backward in time over a range of -1 to 1. X data is unpacked from the OSC message in the range 0 to 1280, which then needs to be scaled to -1 to 1 to ensure CSound's k-rate variable receives the correct data.
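The scaling step is a single linear map; as an illustrative Java helper (the function name is mine, not from the project code):

// Map a Boid's x position from the screen range 0..1280 to the
// -1..1 playback-direction range expected by the CSound instrument.
static float scaleX(float x) {
  return (x / 1280.0f) * 2.0f - 1.0f;
}
// scaleX(0) == -1.0 (full reverse), scaleX(640) == 0.0 (stationary),
// scaleX(1280) == 1.0 (full forward)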


5 Conclusions and Future Directions

5.1 Summary

The aim of the project was to develop a sound synthesizer driven by the Boids algorithm. Detailed background research into both the Boids algorithm and granular synthesis techniques was carried out, as well as research into the tools and processes required to complete a multi-platform project such as this. A number of different approaches were investigated for the Android application and audio engine before a final product was decided upon.

5.2 Conclusions

There were many reasons to undertake a project of this nature. I saw it as the perfect vehicle to demonstrate my knowledge of the curriculum taught throughout the four years; programming concepts in Java, Max MSP, Processing, and CSound were of great benefit throughout. The aim was to take the methodologies taught and produce a product that posed many challenges from both a coding and a conceptual point of view.

Indeterministic algorithms such as Boids provide a rich source of data that can be used to create compelling sonic narratives. PhysOsc applies this methodology to control a granular synthesis implementation so that the user does not have to change each of the many parameters required to create the sonic output. This has many advantages, in that the user of the software has the option of taking a macroscopic approach rather than a microscopic one.

The higher level of control allows the user to concentrate on the creative aspects of music creation rather than getting bogged down at the microscopic level in the vast array of parameter changes needed to create a dense granular sound. In this regard, the point-and-click paradigm of music creation with granular synthesis can be seen as a counter-intuitive process. A conscious decision was made at the start of the development cycle to keep the sound engine and graphical user interface separate, allowing the user to concentrate on moving shapes to different locations on screen, which takes the focus off the underlying granular synthesis process.

5.3 Future Work

Ultimately the Android application could be refined further, with many new features added. Accelerometer data can be easily obtained using the Ketai Java library; this data could be further refined and scaled, with x, y, and z controlling the cohesion, separation, and alignment parameters of the Boids algorithm. Further automation of parameters could be achieved through various other sensors, such as the proximity sensor for grain size.

The sonic output from the granular implementation would benefit from the use of surround sound. This would allow the movement of the Flock to be better realized sonically in a three-dimensional space. It would also allow the signal of each Boid to have better separation, so that panning movements would become more apparent in a 360-degree listening space. The use of an attractor with the Boids algorithm could function as a flock positioner within the 360-degree field, adding another level of control to the Android application.


In a future revision I plan to use JavaScript in Max MSP to actively create and delete audio channels as Boids are added and subtracted in the Android application. This would allow the user to have control over how many sound objects are on screen.

Appendix A

Nexus 7 Screenshot

Figure 1: Nexus 7 Tablet - PhysOsc on a 7-inch tablet screen.

Nexus 4 Screenshot

Figure 2: Nexus 4 Phone - PhysOsc on a 4.65-inch mobile screen. The Processing library does not allow for the loading of multiple XML files for layout, so a design that complements both tablet and phone needed to be implemented.

Appendix B

Java Implementation of the Boids Algorithm

ThirdActivity.java

package com.example.physosc;

//----- Library Imports --------------
import java.util.ArrayList;
import ketai.sensors.KetaiSensor;
import oscP5.*;
import netP5.NetAddress;
import processing.core.*;
import controlP5.*;
import android.util.Log;
import apwidgets.*;

//----- Main Activity ----------------
public class ThirdActivity extends PApplet {

  // Object Declarations
  OscP5 oscP5;
  KetaiSensor sensor;
  ControlP5 controlP5;
  NetAddress remoteLocation;
  Flock flock;
  PFont p;
  String ip = "192.168.43.141"; // My laptop's address on the router
  APEditText textField;
  APWidgetContainer widgetContainer;
  APButton button1;

  // Global Variables
  int cohFactor = 8;
  int sepFactor = 12;
  int allignFactor = 7;
  int visibleBoids = 20;
  float accelerometerX, accelerometerY, accelerometerZ;
  boolean smooth = true;
  boolean acceler = true;
  boolean ValidIP = true;

  // Creation of the flock
  @SuppressWarnings("deprecation")
  public void setup() {
    // Screen dimensions
    size(720, 1280);
    // Set this so the sketch won't reset as the phone is rotated:
    orientation(LANDSCAPE);
    // Sketch Frame Rate
    frameRate(45);
    smooth();
    // noStroke();
    fill(255);
    textSize(28);

    // Adding Android Core Widgets
    // These are not available while using the PApplet; apwidgets is a workaround
    widgetContainer = new APWidgetContainer(this);
    textField = new APEditText(775, 43, 250, 80);
    button1 = new APButton(1020, 43, 130, 80, "host");
    widgetContainer.addWidget(textField);
    widgetContainer.addWidget(button1);

    // Calling an instance of the Flock class
    flock = new Flock();
    // Add an initial set of boids into the system
    for (int i = 0; i < visibleBoids; i++) {
      flock.addBoid(new Boid(width / 2, height / 2, i));
    }

    // start oscP5:
    oscP5 = new OscP5(this, 8000); // App listener listens on port 8000
    // "192.168.1.19" is my laptop's IPv4 address
    // 8001 is the port number Max is listening on
    remoteLocation = new NetAddress(ip, 8001);
    sensor = new KetaiSensor(this); // Turn on Ketai
    sensor.enableAccelerometer();   // Initialize Accelerometer

    // Create ControlP5 GUI
    controlP5 = new ControlP5(this);
    // change the default font to Verdana
    PFont p = createFont("Verdana", 24);
    controlP5.setControlFont(p);
    controlP5.setColorLabel(color(0));
    controlP5.setColorForeground(color(255));
    controlP5.setColorBackground(color(0));
    controlP5.setColorValue(color(0));
    controlP5.setColorActive(color(224));
    controlP5.addSlider("setCohFactor", 0, 20, cohFactor, 50, height - 440, 50, 400).setLabel("Cohesion");
    controlP5.addSlider("setSepFactor", 0, 20, sepFactor, 200, height - 440, 50, 400).setLabel("Separation");
    controlP5.addSlider("setAllignFactor", 0, 20, allignFactor, 350, height - 440, 50, 400).setLabel("Alignment");
    controlP5.addSlider("setFrameRate", 1, 100, 45, 500, height - 440, 50, 400).setLabel("Framerate");
    controlP5.addToggle("toggleAcceler", true, 50, 155, 120, 50).setLabel("Send Sensor");
    controlP5.addToggle("toggleSmooth", true, 50, 240, 50, 50).setLabel("Smooth");
  }

  // Setting data from controlP5 UI to global variables
  void setFrameRate(int rate) {
    frameRate(rate);
  }

  void setAllignFactor(int factor) {
    allignFactor = factor;
  }

  void setSepFactor(int factor) {
    sepFactor = factor;
  }

  void setCohFactor(int factor) {
    cohFactor = factor;
  }

  // Scale integer input into float
  float scale(int factor) {
    float scaled = (float) factor / 10;
    return scaled;
  }

  // On / Off switch for smooth animation
  void toggleSmooth() {
    if (smooth == true) {
      smooth = false;
      noSmooth();
    } else {
      smooth = true;
      smooth();
    }
  }

  // On / Off switch for accelerometer
  void toggleAcceler() {
    if (acceler == false) {
      acceler = true;
      sensor.start();
    } else {
      acceler = false;
      sensor.stop();
    }
  }

  // Clicking the IP address widget to store info
  public void onClickWidget(APWidget widget) {
    if (widget == button1) {
      ip = textField.getText();
      remoteLocation = new NetAddress(ip, 8001);
      ValidIP = true;
    }
  }

  // Draw Boids to the screen
  public void draw() {
    background(128, 128, 128);
    widgetContainer.show();
    remoteLocation = new NetAddress(ip, 8001);
    flock.run();
    fill(color(0, 0, 0));
    text("X: " + nfp(accelerometerX, 1, 3) + "\n" +
         "Y: " + nfp(accelerometerY, 1, 3) + "\n" +
         "Z: " + nfp(accelerometerZ, 1, 3), 50, 50, width, height);

    OscMessage accelx = new OscMessage("/accelx");
    accelx.add(accelerometerX);
    accelx.add("AccelerometerX");
    oscP5.send(accelx, remoteLocation);

    OscMessage accely = new OscMessage("/accely");
    accely.add(accelerometerY);
    accely.add("AccelerometerY");
    oscP5.send(accely, remoteLocation);

    OscMessage accelz = new OscMessage("/accelz");
    accelz.add(accelerometerZ);
    accelz.add("AccelerometerZ");
    oscP5.send(accelz, remoteLocation);
  }

  public void mousePressed() {
    OscMessage myMessage = new OscMessage("/test");
    myMessage.add("receiving loud and clear :)");
    oscP5.send(myMessage, remoteLocation);
    flock.addBoid(new Boid(mouseX, mouseY, -1));
  }

  public void onAccelerometerEvent(float x, float y, float z) {
    accelerometerX = x;
    accelerometerY = y;
    accelerometerZ = z;
  }

  // ---- The Boids class created by Daniel Shiffman ----
  // The full PhysOsc Java code is available on the attached DVD
}

Appendix C

Max MSP GUI

Figure 3: Max MSP Router - Max patch used as an OSC hub to receive incoming messages and send them to CSound.

Appendix D

Granular Implementation in CSound

SyncGrain2.csd

<CsoundSynthesizer>
<CsOptions>
; Select audio/midi flags here according to platform
-odac                 ;;; realtime audio out
;-o syncgrain.wav -W  ;;; for file output any platform
</CsOptions>
<CsInstruments>

sr     = 44100
ksmps  = 32
0dbfs  = 1
nchnls = 2

chn_k "boid0x", 1
chn_k "boid0y", 1
chn_k "boid1x", 1
chn_k "boid1y", 1
chn_k "boid2x", 1
chn_k "boid2y", 1
chn_k "boid3x", 1
chn_k "boid3y", 1
chn_k "boid4x", 1
chn_k "boid4y", 1
chn_k "accelx", 1

; Zak initialization - 1 a-rate and 1 k-rate variable
zakinit 1, 1

instr 1
iolaps      = 2
igrsize     = 0.10
kfreq       = iolaps / igrsize
kps         = 1 / iolaps
kstr        chnget "boid0x"
gkstr       chnget "boid0x"
gkstr       = .3 /* time scale */
kenv        adsr p3*.1, p3*.3, .4, p3*.4
kpitch      chnget "boid0y"
gkpitch     chnget "boid0y"
gkpitch     = p4 /* pitch scale */
asig        syncgrain 0.1*kenv, kfreq, kpitch, igrsize, kps*kstr, 1, 2, iolaps
            outs asig, asig
iRvbSendAmt = 0.3 ; reverb send amount (0-1)
; write to zak audio channel 1 with mixing
            zawm asig*iRvbSendAmt, 1
endin

instr 2
iolaps      = 2
igrsize     = 0.10
kfreq       = iolaps / igrsize
kps         = 1 / iolaps
kstr        chnget "boid1x"
gkstr       chnget "boid1x"
gkstr       = .3 /* time scale */
kenv        adsr p3*.1, p3*.3, .4, p3*.4
kpitch      chnget "boid1y"
gkpitch     chnget "boid1y"
gkpitch     = p4 /* pitch scale */
asig        syncgrain 0.1*kenv, kfreq, kpitch, igrsize, kps*kstr, 1, 2, iolaps
            outs asig, asig
iRvbSendAmt = 0.3 ; reverb send amount (0-1)
; write to zak audio channel 1 with mixing
            zawm asig*iRvbSendAmt, 1
endin

instr 3
iolaps      = 2
igrsize     = 0.10
kfreq       = iolaps / igrsize
kps         = 1 / iolaps
kstr        chnget "boid2x"
gkstr       chnget "boid2x"
gkstr       = .3 /* time scale */
kenv        adsr p3*.1, p3*.3, .4, p3*.4
kpitch      chnget "boid2y"
gkpitch     chnget "boid2y"
gkpitch     = p4 /* pitch scale */
asig        syncgrain 0.1*kenv, kfreq, kpitch, igrsize, kps*kstr, 1, 2, iolaps
            outs asig, asig
iRvbSendAmt = 0.3 ; reverb send amount (0-1)
; write to zak audio channel 1 with mixing
            zawm asig*iRvbSendAmt, 1
endin

instr 4
iolaps      = 2
igrsize     = 0.10
kfreq       = iolaps / igrsize
kps         = 1 / iolaps
kstr        chnget "boid3x"
gkstr       chnget "boid3x"
gkstr       = .3 /* time scale */
kenv        adsr p3*.1, p3*.3, .4, p3*.4
kpitch      chnget "boid3y"
gkpitch     chnget "boid3y"
gkpitch     = p4 /* pitch scale */
asig        syncgrain 0.1*kenv, kfreq, kpitch, igrsize, kps*kstr, 1, 2, iolaps
            outs asig, asig
iRvbSendAmt = 0.3 ; reverb send amount (0-1)
; write to zak audio channel 1 with mixing
            zawm asig*iRvbSendAmt, 1
endin

instr 99 ; Reverb Always On
aInSig      zar 1     ; read first zak audio channel
kFblvl      init 0.55 ; feedback level - i.e. reverb time
kFco        chnget "accelx"
gkFco       chnget "accelx"
gkFco       init 6000 ; cutoff freq. of a filter within the reverb
aRvbL, aRvbR reverbsc aInSig, aInSig, kFblvl, kFco
            outs aRvbL, aRvbR ; send audio to outputs
            zacl 0, 1         ; clear zak audio channels
endin

</CsInstruments>
<CsScore>
f1 0 0 1 "U.wav" 0 0 0 ; deferred table for the source waveform
f2 0 8192 20 2 1       ; Hanning function for the grain envelope
i1  0 2000 1
i2  0 2000 1
i3  0 2000 1
i4  0 2000 1
i99 0 2010
e
</CsScore>
</CsoundSynthesizer>
<MacOptions>
Version: 3
Render: Real
Ask: Yes
Functions: ioObject
Listing: Window
WindowBounds: -1072 -924 572 424
CurrentView: io
IOViewEdit: On
Options: -b128 -A -s -m167 -R
</MacOptions>
<MacGUI>
ioView background {32125, 41634, 41120}
ioSlider {266, 7} {20, 98} 0.000000 1.000000 0.173469 amp
ioSlider {10, 29} {239, 22} 100.000000 1000.000000 258.158996 freq
ioGraph {8, 112} {265, 116} table 0.000000 1.000000
ioListing {279, 112} {266, 266}
ioText {293, 44} {41, 24} label 0.000000 0.00100 "" left "Lucida Grande" 8 {0, 0, 0} {65280, 65280, 65280} background noborder Amp:
ioText {333, 44} {70, 24} display 0.000000 0.00100 "amp" left "Lucida Grande" 8 {0, 0, 0} {65280, 65280, 65280} background noborder 0.1837
ioText {66, 57} {41, 24} label 0.000000 0.00100 "" left "Lucida Grande" 8 {0, 0, 0} {65280, 65280, 65280} background noborder Freq:
ioText {106, 57} {69, 24} display 0.000000 0.00100 "freq" left "Lucida Grande" 8 {0, 0, 0} {65280, 65280, 65280} background noborder 261.9247
ioText {425, 6} {120, 100} label 0.000000 0.00100 "" left "Lucida Grande" 8 {0, 0, 0} {65280, 65280, 65280} nobackground border
ioText {449, 68} {78, 24} display 0.000000 0.00100 "freqsweep" center "DejaVu Sans" 8 {0, 0, 0} {14080, 31232, 29696} background border 999.6769
ioButton {435, 24} {100, 30} event 1.000000 "Button 1" "Sweep" "/" i1 0 10
ioGraph {8, 233} {266, 147} scope 2.000000 -1.000000
</MacGUI>
<EventPanel name="" tempo="60.00000000" loop="8.00000000" x="0" y="0" width="596" height="322">
</EventPanel>

References

Angelo, T. (2010), 'Gboids', in www.cycling74.com [online], available: http://bit.ly/1hanw3Ul [accessed: 5 December 2013].

Brinkman, P. (2012), Making Musical Apps: Real-time Audio Synthesis on Android and iOS, O'Reilly Media.

Business-Insider (2013), Chart Of The Day [online], available: http://read.bi/1ixMK1N [accessed: 4 September 2013].

Chadabe, J. (2002), The limitations of mapping as a structural descriptive in electronic instruments, in 'Proceedings of the 2002 Conference on New Interfaces for Musical Expression', NIME '02, National University of Singapore, Singapore, pp. 1-5.

Csounds.com (2013), Granular Synthesis [online], available: http://bit.ly/Pt7FGB [accessed: 20 December 2013].

Doyle, J. (2013), Using Processing In Eclipse [online], available: http://bit.ly/11g8lB9 [accessed: 5 August 2013].

Flannery, D. (2010), Obtaining you ip address on Android [online], available: http://bit.ly/dxIQLJ [accessed: 21 October 2013].

Foundation, P. (2013), Examples [online], available: http://bit.ly/1nLJwX8 [accessed: 14 September 2013].

Gabor, D. (1946), 'Theory of Communication', The Journal of the Institution of Electrical Engineers 3(93), 429-457.

Google (2013), Design [online], available: http://bit.ly/yROIVW [accessed: 23 September 2013].

Hunt, A. and Wanderley, M. (2002), 'Mapping performance parameters to synthesis engines', Organised Sound 2(7), 97-108.

Ketai (2012), Ketai [online], available: http://code.google.com/p/ketai/ [accessed: 16 November 2013].

Kirn, P. (2013), Why mobile low latency is hard [online], available: http://bit.ly/MxNaXd [accessed: 5 August 2013].

Opie, T. (2003), Creation of a Real Time Granular Synthesis for Live Performance, Master's thesis, Queensland University of Technology, Australia.

Puckette, M. (2002), 'Max at seventeen', Computer Music Journal 4(26), 31-43.

Reynolds, C. (1987), 'Flocks, Herds, and Schools: A Distributed Behavioural Model', Computer Graphics 4(21), 25-34.

Roads, C. (2001), Microsound, MIT Press.

Scavone, G. (2013), Granular Synthesis [online], available: http://bit.ly/Pt7FGB [accessed: 20 December 2013].

Schlegel, A. (2011), Libraries [online], available: http://bit.ly/PpHKPW [accessed: 14 November 2013].

Shiffman, D. (2012), The Nature of Code: Simulating Natural Systems with Processing, self-published.

Truax, B. (1990), 'Composing with Real-Time Granular Sound', Perspectives of New Music 2(28), 120-134.

Visnjic, F. (2013), 'Konkreet Performer', in www.creativeapplications.net [online], available: http://bit.ly/1nMpKuNl [accessed: 13 November 2013].

Xenakis, I. (1992), Formalized Music: Thought and Mathematics in Composition, Pendragon Press.