
Music as a Functional Tool for Optimizing Neurological Arousal:

A Preliminary Review

Matthew J. Bassett

1135731

Submitted to: Dr. R. Sonnadara

PNB 3Q03

August 18th 2014

A theory of neurological arousal is proposed. Neurological arousal will be defined as an intrinsic state of neurological output continually driven by (a) predisposed cognitive function and (b) the external environment. Factors included in this definition are: cortical morphology, neuronal characteristics, and behavioural categorizations. In this thinking, individuals are placed along a spectrum of neurological arousal. Certain populations are characterized by an underaroused system, other populations endure an overaroused state, and those with balanced arousal typically fall between these two extremes. As music is a stimulus that interacts with nearly every neurological system, it is hypothesized that music will optimize neurological arousal regardless of where one falls along the baseline. In order to address such a question, numerous areas of the current literature must be reviewed. First, evidence that music influences arousal states and various neurological systems must be presented. Second, neurological arousal profiles must be drawn from two populations speculated to occupy the extremes of the distribution: Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD). Third, as neurological arousal will be determined through Electroencephalography (EEG) and autonomic measures, the current methods for EEG in research involving music must be investigated. Fourth, as musical samples will be defined under levels of activity, methods of classifying musical samples must be reviewed. In reflection of the current literature, methodologies will be designed that will lead to exploratory music cognition research with applications in healthcare, education, and audio technology development.

Introduction

Science has always been famous for placing things under the microscope. Researchers

tear apart pieces of our nature, of our sensations, of our character, and deduce them to their

smallest components. How can we create the puzzle without knowing the shape of each little

piece, they might say. How can we understand the answer without knowing how to solve the

question, they might persist. Yet, in a world as complex as the beautiful Earth we inhabit, how

close can we truly get to the truth with our eyes fixed on minute space?

Theories of how neurodevelopmental disorders originate are novel, and the solutions

society has derived for those afflicted rely on little to no true foundation. Children with

anaesthetics, adolescents with stimulants, adults with pain killers: solutions driven by findings of

chemical imbalances and behavioural discrepancies. It is time for us to find solutions that do not

insist on nulling our systems and outweighing our brain’s intrinsic quality.

When one learns about the brain, one can be drawn to the awe-inspiring rationality of its

processes. Through analysis, it becomes evident that any attempt to outwardly influence a

rational being is inherently an irrational solution. So why not work within the rationality

presented? Why not exploit neurological capacities for the means of improved function? A

solution that is driven by the core will always be more successful than one from the external.

Every once in a while, the puzzle begins to come forth from the jagged pieces of its

existence. Tiny pieces of evidence point in certain directions and lead us down paths that may or

may not lead to light. And so we walk on, picking up tiles and dropping them in place as we

learn and grow, hoping that we might look up from the microscope and see truth in the pieces

that align.

Recent advances in cognitive neuroscience and music cognition have begun to reveal

the interaction that exists between music and the brain. Neuroimaging techniques such as

Functional Magnetic Resonance Imaging (fMRI), Electroencephalography (EEG), and

Magnetoencephalography (MEG) have illuminated countless neural areas and neuronal

characteristics involved in music processing. From connections between the motor system and

rhythm perception, to overlaps in language and communication, music appears nearly

all-encompassing in our neural scene.

The present review began with a hypothesis: that music can optimize neurological

arousal regardless of where one falls along a baseline. The hypothesis originated from two main

ideas: (1) our characters as humans lie along a spectrum, and (2) music does not discriminate; it

influences neurological systems in all of us. In order to address this hypothesis, current

understanding in the field must first be explored.

Exploration into this topic has been divided into four comprehensive sections. The first

will outline the neurological effects of music. The second will create profiles of neurological

arousal. The third will provide contextual basis for neuroimaging. The fourth will dictate current

methods of music sample classification. Using the information presented, elaborate

methodologies in exploratory music cognition research will be created.

Neurological Effects of Music

Music is universally practised among human beings. It can evoke powerful emotions

and communicate discrete concepts. It can induce pleasure or pain, provide incentive or

deterrence, and can excite and empower to no end. The essential characteristics of music cause

neurological changes across a wide network of brain areas. Neurological effects of music will be

explained within: the central nervous system (CNS), the autonomic nervous system (ANS),

neural plasticity, pre-natal development, and specific neural regions involved in the motor,

language, memory, emotion, and attention circuits.

In the central nervous system, music modulates brainstem-mediated measures including

heart rate, blood pressure, body temperature, skin conductance, and muscle tension (Chanda &

Levitin, 2013). Brainstem response mediates sensory and motor function through

catecholamines and synaptic projections through the cortex. Musical properties therefore may

affect central neurotransmission (Chanda & Levitin, 2013). As the CNS precedes all other

bodily systems, music can be seen to influence major processes such as homeostasis, and induce

reactions in the autonomic nervous system.

The autonomic nervous system is the link between the central nervous system and the

peripheral organs. There are two branches: sympathetic, which is responsible for energy

mobilization, and parasympathetic, which provides restorative function (Ellis & Thayer, 2010).

Music has several effects on autonomic reactions. From nature, positive affect and reward are

associated with high-frequency motif calls, which increase sympathetic arousal (Chanda &

Levitin, 2013). Soft, low-pitched sounds, resembling cooing or purring, cause a decrease in

sympathetic arousal (Chanda & Levitin, 2013). Through this, it is evident that neurological

effects of music are not human-specific.

Autonomic activity is communicated through physiological measures, such as skin

conductance, heart rate, and oxygen intake. Physiological ratings are commonly used to confirm

or deny sample classification in the 2D (arousal-valence) affective space of emotion. Recent

findings suggest that physiological patterning during musical exposure can be used to infer the

emotion induced (Kragel & LaBar, 2013). Skin conductance level (SCL)—a measure of the

arousal response in the hypothalamic-pituitary-adrenal (HPA) axis—is linearly correlated with

arousal measures of emotion in music, and therefore may provide a measure of emotional

intensity (Rickard, 2004). However, a number of non-linear relations have also been found between

physiological ratings and music. Kragel and LaBar suggest that although physiological ratings do

not fit into 2D affective space, they are still categorical in nature (Kragel & LaBar, 2013). Further

research is needed in this area.

Musical training has been associated with changes in neural plasticity. Music-related

changes in neural plasticity range from low-level auditory processing to high-level cognitive

functions (Moreno & Bidelman, 2014). Enhanced plasticity through musical training correlates with

improvement on behavioural measures of intelligence, as evidenced through improved executive

function and auditory classification (Moreno et al., 2011). Musically trained individuals also

show increased performance on measures of cognitive flexibility, working memory, and verbal

fluency (Zuk et al., 2014). These findings suggest that the positive effects of musical plasticity

cross over into neural networks involved in general cognitive functions (Moreno et al., 2011).

Structural support for this notion is given by an increased size of the corpus callosum in

musicians, signalling enhanced inter-hemispheric communication (Moore et al., 2014).

Studies on pre-natal music stimulation are infrequent, but some yield findings relevant to

the current hypothesis. Pre-natal music listening has been shown to enhance neurogenesis in the

hippocampus of rat pups and post-hatch day one (PH-1) chicks (Kim et al., 2006; Sanyal et al., 2013). An increase in

hippocampal synaptic markers accompanies enhanced spatial orientation, learning, and memory

(Kim et al., 2006; Sanyal et al., 2013). In humans, newborns that were exposed to music during

pregnancy show improved autonomic stability, and rapid motor development (Chen et al., 1994;

Lind, 1980). These effects may be explained through increased neurogenesis in the hippocampus

(Kim et al., 2006). Perhaps music is a powerful factor in long-term potentiation between the

hippocampus and cortical areas. If so, music training could induce an increase in the strength of

neuronal projections.

There are a number of specific neural regions implicated in musical exposure. These

regions are located primarily in the motor, language, memory, emotion, and attention systems.

Listening to and encoding auditory rhythms elicits activity in the primary sensorimotor cortex,

pre-motor cortex, supplementary motor cortex, striatum, and cerebellum (Chen et al., 2008; Zuk

et al., 2014). Musical training has also been shown to enhance preSMA/SMA activation (Zuk et

al., 2014). In relation to language, a number of shared neural areas have been discovered. These

areas include: the primary motor cortex, supplementary motor area, Broca’s area, anterior insula,

primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus, and

posterior cerebellum (Brown et al., 2006). Within the memory system, hippocampal

neurogenesis is increased given pre-natal musical exposure (Kim et al., 2006; Sanyal et al.,

2013). Music has also been shown to decrease dorsolateral prefrontal cortex (DLPFC) activity in

elderly adults with memory deficits (Ferreri et al., 2014). A decrease in DLPFC activity signals

that music modulates activity in a less demanding direction during encoding (Ferreri et al.,

2014). Neural regions involved in emotion are vast, but specific findings point to distinct

networks for processing positive and negative emotions. Pleasant musical samples elicit activity

in the primary auditory cortex, middle temporal gyrus, and cuneus, all in the left hemisphere

(Flores-Gutiérrez et al., 2007). Unpleasant musical samples cause bilateral activity in the inferior

frontal gyrus, insula, and orbito-frontal regions of the right superior frontal gyrus (Flores-

Gutiérrez et al., 2007). The system involved in unpleasant samples has direct links with the paralimbic

system. Both pleasant and unpleasant samples induce strong coherence at distinct neural sites

(Flores-Gutiérrez et al., 2007). This implies that musical emotions travel along distinct neural pathways. Music

also has implications in the attention networks. Music increases DLPFC and VLPFC activity,

and training induces an increase in frontoparietal functionality, evidenced through pre-frontal,

anterior cingulate, and temporal lobe region activity (Ferreri et al., 2014; Moreno et al., 2011;

Zuk et al., 2014). These neural areas have shared functions between music, language, and

intelligence (Moreno et al., 2011).

In order for music to influence neurological arousal, it would need to be implicated in

an extensive neural network. Indeed this is evident through CNS and ANS involvement, training-

induced enhanced plasticity, and countless neural regions shared with major systems that make

up cognitive ability and behavioural expression.

Hypothesis of Neurological Arousal

The proposed spectrum of neurological arousal is based on a simple deduction: the brain

is the seat of individual character. If our characters as human beings can be aligned along a

spectrum, then our brains must precede this alignment, and thus have intrinsic qualia. Clear

evidence that sensory stimuli interact with, rather than determine, our neurological states,

provides support for this intrinsic notion (Wang et al., 2013). In keeping with the idea of

character, certain populations are categorized under varying levels of base neurological arousal.

Base neurological arousal refers to the overall state of neurological output given from the brain

at any particular time due to (a) predisposed cognitive function and (b) the external environment.

The definition of neurological arousal thus contains three main factors: cortical morphology,

neuronal characteristics, and behavioural categorizations. These three factors make up the reality

of individual perception, cognition, and behaviour.

When speculating on the characters at the extremes of the spectrum, two main

populations come to mind: individuals classified with Attention Deficit Hyperactivity Disorder

(ADHD) and individuals classified under Autism Spectrum Disorder (ASD). They come into

question first because of their behavioural traits. ADHD is characterized by restlessness,

hyperactivity, impulsivity, and working memory deficits. ASD is characterized by social

impairment, sensory hypersensitivity, communication deficits, and savant abilities. ADHD is

proposed to occupy the underaroused extreme, due to its constant pursuit of external stimulation

and energy. ASD is proposed to inhabit the overaroused extreme, due to its incredible capacity

yet consistent difficulty in making sense of this loud world. The following section will dive into

the neurological aspects of the definition.

There are a number of cortical morphological abnormalities in ADHD. Reduced size

and functional activity of the right prefrontal cortex is a consistent finding (Arnsten, 2009).

White matter tracts originating from the prefrontal cortex also show significant disorganization

(Makris et al., 2008). There is clear evidence of a maturational lag, particularly in pre-frontal

areas, with researchers reporting a five-year delay for peak frontal cortex thickness in ADHD

(Liechti et al., 2013; Shaw et al., 2007). These findings are consistent with ADHD

symptomology, as integration of top-down processes is implicated. A recent hypothesis by

Castellanos et al. suggests that an abnormal default mode network (DMN) is present in ADHD.

Evidence towards the DMN hypothesis is given through volumetric abnormalities and grey

matter reductions in the posterior cingulate cortex (PCC) and the precuneus, and decreased

cortical thickness in the retrosplenial cortex (Castellanos et al., 2009). These morphological

reductions illuminate key aspects of the ADHD phenotype.

Neuronal characteristics of ADHD will be separated into three categories:

catecholaminergic relations, atypical attention and executive function networks, and oscillatory

responses. Catecholamine release in the prefrontal cortex is mediated by an arousal state

(Arnsten, 2009). A global reduction in dopamine and alterations in the synthetic enzyme for

noradrenaline have been implicated in ADHD (Arnsten, 2009; Faraone et al., 2005). These

findings coincide with weaker prefrontal cortex functioning and the fact that dopamine receptors

such as DRD4 and DRD5 are consistent genetic markers for ADHD (Tye et al., 2012). Low

serotonin, as indicated by the serotonin transporter and 1B receptor, has also been found in

ADHD, which specifically reflects poor impulse regulation (Tye et al., 2012). Catecholamine

dysregulation in the prefrontal cortex has serious impacts on the attention and executive function

networks in ADHD.

Numerous theorists have indicated, through fMRI, that dysfunction of frontostriatal

circuitry underlies the attention issues in ADHD (Banich et al., 2009; Castellanos et al., 2009).

Hypoactivity has been reported in the anterior cingulate cortex, dorsolateral prefrontal cortex,

inferior prefrontal cortex, basal ganglia, thalamus, and posterior parietal cortex (Banich et al.,

2009; Castellanos et al., 2009). Disruptions in prefrontal and parietal regions involved in

executive control have also been demonstrated (Banich et al., 2009). This disruption is

characterized by weak central coherence and reduced cortical differentiation, which affects both

transient and sustained attention (Banich et al., 2009; Tye et al., 2012). In addition to atypical

attention and executive function networks, frontostriatocerebellar dysfunction has been found to

reflect working memory deficits in ADHD (Lenartowicz et al., 2014). The combination of

hypoactive networks controlling attention, executive control, and memory signifies major aspects

of neural activity in ADHD.

Oscillatory responses recorded through eyes-closed/resting state EEG provide a lens

into underlying cognitive activity (Barry et al., 2010). The most consistent findings in ADHD

across all bands are as follows: enhanced absolute and relative posterior delta, enhanced absolute

and relative anterior theta, reduced absolute and relative alpha, reduced absolute and relative

posterior beta, and reduced absolute and relative gamma (Barry et al., 2010; Fonseca et al.,

2013). Given these consistent readings, several hypotheses in line with a decreased state of

neurological arousal have been proposed. In particular, elevated theta is consistent with cortical

underactivation, and evoked gamma oscillations suggest neuronal hyperexcitability (Tye et al.,

2012). In combination, these findings suggest that greater neural activation is necessary to

achieve a typical level of neurological functioning. Mean alpha activity has also been found to

negatively correlate with SCL (Clarke et al., 2013). This evidence supports the notion that alpha

activity is an indicator of arousal, and that decreased SCL and arousal are present in ADHD.

Measures of oscillatory responses in ADHD are incredibly variable and subject to

intense scrutiny due to the inherent heterogeneity of the disorder. Nevertheless, EEG and ERP

measures related to arousal and attention are potential intermediate phenotypes for ADHD, as

neurophysiological performance of cognitive control, arousal, and response variability serve as

incremental measures of symptom severity and dimension in ADHD (Tye et al., 2012; Nikolas &

Nigg, 2013).

In order to explore the profile of neurological arousal for ASD, several areas of cortical

and neuronal characteristics must be illuminated. ASD involves early brain overgrowth and

dysfunction, primarily in the prefrontal region. Discrete patches of abnormal cytoarchitecture

and disorganization have been found in the prefrontal and temporal cortices of individuals with ASD using

colourimetric RNA in situ hybridization (Stoner et al., 2014). These abnormalities occur in the

brain areas consistent with ASD symptomology, namely social, emotional, and communicative

areas. Reduced intrinsic white and grey matter connections have also been discovered (Ecker et

al., 2013). MSDs (mean separation distances) indicate the average length of neuronal

connections required to wire each vertex to the rest of the cortex. MSDs are reduced in ASD in

several frontoparietal and central regions (Ecker et al., 2013). Severity of MSD reductions also

correlates with severity of autistic traits (Ecker et al., 2013). On a local level, reduced radius

function (length of connection) and enhanced perimeter function (number of connections) has

been reported (Ecker et al., 2013). MSD reduction thus provides support for neural

overconnectivity. A higher density of serotonin 5-HT axons has also been implicated in ASD

(Azmitia et al., 2011). ASD 5-HT axons are often thicker and degenerative, with dystrophic

fibres in the amygdala, temporal lobe, and hippocampus, becoming present as early as twelve

years of age (Azmitia et al., 2011). Reduced volume and processing speed of the corpus

callosum have also been consistently found, which provides insight into interhemispheric

communication in ASD (Alexander et al., 2007; Fiebelkorn et al., 2013). These findings reflect a

number of neural properties across the autism spectrum.

Neuronal characteristics of ASD will be separated into four categories: atypical

frontoparietal attention network, brainstem dysfunction, sensory abnormalities, and oscillatory

responses. A significant piece of evidence towards an atypical attention network is that a 67

percent increase in prefrontal neurons is seen in two- to sixteen-year-olds with ASD (Stoner et al.,

2014). This increase could have serious implications across a wide network. In respect to the

attention network itself, increased activation in the right prefrontal and left inferior parietal is

seen during auditory detection revealed through N2 and P3 deflections in fMRI (Gomot et al.,

2008). Specific areas of heightened activation include: the right superior/middle and inferior

frontal gyri (BA 6, 44), the dorsal premotor cortex (BA 3/4), the left inferior parietal lobule (BA 40),

and left middle frontal gyrus (BA 6) (Gomot et al., 2008). This increased activation accompanies

reduced response time across ASD groups compared with controls.

A consistent finding in ASD cognition is a bias for local over global processing.

Fiebelkorn et al. suggest that this bias originates from enhanced hemispheric specialization and

reduced interhemispheric communication (Fiebelkorn et al., 2013). Evidence of hyper-

selectivity, strong systemizing, and right-hemisphere-weighted, category-specific selective

attention supports this notion (Fiebelkorn et al., 2013; Gomot et al., 2008). Additionally, weak

coherence between frontal and temporal, occipital, and parietal regions has been consistently

demonstrated using fMRI and EEG (Wang et al., 2013; Murias et al., 2007).

Brainstem dysfunction may be a contributing factor to atypical posterior and anterior

attention networks (Cohen et al., 2013). The joint occurrence of very early problems in brainstem

function and arousal regulation of attention is strongly correlated with later diagnosis and

severity of ASD (Cohen et al., 2013). Brainstem dysfunction is often tested through the Auditory

Brainstem Response (ABR). ABR abnormalities correlate significantly with later reports of

ritualism, and provide evidence towards the Arousal-Modulated-Attention (AMA) hypothesis in

ASD (Cohen et al., 2013). AMA is a homeostatic principle in which stimulation is sought when

arousal is low and stimulation is avoided when arousal is high. Activation of this system is

presumably controlled at the brainstem level, and is thus impacted by central nervous system

(CNS) pathology. In essence, a greater CNS pathology leads to a preference for low rates of

stimulation (Cohen et al., 2013). As ASD is highly heritable, and thus has a biological basis,

increased CNS pathology, early brain overgrowth, and decreased AMA may be reliable and

robust markers for ASD. The AMA implications in ASD directly relate to the hypothesis that

base neurological arousal is increased in ASD. Further exploration is needed before true causal

and functional relations can be claimed.

Sensory abnormalities are found in 90 percent of ASD individuals (Baron-Cohen et al.,

2009). General hypersensitivity has been reported in ASD through superior visual, auditory, and

somatosensory perceptual domains (Foss-Feig et al., 2010). Visual studies indicate

hypersensitive vision detection as well as enhanced processing of first order gratings (Baron-

Cohen et al., 2009; Bertone et al., 2003). Auditory results have consistently demonstrated that

ASD individuals have enhanced pitch perception, exceptional absolute pitch processing, superior

auditory discrimination, and superior processing of pitch in speech (Bonnel et al., 2003; Heaton

et al., 2008; Järvinen-Pasley et al., 2008; Mottron et al., 1999; O’Riordan & Passetti, 2006).

Hypersensitivity to vibrotactile stimulation has also been reported, as evidenced by a significantly

decreased suprathreshold (Tommerdahl et al., 2007). Additionally, an extended temporal

window within which individuals with ASD bind multisensory information has been discovered

(Foss-Feig et al., 2010). An extended window of multisensory binding would lead to decreased

coherence of the external world, which is consistent with ASD behaviour. Some theorists suggest

that certain characteristics of ASD attention, such as strong systemizing, can be reduced to

sensory hypersensitivity (Baron-Cohen et al., 2009). The greater picture, however, reflects a

sensory, and therefore perceptual, system that is consistently more active than that of typical controls.

Oscillatory responses measured through resting state EEG activity in ASD reveal

several departures from the norm. As a caveat, ASD is an immensely diverse disorder, and

nearly all studies imply inconsistency in EEG findings. However, in a recent review by Wang et

al., oscillatory abnormalities appeared consistent throughout all developmental stages. The

distribution of band activity is as follows: excessive absolute and relative delta across dorsal-

midline, parietal, right temporal, and frontal regions; excessive absolute and relative theta across frontal

and right parietal regions; reduced absolute and relative alpha across frontal and right parietal regions; excessive

absolute and relative beta; and excessive absolute and relative gamma across occipital, midline,

and parietal regions (Wang et al., 2013). This pattern of activity reveals an inverted-U shape of

oscillatory responses, departing from the U shape found in healthy controls (see Figure 1 in Wang

et al., 2013). Enhanced left hemisphere activity across all bands is also found, which signals a

decrease in the signal-to-noise ratio (Wang et al., 2013). This pattern results in more background

noise during active tasks, which is consistent with sensory hypersensitivity and overactivation in

frontoparietal networks.

Taken together, clear evidence towards the hypothesis of neurological arousal is

presented. Indeed there are inconsistencies and more questions to be asked, but a general pattern

of base neurological arousal can be seen. ADHD is characterized by neuronal hyperexcitability,

cortical hypoactivity, and catecholamine reduction. ASD is characterized by neuronal

overgrowth, sensory hypersensitivity, and increased CNS pathology. As both of these disorders lie

on spectra bordering the typical population, and neurological arousal seems to

correlate heavily with symptom severity, extrapolations and deductions of the spectrum of

neurological arousal may provide incredible and novel insights into the reality of

neurodevelopmental disorders.

Electroencephalography in Music Cognition Research

EEG research has revealed incredible insights across a number of domains. EEG

activity reflects the summation of post-synaptic potentials, primarily from pyramidal cells, in the

cortex. Depolarizations in deep layers with passive returns below appear as positive deflections,

while depolarizations in superficial layers with passive returns above appear as negative

deflections (Trainor, 2012). EEG is characterized by poor spatial acuity but excellent temporal

acuity. A number of logistical advantages of EEG over other neuroimaging techniques are that

EEG is silent, non-invasive, more tolerant to movement, relatively inexpensive, and virtually risk

free.

EEG activity time-locked to an auditory event is termed an Event-Related Potential (ERP). ERPs

can be divided into three main categories: time waveforms, mismatch responses, and oscillatory

responses. Time waveforms are series of positive and negative deflections originating from the

brainstem (<15ms), the auditory cortex (<50ms), and areas beyond (>50ms) (Trainor, 2012).

Typical waveforms that signal perceptual abilities include the N1, P2, P3a, N400, late

discriminative negativity (LDN), and reorienting negativity (RON) (Daltrozzo & Schön, 2009;

Kühnis et al., 2014; Putkinen et al., 2013). Mismatch negativity (MMN) responses are memory-based

detections of auditory change, and have played a particular role in the understanding of auditory

perception, discrimination, and memory (Putkinen et al., 2013). Oscillatory responses are

divided into five bands of electrical activity. Delta (0-4 Hz) waves typically represent deep sleep

and the slow-wave detection of salient stimuli (Knyazev, 2012). Theta (4-8 Hz) waves are involved in memory

processes (Klimesch, 1996). Alpha (8-13 Hz) activity often indexes arousal, and sensory and

cognitive inhibition (Clarke et al., 2013; Klimesch et al., 2007). Beta (13-30 Hz) waves reflect

active task engagement and motor behaviour (Neuper and Pfurtscheller, 2001). Gamma (30-

100 Hz) waves are implicated in a wider top-down network of attention, feature binding, working

memory, sensory responses, and anticipation and expectation (Skinner et al., 2000; Singer &

Gray, 1995; Tallon-Baudry, 2003; Trainor, 2012). Gamma waves can be evoked (phase-locked

to stimulus) or induced (not phase-locked to stimulus). Induced gamma activity is more

reflective of attention and top-down processes (Trainor, 2012). Band activity is broken down into

relative or absolute activity. Relative activity refers to the power of a single band relative to all

bands. Absolute activity refers to the power of an individual band alone. Each of these

parameters reveals a distinct aspect of neurological activity.
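
As a concrete illustration of these two parameters, the sketch below is a minimal Python example under assumed conditions: a single-channel resting-state recording stored as a NumPy array, a hypothetical sampling rate of 256 Hz, and Welch's method as one common way to estimate the power spectrum. Absolute band power is obtained by integrating the spectral density within each band, and relative power by dividing each band by the total across all five bands.

    import numpy as np
    from scipy.signal import welch

    # Conventional band boundaries (Hz), matching the five bands described above.
    BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 100)}

    def band_powers(eeg, fs=256.0):
        """Return absolute and relative power per band for one EEG channel."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4-second windows
        absolute = {}
        for name, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            absolute[name] = np.trapz(psd[mask], freqs[mask])  # integrate PSD over band
        total = sum(absolute.values())
        relative = {name: p / total for name, p in absolute.items()}
        return absolute, relative

    # Example with 60 s of simulated resting-state data.
    eeg = np.random.randn(int(60 * 256))
    absolute_power, relative_power = band_powers(eeg)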

Current findings of EEG activity in research involving music will be separated

according to specific neural systems. These systems include: language, emotion, preference,

motor, and auditory discrimination and expertise.

There has been significant growth in the body of literature surrounding music and

language. Several brain areas co-activate in these communicative actions, and the main EEG

finding in relation to language and music processing is the N400 effect. One second of musical

exposure is enough to elicit the N400 effect, which is the modulation of a negative ERP

component peaking at 400 ms post-stimulus with a centro-parietal distribution (Daltrozzo &

Schön, 2009). N400 activity signals the communication of a concept (Daltrozzo & Schön,

2009). Musical and language concepts originate from countless aspects of human life and are

communicated through various means, whether concrete or abstract. N400 activity varies only

slightly from music to language.

The majority of music and arousal literature originates from studies on emotion.

Emotion readings are communicated through a 2D plane of valence (mood) and arousal

(intensity). The plane consists of four areas of space: intense-unpleasant (e.g. fear/anger), intense-

pleasant (e.g. joy), calm-pleasant (e.g. happiness), and calm-unpleasant (e.g. sadness) (Schmidt &

Trainor, 2001). Music cognition research has revealed that asymmetrical frontal activation

distinguishes the valence of musical emotions, with the left frontal responding to positive

emotions and the right frontal responding to negative emotions (Schmidt & Trainor, 2001).

Overall frontal activation has been found to distinguish arousal in musical emotions, with greater

activation signalling greater intensity (Schmidt & Trainor, 2001). These two pieces of evidence

have significant support and point to obvious implications in emotion research.
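
To make the affective plane and the frontal findings more concrete, the sketch below is an illustrative Python fragment rather than the exact procedure of Schmidt and Trainor: the quadrant labels follow the four regions named above, and the log-alpha asymmetry index is a common convention in which lower alpha is taken to indicate greater activation, so that a positive index reflects relatively greater left-frontal activation.

    import numpy as np

    def quadrant(valence, arousal):
        """Map ratings on [-1, 1] scales onto the four regions of affective space."""
        if arousal >= 0:
            return "intense-pleasant (e.g. joy)" if valence >= 0 else "intense-unpleasant (e.g. fear/anger)"
        return "calm-pleasant (e.g. happiness)" if valence >= 0 else "calm-unpleasant (e.g. sadness)"

    def frontal_alpha_asymmetry(alpha_left, alpha_right):
        """ln(right alpha) - ln(left alpha); positive values indicate relatively
        greater left-frontal activation, associated with positive valence."""
        return np.log(alpha_right) - np.log(alpha_left)

    print(quadrant(valence=0.7, arousal=0.8))        # -> intense-pleasant (e.g. joy)
    print(frontal_alpha_asymmetry(4.2, 5.1))         # > 0: left-dominant activation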

Studies on musical preference have shown EEG activity that coincides with arousal

changes and motor links. During preferred music, enhanced beta activity and de-synchronization

in lower alpha followed by synchronization in upper alpha are found (Holler et al., 2012). The level

of deflections is incredibly variable, which reflects the diversity of music preferences among the

population.

Using magnetoencephalography (MEG), amazing neural links between rhythm

perception and the motor system have been discovered. Researchers have found periodic beta

modulations of signal power increase and decrease that sync and de-sync according to the

periodicity of the stimulus presented (Fujioka et al., 2012). De-synchronization signals motor

change while re-synchronization signals the maintenance of motor control. In essence, beta

frequency neural oscillations entrain to the beat of a stimulus, allowing for rhythm-correlated

movement. Temporally correlated beta modulations are found in the auditory, motor-related, and

subcortical areas even in the absence of movement (Fujioka et al., 2012). This finding provides

significant insights into the nature of rhythm perception and production, and the overarching

influence music can have on neural behaviour.
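
A minimal sketch of this kind of analysis is given below, under stated assumptions rather than the actual pipeline of Fujioka et al.: a single sensor channel, a known sampling rate, a list of isochronous tone-onset times, and an illustrative fourth-order Butterworth filter. The signal is band-pass filtered to the beta range, its amplitude envelope is extracted with the Hilbert transform, and the envelope is averaged across inter-onset intervals to expose any periodic decrease and rebound.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def beta_envelope(signal, fs):
        """Band-pass to beta (13-30 Hz) and return the amplitude envelope."""
        b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, signal)))

    def onset_locked_average(envelope, onsets_s, fs, interval_s):
        """Average the beta envelope across windows locked to each tone onset."""
        n = int(interval_s * fs)
        epochs = [envelope[int(t * fs): int(t * fs) + n]
                  for t in onsets_s if int(t * fs) + n <= len(envelope)]
        return np.mean(epochs, axis=0)   # periodic modulation, if present

    # Example with simulated data and a 0.5-s inter-onset interval (120 BPM).
    fs = 500.0
    sig = np.random.randn(int(30 * fs))
    onsets = np.arange(0.0, 29.0, 0.5)
    modulation = onset_locked_average(beta_envelope(sig, fs), onsets, fs, interval_s=0.5)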

The final EEG findings in music cognition research that will be presented are those in

respect to auditory discrimination and musical-expertise-related changes. N1 and P2 responses

are reliable and robust markers for electrical activity originating at primary and secondary

sensory areas (Kühnis et al., 2014). For this reason, they are often used as markers for auditory

perceptual ability. N1 is said to represent the encoding of acoustic features with P2 following

with evaluative and classification action. N1/P2 peaks are impacted by a number of factors, two

of which are expertise and sound intensity. Musical experts have a reduced N1 response and an

enhanced P2 response (Kühnis et al., 2014). This finding is intuitive, as expertise would allow

one to encode acoustic features more efficiently while evaluation and classification would be

more in depth with vast knowledge. N1/P2 peaks are also dependent on sound intensity, as peaks

differ significantly across stimuli at 40dB, 60dB, and 80dB during passive listening (Ott et al.,

2013). Interestingly, during an active task however, the difference between peaks in 40dB and

60dB conditions disappears (Ott et al., 2013). This finding could represent an attention-induced

convergence, as sounds above 60dB are encoded in a different manner than those below, and this

featured encoding is more consistent with spreading activation (Ott et al., 2013). P3a, LDN, and

RON, are other waveforms often studied in respect to attention and auditory discrimination. P3a

represents an attention shift towards a surprising auditory event (Putkinen et al., 2013). LDN has

been implicated with multiple functional roles, and RON signals the reorienting of attention after

a distracting auditory event (Putkinen et al., 2013). These aspects of EEG activity prove to be

clear indices of auditory perception.
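
As an illustration of how such waveforms are typically derived, the following sketch (assumptions: a continuous single-channel EEG array, stimulus-onset sample indices, and conventional latency windows of roughly 80-120 ms for N1 and 150-250 ms for P2, which are not taken from the studies cited above) averages baseline-corrected epochs and reads the N1 and P2 peaks off the average.

    import numpy as np

    def erp_average(eeg, onsets, fs, pre_s=0.1, post_s=0.4):
        """Baseline-corrected average of stimulus-locked epochs."""
        pre, post = int(pre_s * fs), int(post_s * fs)
        epochs = []
        for i in onsets:
            seg = eeg[i - pre: i + post]
            if len(seg) == pre + post:
                epochs.append(seg - seg[:pre].mean())   # subtract pre-stimulus baseline
        return np.mean(epochs, axis=0)

    def n1_p2_peaks(erp, fs, pre_s=0.1):
        """N1 = most negative value in ~80-120 ms; P2 = most positive in ~150-250 ms."""
        t = np.arange(len(erp)) / fs - pre_s
        n1 = erp[(t >= 0.08) & (t <= 0.12)].min()
        p2 = erp[(t >= 0.15) & (t <= 0.25)].max()
        return n1, p2

    # Example with simulated data: one stimulus every two seconds.
    fs = 500.0
    eeg = np.random.randn(int(120 * fs))
    onsets = np.arange(int(1 * fs), int(115 * fs), int(2 * fs))
    n1, p2 = n1_p2_peaks(erp_average(eeg, onsets, fs), fs)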

ERP activity measured through time waveforms, mismatch negativity, and oscillatory

responses all contribute to a greater understanding of an individual’s neural scene. EEG findings

in music cognition related to language conceptualization, emotional categorization, motor

entrainment, and perceptual ability reflect significant advances in perceptual neuroscience. As

resting state oscillatory responses appear to be the best indicator of spontaneous cortical activity,

this measure will be of particular importance when exploring the effects of music on

neurological arousal.

Music Sample Classification

Selecting musical samples to be used in music cognition research is a complicated task.

Musical samples in investigations into neurological arousal optimization effects will be defined

under levels of activity. In order to create this definition, present methods of music sample

classification must be reviewed. The following discussion will begin by addressing methods

used in experiments on musical preference, in which genre classification is necessary. Following,

as the most frequent method of sample classification is under the 2D emotional plane of valence

and arousal, musical characteristics that influence self-reports and physiological ratings of

emotion will be examined.

In research on musical preference among the population, genre classification has

traditionally been done through questionnaires. Rentfrow and Gosling developed the Short Test

of Music Preferences (STOMP) in 2003, four years after Sikkema created the Music

Preference Questionnaire (MPQ) (Ferrer et al., 2012). These surveys separate musical preference

into 10-15 genres, and use a Likert scale to gauge individuals' preference for each. However,

different individuals define genres differently, and liking one song in a particular genre does not

mean one has a preference for the entire subset, so considerable confounds emerge (Lee et al.,

2009).

Recent advances in computer technology have allowed for more accurate and efficient

genre classification. Two main methods are presented. (1) Individuals can be given a list of

genres and then asked to identify particular artists or songs that occupy that genre. The list of

artists and songs generated is thus comprised of “free responses” (Ferrer et al., 2012).

Participants then rate liking for specific artists and songs, which then yields particular genre

ratings. Social tags have been found to exist as semantic extensions of genres, and thus also

provide helpful filters and improve accuracy in free response categorization (Ferrer et al., 2012).

(2) As people’s understanding of genre is enculturated, researchers have begun to develop

machine-based classification by reducing different genres to their mean musical

characteristics. Researchers group samples based on: (1) short-term features (timbral

characteristics), (2) long-term features (temporal evolution), and (3) semantic features (tempo,

rhythm, pitch, etc) (Lee et al., 2009). Once samples are classified under these guidelines,

participants rate the excerpts chosen.
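
A rough sketch of the three feature families follows. It is illustrative only, built on NumPy rather than a dedicated audio library, and the window length, hop size, and autocorrelation-based tempo estimate are assumptions: frame-wise spectral centroids stand in for short-term timbral features, their variance over time stands in for long-term temporal evolution, and a crude tempo estimate serves as a semantic feature.

    import numpy as np

    def spectral_centroids(audio, sr, win=2048, hop=512):
        """Short-term (timbral) feature: spectral centroid of each analysis frame."""
        freqs = np.fft.rfftfreq(win, d=1.0 / sr)
        cents = []
        for start in range(0, len(audio) - win, hop):
            spec = np.abs(np.fft.rfft(audio[start:start + win]))
            cents.append((freqs * spec).sum() / (spec.sum() + 1e-12))
        return np.array(cents)

    def crude_tempo(audio, sr, hop=512):
        """Semantic feature: tempo (BPM) from the autocorrelation of an onset envelope."""
        frames = np.abs(audio[:len(audio) // hop * hop].reshape(-1, hop)).mean(axis=1)
        env = np.maximum(np.diff(frames), 0)            # energy increases act as onsets
        ac = np.correlate(env, env, mode="full")[len(env) - 1:]
        frame_rate = sr / hop
        lags = np.arange(1, len(ac))
        valid = (lags / frame_rate > 0.25) & (lags / frame_rate < 2.0)   # 30-240 BPM
        best_lag = lags[valid][np.argmax(ac[1:][valid])]
        return 60.0 * frame_rate / best_lag

    audio, sr = np.random.randn(22050 * 30), 22050      # placeholder 30-s excerpt
    features = {
        "timbral_mean_centroid": spectral_centroids(audio, sr).mean(),      # short-term
        "temporal_centroid_variance": spectral_centroids(audio, sr).var(),  # long-term
        "tempo_bpm": crude_tempo(audio, sr),                                # semantic
    }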

The most frequently used tool for sample classification is the 2D valence-arousal plane

of emotion. When samples are classified in this plane, excerpts aim to evoke fear and anger,

happiness and contentment, sadness and discomfort, joy and pleasure. The general method of

determining which samples will fill each quadrant of space is to have the experimenter choose a

variety of samples and then conduct pre-experimental surveys. Population ratings are then used to

confirm placement in 2D space. One slightly altered method asked participants to rate musical

excerpts given adjective pairs such as active-inactive, fast-slow, and warm-cold (Yamasaki et al.,

2013). Ratings from the adjective pairs were then separated into three categories: activation

(arousal), valence (mood), and potency (power) (Yamasaki et al., 2013). Samples were then

chosen through these categories, with final sample selection consisting of excerpts with high

power, positive valence, and either high or low activation (Yamasaki et al., 2013).
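
A simple sketch of that selection procedure is shown below; the adjective-to-dimension mapping, the 7-point scale, and the thresholds are hypothetical placeholders rather than Yamasaki et al.'s exact items. Ratings on bipolar adjective pairs are averaged into activation, valence, and potency scores, and an excerpt is retained only if it is high in potency and positive in valence, after which it is labelled as high or low in activation.

    import numpy as np

    # Hypothetical mapping of adjective pairs onto the three rating categories.
    DIMENSIONS = {
        "activation": ["active-inactive", "fast-slow"],
        "valence":    ["pleasant-unpleasant", "warm-cold"],
        "potency":    ["strong-weak", "powerful-powerless"],
    }

    def dimension_scores(ratings):
        """Average 7-point adjective-pair ratings (dict of pair -> score) per dimension."""
        return {dim: float(np.mean([ratings[p] for p in pairs]))
                for dim, pairs in DIMENSIONS.items()}

    def select_excerpt(ratings, potency_min=5.0, valence_min=4.5):
        """Keep excerpts high in potency and positive in valence; label activation level."""
        s = dimension_scores(ratings)
        if s["potency"] < potency_min or s["valence"] < valence_min:
            return None                                  # excerpt excluded
        return "high activation" if s["activation"] >= 4.0 else "low activation"

    example = {"active-inactive": 6, "fast-slow": 6, "pleasant-unpleasant": 6,
               "warm-cold": 5, "strong-weak": 6, "powerful-powerless": 5}
    print(select_excerpt(example))                       # -> high activation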

Recent studies have begun to determine the musical characteristics responsible for

emotional induction (Gomez & Danuser, 2007). Tempo and mode are two musical qualia

associated with valence and arousal (Hunter et al., 2011; Husain et al., 2002). Younger children,

before emotional maturation and enculturation, provide emotion ratings that are driven by tempo

(i.e. faster tempo equals more aroused) (Hunter et al., 2011). However, by the age of 11, when

emotional maturation and enculturation become evident, mode begins to drive emotion ratings

(i.e. major mode equals more positive valence) (Hunter et al., 2011). These findings are supported by

evidence that tempo is the focal cue used to determine emotion when listening to foreign music

(Balkwill et al., 2004). More in-depth musical characteristics have also been implicated in the

valence-arousal plane. Mode, rhythmic articulation, and harmonic complexity have the greatest

influence on valence (Gomez & Danuser, 2007). Rhythmic articulation, tempo, accentuation, and

sound intensity have the greatest influence on arousal (Gomez & Danuser, 2007). Melodic

direction and pitch have the least clear association; however, a recent hypothesis of Dimensional

Salience claims that pitch patterns are more salient than temporal patterns, which leads to a greater

mental representation (Gomez & Danuser, 2007; Prince & Pfordresher, 2012). Further research

in this area is needed to illuminate true relations.

Within the emotional framework, physiological ratings such as skin conductance level,

heart rate, and oxygen intake, are used to confirm or deny affective space. Musical qualia that

influence physiological measures include: tempo, rhythm, accentuation, rhythmic articulation,

pitch level, pitch range, melodic direction, mode, harmonic complexity, consonance, and sound

level intensity (Gomez & Danuser, 2007). In particular, increased tempo, intensity, and

accentuation increase SCL, and increased tempo is associated with increased heart rate (Gomez

& Danuser, 2007). These findings appear consistent with arousal evidence.
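
A minimal sketch of how such relationships can be checked is given below; the per-excerpt values are hypothetical, and Pearson correlation is used here as one common choice rather than necessarily the analysis of Gomez and Danuser. Tempo and sound-level intensity for a small set of excerpts are correlated with the mean SCL each excerpt elicits, with positive coefficients expected on the account given above.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-excerpt values: tempo (BPM), intensity (dB SPL), mean SCL (microsiemens).
    tempo     = np.array([60, 80, 100, 120, 140, 160])
    intensity = np.array([55, 60, 65, 70, 75, 80])
    scl       = np.array([4.1, 4.3, 4.8, 5.0, 5.6, 5.9])

    r_tempo, p_tempo = pearsonr(tempo, scl)              # expected positive
    r_intensity, p_intensity = pearsonr(intensity, scl)
    print(f"tempo vs SCL: r = {r_tempo:.2f} (p = {p_tempo:.3f})")
    print(f"intensity vs SCL: r = {r_intensity:.2f} (p = {p_intensity:.3f})")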

In order to classify music samples under varying levels of activity, all previous methods

must be considered. Given the presented information, some preliminary thoughts: (1) all samples

will go through pre-experimental surveys to confirm the hypotheses made, (2) music samples high in

activity will have increased sound level intensity, increased tempo, and will induce positive

valence through mode, (3) music samples low in activity will have decreased sound level

intensity, decreased tempo, and will also induce positive valence through mode. There is

something to be said about the power of positivity, and perhaps measures of neurological arousal

will become more potent if music complements the human bias for happiness.
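
Translating those preliminary criteria into a first-pass labelling rule might look like the sketch below; the tempo and intensity thresholds are placeholders to be confirmed or revised by the pre-experimental surveys, not established cut-offs.

    def activity_label(tempo_bpm, intensity_db, mode):
        """Preliminary rule: high activity = fast and loud, low activity = slow and soft,
        with both classes restricted to major-mode excerpts to hold valence positive."""
        if mode != "major":                     # positive valence induced through mode
            return None                         # excluded pending survey confirmation
        if tempo_bpm >= 120 and intensity_db >= 70:
            return "high activity"
        if tempo_bpm <= 80 and intensity_db <= 60:
            return "low activity"
        return None                             # ambiguous; defer to survey ratings

    print(activity_label(132, 75, "major"))     # -> high activity
    print(activity_label(66, 55, "major"))      # -> low activity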

Conclusion

Recent theories of rhythm and tonal perception lend support to the idea that music

contributes to overall neurological activity. Flaig and Large suggest that music speaks to the

brain in its own language (Flaig & Large, 2014). They point out that all conscious processes are

made up of dynamic neural events, and that intentional thought and affective experience arise

out of these dynamics as they move in synchrony. Their theory has two premises: (1)

neurodynamic synchrony with music gives rise to musical qualia such as tonal and temporal

expectancies, and (2) music synchronous responses couple into core neurodynamics enabling

music to directly modulate core affect (Flaig & Large, 2014). In essence, music taps into brain

dynamics at the right time scales to cause both brain and body to resonate with patterns (Flaig &

Large, 2014). A physiological hypothesis for the integration of rhythm and tone accents this

thinking. Musacchia et al. believe that tone and rhythm integration is a result of thalamocortical

matrix projections that modulate the phase of ongoing cortical oscillatory rhythms (Musacchia et

al., 2014). As oscillatory phase translates to an excitability state in neuronal assemblies, many

findings of increased neurological coherence under music could be explained (Musacchia et al.,

2014). This idea also supports enhanced plasticity, and direct music involvement in the motor

and language systems. In company with these two ideas, Large and Almonte offer a theory of

tonal perception in which the auditory brainstem begins a series of linear and non-linear neural

relations. In the central auditory system, time locked neural activity is carried onward by active

oscillatory circuits (Large & Almonte, 2012). These circuits give rise to non-linear

spectrotemporal receptive fields (STRFs) and auditory brainstem responses (ABRs) (Large &

Almonte, 2012). These responses then establish mode-locked dynamics that exhibit properties

of attraction and stability, which give rise to the perception of tonality (Large & Almonte,

2012). Together, these theories detail the overarching power of music to impact one's state of

neurological activity.

Neurological arousal, as a base state inherent to every human being, is responsible for

individual perception, cognition, and behaviour. As music is a stimulus whose influence begins

at the central nervous system, it is extremely plausible that music could influence one’s state of

neurological arousal. The present review has revealed that (a) music influences neural activity

across a diverse network, (b) distinct neurological arousal profiles can be seen in ADHD and

ASD, (c) resting-state EEG oscillatory responses provide the best measure of spontaneous

cognitive activity, (d) certain musical characteristics consistently induce particular responses in

sample classification, and (e) theories of rhythm and tonal perception lend support for music-

influenced core neurodynamics. Nearly all evidence supports the initial hypothesis. Whether

music can be used as a functional tool for the optimization of neural activity, resulting in enhanced

cognitive function, is still to be explored.

Works Cited

Alexander, A. L., Lee, J. E., Lazar, M., Boudos, R., DuBray, M. B., Oakes, T. R., … Lainhart, J.

E. (2007). Diffusion tensor imaging of the corpus callosum in Autism. NeuroImage,

34(1), 61–73.

Arnsten, A. (2009). Toward a new understanding of attention-deficit hyperactivity disorder

pathophysiology: an important role for prefrontal cortex dysfunction. CNS Drugs, 33–41.

Azmitia, E., Singh, J., Hou, X., & Wegiel, J. (2011). Dystrophic serotonin axons in postmortem

brains from young autism patients. Anatomical Record. 294(10), 1653–62.

Balkwill, L.-L., Thompson, W. F., & Matsunaga, R. (2004). Recognition of emotion in Japanese,

Western, and Hindustani music by Japanese listeners. Japanese Psychological

Research, 46(4), 337–349.

Banich, M., Burgess, G., Depue, B., Ruzic, L., Bidwell, L., Hitt-Laustsen, S., … Willcutt, E.

(2009). The neural basis of sustained and transient attentional control in young adults

with ADHD. Neuropsychologia, 47(14), 3095–104.

Baron-Cohen, S., Ashwin, E., Ashwin, C., Tavassoli, T., & Chakrabarti, B. (2009). Talent in

autism: hyper-systemizing, hyper-attention to detail and sensory hypersensitivity.

Philosophical Transactions of the Royal Society of London - Series B: Biological

Sciences, 364(1522), 1377–83.

Barry, R. J., Clarke, A. R., Hajos, M., McCarthy, R., Selikowitz, M., & Dupuy, F. E. (2010).

Resting-state EEG gamma activity in children with attention-deficit/hyperactivity

disorder. Clinical Neurophysiology: Official Journal of the International Federation of

Clinical Neurophysiology, 121(11), 1871–1877.

Bertone, A., Mottron, L., Jelenic, P., & Faubert, J. (2003). Motion perception in autism: a

“complex” issue. Journal of Cognitive Neuroscience, 15(2), 218–225.

Bonnel, A., Mottron, L., Peretz, I., Trudel, M., Gallun, E., & Bonnel, A.-M. (2003). Enhanced

pitch sensitivity in individuals with autism: a signal detection analysis. Journal of

Cognitive Neuroscience, 15(2), 226–235.

Brown, S., Martinez, M. J., & Parsons, L. M. (2006). Music and language side by side in the

brain: a PET study of the generation of melodies and sentences. European Journal of

Neuroscience, 23(10), 2791–2803.

Castellanos, F., Kelly, C., & Milham, M. (2009). The restless brain: attention-deficit

hyperactivity disorder, resting-state functional connectivity, and intrasubject variability.

Canadian Journal of Psychiatry - Revue Canadienne de Psychiatrie, 54(10), 665–72.

Chanda, M., & Levitin, D. (2013). The neurochemistry of music. Trends in Cognitive Sciences,

17(4), 179–93.

Chen, D. G., Huang, Y. F., Zhang F. Y., Qi G. P. (1994). Influence of prenatal music and touch

enrichment on the IQ, motor development, and behavior of infants. Clinical Journal of

Psychology. 8, 148–51.

Chen, J. L., Penhune, V. B., & Zatorre, R. J. (2008). Listening to musical rhythms recruits motor

regions of the brain. Cerebral Cortex (New York, N.Y.: 1991), 18(12), 2844–2854.

Clarke, A., Barry, R., Dupuy, F., McCarthy, R., Selikowitz, M., & Johnstone, S. (2013). Excess

beta activity in the EEG of children with attention-deficit/hyperactivity disorder: a

disorder of arousal? Journal of Psychophysiology, 89(3), 314–9.

Cohen, I., Gardner, J., Karmel, B., Phan, H., Kittler, P., Gomez, T., … Barone, A. (2013).

Neonatal brainstem function and 4-month arousal-modulated attention are jointly

associated with autism. Autism Research, 6(1),

11–22.

Daltrozzo, J., & Schön, D. (2009). Conceptual Processing in Music as Revealed by N400 Effects

on Words and Musical Targets. Journal of Cognitive Neuroscience, 21(10), 1882–1892.

Ecker, C., Ronan, L., Feng, Y., Daly, E., Murphy, C., Ginestet, C., … Murphy, D. (2013).

Intrinsic gray-matter connectivity of the brain in adults with autism spectrum disorder.

Proceedings of the National Academy of Sciences of the United States of America,

110(32), 13222–7.

Ellis, R. J., & Thayer, J. F. (2010). Music and Autonomic Nervous System (Dys)function. Music

Perception, 27(4), 317–326.

Faraone, S. V., Perlis, R. H., Doyle, A. E., Smoller, J. W., Goralnick, J. J., Holmgren, M. A., &

Sklar, P. (2005). Molecular genetics of attention-deficit/hyperactivity disorder. Biological

Psychiatry, 57(11), 1313–1323.

Ferrer, R., Eerola, T., & Vuoskoski, J. K. (2013). Enhancing genre-based measures of music

preference by user-defined liking and social tags. Psychology of Music, 41(4), 499–518.

Ferreri, L., Bigand, E., Perrey, S., Muthalib, M., Bard, P., & Bugaiska, A. (2014). Less Effort,

Better Results: How Does Music Act on Prefrontal Cortex in Older Adults during Verbal

Encoding? An fNIRS Study. Frontiers in Human Neuroscience, 8.

Fiebelkorn, I., Foxe, J., McCourt, M., Dumas, K., & Molholm, S. (2013). Atypical category

processing and hemispheric asymmetries in high-functioning children with autism:

revealed through high-density EEG mapping. Cortex, 49(5), 1259–67.

Flaig, N. K., & Large, E. W. (2014). Dynamic musical communication of core affect. Frontiers

in Psychology, 5, 72.

Flores-Gutiérrez, E. O., Díaz, J.-L., Barrios, F. A., Favila-Humara, R., Guevara, M. Á., del Río-

Portilla, Y., & Corsi-Cabrera, M. (2007). Metabolic and electric brain patterns during

pleasant and unpleasant emotions induced by music masterpieces. International Journal

of Psychophysiology, 65(1), 69–84.

Fonseca, L., Tedrus, G., Bianchini, M., & Silva, T. (2013). Electroencephalographic alpha

reactivity on opening the eyes in children with attention-deficit hyperactivity disorder.

Journal of the EEG, 44(1), 53–7.

Foss-Feig, J., Kwakye, L., Cascio, C., Burnette, C., Kadivar, H., Stone, W., & Wallace, M.

(2010). An extended multisensory temporal binding window in autism spectrum

disorders. Experimental Brain Research, 203(2), 381–9.

Fujioka, T., Trainor, L. J., Large, E. W., & Ross, B. (2012). Internalized timing of isochronous

sounds is represented in neuromagnetic β oscillations. The Journal of Neuroscience: The

Official Journal of the Society for Neuroscience, 32(5), 1791–1802.

Gomez, P., & Danuser, B. (2007). Relationships between musical structure and

psychophysiological measures of emotion. Emotion, 7(2), 377–387.

Gomot, M., Belmonte, M., Bullmore, E., Bernard, F., & Baron-Cohen, S. (2008). Brain hyper-

reactivity to auditory novel targets in children with high-functioning autism. Brain,

2479–88.

Heaton, P., Davis, R. E., & Happé, F. G. E. (2008). Research note: exceptional absolute pitch

perception for spoken words in an able adult with autism. Neuropsychologia, 46(7),

2095–2098.

Holler, Y., Thomschewski, A., Schmid, E., Holler, P., Crone, J., & Trinka, E. (2012). Individual

brain-frequency responses to self-selected music. Journal of Psychophysiology, 86(3),

206–213.

Hunter, P. G., Glenn Schellenberg, E., & Stalinski, S. M. (2011). Liking and identifying

emotionally expressive music: age and gender differences. Journal of Experimental Child

Psychology, 110(1), 80–93.

Husain, G., Thompson, W. F., & Schellenberg, E. G. (2002). Effects of Musical Tempo and

Mode on Arousal, Mood, and Spatial Abilities. Music Perception, 20(2), 151–171.

Järvinen-Pasley, A., Wallace, G. L., Ramus, F., Happé, F., & Heaton, P. (2008). Enhanced

perceptual processing of speech in autism. Developmental Science, 11(1), 109–121.

Kim, H., Lee, M.-H., Chang, H.-K., Lee, T.-H., Lee, H.-H., Shin, M.-C., … Kim, C.-J. (2006).

Influence of prenatal noise and music on the spatial memory and neurogenesis in the

hippocampus of developing rats. Brain and Development, 28(2), 109–114.

Klimesch, W. (1996). Memory processes, brain oscillations and EEG synchronization.

International Journal of Psychophysiology: Official Journal of the International

Organization of Psychophysiology, 24(1-2), 61–100.

Klimesch, W., Sauseng, P., & Hanslmayr, S. (2007). EEG alpha oscillations: the inhibition-

timing hypothesis. Brain Research Reviews, 53(1), 63–88.

Knyazev, G. G. (2012). EEG delta oscillations as a correlate of basic homeostatic and

motivational processes. Neuroscience and Biobehavioral Reviews, 36(1), 677–695.

Kragel, P., & LaBar, K. (2013). Multivariate pattern classification reveals autonomic and

experiential representations of discrete emotions. Emotion, 13(4), 681–90.

Kühnis, J., Elmer, S., & Jäncke, L. (2014). Auditory Evoked Responses in Musicians during

Passive Vowel Listening Are Modulated by Functional Connectivity between Bilateral

Auditory-related Brain Regions. Journal of Cognitive Neuroscience, 1–12.

Large, E. W., & Almonte, F. V. (2012). Neurodynamics, tonality, and the auditory brainstem

response. Annals of the New York Academy of Sciences, 1252, E1–7.

Lee, C.-H., Shih, J.-L., Yu, K.-M., & Lin, H.-S. (2009). Automatic Music Genre Classification

Based on Modulation Spectral Analysis of Spectral and Cepstral Features. IEEE

Transactions on Multimedia, 11(4), 670–682.

Lenartowicz, A., Delorme, A., Walshaw, P., Cho, A., Bilder, R., McGough, J., … Loo, S.

(2014). Electroencephalography correlates of spatial working memory deficits in

attention-deficit/hyperactivity disorder: vigilance, encoding, and maintenance. Journal of

Neuroscience, 34(4), 1171–82.

Liechti, M., Valko, L., Muller, U., Dohnert, M., Drechsler, R., Steinhausen, H., & Brandeis, D.

(2013). Diagnostic value of resting electroencephalogram in attention-

deficit/hyperactivity disorder across the lifespan. Brain Topography, 26(1), 135–51.

Lind, J. (1980). Music and the small human being. Acta Paediatrica Scandinavica, 69(2), 131–

136.

Makris, N., Buka, S. L., Biederman, J., Papadimitriou, G. M., Hodge, S. M., Valera, E. M., …

Seidman, L. J. (2008). Attention and executive systems abnormalities in adults with

childhood ADHD: A DT-MRI study of connections. Cerebral Cortex. 18(5), 1210–1220.

Moore, E., Schaefer, R. S., Bastin, M. E., Roberts, N., & Overy, K. (2014). Can Musical

Training Influence Brain Connectivity? Evidence from Diffusion Tensor MRI. Brain

Sciences, 4(2), 405–427.

Moreno, S., & Bidelman, G. M. (2014). Examining neural plasticity and cognitive benefit

through the unique lens of musical training. Hearing Research, 308, 84–97.

Moreno, S., Bialystok, E., Barac, R., Schellenberg, E. G., Cepeda, N. J., & Chau, T. (2011).

Short-term music training enhances verbal intelligence and executive function.

Psychological Science, 22(11), 1425–1433.

Mottron, L., Burack, J. A., Stauder, J. E., & Robaey, P. (1999). Perceptual processing among

high-functioning persons with autism. Journal of Child Psychology and Psychiatry, and

Allied Disciplines, 40(2), 203–211.

Murias, M., Webb, S., Greenson, J., & Dawson, G. (2007). Resting state cortical connectivity

reflected in EEG coherence in individuals with autism. Biological Psychiatry, 62(3),

270–3.

Musacchia, G., Large, E. W., & Schroeder, C. E. (2014). Thalamocortical mechanisms for

integrating musical tone and rhythm. Hearing Research, 308, 50–59.

Nikolas, M., & Nigg, J. (2013). Neuropsychological performance and attention-deficit

hyperactivity disorder subtypes and symptom dimensions. Neuropsychology, 27(1), 107–

20.

Neuper, C., & Pfurtscheller, G. (2001). Event-related dynamics of cortical rhythms: frequency-

specific features and functional correlates. International Journal of Psychophysiology:

Official Journal of the International Organization of Psychophysiology, 43(1), 41–58.

O’Riordan, M., & Passetti, F. (2006). Discrimination in autism within different sensory

modalities. Journal of Autism and Developmental Disorders, 36(5), 665–675.

Ott, C., Stier, C., Herrmann, C., & Jancke, L. (2013). Musical expertise affects attention as

reflected by auditory-evoked gamma-band activity in human EEG. Neuroreport, 24(9),

445–50.

Prince, J., & Pfordresher, P. (2012). The role of pitch and temporal diversity in the perception

and production of musical sequences. Acta Psychologica, 141(2), 184–98.

Putkinen, V., Tervaniemi, M., & Huotilainen, M. (2013). Informal musical activities are linked

to auditory discrimination and attention in 2-3-year-old children: an event-related

potential study. European Journal of Neuroscience, 37(4), 654–61.

Rickard, N. S. (2004). Intense emotional responses to music: a test of the physiological arousal

hypothesis. Psychology of Music, 32(4), 371–388.

Sanyal, T., Kumar, V., Nag, T., Jain, S., Sreenivas, V., & Wadhwa, S. (2013). Prenatal loud

music and noise: differential impact on physiological arousal, hippocampal

synaptogenesis and spatial behavior in one day-old chicks. PLoS ONE, 8(7).

Schmidt, L. A., & Trainor, L. J. (2001). Frontal brain electrical activity (EEG) distinguishes

valence and intensity of musical emotions. Cognition & Emotion, 15(4), 487–500.

Shaw, P., Eckstrand, K., Sharp, W., Blumenthal, J., Lerch, J. P., Greenstein, D., … Rapoport, J.

L. (2007). Attention-deficit/hyperactivity disorder is characterized by a delay in cortical

maturation. Proceedings of the National Academy of Sciences of the United States of

America, 104(49), 19649–19654.

Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation

hypothesis. Annual Review of Neuroscience, 18, 555–586.

Skinner, J. E., Molnar, M., & Kowalik, Z. J. (2000). The role of the thalamic reticular neurons in

alpha- and gamma-oscillations in neocortex: a mechanism for selective perception and

stimulus binding. Acta Neurobiologiae Experimentalis, 60(1), 123–142.

Stoner, R., Chow, M. L., Boyle, M. P., Sunkin, S. M., Mouton, P. R., Roy, S., … Courchesne, E.

(2014). Patches of disorganization in the neocortex of children with autism. The New

England Journal of Medicine, 370(13), 1209–1219.

Tallon-Baudry, C. (2003). Oscillatory synchrony and human visual cognition. Journal of

Physiology, Paris, 97(2-3), 355–363.

Tommerdahl, M., Tannan, V., Cascio, C. J., Baranek, G. T., & Whitsel, B. L. (2007).

Vibrotactile adaptation fails to enhance spatial localization in adults with autism. Brain

Research, 1154, 116–123.

Trainor, L. (2012). Musical experience, plasticity, and maturation: issues in measuring

developmental change using EEG and MEG. Annals of the New York Academy of

Sciences, 25–36.

Tye, C., McLoughlin, G., Kuntsi, J., & Asherson, P. (2011). Electrophysiological markers of

genetic risk for attention deficit hyperactivity disorder. Expert Reviews in Molecular

Medicine.

Wang, J., Barstein, J., Ethridge, L. E., Mosconi, M. W., Takarae, Y., & Sweeney, J. A. (2013).

Resting state EEG abnormalities in autism spectrum disorders. Journal of

Neurodevelopmental Disorders, 5(1), 1–14.

Yamasaki, T., Yamada, K., & Laukka, P. (2013). Viewing the world through the prism of

music: Effects of music on perceptions of the environment. Psychology of Music,

doi:10.1177/0305735613493954.

Zuk, J., Benjamin, C., Kenyon, A., & Gaab, N. (2014). Behavioral and Neural Correlates of

Executive Functioning in Musicians and Non-Musicians. PLoS ONE, 9(6), e99868.