Pharmacological modulation of subliminal learning in Parkinson's and Tourette's syndromes


Stefano Palminteri a,b,c, Maël Lebreton a,b,c, Yulia Worbe a,b,c, David Grabli a,b,c,d, Andreas Hartmann a,b,c,e, and Mathias Pessiglione a,b,c,1

a Institut du Cerveau et de la Moelle épinière (CR-ICM), F-75013 Paris, France; b Institut de la Santé et de la Recherche Médicale (INSERM), Unité Mixte de Recherche (UMR 975), F-75013 Paris, France; c Université Pierre et Marie Curie (UPMC-Paris 6), F-75013 Paris, France; d Fédération de Neurologie, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, F-75013 France; and e Centre de Référence “Syndrome Gilles de la Tourette”, F-75013 Paris, France

Edited by Mortimer Mishkin, National Institutes of Health, Bethesda, MD, and approved September 16, 2009 (received for review April 12, 2009)

Theories of instrumental learning aim to elucidate the mechanisms that integrate success and failure to improve future decisions. One computational solution consists of updating the value of choices in proportion to reward prediction errors, which are potentially encoded in dopamine signals. Accordingly, drugs that modulate dopamine transmission were shown to impact instrumental learning performance. However, whether these drugs act on conscious or subconscious learning processes remains unclear. To address this issue, we examined the effects of dopamine-related medications in a subliminal instrumental learning paradigm. To assess the generality of dopamine implication, we tested both dopamine enhancers in Parkinson’s disease (PD) and dopamine blockers in Tourette’s syndrome (TS). During the task, patients had to learn from monetary outcomes the expected value of a risky choice. The different outcomes (rewards and punishments) were announced by visual cues, which were masked such that patients could not consciously perceive them. Boosting dopamine transmission in PD patients improved reward learning but worsened punishment avoidance. Conversely, blocking dopamine transmission in TS patients favored punishment avoidance but impaired reward seeking. These results thus extend previous findings in PD to subliminal situations and to another pathological condition, TS. More generally, they suggest that pharmacological manipulation of dopamine transmission can subconsciously drive us to either get more rewards or avoid more punishments.

dopamine | instrumental learning | subliminal perception | reward | punishment

How we learn from success and failure is a long-standing question in neuroscience. Instrumental learning theories explain how outcomes can be used to modify the value of choices, such that better decisions are made in the future. A basic learning mechanism consists of updating the value of the chosen option according to a reward prediction error, which is the difference between the actual and the expected reward (1, 2). This learning rule, using prediction error as a teaching signal, has provided a good account of instrumental learning in a variety of species, including both human and nonhuman primates (3, 4). Single-cell recordings in monkeys suggest that reward prediction errors are encoded by the phasic discharge of dopamine neurons (5, 6). In humans, dopamine-related drugs have been shown to bias prediction error encoding in the striatum to modulate reward-based learning (7). One of these drugs, levodopa (a metabolic precursor of dopamine), is used to alleviate motor symptoms in idiopathic Parkinson’s disease (PD), which is primarily caused by degeneration of nigral dopamine neurons. PD patients were shown to learn better from positive feedback when on levodopa and from negative feedback when off levodopa (8, 9). This double dissociation led Frank and colleagues to propose a computational model of fronto-striatal circuits where dopamine bursts (encoding positive prediction errors) reinforce approach pathways, while dopamine dips (encoding negative prediction errors) reinforce avoidance pathways (10).
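The prediction-error learning rule described above (refs. 1, 2) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function name, variable names, and the learning rate `alpha` are hypothetical choices.

```python
# Sketch of a Rescorla-Wagner-style update: the value of the chosen option
# is moved toward the obtained outcome in proportion to the prediction error
# (actual minus expected reward). Alpha is an illustrative learning rate.
def update_value(value, reward, alpha=0.3):
    prediction_error = reward - value  # the putative dopaminergic teaching signal
    return value + alpha * prediction_error

# A cue repeatedly followed by a 1-euro reward: its learned value climbs toward 1.
value = 0.0
for _ in range(20):
    value = update_value(value, 1.0)
```

With a positive prediction error the value increases (a "burst"), and with a negative one it decreases (a "dip"), which is the asymmetry Frank's model builds on.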

Instrumental learning may involve both conscious and subconscious processes. We recently demonstrated that healthy subjects can learn associations between cues and choice outcomes, even if the cues are masked and hence not consciously perceived (11). During performance of this subliminal conditioning task, prediction errors generated with a standard reinforcement learning algorithm were reflected in striatal activity, possibly due to dopaminergic inputs. However, the assumption that subconscious learning is actually driven by dopamine release in the striatum remains to be tested. It is noteworthy that learning is dramatically reduced in the subliminal compared to the unmasked condition, where the associations can be trivially acquired in one trial. Thus, conscious processes, notably the ability to keep in mind the cues and outcomes seen previously, seem important for good learning performance, but are not necessary for a more limited acquisition of instrumental responses.

To our knowledge, the question of whether dopamine-related drugs affect conscious or subconscious learning-related processes has not been addressed so far. Here, we examined this issue by administering our subliminal conditioning paradigm to PD patients. The hypothesis was that the above-mentioned double dissociation, between reinforcement valence (reward or punishment) and medication status (off or on levodopa), could be replicated in subliminal conditions. To strengthen the demonstration, we also tested whether a reverse double dissociation could be observed in patients with Gilles de la Tourette’s syndrome (TS), which can be opposed to PD in terms of both symptoms and treatments. TS is characterized by hyperkinetic symptoms (motor and vocal tics) alleviated by neuroleptics (dopamine receptor antagonists), whereas PD is a hypokinetic syndrome alleviated by dopamine receptor agonists. Medication effects were assessed between two groups of 12 TS patients on one hand and within one group of 12 PD patients on the other. Matched healthy controls (24 young and 12 older subjects) were also administered the same experimental paradigm. Disease effects were assessed by comparing each group of patients off medication with their matched control group. Subjects’ demographic and clinical features are displayed in Tables 1 and 2, respectively.

The subliminal conditioning task used three abstract cues that were paired with different monetary outcomes (+1€, 0€, −1€).

Author contributions: D.G., A.H., and M.P. designed research; S.P. and M.L. performed research; Y.W., D.G., and A.H. contributed new reagents/analytic tools; S.P. and M.P. analyzed data; and S.P. and M.P. wrote the paper.

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

1 To whom correspondence should be addressed. E-mail: [email protected].

This article contains supporting information online at www.pnas.org/cgi/content/full/0904035106/DCSupplemental.

www.pnas.org/cgi/doi/10.1073/pnas.0904035106 | PNAS | November 10, 2009 | vol. 106 | no. 45 | 19179–19184

NEUROSCIENCE

The cues were briefly flashed between two mask images, after which subjects had to choose between safe and risky options (Fig. 1). The safe choice means a null outcome for sure: no gain, no loss. A risky choice may result in a gain (+1€), a loss (−1€), or a neutral outcome (0€), depending on the cue. As they would not see the cues, subjects were encouraged to follow their intuition: to make a risky choice if they had the feeling they were in a winning trial or to make a safe choice if they felt it was a losing trial. For half of the subjects, the risky response was a “Go” (key press), and for the other half it was a “Nogo” (no key press). Thus the experimental design allowed measuring dependent variables for three orthogonal dimensions: the rate of Go response (motor impulsivity), risky choice (cognitive impulsivity), and monetary payoff (reinforcement learning). Note that if subjects always made the same response, or if they performed at chance, their final payoff would be zero. Hence a positive payoff indicates that some representation of cue–outcome contingencies had been acquired through conditioning. A separate visual discrimination task was subsequently conducted to assess the subjects’ sensitivity to differences between cues, presented with the same masking procedure as during conditioning. The rationale is that if subjects are unable to discriminate between cues, then they are a fortiori unable to build conscious representations of cue–outcome associations.
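The claim that indiscriminate responding yields a zero payoff can be checked with a toy simulation. Everything here is illustrative (the 50% risky-choice rate, equiprobable cues, and all names are assumptions), but it captures the arithmetic of the design:

```python
import random

# Toy check of the payoff logic: three equiprobable cues pay +1, 0, or -1 euro
# when the risky option is chosen, and 0 when the safe option is chosen. A
# subject responding at random (or always the same way) earns zero on average,
# so a positive payoff implies some cue-outcome learning.
def trial_payoff(cue_value, risky):
    return cue_value if risky else 0

random.seed(0)
n_trials = 100_000
total = sum(trial_payoff(random.choice([1, 0, -1]), random.random() < 0.5)
            for _ in range(n_trials))
mean_payoff = total / n_trials  # close to zero for random responding
```

Only a policy that chooses risky more often after gain cues than after loss cues can push the mean payoff above zero, which is why payoff serves as the learning measure.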

Results

All dependent measures in the different groups have been summarized in Table 3. We first tested motor and cognitive impulsivity measures (Go response and risky choice). There was no significant difference between PD and TS groups (all P > 0.1, two-tailed t tests) and no significant effect of medication, either in PD or TS (all P > 0.05, two-tailed t tests). These results were not necessarily expected given the motor and cognitive signs associated with the diseases and treatments, but they suggest that performance was not driven by a difficulty in pressing keys or a propensity to take risks.

Then we examined learning performance (monetary payoff) and discrimination sensitivity (d′). Monetary payoffs were significantly above zero, indicating a conditioning effect, in both PD and TS patients (PD, 1.1 ± 0.5€, t11 = 2.1, P < 0.05; TS, 1.8 ± 0.5€, t23 = 3.7, P < 0.001, one-tailed t test). In contrast, performance did not improve in the visual discrimination test, where subjects remained at chance level throughout the entire series of trials (see Fig. S1). Like the impulsivity measures, payoffs and d′ were not affected by dopamine enhancers in PD or by dopamine blockers in TS (all P > 0.1, two-tailed t test). Note, however, that d′ were numerically above zero in all situations, suggesting that learning effects may have been driven by some occasional conscious perception. To address this issue, we calculated correlations between d′ and payoffs: Pearson’s coefficients were around zero and nonsignificant (PD Off, r = 0.22, P > 0.5; PD On, r = 0.17, P > 0.5; TS Off, r = −0.13, P > 0.1; TS On, r = −0.29, P > 0.5), suggesting that learning effects were not driven by patients with above-chance discrimination performance.
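Discrimination sensitivity d′ in a same/different judgment is standardly computed as the difference between the z-transformed hit and false-alarm rates. A minimal sketch (the rates below are made-up numbers, not data from the study):

```python
from statistics import NormalDist

# Standard signal-detection formula: d' = z(hit rate) - z(false-alarm rate).
# A d' of zero means the subject cannot tell whether two consecutive masked
# cues were the same or different, i.e. chance-level discrimination.
def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

chance = d_prime(0.5, 0.5)    # no discrimination -> d' = 0
above = d_prime(0.84, 0.16)   # clearly above-chance discrimination
```

This is why a null d′ together with a positive payoff is taken as evidence for conditioning without conscious perception of the cues.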

After controlling for these potential confounding effects, we next examined the hypothesized double dissociation between reinforcement valence and medication status. We distinguished between reward and punishment learning in the calculation of monetary payoffs. Relative to the neutral condition, additional correct choices were considered as an index of reward learning in the gain condition and as an index of punishment learning in the loss condition. Note that subtracting the neutral condition removes the potential effects of motor and cognitive impulsivity. The number of correct choices was expressed as euros that subjects won for reward learning or avoided losing for punishment learning (Fig. 2A).

As expected, we observed that off-medication PD patients significantly learned to avoid punishments (1.3 ± 0.5€, t11 = 2.8, P < 0.01, one-tailed t test) but not to get rewards (−0.3 ± 0.7€, t11 = −0.5, P > 0.1, one-tailed t test). On-medication PD patients exhibited the opposite pattern: no punishment learning (−0.3 ± 0.5€, t11 = −0.6, P > 0.1, one-tailed t test) but significant reward learning (1.5 ± 0.5€, t11 = 2.9, P < 0.01, one-tailed t test). The reverse double dissociation was observed in TS patients: when off medication, they learned to obtain rewards (1.9 ± 1.0€, t11 = 2.0, P < 0.05, one-tailed t test) but not to avoid punishments (0.0 ± 0.5€, t11 = 0.1, P > 0.5, one-tailed t test), and when on medication, they failed to obtain rewards (0.1 ± 0.4€, t11 = 0.3, P > 0.1, one-tailed t test) but successfully avoided punishments (1.6 ± 0.5€, t11 = 3.0, P < 0.01, one-tailed t test). Having identified the combinations of medication status and reinforcement valence where patients did learn, we checked the correlations between d′ and learning in these situations (Fig. 2B). They were again close to zero and not significant in both PD patients (Off/punishment, r = 0.01, P > 0.5; On/reward, r = 0.01, P > 0.5) and TS patients (Off/reward, r = −0.20, P > 0.5; On/punishment, r = −0.29, P > 0.5). Moreover, regression lines crossed the y axis (d′ = 0) for positive payoffs in all situations, demonstrating the presence of conditioning effects in the absence of visual discrimination.

To verify that the double dissociations were due to differences in learning rates, we plotted the cumulative money won (for reward learning) and not lost (for punishment learning) as a function of trials (Fig. 3B). Linear regression coefficients (slopes) of these learning curves were extracted and tested for

Table 1. Demographic data

Demographic features   PD (n = 12)   Seniors (n = 12)   TS Off (n = 12)   TS On (n = 12)   Juniors (n = 24)
Age (years)            57.0 ± 3.1    60.7 ± 2.7         21.3 ± 2.6        19.8 ± 2.6       22.3 ± 0.9
Sex (female/male)      1/11          5/7                3/9               2/10             12/12
Education (years)      10.3 ± 1.3    16.4 ± 1.0         11.3 ± 1.4        10.0 ± 0.9       15.1 ± 0.5

Table 2. Clinical data

Clinical features          PD (n = 12)
Disease duration (years)   10.7 ± 1.2
UPDRS III score Off        28.7 ± 4.5
UPDRS III score On         6.9 ± 1.6
Treatment                  Levodopa*
Daily dose (mg/day)        850 ± 116

Clinical features          TS Off (n = 12)   TS On (n = 12)
Disease duration (years)   13.7 ± 2.9        12.3 ± 2.8
YGTSS/50 score             15.9 ± 1.6        18.3 ± 2.1
YGTSS/100 score            33.4 ± 3.8        42.4 ± 4.0
Treatment                  —                 Risperidone or pimozide
Daily dose (mg/day)        —                 2.3 ± 0.7 (risperidone); 3.3 ± 2.3 (pimozide)

*Dose is expressed as dopa-equivalent, taking into account both levodopa (all patients) and dopamine agonists (seven patients).


the different groups and medications (Fig. 3A). These slopes exhibited a profile very similar to what was obtained with payoffs (compare with Fig. 2A). They were significantly positive (P < 0.05, one-tailed t test) only in Off PD with punishments, in On PD with rewards, and in On TS with punishments.
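The slope statistic can be illustrated with an ordinary least-squares fit over a toy cumulative-payoff series. The data and function names below are invented for illustration; the paper's actual fitting procedure is not specified beyond "linear regression coefficients".

```python
# Least-squares slope of a learning curve: cumulative euros won (or not lost)
# as a function of trial number. A significantly positive slope indicates
# money being accumulated over trials, i.e. ongoing learning.
def ols_slope(y):
    n = len(y)
    mean_x = (n - 1) / 2          # trial indices are 0..n-1
    mean_y = sum(y) / n
    num = sum((x - mean_x) * (v - mean_y) for x, v in enumerate(y))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

cumulative = [0, 1, 1, 2, 3, 3, 4]   # toy cumulative payoff over seven trials
learning_rate = ols_slope(cumulative)  # euros accumulated per trial
```

A flat curve (e.g. all zeros) yields a slope of zero, matching the chance-level baseline of the task.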

We then tested the effects of medication on the reward bias, defined as the difference between the money won (correct choices following reward cues) and the money not lost (correct choices following punishment cues). This measure can hence be considered an index of the difference between reward and punishment learning performance. We found that the reward bias was significantly increased by dopamine enhancers in PD patients (Off, −1.7 ± 0.9€; On, 1.8 ± 0.8€; t11 = 3.0, P < 0.05, two-tailed t test) and significantly decreased by dopamine blockers in TS patients (Off, 1.9 ± 1.0€; On, −1.5 ± 0.4€; t22 = 2.2, P < 0.05, two-tailed t test). Thus, the reward bias was the only dependent variable sensitive to medication, with reciprocal effects in PD and TS showing that dopamine enhancers favored reward learning, whereas dopamine blockers favored punishment avoidance.
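The reward-bias index defined above is simply the reward-learning score minus the punishment-learning score, both already corrected for the neutral condition. A sketch of the arithmetic, using the on-medication PD group means reported in the text (the function name is an illustrative choice):

```python
# Reward bias = euros won in the gain condition minus euros not lost in the
# loss condition. Positive values mean learning favors rewards over
# punishment avoidance; negative values mean the opposite.
def reward_bias(euros_won, euros_not_lost):
    return euros_won - euros_not_lost

# On-medication PD means from the text: 1.5 euros won, -0.3 euros not lost.
bias_pd_on = reward_bias(1.5, -0.3)  # a positive, reward-favoring bias
```

Note that subject-level biases are averaged in the paper, so group means of the components need not reproduce the reported group bias exactly after rounding.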

Finally, all experimental data were systematically compared between patients and controls. Note that controls were matched with patients in terms of age but not sex or education. However, taking into account all 36 healthy subjects, we found no significant effect of sex on monetary payoff or reward bias (both P > 0.5, two-tailed t tests) and no significant correlation between education level and monetary payoff or reward bias (r = 0.21 and r = 0.04, both P > 0.1). Thus reinforcement learning performance was not dependent on sex or education. In both control groups, our crucial measure, the reward bias, fell between the values obtained for the on- and off-medication statuses in the corresponding patient group. In other words, the trend was that relative to healthy old subjects, PD patients had a lower reward bias when off medication and a higher one when on medication. And relative to healthy young subjects, TS patients had a higher reward bias when off medication and a lower one when on medication. However, the differences being smaller than when comparing on and off states, the comparison with control subjects was significant only for Off PD patients (t22 = 2.6, P < 0.05; all other P > 0.1; two-tailed t test). Medication effects on the reward bias therefore appear much more reliable than disease effects.

Discussion

To summarize, we extended the double dissociation between reinforcement valence and dopamine medication status, which

Fig. 1. Subliminal learning task. Successive screenshots displayed during a given trial are shown from left to right, with durations in milliseconds. After seeing a masked contextual cue flashed on a computer screen, subjects choose to press or not to press a response key and subsequently observe the outcome. In this example, “Go” appears on the screen because the subject has pressed the key, following the cue associated with reward (winning 1€).

Table 3. Experimental data

Behavioral measures          PD Off (n = 12)   PD On (n = 12)   Seniors (n = 12)   TS Off (n = 12)   TS On (n = 12)   Juniors (n = 24)
Monetary payoff (€)          1.0 ± 0.8         1.3 ± 0.6        0.6 ± 0.6          1.9 ± 0.6         1.8 ± 0.8        2.8 ± 0.9
Visual discrimination (d′)   0.14 ± 0.25†      0.33 ± 0.15      0.43 ± 0.14        0.37 ± 0.11       0.07 ± 0.14      0.05 ± 0.12
Payoff/d′ correlation (r)    0.22              0.17             −0.29              −0.13             0.29             0.22
Go responses (%)             50.9 ± 6.4        47.5 ± 6.2       48.1 ± 2.5         51.2 ± 5.1        49.6 ± 3.8       46.7 ± 3.8
Risky choices (%)            70.1 ± 2.4        55.6 ± 6.0       67.7 ± 2.2         65.7 ± 1.9        58.8 ± 2.8       63.4 ± 2.6
Reward obtained (€)          −0.3 ± 0.7        1.5 ± 0.5        0.9 ± 0.4          1.9 ± 1.0         0.1 ± 0.4        1.5 ± 0.8
Punishment avoided (€)       1.3 ± 0.5*        −0.3 ± 0.5       −0.2 ± 0.4         0.0 ± 0.5         1.6 ± 0.5        1.3 ± 0.7

*P < 0.05, significant difference with the control group (two-tailed t test).
†Data were collected in 11 patients only.

Fig. 2. Monetary payoffs. (Left) Idiopathic Parkinson’s disease (PD) patients. (Right) Gilles de la Tourette’s syndrome (TS) patients. (A) Reward bias. Histograms in each graph show additional correct choices (in euros) in the gain (Left) and loss (Right) condition relative to the neutral condition. Solid histograms represent medicated patients (on dopamine enhancers or blockers), whereas open histograms represent unmedicated patients. Error bars are plus or minus between-subjects standard errors of the mean. (B) Learning vs. discrimination performance. Graphs represent for each individual the euros won (reward learning) or not lost (punishment learning) as a function of discrimination sensitivity (d′). Medicated patients (on dopamine enhancers or blockers) are represented by solid squares and solid regression lines and unmedicated patients by open squares and dashed lines. Only situations where learning was significant are shown: On PD and Off TS patients for rewards, Off PD and On TS patients for punishments.



was originally demonstrated in PD patients by Frank and colleagues (8), to the subliminal case and to TS patients. In short, reinforcement learning was biased toward reward seeking when boosting dopamine transmission and toward punishment avoidance when blocking dopamine transmission. The effects were independent from factors such as discrimination sensitivity and motor or cognitive impulsivity, which were orthogonal to the reinforcement valence in our design. Moreover, these factors were not significantly affected by medication, suggesting that patients did not perceive the cues, press the button, or choose the risky response any more in the on- than in the off-medication state.

Despite the use of short duration and backward masking, we cannot formally ensure that all cues remained subliminal in all trials, as there is no direct window to the conscious mind. We nonetheless provide standard criteria that are generally considered as indirect evidence for nonconscious perception (12, 13). Verbal reports were recorded to assess the subjective criterion: when shown the unmasked cues, all subjects reported not having seen them previously. Discrimination performance was measured to assess the objective criterion: learning effects were obtained even for a null d′, which indicates that subjects were unable to correctly decide whether two consecutive cues were the same or different. We therefore conclude that the learning processes affected by medications were largely subconscious. Masking was undoubtedly helped by the fact that subjects had no prior representation to guide visual search, since, contrary to most subliminal perception studies, the cues were never shown until the debriefing at the end of the experiment. Although they did not provide the above criteria for absence of awareness, some previous studies in PD reported deficits in implicit learning (8, 9, 14, 15). In these paradigms the cues are consciously perceived, but subjects fail to report explicitly the cue–outcome contingencies at debriefing, even if they previously expressed some knowledge of these contingencies in their motor responses. Debriefing tests have, however, been criticized as confounded by memory decay (16–18), so masking cues serves as a more stringent approach to limit conscious associations between cues and outcomes. Compared to implicit learning paradigms, such as probabilistic classification or transitive inference tasks, the Go/Nogo mode of response used here makes reinforcement learning more direct, with no need for building high-level representations of cue–outcome contingencies.

Our findings are in line with a growing body of evidence that reinforcement learning can operate subconsciously (19–23). More specifically, they extend a previous functional neuroimaging study using the same subliminal conditioning paradigm (11), which showed that reward prediction errors were reflected in the ventral striatum. A parsimonious explanation may be that dopamine enhancers and blockers, because they interfere with dopamine transmission, modulate the magnitude of prediction error signals, as was previously demonstrated during conscious instrumental learning (7). This would be compatible with Frank’s model (10), if we assume that dopamine enhancers and blockers have opposite effects both on positive prediction errors following rewards and on negative prediction errors following punishments. The drugs may impact the reinforcement of fronto-striatal synapses, which allegedly underlies the formal process of using prediction error as a teaching signal to update the value of the current cue, according to Rescorla and Wagner’s rule (1). At a lower level, however, the underlying mechanisms remain speculative, as it is unclear which dopamine receptors (D1, D2, or others) and which component of dopamine release (tonic, phasic, or a combination of both) are impacted by medications. Although we argue that the reinforcement process modulated by medications was subconscious, we do not imply that conscious feelings, when seeing the masks or the outcomes, remained unaffected. It remains, for instance, possible that subjects, even if not perceiving the cue itself, had a conscious positive feeling following a reward-predicting cue or a negative one after a punishment-predicting cue. Further experiments are needed to determine whether we can develop conscious access to the value of cues that we do not consciously perceive.

The replication of the double dissociation in a second pathological condition (TS) suggests that our manipulation tapped into general dopamine-related mechanisms and not into a peculiar dysfunction restricted to PD. Our findings potentially facilitate understanding not only of dopamine-related drug effects but also of dopamine-related disorders. The case for dopamine neuron degeneration in PD is well established (24), so from Frank’s model (10) it could be predicted that off-medication PD patients would be impaired in reward learning but not in punishment avoidance. A lack of positive reinforcement following rewards might explain the action selection deficits that are frequently reported in PD (14, 15, 25). Indeed, if an action is not reinforced when rewarded, selection of that action will not be facilitated in the future. A deficit in movement selection could also account for some motor symptoms, such as akinesia and rigidity, that are the hallmarks of PD. The double dissociation evidenced in PD may also provide insight into compulsive behaviors, such as pathological gambling, induced in these patients by dopamine agonists (26, 27). The explanation would be that, due to dopamine agonists, repetitive behaviors are more reinforced by rewarding outcomes than impeded by punishing consequences.

Fig. 3. Learning rates. (Left) Idiopathic Parkinson’s disease (PD) patients. (Right) Gilles de la Tourette’s syndrome (TS) patients. (A) Accumulation rates. Histograms in each graph show linear regression coefficients of the corresponding learning curves below. Solid histograms represent medicated patients (on dopamine enhancers or blockers), whereas open histograms show unmedicated patients. Error bars are plus or minus between-subjects standard errors of the mean. (B) Accumulation curves. Graphs represent for each individual the cumulative sum of euros won (reward learning) or not lost (punishment learning) as a function of trials. The curves have been averaged across sessions and subjects. Medicated patients (on dopamine enhancers or blockers) are represented by solid squares and solid regression lines and unmedicated patients by open squares and dashed lines.

19182 | www.pnas.org/cgi/doi/10.1073/pnas.0904035106 | Palminteri et al.

In contrast, the case for overactive dopamine transmission in TS has not reached general agreement (28–30), despite supporting evidence from both genetic and neuroimaging studies (31–34). That TS patients mirrored PD patients would further support the idea of underlying dopaminergic hyperactivity. Of course, this does not necessarily imply that dopaminergic hyperactivity is causal to the pathology of TS. It is nonetheless tempting to speculate that tics may arise from excessive reinforcement of certain cortico-striatal pathways. We must remain cautious, however, because we observed only a trend, and not a significant difference, between off-medication TS patients and matched healthy controls.

More generally, because they eliminate conscious strategies that could confound potential deficits, subliminal stimulations may allow targeting more specific cognitive processes, as was done here for reinforcement learning, and hence provide insight into a variety of neurological or psychiatric conditions. For the same reasons, subliminal conditions might also prove useful in identifying specific effects of drugs other than those of dopamine enhancers and blockers on reinforcement learning. To our knowledge, pharmacological studies have so far not attempted to distinguish between drug effects on conscious and subconscious processes. Indeed, a huge literature is devoted to understanding how drugs modify conscious experience, but little is known about how drugs act on processes occurring outside conscious awareness. We believe that the present study opens the door to research on the pharmacology of subconscious processing.

Experimental Procedures

Subjects. The study was approved by the Ethics Committee for Biomedical Research of the Pitié-Salpêtrière Hospital, where the study was conducted. A total of 72 subjects, including 36 patients and 36 controls, were included in the study. All subjects gave written informed consent before their participation. They were not paid for their voluntary participation and were told that the money won in the task was purely virtual. Previous studies have shown that using real money is not mandatory to obtain robust motivational or conditioning effects (8, 35). In our case, using real money would be unethical, since it would mean paying patients according to their handicap or treatment. In total, 12 patients with idiopathic PD and 24 patients with TS were included in the study. We also tested 12 older (seniors) and 24 younger (juniors) healthy controls, who were screened for any history of neurological or psychiatric conditions and selected so that their age matched that of either PD or TS patients. We checked that age was not significantly different between older subjects and PD patients (t22 = 0.9, P > 0.1, two-tailed t test) or between younger subjects and TS patients (t46 = −0.9, P > 0.1, two-tailed t test).

PD patients were consecutive candidates for deep brain stimulation, hospitalized for a clinical preoperative examination. Inclusion criteria were a diagnosis of idiopathic PD with a good response to levodopa [>50% improvement on the Unified Parkinson's Disease Rating Scale (UPDRS-III)], in the absence of dementia [Mini Mental State (MMS) score >25] and depression [Montgomery and Asberg Depression Rating Scale (MADRS) score <20]. Consequently, average MMS score was 27.7 ± 0.3, average MADRS score was 4.3 ± 0.8, and Hoehn and Yahr stage was 2.46 ± 0.10 in the "off" state and 2.17 ± 0.15 in the "on" state. Among the 12 patients, 5 were on levodopa alone, and 7 were also taking dopamine receptor agonists. For the sake of simplicity, we converted all medications to levodopa equivalents (Table 3) and used the term dopamine enhancers to designate both levodopa and receptor agonists. Every patient was assessed twice, on the morning of 2 different days: once in the off state, after overnight (>12 h) withdrawal of levodopa and a full day (24 h) withdrawal of dopamine agonists, and once in the on state, 1 h after intake of the habitual medication dose (levodopa in all patients, plus dopamine agonists in 7 of them). One patient included in the study could not complete the visual discrimination task in the off state due to excessive motor fatigue. Three patients were unable to perform the conditioning task in the off state and were therefore not included in the study.

TS patients were consecutive candidates screened at the French Reference Center for Gilles de la Tourette's syndrome. Patients were at least 10 years old and did not present relevant comorbid conditions (depression, obsessive-compulsive disorder, and/or attention deficit with hyperactivity disorder). Treatment usually cannot be stopped in these patients for ethical reasons: It would leave patients in discomfort for too long during washout. However, some patients diagnosed with TS remain unmedicated, because their tics do not represent a severe handicap. We included medicated and unmedicated patients in equal numbers, such that we could make on–off comparisons with the same number of data points (n = 24) as in Parkinson's disease. The difference was that comparisons were made within patients in PD and between patients in TS. All on-medication TS patients were treated with neuroleptics only: eight with risperidone, four with pimozide. For the sake of simplicity, we referred to neuroleptics as dopamine blockers. There was no significant difference between on-medication (treated) and off-medication (untreated) TS patients regarding age (t22 = 0.4, P > 0.5, two-tailed t test), sex (χ² = 0.25, P > 0.5, chi-square test), disease duration (t22 = 0.4, P > 0.5, two-tailed t test), or education (t22 = 0.7, P > 0.1, two-tailed t test). The Yale Global Tic Severity Scale (YGTSS) showed no significant difference between on and off patients, either with the 50-item (motor tics) or the 100-item (complex tics) version (respectively, t22 = 1.6, P > 0.1 and t22 = 0.9, P > 0.1, two-tailed t tests).

Experimental Task and Design. The behavioral tasks used in our previous study (11) were slightly shortened and translated into French and euros. Subjects first read the instructions (see SI Text), which were later explained again step by step. They were first trained on a 16-trial practice version of the conditioning task. Then, they had to perform three sessions of this conditioning task, each containing 90 trials and lasting 10 min, and one session of the perception task, containing 60 trials and lasting approximately 5 min. The abstract cues were letters taken from the Agathodaimon font. The same two masking patterns, one displayed before and the other after the cue, were used in all task sessions (Fig. 1). Assignment of cues to the different task sessions, and associations of cues with the different outcomes, were fixed, so that all subjects underwent the exact same experimental procedure. For similar reasons, the duration of cue display was fixed at 50 ms and not adapted to each individual, such that subliminal stimulations were identical for all subjects.

As 50 ms is near the threshold for conscious perception, however, some subjects (three TS patients, three junior controls, and one senior control) could not be included because they managed to discriminate some part of the cues. Indeed, they reported having spotted discriminative parts (both during task performance and at debriefing), had abnormally high discrimination sensitivity (d′ > 1.5), and won unusually high amounts of money (payoff > 10€). Note that without excluding the TS patients who saw the cues, the double dissociation reported in this condition would fail to reach significance. Indeed, these TS patients were on medication and nonetheless learned to get rewards, consistent with the intuitive idea that the task becomes trivial as soon as subjects can discriminate the cues.
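These exclusion criteria can be stated compactly. The helper below is a hypothetical sketch of that decision rule, not the authors' code (function and argument names are ours):

```python
def exclude_subject(reported_seeing, d_prime, payoff_euros):
    """True if the subject must be excluded for partial cue awareness.

    Criteria as described in the text: spotting discriminative parts of
    the cues, discrimination sensitivity d' above 1.5, or an unusually
    high monetary payoff (above 10 euros).
    """
    return bool(reported_seeing or d_prime > 1.5 or payoff_euros > 10)
```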

The instrumental conditioning task involved choosing between pressing or not pressing a key, in response to masked cues. After showing the fixation cross and the masked cue, the response interval was indicated on the computer screen by a question mark. The interval was fixed at 3 s and the response was taken at the end: Go if the key was being pressed, and Nogo if the key was released. The response was written on the screen as soon as the delay had elapsed. Subjects were told that one response was safe (you do not win or lose anything) while the other was risky (you can win 1€, lose 1€, or get nothing). Subjects were also told that the outcome of the risky response would depend on the cue that was displayed between the mask images. In fact, three cues were used: One was rewarding (+1€), one was punishing (−1€), and the last was neutral (0€). Because subjects were not informed about the associations, they could learn them only by observing the outcome, which was displayed at the end of the trial. This was a circled coin image (meaning +1€), a barred coin image (meaning −1€), or a gray square (meaning 0€).
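As a sketch of these contingencies (ours, not the authors' task code; names are hypothetical), the outcome of one trial depends only on the hidden cue and on whether the subject emitted the risky response:

```python
# Euros delivered when the risky response is chosen, per hidden cue
CUE_VALUE = {"reward": 1, "punish": -1, "neutral": 0}

def trial_outcome(cue, response, risky_response="go"):
    """Monetary outcome of one trial.

    `risky_response` indicates which response ('go' or 'nogo') is the
    gamble in this task version; the other response is always safe (0).
    """
    if response not in ("go", "nogo"):
        raise ValueError("response must be 'go' or 'nogo'")
    return CUE_VALUE[cue] if response == risky_response else 0
```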

The risky response was assigned to Go for half of the task completions and to Nogo for the other half, such that motor aspects were counterbalanced between reward and punishment conditions. TS patients and junior controls were assessed only once and hence performed either the Go or the Nogo version of the task. Junior controls were randomly assigned, half to the Go version and half to the Nogo version. In TS, the task version was balanced with respect to medication status, such that each of the four combinations (Off/Nogo, Off/Go, On/Nogo, and On/Go) was administered to the same number of patients (n = 6). PD patients and senior controls were assessed twice, once on the Go version and once on the Nogo version. For senior controls, the order of Go and Nogo task versions was simply alternated. In PD, the order was balanced with respect to medication status, such that each of the four combinations (Off/Nogo–On/Go, Off/Go–On/Nogo, On/Nogo–Off/Go, and On/Go–Off/Nogo) was administered to the same number of patients (n = 3).

The perceptual discrimination task was used as a control for awareness at the end of the conditioning sessions. Hence, it was administered once in TS patients and junior controls, and twice in PD patients and senior controls. In this task, subjects were flashed two masked cues, 3 s apart, displayed at the center of the computer screen, each following a fixation cross. As there were 60

Palminteri et al. PNAS | November 10, 2009 | vol. 106 | no. 45 | 19183


trials, each cue was presented 40 times, which is more than in the conditioning sessions (30 times). Subjects had to report whether or not they perceived any difference between the two visual stimulations. The response was given manually, by pressing one of two keys assigned to "same" and "different" choices. Importantly, subjects had no opportunity to see the cues unmasked, so they could not get any prior information about what the cues looked like. Note that the three cues used in the perceptual discrimination control were different from those used in the instrumental learning sessions, to avoid subjects distinguishing cues on the basis of their learned values. At the end of the experiment, subjects were debriefed about whether or not they could perceive any part of the cues. They were also shown the cues unmasked, one by one, and asked whether or not they had seen them before. No included subject reported having seen any cue.

Statistical Analysis. From the conditioning task we extracted the percentages of Go and risky responses, which can be taken as indirect measures of motor and cognitive impulsivity, respectively. We also extracted the number of correct choices, which is equivalent to the monetary payoff. The payoff can then be split into euros won for the reward condition and euros not lost for the punishment condition. To correct for motor and cognitive biases, we subtracted the correct choices made in the neutral condition, which captures the propensity to make a Go response and a risky choice. To display learning progression, we plotted the cumulative money won (reward learning) or not lost (punishment learning) across trials. A linear regression was fitted on these learning curves, and the coefficients (betas) were taken as an index of learning rate. From the visual discrimination task we calculated a sensitivity index (d′), as the difference between normalized rates of hits (correct "different" responses) and false alarms (incorrect "different" responses).
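The two summary measures described above, the regression beta on cumulative payoff and the sensitivity index d′, can be sketched as follows (our illustration, not the authors' analysis code):

```python
from statistics import NormalDist

def learning_rate_index(outcomes):
    """Least-squares slope of the cumulative payoff across trials.

    `outcomes` holds the per-trial euros won (reward condition) or not
    lost (punishment condition); the slope indexes the learning rate.
    """
    cum, total = [], 0.0
    for x in outcomes:
        total += x
        cum.append(total)
    n = len(cum)
    trials = range(1, n + 1)
    mt = sum(trials) / n
    my = sum(cum) / n
    num = sum((t - mt) * (y - my) for t, y in zip(trials, cum))
    den = sum((t - mt) ** 2 for t in trials)
    return num / den

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf  # inverse normal CDF (probit)
    return z(hit_rate) - z(false_alarm_rate)
```

Winning 1€ on every trial yields a cumulative curve of slope 1, and equal hit and false-alarm rates yield d′ = 0, i.e., chance-level discrimination.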

All data (demographic, clinical, or experimental) are reported as mean ± between-subjects standard error of the mean (SEM). To assess instrumental conditioning, we used one-tailed paired t tests comparing individual performance with chance level (which corresponds to a zero payoff). Similarly, to assess visual discrimination, we compared individual d′ with chance level (which is also zero), using one-tailed paired t tests. Within each pathological condition (PD or TS), we assessed medication effects by comparing dependent variables between on and off states. We used within-group comparisons (paired two-tailed t tests) for PD patients, who were tested in the two medication states, and between-group comparisons (unpaired two-tailed t tests) for TS patients, who were either medicated or not. To assess disease effects relative to controls, we performed between-group comparisons (unpaired two-tailed t tests). Finally, to assess the significance of linear correlations between learning (payoff) and discrimination (d′) measures, we calculated Pearson's coefficients. For all statistical tests, the threshold for significance was set at P < 0.05.
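For concreteness, the basic quantities behind these tests can be written out. The snippet below is a generic sketch of the one-sample t statistic and Pearson's r, not the statistical software actually used:

```python
import math

def one_sample_t(values, mu=0.0):
    """t statistic for a one-sample test of the mean against mu.

    Comparing payoffs against mu = 0 tests performance against chance.
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # unbiased variance
    return (mean - mu) / math.sqrt(var / n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```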

ACKNOWLEDGMENTS. We are grateful to Helen Bates for helping with behavioral task administration and to Virginie Czernecki and Priscilla Van Meerbeeck for providing clinical data. We also thank Arlette Welaratne and all of the staff of the Centre d'Investigation Clinique for taking care of patients. Aman Saleem, Shadia Kawa, and Beth Pavlicek checked the English. S.P. received a Ph.D. fellowship from the Neuropole de Recherche Francilien. The study was funded by the Ecole de Neurosciences de Paris.

1. Rescorla RA, Wagner AR (1972) A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. Classical Conditioning II: Current Research and Theory, eds Black AH, Prokasy WF (Appleton-Century-Crofts, New York), pp 64–99.

2. Sutton RS, Barto AG (1998) Reinforcement Learning (MIT Press, Cambridge, MA).

3. Daw ND, Doya K (2006) The computational neurobiology of learning and reward. Curr Opin Neurobiol 16(2):199–204.

4. O'Doherty JP, Hampton A, Kim H (2007) Model-based fMRI and its application to reward learning and decision making. Ann N Y Acad Sci 1104:35–53.

5. Schultz W, Dayan P, Montague PR (1997) A neural substrate of prediction and reward. Science 275(5306):1593–1599.

6. Waelti P, Dickinson A, Schultz W (2001) Dopamine responses comply with basic assumptions of formal learning theory. Nature 412(6842):43–48.

7. Pessiglione M, Seymour B, Flandin G, Dolan RJ, Frith CD (2006) Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans. Nature 442(7106):1042–1045.

8. Frank MJ, Seeberger LC, O'Reilly RC (2004) By carrot or by stick: Cognitive reinforcement learning in parkinsonism. Science 306(5703):1940–1943.

9. Cools R, Altamirano L, D'Esposito M (2006) Reversal learning in Parkinson's disease depends on medication status and outcome valence. Neuropsychologia 44(10):1663–1673.

10. Frank MJ (2005) Dynamic dopamine modulation in the basal ganglia: A neurocomputational account of cognitive deficits in medicated and nonmedicated Parkinsonism. J Cogn Neurosci 17(1):51–72.

11. Pessiglione M, et al. (2008) Subliminal instrumental conditioning demonstrated in the human brain. Neuron 59(4):561–567.

12. Kouider S, Dehaene S (2007) Levels of processing during non-conscious perception: A critical review of visual masking. Philos Trans R Soc Lond B Biol Sci 362(1481):857–875.

13. Dehaene S, Changeux JP, Naccache L, Sackur J, Sergent C (2006) Conscious, preconscious, and subliminal processing: A testable taxonomy. Trends Cogn Sci 10(5):204–211.

14. Knowlton BJ, Mangels JA, Squire LR (1996) A neostriatal habit learning system in humans. Science 273(5280):1399–1402.

15. Shohamy D, et al. (2004) Cortico-striatal contributions to feedback-based learning: Converging data from neuroimaging and neuropsychology. Brain 127(Pt 4):851–859.

16. Lagnado DA, Newell BR, Kahan S, Shanks DR (2006) Insight and strategy in multiple-cue learning. J Exp Psychol Gen 135(2):162–183.

17. Lovibond PF, Shanks DR (2002) The role of awareness in Pavlovian conditioning: Empirical evidence and theoretical implications. J Exp Psychol Anim Behav Process 28(1):3–26.

18. Wilkinson L, Shanks DR (2004) Intentional control and implicit sequence learning. J Exp Psychol Learn Mem Cogn 30(2):354–369.

19. Morris JS, Ohman A, Dolan RJ (1998) Conscious and unconscious emotional learning in the human amygdala. Nature 393(6684):467–470.

20. Olsson A, Phelps EA (2004) Learned fear of "unseen" faces after Pavlovian, observational, and instructed fear. Psychol Sci 15(12):822–828.

21. Knight DC, Nguyen HT, Bandettini PA (2003) Expression of conditional fear with and without awareness. Proc Natl Acad Sci USA 100(25):15280–15283.

22. Seitz AR, Kim D, Watanabe T (2009) Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron 61(5):700–707.

23. Li W, Howard JD, Parrish TB, Gottfried JA (2008) Aversive learning enhances perceptual and cortical discrimination of indiscriminable odor cues. Science 319(5871):1842–1845.

24. Braak H, Del Tredici K (2008) Invited article: Nervous system pathology in sporadic Parkinson disease. Neurology 70(20):1916–1925.

25. Pessiglione M, et al. (2005) An effect of dopamine depletion on decision-making: The temporal coupling of deliberation and execution. J Cogn Neurosci 17(12):1886–1896.

26. Voon V, Potenza MN, Thomsen T (2007) Medication-related impulse control and repetitive behaviors in Parkinson's disease. Curr Opin Neurol 20(4):484–492.

27. Lawrence AD, Evans AH, Lees AJ (2003) Compulsive use of dopamine replacement therapy in Parkinson's disease: Reward systems gone awry? Lancet Neurol 2(10):595–604.

28. Singer HS (2005) Tourette's syndrome: From behaviour to biology. Lancet Neurol 4(3):149–159.

29. Albin RL, Mink JW (2006) Recent advances in Tourette syndrome research. Trends Neurosci 29(3):175–182.

30. Leckman JF (2002) Tourette's syndrome. Lancet 360(9345):1577–1586.

31. Wong DF, et al. (2008) Mechanisms of dopaminergic and serotonergic neurotransmission in Tourette syndrome: Clues from an in vivo neurochemistry study with PET. Neuropsychopharmacology 33(6):1239–1251.

32. Tarnok Z, et al. (2007) Dopaminergic candidate genes in Tourette syndrome: Association between tic severity and 3′ UTR polymorphism of the dopamine transporter gene. Am J Med Genet B Neuropsychiatr Genet 144B(7):900–905.

33. Gilbert DL, et al. (2006) Altered mesolimbocortical and thalamic dopamine in Tourette syndrome. Neurology 67(9):1695–1697.

34. Yoon DY, et al. (2007) Dopaminergic polymorphisms in Tourette syndrome: Association with the DAT gene (SLC6A3). Am J Med Genet B Neuropsychiatr Genet 144B(5):605–610.

35. Schmidt L, et al. (2008) Disconnecting force from money: Effects of basal ganglia damage on incentive motivation. Brain 131(Pt 5):1303–1310.


Supporting Information

Palminteri et al. 10.1073/pnas.0904035106

SI Text

Task Instructions 1: "Go-Risky" Version. The aim of the game is to win money, by guessing the outcome of a key press.

At the beginning of each trial you must orient your gaze toward the central cross and pay attention to the masked cue. You will not be able to perceive the cue that is hidden behind the mask.

When the question mark appears you have 3 seconds to make your choice between

—holding the key down
—leaving the key up.

If you change your mind you can still release or press the key until the 3 seconds have elapsed.

"GO!" will be written in red if, at the end of the 3-second delay, the key is being pressed.

Then we display the outcome of your choice. Not pressing the key is safe: You will always get a neutral outcome (0€). Pressing the key is of interest but risky: You can equally win 1€, get nil (0€), or lose 1€. This depends on which cue was hidden behind the mask.

There is no logical rule to find in this game. If you never press the key, or if you press it on every trial, your overall payoff will be nil. To win money you must guess whether the ongoing trial is a winning or a losing trial. Your choices should improve trial after trial thanks to your unconscious emotional reactions. Just follow your gut feelings and you will win, and avoid losing, a lot of euros!

Task Instructions 2: "Nogo-Risky" Version. The aim of the game is to win money, by guessing the outcome of a key press.

At the beginning of each trial you must orient your gaze toward the central cross and pay attention to the masked cue. You will not be able to perceive the cue that is hidden behind the mask.

When the question mark appears you have 3 seconds to make your choice between

—holding the key down
—leaving the key up.

If you change your mind you can still release or press the key until the 3 seconds have elapsed.

"NO!" will be written in red if, at the end of the 3-second delay, the key is being released.

Then, we display the outcome of your choice. Pressing the key is safe: You will always get a neutral outcome (0€). Releasing the key is of interest but risky: You can equally win 1€, get nil (0€), or lose 1€. This depends on which cue was hidden behind the mask.

There is no logical rule to find in this game. If you never press the key, or if you press it on every trial, your overall payoff will be nil. To win money you must guess whether the ongoing trial is a winning or a losing trial. Your choices should improve trial after trial thanks to your unconscious emotional reactions. Just follow your gut feelings and you will win, and avoid losing, a lot of euros!

Palminteri et al. www.pnas.org/cgi/content/short/0904035106 1 of 2

Fig. S1. Visual discrimination across trials. Graphs represent performance in the visual discrimination task (percentage of correct responses, solid squares) plotted against trials. Dashed lines represent chance-level behavior (50% correct). Error bars are between-subjects standard errors of the mean. To formally test for the presence of perceptual learning, we tested whether performance slopes were significantly positive across subjects. Mean slopes were close to zero and nonsignificant (PD, −0.06 ± 0.10; TS, 0.14 ± 0.09; Juniors, −0.09 ± 0.06; Seniors, 0.07 ± 0.09; all P > 0.05, one-tailed t test).
